From 0e503e3e086487c2f1f38f9948874959b421cf6b Mon Sep 17 00:00:00 2001 From: kodster28 Date: Tue, 20 Aug 2024 12:21:03 -0500 Subject: [PATCH 1/3] Pages through Style guide --- .../index.mdx | 49 +-- .../pub-sub/examples/connect-javascript.mdx | 57 ++- .../docs/pub-sub/examples/connect-python.mdx | 9 +- src/content/docs/pub-sub/guide.mdx | 174 ++++---- .../learning/command-line-wrangler.mdx | 35 +- .../pub-sub/learning/integrate-workers.mdx | 211 +++++----- .../platform/authentication-authorization.mdx | 71 ++-- src/content/docs/pulumi/installing.mdx | 13 +- src/content/docs/pulumi/tutorial/add-site.mdx | 222 +++++----- .../docs/pulumi/tutorial/hello-world.mdx | 176 +++----- .../queues/configuration/batching-retries.mdx | 129 +++--- .../configuration/consumer-concurrency.mdx | 31 +- .../configuration/dead-letter-queues.mdx | 3 +- .../configuration/local-development.mdx | 23 +- .../queues/configuration/pull-consumers.mdx | 140 ++++--- .../examples/publish-to-a-queue-over-http.mdx | 48 ++- src/content/docs/queues/get-started.mdx | 35 +- .../index.mdx | 396 +++++++++--------- .../docs/r2/api/workers/workers-api-usage.mdx | 154 +++---- src/content/docs/r2/buckets/cors.mdx | 43 +- .../docs/r2/buckets/create-buckets.mdx | 17 +- .../docs/r2/buckets/event-notifications.mdx | 363 ++++++++-------- src/content/docs/r2/data-migration/sippy.mdx | 180 ++++---- src/content/docs/r2/examples/aws/aws-cli.mdx | 19 +- .../docs/r2/examples/aws/aws-sdk-js-v3.mdx | 58 +-- .../docs/r2/examples/aws/aws-sdk-js.mdx | 42 +- .../docs/r2/examples/aws/aws-sdk-php.mdx | 8 +- src/content/docs/r2/examples/aws/boto3.mdx | 23 +- src/content/docs/r2/examples/rclone.mdx | 28 +- .../upload-logs-event-notifications.mdx | 59 +-- .../docs/r2/objects/delete-objects.mdx | 8 +- .../docs/r2/objects/download-objects.mdx | 5 +- .../docs/r2/objects/upload-objects.mdx | 7 +- .../docs/radar/investigate/bgp-anomalies.mdx | 203 ++++----- .../other/signed-exchanges/reference.mdx | 8 +- ...onfigure-your-mobile-app-or-iot-device.mdx | 129 +++--- .../label-client-certificate.mdx | 3 +- .../client-certificates/troubleshooting.mdx | 13 +- .../additional-options/minimum-tls.mdx | 2 +- .../methods/delegated-dcv.mdx | 32 +- .../remove-file-key-password.mdx | 15 +- .../custom-certificates/troubleshooting.mdx | 9 +- .../aws-cloud-hsm.mdx | 17 +- .../azure-dedicated-hsm.mdx | 19 +- .../azure-managed-hsm.mdx | 29 +- .../configuration.mdx | 13 +- .../entrust-nshield-connect.mdx | 12 +- .../google-cloud-hsm.mdx | 67 ++- .../ibm-cloud-hsm.mdx | 19 +- .../hardware-security-modules/softhsmv2.mdx | 59 +-- .../docs/ssl/keyless-ssl/troubleshooting.mdx | 15 +- .../ssl/reference/certificate-statuses.mdx | 21 +- .../troubleshooting/general-ssl-errors.mdx | 57 ++- .../docs/stream/examples/rtmps_playback.mdx | 7 +- .../docs/stream/examples/srt_playback.mdx | 5 +- .../stream/stream-live/start-stream-live.mdx | 96 +++-- .../uploading-videos/upload-video-file.mdx | 117 +++--- .../stream/viewing-videos/download-videos.mdx | 43 +- .../content-types/tutorial.mdx | 29 +- .../formatting/code-block-guidelines.mdx | 106 +++-- .../wordpress.com-and-cloudflare.mdx | 24 +- ...igure-cloudflare-and-heroku-over-https.mdx | 23 +- 62 files changed, 1979 insertions(+), 2049 deletions(-) diff --git a/src/content/docs/pages/tutorials/use-r2-as-static-asset-storage-for-pages/index.mdx b/src/content/docs/pages/tutorials/use-r2-as-static-asset-storage-for-pages/index.mdx index fc7398bba0f873..7f78b491db2a1d 100644 --- 
a/src/content/docs/pages/tutorials/use-r2-as-static-asset-storage-for-pages/index.mdx +++ b/src/content/docs/pages/tutorials/use-r2-as-static-asset-storage-for-pages/index.mdx @@ -10,11 +10,8 @@ tags: - Hono languages: - JavaScript - --- - - This tutorial will teach you how to use [R2](/r2/) as a static asset bucket for your [Pages](/pages/) app. This is especially helpful if you're hitting the [file limit](/pages/platform/limits/#files) or the [max file size limit](/pages/platform/limits/#file-size) on Pages. To illustrate how this is done, we will use R2 as a static asset storage for a fictional cat blog. @@ -42,7 +39,7 @@ Adding more videos and images to the blog would be great, but our asset size is The first step is creating an R2 bucket to store the static assets. A new bucket can be created with the dashboard or via Wrangler. -Using the dashboard, navigate to the R2 tab, then click on *Create bucket.* We will name the bucket for our blog *cat-media*. Always remember to give your buckets descriptive names: +Using the dashboard, navigate to the R2 tab, then click on *Create bucket.* We will name the bucket for our blog _cat-media_. Always remember to give your buckets descriptive names: ![Dashboard](~/assets/images/pages/tutorials/pages-r2/dash.png) @@ -80,18 +77,18 @@ bucket_name = "cat-media" :::note -Note: The keyword `ASSETS` is reserved and cannot be used as a resource binding. +Note: The keyword `ASSETS` is reserved and cannot be used as a resource binding. ::: Save `wrangler.toml` and we are ready to move on to the last step. -Alternatively, you can add a binding to your Pages project on the dashboard by navigating to the project’s *Settings* tab > *Functions* > *R2 bucket bindings*. +Alternatively, you can add a binding to your Pages project on the dashboard by navigating to the project’s _Settings_ tab > _Functions_ > _R2 bucket bindings_. ## Serve R2 Assets From Pages The last step involves serving media assets from R2 on the blog. To do that, we will create a function to handle requests for media files. -In the project folder, create a *functions* directory. Then, create a *media* subdirectory and a file named `[[all]].js` in it. All HTTP requests to `/media` will be routed to this file. +In the project folder, create a _functions_ directory. Then, create a _media_ subdirectory and a file named `[[all]].js` in it. All HTTP requests to `/media` will be routed to this file. After creating the folders and JavaScript file, the blog directory structure should look like: @@ -114,12 +111,12 @@ Finally, we will add a handler function to `[[all]].js`. This function receives ```js export async function onRequestGet(ctx) { - const path = new URL(ctx.request.url).pathname.replace("/media/", ""); - const file = await ctx.env.MEDIA.get(path); - if (!file) return new Response(null, { status: 404 }); - return new Response(file.body, { - headers: { "Content-Type": file.httpMetadata.contentType }, - }); + const path = new URL(ctx.request.url).pathname.replace("/media/", ""); + const file = await ctx.env.MEDIA.get(path); + if (!file) return new Response(null, { status: 404 }); + return new Response(file.body, { + headers: { "Content-Type": file.httpMetadata.contentType }, + }); } ``` @@ -130,22 +127,22 @@ Before deploying the changes made so far to our cat blog, let us add a few new p ```html - -

Awesome Cat Blog! 😺

-

Today's post:

- -

Yesterday's post:

- - + +

Awesome Cat Blog! 😺

+

Today's post:

+ +

Yesterday's post:

+ + ``` With all the files saved, open a new terminal window to deploy the app: ```sh -$ npm run deploy +npm run deploy ``` Once deployed, media assets are fetched and served from the R2 bucket. @@ -154,6 +151,6 @@ Once deployed, media assets are fetched and served from the R2 bucket. ## **Related resources** -* [Learn how function routing works in Pages.](/pages/functions/routing/) -* [Learn how to create public R2 buckets](/r2/buckets/public-buckets/). -* [Learn how to use R2 from Workers](/r2/api/workers/workers-api-usage/). +- [Learn how function routing works in Pages.](/pages/functions/routing/) +- [Learn how to create public R2 buckets](/r2/buckets/public-buckets/). +- [Learn how to use R2 from Workers](/r2/api/workers/workers-api-usage/). diff --git a/src/content/docs/pub-sub/examples/connect-javascript.mdx b/src/content/docs/pub-sub/examples/connect-javascript.mdx index 3155c30e439c38..1ca5e38bdb5dc8 100644 --- a/src/content/docs/pub-sub/examples/connect-javascript.mdx +++ b/src/content/docs/pub-sub/examples/connect-javascript.mdx @@ -4,7 +4,6 @@ pcx_content_type: reference type: example summary: Use MQTT.js with the token authentication mode configured on a broker. description: Use MQTT.js with the token authentication mode configured on a broker. - --- Below is an example using [MQTT.js](https://github.com/mqttjs/MQTT.js#mqttclientstreambuilder-options) with the TOKEN authentication mode configured on a broker. The example assumes you have [Node.js](https://nodejs.org/en/) v16 or higher installed on your system. @@ -19,7 +18,7 @@ Before running the example, make sure to install the MQTT library: ```sh # Pre-requisite: install MQTT.js -$ npm install mqtt --save +npm install mqtt --save ``` Copy the following example as `example.js` and run it with `node example.js`. @@ -40,50 +39,50 @@ let topic = check_env(process.env.BROKER_TOPIC); // Configure and create the MQTT client const client = mqtt.connect(uri, { - protocolVersion: 5, - port: 8883, - clean: true, - connectTimeout: 2000, // 2 seconds - clientId: "", - username, - password, + protocolVersion: 5, + port: 8883, + clean: true, + connectTimeout: 2000, // 2 seconds + clientId: "", + username, + password, }); // Emit errors and exit client.on("error", function (err) { - console.log(`⚠️ error: ${err}`); - client.end(); - process.exit(); + console.log(`⚠️ error: ${err}`); + client.end(); + process.exit(); }); // Connect to your broker client.on("connect", function () { - console.log(`🌎 connected to ${process.env.BROKER_URI}!`); - // Subscribe to a topic - client.subscribe(topic, function (err) { - if (!err) { - console.log(`✅ subscribed to ${topic}`); - // Publish a message! - client.publish(topic, "My first MQTT message"); - } - }); + console.log(`🌎 connected to ${process.env.BROKER_URI}!`); + // Subscribe to a topic + client.subscribe(topic, function (err) { + if (!err) { + console.log(`✅ subscribed to ${topic}`); + // Publish a message! + client.publish(topic, "My first MQTT message"); + } + }); }); // Start waiting for messages client.on("message", async function (topic, message) { - console.log(`received a message: ${message.toString()}`); + console.log(`received a message: ${message.toString()}`); - // Goodbye! - client.end(); - process.exit(); + // Goodbye! 
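+	// end() closes the MQTT connection cleanly; process.exit() then stops the Node.js process.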
+ client.end(); + process.exit(); }); // Return variable or throw error function check_env(env) { - if (!env) { - throw "BROKER_URI, BROKER_TOKEN and BROKER_TOPIC must be set."; - } + if (!env) { + throw "BROKER_URI, BROKER_TOKEN and BROKER_TOPIC must be set."; + } - return env; + return env; } ``` diff --git a/src/content/docs/pub-sub/examples/connect-python.mdx b/src/content/docs/pub-sub/examples/connect-python.mdx index 7e11e6a72d49fe..65167e5eba7c2e 100644 --- a/src/content/docs/pub-sub/examples/connect-python.mdx +++ b/src/content/docs/pub-sub/examples/connect-python.mdx @@ -4,7 +4,6 @@ pcx_content_type: reference type: example summary: Connect to a Broker using Python 3 description: Connect to a Broker using Python 3 - --- Below is an example using the [paho.mqtt.python](https://github.com/eclipse/paho.mqtt.python) package with the TOKEN authentication mode configured on a Broker. @@ -13,15 +12,15 @@ The example below creates a simple subscriber, sends a message to the configured Make sure to set environmental variables for the following before running the example: -* `BROKER_FQDN` - e.g. `YOUR-BROKER.YOUR-NAMESPACE.cloudflarepubsub.com` without the port or `mqtts://` scheme -* `BROKER_TOKEN` (a valid auth token) -* `BROKER_TOPIC` - e.g. `test/topic` or `hello/world` +- `BROKER_FQDN` - e.g. `YOUR-BROKER.YOUR-NAMESPACE.cloudflarepubsub.com` without the port or `mqtts://` scheme +- `BROKER_TOKEN` (a valid auth token) +- `BROKER_TOPIC` - e.g. `test/topic` or `hello/world` The example below uses Python 3.8, but should run on Python 3.6 and above. ```sh # Ensure you have paho-mqtt installed -$ pip3 install paho-mqtt +pip3 install paho-mqtt ``` Create a file called `pubsub.py` with the following content, and use `python3 pubsub.py` to run the example: diff --git a/src/content/docs/pub-sub/guide.mdx b/src/content/docs/pub-sub/guide.mdx index 28331a1c081e8e..35f2f24a4e9891 100644 --- a/src/content/docs/pub-sub/guide.mdx +++ b/src/content/docs/pub-sub/guide.mdx @@ -3,26 +3,23 @@ title: Get started pcx_content_type: get-started sidebar: order: 1 - --- -import { Render } from "~/components" +import { Render } from "~/components"; :::note - Pub/Sub is currently in private beta. You can [sign up for the waitlist](https://www.cloudflare.com/cloudflare-pub-sub-lightweight-messaging-private-beta/) to register your interest. - ::: Pub/Sub is a flexible, scalable messaging service built on top of the MQTT messaging standard, allowing you to publish messages from tens of thousands of devices (or more), deploy code to filter, aggregate and transform messages using Cloudflare Workers, and/or subscribe to topics for fan-out messaging use cases. This guide will: -* Instruct you through creating your first Pub/Sub Broker using the Cloudflare API. -* Create a `..cloudflarepubsub.com` endpoint ready to publish and subscribe to using any MQTT v5.0 compatible client. -* Help you send your first message to the Pub/Sub Broker. +- Instruct you through creating your first Pub/Sub Broker using the Cloudflare API. +- Create a `..cloudflarepubsub.com` endpoint ready to publish and subscribe to using any MQTT v5.0 compatible client. +- Help you send your first message to the Pub/Sub Broker. Before you begin, you should be familiar with using the command line and running basic terminal commands. @@ -38,10 +35,8 @@ During the Private Beta, your account will need to be explicitly granted access. :::note - Pub/Sub support in Wrangler requires wrangler `2.0.16` or above. 
If you're using an older version of Wrangler, ensure you [update the installed version](/workers/wrangler/install-and-update/#update-wrangler). - ::: Installing `wrangler`, the Workers command-line interface (CLI), allows you to [`init`](/workers/wrangler/commands/#init), [`dev`](/workers/wrangler/commands/#dev), and [`publish`](/workers/wrangler/commands/#publish) your Workers projects. @@ -53,7 +48,10 @@ To install [`wrangler`](https://github.com/cloudflare/workers-sdk/tree/main/pack Validate that you have a version of `wrangler` that supports Pub/Sub: ```sh -$ wrangler --version +wrangler --version +``` + +```sh output 2.0.16 # should show 2.0.16 or greater - e.g. 2.0.17 or 2.1.0 ``` @@ -65,10 +63,8 @@ To use Wrangler with Pub/Sub, you'll need an API Token that has permissions to b :::note - This API token requirement will be lifted prior to Pub/Sub becoming Generally Available. - ::: 1. From the [Cloudflare dashboard](https://dash.cloudflare.com), click on the profile icon and select **My Profile**. @@ -78,21 +74,19 @@ This API token requirement will be lifted prior to Pub/Sub becoming Generally Av 5. Name the token - e.g. "Pub/Sub Write Access" 6. Under the **Permissions** heading, choose **Account**, select **Pub/Sub** from the first drop-down, and **Edit** as the permission. 7. Select **Add More** below the newly created permission. Choose **User** > **Memberships** from the first dropdown and **Read** as the permission. -8. Select **Continue to Summary** at the bottom of the page, where you should see *All accounts - Pub/Sub:Edit* as the permission. +8. Select **Continue to Summary** at the bottom of the page, where you should see _All accounts - Pub/Sub:Edit_ as the permission. 9. Select **Create Token** and copy the token value. In your terminal, configure a `CLOUDFLARE_API_TOKEN` environmental variable with your Pub/Sub token. When this variable is set, `wrangler` will use it to authenticate against the Cloudflare API. ```sh -$ export CLOUDFLARE_API_TOKEN="pasteyourtokenhere" +export CLOUDFLARE_API_TOKEN="pasteyourtokenhere" ``` :::caution[Warning] - This token should be kept secret and not committed to source code or placed in any client-side code. - ::: With this environmental variable configured, you can now create your first Pub/Sub Broker! @@ -103,27 +97,25 @@ A namespace represents a collection of Pub/Sub Brokers, and they can be used to Before you begin, consider the following: -* **Choose your namespace carefully**. Although it can be changed later, it will be used as part of the hostname for your Brokers. You should not use secrets or other data that cannot be exposed on the Internet. -* Namespace names are global; they are globally unique. -* Namespaces must be valid DNS names per RFC 1035. In most cases, this means only a-z, 0-9, and hyphens are allowed. Names are case-insensitive. +- **Choose your namespace carefully**. Although it can be changed later, it will be used as part of the hostname for your Brokers. You should not use secrets or other data that cannot be exposed on the Internet. +- Namespace names are global; they are globally unique. +- Namespaces must be valid DNS names per RFC 1035. In most cases, this means only a-z, 0-9, and hyphens are allowed. Names are case-insensitive. For example, a namespace of `my-namespace` and a broker of `staging` would create a hostname of `staging.my-namespace.cloudflarepubsub.com` for clients to connect to. With this in mind, create a new namespace. 
This example will use `my-namespace` as a placeholder: ```sh -$ wrangler pubsub namespace create my-namespace +wrangler pubsub namespace create my-namespace ``` -You should receive a success response that resembles the following: - -```json +```json output { - "id": "817170399d784d4ea8b6b90ae558c611", - "name": "my-namespace", - "description": "", - "created_on": "2022-05-11T23:13:08.383232Z", - "modified_on": "2022-05-11T23:13:08.383232Z" + "id": "817170399d784d4ea8b6b90ae558c611", + "name": "my-namespace", + "description": "", + "created_on": "2022-05-11T23:13:08.383232Z", + "modified_on": "2022-05-11T23:13:08.383232Z" } ``` @@ -137,35 +129,33 @@ This broker will be configured to accept `TOKEN` authentication. In MQTT terms, Broker names must be: -* Chosen carefully. Although it can be changed later, the name will be used as part of the hostname for your brokers. Do not use secrets or other data that cannot be exposed on the Internet. -* Valid DNS names (per RFC 1035). In most cases, this means only `a-z`, `0-9` and hyphens are allowed. Names are case-insensitive. -* Unique per namespace. +- Chosen carefully. Although it can be changed later, the name will be used as part of the hostname for your brokers. Do not use secrets or other data that cannot be exposed on the Internet. +- Valid DNS names (per RFC 1035). In most cases, this means only `a-z`, `0-9` and hyphens are allowed. Names are case-insensitive. +- Unique per namespace. To create a new MQTT Broker called `example-broker` in the `my-namespace` namespace from the example above: ```sh -$ wrangler pubsub broker create example-broker --namespace=my-namespace +wrangler pubsub broker create example-broker --namespace=my-namespace ``` -You should receive a success response that resembles the following example: - -```json +```json output { - "id": "4c63fa30ee13414ba95be5b56d896fea", - "name": "example-broker", - "authType": "TOKEN", - "created_on": "2022-05-11T23:19:24.356324Z", - "modified_on": "2022-05-11T23:19:24.356324Z", - "expiration": null, - "endpoint": "mqtts://example-broker.namespace.cloudflarepubsub.com:8883" + "id": "4c63fa30ee13414ba95be5b56d896fea", + "name": "example-broker", + "authType": "TOKEN", + "created_on": "2022-05-11T23:19:24.356324Z", + "modified_on": "2022-05-11T23:19:24.356324Z", + "expiration": null, + "endpoint": "mqtts://example-broker.namespace.cloudflarepubsub.com:8883" } ``` In the example above, a broker is created with an endpoint of `mqtts://example-broker.my-namespace.cloudflarepubsub.com`. This means: -* Our Pub/Sub (MQTT) Broker is reachable over MQTTS (MQTT over TLS) - port 8883 -* The hostname is `example-broker.my-namespace.cloudflarepubsub.com` -* [Token authentication](/pub-sub/platform/authentication-authorization/) is required to clients to connect. +- Our Pub/Sub (MQTT) Broker is reachable over MQTTS (MQTT over TLS) - port 8883 +- The hostname is `example-broker.my-namespace.cloudflarepubsub.com` +- [Token authentication](/pub-sub/platform/authentication-authorization/) is required to clients to connect. ## 6. Create credentials for your broker @@ -173,23 +163,21 @@ In order to connect to a Pub/Sub Broker, you need to securely authenticate. Cred Note that: -* You can generate multiple credentials at once (up to 100 per API call), which can be useful when configuring multiple clients (such as IoT devices). -* Credentials are associated with a specific Client ID and encoded as a signed JSON Web Token (JWT). 
-* Each token has a unique identifier (a `jti` - or `JWT ID`) that you can use to revoke a specific token. -* Tokens are prefixed with the broker name they are associate with (for example, `my-broker`) to make identifying tokens across multiple Pub/Sub brokers easier. +- You can generate multiple credentials at once (up to 100 per API call), which can be useful when configuring multiple clients (such as IoT devices). +- Credentials are associated with a specific Client ID and encoded as a signed JSON Web Token (JWT). +- Each token has a unique identifier (a `jti` - or `JWT ID`) that you can use to revoke a specific token. +- Tokens are prefixed with the broker name they are associate with (for example, `my-broker`) to make identifying tokens across multiple Pub/Sub brokers easier. :::note - Ensure you do not commit your credentials to source control, such as GitHub. A valid token allows anyone to connect to your broker and publish or subscribe to messages. Treat credentials as secrets. - ::: To generate two tokens for a broker called `example-broker` with a 48 hour expiry: ```sh -$ wrangler pubsub broker issue example-broker --namespace=NAMESPACE_NAME --number=2 --expiration=48h +wrangler pubsub broker issue example-broker --namespace=NAMESPACE_NAME --number=2 --expiration=48h ``` You should receive a success response that resembles the example below, which is a map of Client IDs and their associated tokens. @@ -210,89 +198,93 @@ Your broker is now created and ready to accept messages from authenticated clien :::note -You can view a live demo available at [demo.mqtt.dev](http://demo.mqtt.dev) that allows you to use your own Pub/Sub Broker and a valid token to subscribe to a topic and publish messages to it. The `JWT` field in the demo accepts a valid token from your Broker. +You can view a live demo available at [demo.mqtt.dev](http://demo.mqtt.dev) that allows you to use your own Pub/Sub Broker and a valid token to subscribe to a topic and publish messages to it. The `JWT` field in the demo accepts a valid token from your Broker. ::: The example below uses [MQTT.js](https://github.com/mqttjs/MQTT.js) with Node.js to subscribe to a topic on a broker and publish a very basic "hello world" style message. You will need to have a [supported Node.js](https://nodejs.org/en/download/current/) version installed. ```sh # Check that Node.js is installed -$ which node +which node # Install MQTT.js -$ npm i mqtt --save +npm i mqtt --save ``` Set your environment variables. ```sh -$ export CLOUDFLARE_API_TOKEN="YourAPIToken" -$ export CLOUDFLARE_ACCOUNT_ID="YourAccountID" -$ export DEFAULT_NAMESPACE="TheNamespaceYouCreated" -$ export BROKER_NAME="TheBrokerYouCreated" +export CLOUDFLARE_API_TOKEN="YourAPIToken" +export CLOUDFLARE_ACCOUNT_ID="YourAccountID" +export DEFAULT_NAMESPACE="TheNamespaceYouCreated" +export BROKER_NAME="TheBrokerYouCreated" ``` We can now generate an access token for Pub/Sub. 
We will need both the client ID and the token (a JSON Web Token) itself to authenticate from our MQTT client: ```sh -$ curl -s -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" -H "Content-Type: application/json" "https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/pubsub/namespaces/namespace/brokers/is-it-broken/credentials?type=TOKEN&topicAcl=#" | jq '.result | to_entries | .[0]' +curl -s -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" -H "Content-Type: application/json" "https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/pubsub/namespaces/namespace/brokers/is-it-broken/credentials?type=TOKEN&topicAcl=#" | jq '.result | to_entries | .[0]' ``` This will output a `key` representing the `clientId`, and a `value` representing our (secret) access token, resembling the following: ```json { - "key": "01HDQFD5Y8HWBFGFBBZPSWQ22M", - "value": "eyJhbGciOiJFZERTQSIsImtpZCI6IjU1X29UODVqQndJbjlFYnY0V3dzanRucG9ycTBtalFlb1VvbFZRZDIxeEUifQ....NVpToBedVYGGhzHJZmpEG1aG_xPBWrE-PgG1AFYcTPEBpZ_wtN6ApeAUM0JIuJdVMkoIC9mUg4vPtXM8jLGgBw" + "key": "01HDQFD5Y8HWBFGFBBZPSWQ22M", + "value": "eyJhbGciOiJFZERTQSIsImtpZCI6IjU1X29UODVqQndJbjlFYnY0V3dzanRucG9ycTBtalFlb1VvbFZRZDIxeEUifQ....NVpToBedVYGGhzHJZmpEG1aG_xPBWrE-PgG1AFYcTPEBpZ_wtN6ApeAUM0JIuJdVMkoIC9mUg4vPtXM8jLGgBw" } ``` Copy the `value` field and set it as the `BROKER_TOKEN` environmental variable: ```sh -$ export BROKER_TOKEN="" +export BROKER_TOKEN="" ``` Create a file called `index.js `, making sure that: -* `brokerEndpoint` is set to the address of your Pub/Sub broker. -* `clientId` is the `key` from your newly created access token -* The `BROKER_TOKEN` environmental variable populated with your access token. +- `brokerEndpoint` is set to the address of your Pub/Sub broker. +- `clientId` is the `key` from your newly created access token +- The `BROKER_TOKEN` environmental variable populated with your access token. :::note - Your `BROKER_TOKEN` is sensitive, and should be kept secret to avoid unintended access to your Pub/Sub broker. Avoid committing it to source code. - ::: ```js -const mqtt = require('mqtt') +const mqtt = require("mqtt"); -const brokerEndpoint = "mqtts://my-broker.my-namespace.cloudflarepubsub.com" -const clientId = "01HDQFD5Y8HWBFGFBBZPSWQ22M" // Replace this with your client ID +const brokerEndpoint = "mqtts://my-broker.my-namespace.cloudflarepubsub.com"; +const clientId = "01HDQFD5Y8HWBFGFBBZPSWQ22M"; // Replace this with your client ID const options = { - port: 8883, - username: clientId, // MQTT.js requires this, but Pub/Sub does not - clientId: clientId, // Required by Pub/Sub - password: process.env.BROKER_TOKEN, - protocolVersion: 5, // MQTT 5 -} - -const client = mqtt.connect(brokerEndpoint, options) - -client.subscribe("example-topic") -client.publish("example-topic", `message from ${client.options.clientId}: hello at ${Date.now()}`) + port: 8883, + username: clientId, // MQTT.js requires this, but Pub/Sub does not + clientId: clientId, // Required by Pub/Sub + password: process.env.BROKER_TOKEN, + protocolVersion: 5, // MQTT 5 +}; + +const client = mqtt.connect(brokerEndpoint, options); + +client.subscribe("example-topic"); +client.publish( + "example-topic", + `message from ${client.options.clientId}: hello at ${Date.now()}`, +); client.on("message", function (topic, message) { - console.log(`received message on ${topic}: ${message}`) -}) + console.log(`received message on ${topic}: ${message}`); +}); ``` Run the example. You should see the output written to your terminal (stdout). 
```sh -$ node index.js +node index.js +``` + +```sh output > received message on example-topic: hello from 01HDQFD5Y8HWBFGFBBZPSWQ22M at 1652102228 ``` @@ -300,14 +292,14 @@ Your client ID and timestamp will be different from above, but you should see a If you do not see the message you published, or you are receiving error messages, ensure that: -* The `BROKER_TOKEN` environmental variable is not empty. Try echo `$BROKER_TOKEN` in your terminal. -* You updated the `brokerEndpoint` to match the broker you created. The **Endpoint** field of your broker will show this address and port. -* You correctly [installed MQTT.js](https://github.com/mqttjs/MQTT.js#install). +- The `BROKER_TOKEN` environmental variable is not empty. Try echo `$BROKER_TOKEN` in your terminal. +- You updated the `brokerEndpoint` to match the broker you created. The **Endpoint** field of your broker will show this address and port. +- You correctly [installed MQTT.js](https://github.com/mqttjs/MQTT.js#install). ## Next Steps What's next? -* [Connect a worker to your broker](/pub-sub/learning/integrate-workers/) to programmatically read, parse, and filter messages as they are published to a broker -* [Learn how PubSub and the MQTT protocol work](/pub-sub/learning/how-pubsub-works) -* [See example client code](/pub-sub/examples) for publishing or subscribing to a PubSub broker +- [Connect a worker to your broker](/pub-sub/learning/integrate-workers/) to programmatically read, parse, and filter messages as they are published to a broker +- [Learn how PubSub and the MQTT protocol work](/pub-sub/learning/how-pubsub-works) +- [See example client code](/pub-sub/examples) for publishing or subscribing to a PubSub broker diff --git a/src/content/docs/pub-sub/learning/command-line-wrangler.mdx b/src/content/docs/pub-sub/learning/command-line-wrangler.mdx index 24aa29c24ef434..e49318738e47dd 100644 --- a/src/content/docs/pub-sub/learning/command-line-wrangler.mdx +++ b/src/content/docs/pub-sub/learning/command-line-wrangler.mdx @@ -4,17 +4,14 @@ pcx_content_type: reference type: example summary: How to manage Pub/Sub with Wrangler, the Cloudflare CLI. description: How to manage Pub/Sub with Wrangler, the Cloudflare CLI. - --- Wrangler is a command-line tool for building and managing Cloudflare's Developer Platform, including [Cloudflare Workers](https://workers.cloudflare.com/), [R2 Storage](/r2/) and [Cloudflare Pub/Sub](/pub-sub/). :::note - Pub/Sub support in Wrangler requires wrangler `2.0.16` or above. If you're using an older version of Wrangler, ensure you [update the installed version](/workers/wrangler/install-and-update/#update-wrangler). - ::: ## Authenticating Wrangler @@ -23,10 +20,8 @@ To use Wrangler with Pub/Sub, you'll need an API Token that has permissions to b :::note - This API token requirement will be lifted prior to Pub/Sub becoming Generally Available. - ::: To create an API Token that Wrangler can use: @@ -37,21 +32,19 @@ To create an API Token that Wrangler can use: 4. Choose **Get Started** next to **Create Custom Token** 5. Name the token - e.g. "Pub/Sub Write Access" 6. Under the **Permissions** heading, choose **Account**, select **Pub/Sub** from the first drop-down, and **Edit** as the permission. -7. Click **Continue to Summary** at the bottom of the page, where you should see *All accounts - Pub/Sub:Edit* as the permission +7. Click **Continue to Summary** at the bottom of the page, where you should see _All accounts - Pub/Sub:Edit_ as the permission 8. 
Click **Create Token**, and copy the token value. In your terminal, configure a `CLOUDFLARE_API_TOKEN` environmental variable with your Pub/Sub token. When this variable is set, `wrangler` will use it to authenticate against the Cloudflare API. ```sh -$ export CLOUDFLARE_API_TOKEN="pasteyourtokenhere" +export CLOUDFLARE_API_TOKEN="pasteyourtokenhere" ``` :::caution[Warning] - This token should be kept secret and not committed to source code or placed in any client-side code. - ::: ## Pub/Sub Commands @@ -64,7 +57,10 @@ Wrangler exposes two groups of commands for managing your Pub/Sub configurations The available `wrangler pubsub namespace` sub-commands include: ```sh -$ wrangler pubsub namespace --help +wrangler pubsub namespace --help +``` + +```sh output Manage your Pub/Sub Namespaces @@ -78,7 +74,10 @@ Commands: The available `wrangler pubsub broker` sub-commands include: ```sh -$ wrangler pubsub broker --help +wrangler pubsub broker --help +``` + +```sh output Interact with your Pub/Sub Brokers @@ -100,7 +99,7 @@ Commands: To create a [Namespace](/pub-sub/learning/how-pubsub-works/#brokers-and-namespaces): ```sh -$ wrangler pubsub namespace create NAMESPACE_NAME +wrangler pubsub namespace create NAMESPACE_NAME ``` ### Create a Broker @@ -108,21 +107,21 @@ $ wrangler pubsub namespace create NAMESPACE_NAME To create a [Broker](/pub-sub/learning/how-pubsub-works/#brokers-and-namespaces) within a Namespace: ```sh -$ wrangler pubsub broker create BROKER_NAME --namespace=NAMESPACE_NAME +wrangler pubsub broker create BROKER_NAME --namespace=NAMESPACE_NAME ``` ### Issue an Auth Token You can issue client credentials for a Pub/Sub Broker directly via Wrangler. Note that: -* Tokens are scoped per Broker -* You can issue multiple tokens at once -* Tokens currently allow a client to publish and/or subscribe to *any* topic on the Broker. +- Tokens are scoped per Broker +- You can issue multiple tokens at once +- Tokens currently allow a client to publish and/or subscribe to _any_ topic on the Broker. To issue a single token: ```sh -$ wrangler pubsub broker issue BROKER_NAME --namespace=NAMESPACE_NAME +wrangler pubsub broker issue BROKER_NAME --namespace=NAMESPACE_NAME ``` You can use `--number=` to issue multiple tokens at once, and `--expiration=` to set an expiry (e.g. `4h` or `30d`) on the issued tokens. @@ -132,7 +131,7 @@ You can use `--number=` to issue multiple tokens at once, and `--expiration To revoke one or more tokens—which will immediately prevent that token from being used to authenticate—use the `revoke` sub-command and pass the unique token ID (or `JTI`): ```sh -$ wrangler pubsub broker revoke BROKER_NAME --namespace=NAMESPACE_NAME --jti=JTI_ONE --jti=JTI_TWO +wrangler pubsub broker revoke BROKER_NAME --namespace=NAMESPACE_NAME --jti=JTI_ONE --jti=JTI_TWO ``` ## Filing Bugs diff --git a/src/content/docs/pub-sub/learning/integrate-workers.mdx b/src/content/docs/pub-sub/learning/integrate-workers.mdx index 2101e37819db92..c17534198f1395 100644 --- a/src/content/docs/pub-sub/learning/integrate-workers.mdx +++ b/src/content/docs/pub-sub/learning/integrate-workers.mdx @@ -3,7 +3,6 @@ title: Integrate with Workers pcx_content_type: tutorial sidebar: order: 2 - --- Once of the most powerful features of Pub/Sub is the ability to connect [Cloudflare Workers](/workers/) — powerful serverless functions that run on the edge — and filter, aggregate and mutate every message published to that broker. 
Workers can also mirror those messages to other sources, including writing to [Cloudflare R2 storage](/r2/), external databases, or other cloud services beyond Cloudflare, making it easy to persist or analyze incoming message payloads and data at scale. @@ -20,18 +19,16 @@ You can use one, many or all of these integrations as needed. "On-Publish" hooks are a powerful way to filter and modify messages as they are published to your Pub/Sub Broker. -* The Worker runs as a "post-publish" hook where messages are accepted by the broker, passed to the Worker, and messages are only sent to clients who subscribed to the topic after the Worker returns a valid HTTP response. -* If the Worker does not return a response (intentionally or not), or returns an HTTP status code other than HTTP 200, the message is dropped. -* All `PUBLISH` messages (packets) published to your Broker are sent to the Worker. Other MQTT packets, such as CONNECT or AUTH packets, are automatically handled for you by Pub/Sub. +- The Worker runs as a "post-publish" hook where messages are accepted by the broker, passed to the Worker, and messages are only sent to clients who subscribed to the topic after the Worker returns a valid HTTP response. +- If the Worker does not return a response (intentionally or not), or returns an HTTP status code other than HTTP 200, the message is dropped. +- All `PUBLISH` messages (packets) published to your Broker are sent to the Worker. Other MQTT packets, such as CONNECT or AUTH packets, are automatically handled for you by Pub/Sub. ### Connect a Worker to a Broker :::note - You must validate the signature of every incoming message to ensure it comes from Cloudflare and not an untrusted third-party. - ::: To connect a Worker to a Pub/Sub Broker as an on-publish hook, you'll need to: @@ -44,22 +41,20 @@ To connect a Worker to a Pub/Sub Broker as an on-publish hook, you'll need to: The following is an end-to-end example showing how to: -* Authenticate incoming requests from Pub/Sub (and reject those not from Pub/Sub) -* Replace the payload of a message on a specific topic -* Return the message to the Broker so that it can forward it to subscribers +- Authenticate incoming requests from Pub/Sub (and reject those not from Pub/Sub) +- Replace the payload of a message on a specific topic +- Return the message to the Broker so that it can forward it to subscribers :::note - You should be familiar with setting up a [Worker](/workers/get-started/guide/) before continuing with this example. - ::: To ensure your Worker can validate incoming requests, you must make the public keys available to your Worker via an [environmental variable](/workers/configuration/environment-variables/). To do so, we can fetch the public keys from our Broker: ```sh -$ wrangler pubsub broker public-keys YOUR_BROKER --namespace=NAMESPACE_NAME +wrangler pubsub broker public-keys YOUR_BROKER --namespace=NAMESPACE_NAME ``` You should receive a success response that resembles the example below, with the public key set from your Worker: @@ -89,10 +84,8 @@ Copy the array of public keys into your `wrangler.toml` as an environmental vari :::note - Your public keys will be unique to your own Pub/Sub Broker: you should ensure you're copying the keys associated with your own Broker. - ::: ```toml @@ -136,7 +129,7 @@ public keys. 
To install `@cloudflare/pubsub`, you can use `npm` or `yarn`: ```sh -$ npm i @cloudflare/pubsub +npm i @cloudflare/pubsub ``` With `@cloudflare/pubsub` installed, we can now import both the `isValidBrokerRequest` function and our `PubSubMessage` types into @@ -147,85 +140,83 @@ our Worker code directly: /// -import { isValidBrokerRequest, PubSubMessage } from "@cloudflare/pubsub" +import { isValidBrokerRequest, PubSubMessage } from "@cloudflare/pubsub"; async function pubsub( - messages: Array, - env: any, - ctx: ExecutionContext + messages: Array, + env: any, + ctx: ExecutionContext, ): Promise> { - // Messages may be batched at higher throughputs, so we should loop over - // the incoming messages and process them as needed. - for (let msg of messages) { - console.log(msg); - // Replace the message contents in our topic - named "test/topic" - // as a simple example - if (msg.topic.startsWith("test/topic")) { - msg.payload = `replaced text payload at ${Date.now()}`; - } - } - - return messages; + // Messages may be batched at higher throughputs, so we should loop over + // the incoming messages and process them as needed. + for (let msg of messages) { + console.log(msg); + // Replace the message contents in our topic - named "test/topic" + // as a simple example + if (msg.topic.startsWith("test/topic")) { + msg.payload = `replaced text payload at ${Date.now()}`; + } + } + + return messages; } const worker = { - async fetch(req, env, ctx): Promise { - // Retrieve this from your Broker's "publicKey" field. - // - // Each Broker has a unique key to distinguish between your Broker vs. others - // We store these keys in environmental variables (/workers/configuration/environment-variables/) - // to avoid needing to fetch them on every request. - let publicKeys = env.BROKER_PUBLIC_KEYS; - - // Critical: you must validate the incoming request is from your Broker. - // - // In the future, Workers will be able to do this on your behalf for Workers - // in the same account as your Pub/Sub Broker. - if (await isValidBrokerRequest(req, publicKeys)) { - // Parse the PubSub message - let incomingMessages: Array = await req.json(); - - // Pass the messages to our pubsub handler, and capture the returned - // message. - let outgoingMessages = await pubsub(incomingMessages, env, ctx); - - // Re-serialize the messages and return a HTTP 200. - // The Content-Type is optional, but must either by - // "application/octet-stream" or left empty. - return new Response(JSON.stringify(outgoingMessages), { status: 200 }); - } - - return new Response("not a valid Broker request", { status: 403 }); - }, + async fetch(req, env, ctx): Promise { + // Retrieve this from your Broker's "publicKey" field. + // + // Each Broker has a unique key to distinguish between your Broker vs. others + // We store these keys in environmental variables (/workers/configuration/environment-variables/) + // to avoid needing to fetch them on every request. + let publicKeys = env.BROKER_PUBLIC_KEYS; + + // Critical: you must validate the incoming request is from your Broker. + // + // In the future, Workers will be able to do this on your behalf for Workers + // in the same account as your Pub/Sub Broker. + if (await isValidBrokerRequest(req, publicKeys)) { + // Parse the PubSub message + let incomingMessages: Array = await req.json(); + + // Pass the messages to our pubsub handler, and capture the returned + // message. + let outgoingMessages = await pubsub(incomingMessages, env, ctx); + + // Re-serialize the messages and return a HTTP 200. 
+ // The Content-Type is optional, but must either by + // "application/octet-stream" or left empty. + return new Response(JSON.stringify(outgoingMessages), { status: 200 }); + } + + return new Response("not a valid Broker request", { status: 403 }); + }, } satisfies ExportedHandler; export default worker; ``` -Once you have deployed your Worker using `npx wrangler deploy`, you will need to configure your Broker to invoke the Worker. This is done by setting the `--on-publish-url` value of your Broker to the *publicly accessible* URL of your Worker: +Once you have deployed your Worker using `npx wrangler deploy`, you will need to configure your Broker to invoke the Worker. This is done by setting the `--on-publish-url` value of your Broker to the _publicly accessible_ URL of your Worker: ```sh -$ wrangler pubsub broker update YOUR_BROKER --namespace=NAMESPACE_NAME --on-publish-url="https://your.worker.workers.dev" +wrangler pubsub broker update YOUR_BROKER --namespace=NAMESPACE_NAME --on-publish-url="https://your.worker.workers.dev" ``` -You should receive a success response that resembles the example below, with the URL of your Worker: - -```json +```json {11} output { - "id": "4c63fa30ee13414ba95be5b56d896fea", - "name": "example-broker", - "authType": "TOKEN", - "created_on": "2022-05-11T23:19:24.356324Z", - "modified_on": "2022-05-11T23:19:24.356324Z", - "expiration": null, - "endpoint": "mqtts://example-broker.namespace.cloudflarepubsub.com:8883", - "on_publish": { - "url": "https://your-worker.your-account.workers.dev" - } + "id": "4c63fa30ee13414ba95be5b56d896fea", + "name": "example-broker", + "authType": "TOKEN", + "created_on": "2022-05-11T23:19:24.356324Z", + "modified_on": "2022-05-11T23:19:24.356324Z", + "expiration": null, + "endpoint": "mqtts://example-broker.namespace.cloudflarepubsub.com:8883", + "on_publish": { + "url": "https://your-worker.your-account.workers.dev" + } } ``` -Once you set this, *all* MQTT `PUBLISH` messages sent to your Broker from clients will be delivered to your Worker for further processing. You can use our [web-based live demo](https://demo.mqtt.dev) to test that your Worker is correctly validating requests and intercepting messages. +Once you set this, _all_ MQTT `PUBLISH` messages sent to your Broker from clients will be delivered to your Worker for further processing. You can use our [web-based live demo](https://demo.mqtt.dev) to test that your Worker is correctly validating requests and intercepting messages. Note that other HTTPS-enabled endpoints are valid destinations to forward messages to, but may incur latency and/or reduce message delivery success rates as messages will necessarily need to traverse the public Internet. 
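Since every `PUBLISH` message now flows through your Worker, the hook is also a natural place to implement the mirroring use case mentioned at the start of this page. The sketch below is illustrative only: it assumes a hypothetical [R2 bucket binding](/r2/api/workers/workers-api-usage/) named `ARCHIVE` has been added to your `wrangler.toml`, and uses `ctx.waitUntil` so that archiving each payload does not delay returning messages to the Broker:

```ts
import { PubSubMessage } from "@cloudflare/pubsub";

async function pubsub(
	messages: Array<PubSubMessage>,
	env: any,
	ctx: ExecutionContext,
): Promise<Array<PubSubMessage>> {
	for (let msg of messages) {
		// ARCHIVE is an assumed R2 bucket binding: the key combines the
		// topic and unique message ID so archived payloads do not collide.
		ctx.waitUntil(env.ARCHIVE.put(`${msg.topic}/${msg.mid}`, msg.payload ?? ""));
	}

	// Return the messages unmodified so subscribers still receive them.
	return messages;
}
```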
@@ -235,26 +226,26 @@ Below is an example of a PubSub message sent over HTTP to a Worker: ```json [ - { - "mid": 0, - "broker": "my-broker.my-namespace.cloudflarepubsub.com", - "topic": "us/external/metrics/abc-456-def-123/request_count", - "clientId": "broker01G24VP1T3B51JJ0WJQJWCSY61", - "receivedAt": 1651578191, - "contentType": null, - "payloadFormatIndicator": 1, - "payload": "" - }, - { - "mid": 1, - "broker": "my-broker.my-namespace.cloudflarepubsub.com", - "topic": "ap/external/metrics/abc-456-def-123/transactions_processed", - "clientId": "broker01G24VS053KYGNBBX8RH3T7CY5", - "receivedAt": 1651578193, - "contentType": null, - "payloadFormatIndicator": 1, - "payload": "" - } + { + "mid": 0, + "broker": "my-broker.my-namespace.cloudflarepubsub.com", + "topic": "us/external/metrics/abc-456-def-123/request_count", + "clientId": "broker01G24VP1T3B51JJ0WJQJWCSY61", + "receivedAt": 1651578191, + "contentType": null, + "payloadFormatIndicator": 1, + "payload": "" + }, + { + "mid": 1, + "broker": "my-broker.my-namespace.cloudflarepubsub.com", + "topic": "ap/external/metrics/abc-456-def-123/transactions_processed", + "clientId": "broker01G24VS053KYGNBBX8RH3T7CY5", + "receivedAt": 1651578193, + "contentType": null, + "payloadFormatIndicator": 1, + "payload": "" + } ] ``` @@ -264,10 +255,10 @@ Messages delivered to a Worker, or sent from a Worker, are wrapped with addition This metadata includes: -* the `broker` the message was associated with, so that your code can distinguish between messages from multiple Brokers -* the `topic` the message was published to by the client. **Note that this is readonly: attempting to change the topic in the Worker is invalid and will result in that message being dropped**. -* a `receivedTimestamp`, set when Pub/Sub first parses and deserializes the message -* the `mid` ("message id") of the message. This is a unique ID allowing Pub/Sub to track messages sent to your Worker, including which messages were dropped (if any). The `mid` field is immutable and returning a modified or missing `mid` will likely cause messages to be dropped. +- the `broker` the message was associated with, so that your code can distinguish between messages from multiple Brokers +- the `topic` the message was published to by the client. **Note that this is readonly: attempting to change the topic in the Worker is invalid and will result in that message being dropped**. +- a `receivedTimestamp`, set when Pub/Sub first parses and deserializes the message +- the `mid` ("message id") of the message. This is a unique ID allowing Pub/Sub to track messages sent to your Worker, including which messages were dropped (if any). The `mid` field is immutable and returning a modified or missing `mid` will likely cause messages to be dropped. This metadata, including their JavaScript types and whether they are immutable ("`readonly`"), are expressed as the `PubSubMessage` interface in the [@cloudflare/pubsub](https://github.com/cloudflare/pubsub) library. @@ -277,24 +268,24 @@ The `PubSubMessage` type may grow to include additional fields over time, and we Messages sent to your on-publish Worker may be batched: each batch is an array of 1 or more `PubSubMessage`. -* Batching helps to reduce the number of invocations against your Worker, and can allow you to better aggregate messages when writing them to upstream services. -* Pub/Sub’s batching mechanism is designed to batch messages arriving simultaneously from publishers, and not wait several seconds. 
- It does **not** measurably increase the latency of message delivery.

### On-Publish Best Practices

- Only inspect the topics you need to reduce the compute your Worker needs to do.
- Use `ctx.waitUntil` if you need to write to storage or communicate with remote services, so that you avoid increasing message delivery latency while waiting on those operations to complete.
- Catch exceptions using [try-catch](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/try...catch) - if your on-publish hook is able to “fail open”, you should use the `catch` block to return messages to the Broker in the event of an exception so that messages aren’t dropped. A sketch of this fail-open pattern is shown in the troubleshooting section below.

## Troubleshoot Workers integrations

Some common failure modes can result in messages not being sent to subscribed clients when a Worker is processing messages, including:

- Failing to correctly validate incoming requests. This can happen if you are not using the correct public keys (keys are unique to each of your Brokers), if the keys are malformed, and/or if you have not populated the keys in the Worker via environmental variables.
- Not returning an HTTP 200 response. Any other HTTP status code is interpreted as an error and the message is dropped.
- Not returning a valid Content-Type. The Content-Type in the HTTP response header must be `application/octet-stream`.
- Taking too long to return a response (more than 10 seconds). You can use [`ctx.waitUntil`](/workers/runtime-apis/context/#waituntil) if you need to write messages to other destinations after returning the message to the broker.
- Returning an invalid or unstructured body, a body or payload that exceeds size limits, or returning no body at all. 
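A hook that is able to fail open, as described in the best practices above, avoids several of these failure modes at once. The sketch below is illustrative rather than a drop-in implementation: it reuses the `isValidBrokerRequest` helper shown earlier on this page, and `processMessages` is a hypothetical stand-in for your own filtering or rewriting logic:

```ts
import { isValidBrokerRequest, PubSubMessage } from "@cloudflare/pubsub";

// Hypothetical stand-in for your own filtering or rewriting logic.
function processMessages(messages: Array<PubSubMessage>): Array<PubSubMessage> {
	return messages;
}

const worker = {
	async fetch(req, env, ctx): Promise<Response> {
		if (!(await isValidBrokerRequest(req, env.BROKER_PUBLIC_KEYS))) {
			return new Response("not a valid Broker request", { status: 403 });
		}

		const incoming: Array<PubSubMessage> = await req.json();
		let outgoing = incoming;

		try {
			outgoing = processMessages(incoming);
		} catch (err) {
			// Fail open: return the original messages untouched so they are
			// still delivered to subscribers instead of being dropped.
			outgoing = incoming;
		}

		// Always return an HTTP 200 with a JSON-serialized body.
		return new Response(JSON.stringify(outgoing), { status: 200 });
	},
} satisfies ExportedHandler;

export default worker;
```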
Because the Worker is acting as the "server" in the HTTP request-response lifecycle, invalid responses from your Worker can fail silently, as the Broker can no longer return an error response. diff --git a/src/content/docs/pub-sub/platform/authentication-authorization.mdx b/src/content/docs/pub-sub/platform/authentication-authorization.mdx index 8a3f2a57b22843..1d3dc4cc383a67 100644 --- a/src/content/docs/pub-sub/platform/authentication-authorization.mdx +++ b/src/content/docs/pub-sub/platform/authentication-authorization.mdx @@ -3,21 +3,20 @@ title: Authentication and authorization pcx_content_type: concept sidebar: order: 1 - --- Pub/Sub supports two authentication modes. A broker may allow one or both, but never none as authentication is always required. -| Mode | Details | -| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `TOKEN` | Accepts a Client ID and a password (represented by a signed JSON Web Token) in the CONNECT packet. The MQTT User Name field is optional. If provided, it must match the Client ID. | -| `MTLS` | **Not yet supported.** Accepts an mTLS keypair (TLS client credentials) scoped to that broker. Keypairs are issued from a Cloudflare root CA unless otherwise configured. | -| `MTLS_AND_TOKEN` | **Not yet supported.** Allows clients to use both MTLS and/or Token auth for a broker. | +| Mode | Details | +| ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `TOKEN` | Accepts a Client ID and a password (represented by a signed JSON Web Token) in the CONNECT packet. The MQTT User Name field is optional. If provided, it must match the Client ID. | +| `MTLS` | **Not yet supported.** Accepts an mTLS keypair (TLS client credentials) scoped to that broker. Keypairs are issued from a Cloudflare root CA unless otherwise configured. | +| `MTLS_AND_TOKEN` | **Not yet supported.** Allows clients to use both MTLS and/or Token auth for a broker. | To generate credentials scoped to a specific broker, you have two options: -* Allow Pub/Sub to generate Client IDs for you. -* Supply a list of Client IDs that Pub/Sub will use to generate tokens. +- Allow Pub/Sub to generate Client IDs for you. +- Supply a list of Client IDs that Pub/Sub will use to generate tokens. The recommended and simplest approach if you are starting from scratch is to have Pub/Sub generate Client IDs for you, which ensures they are sufficiently random and that there are not conflicting Client IDs. Duplicate Client IDs can cause issues with clients because only one instance of a Client ID is allowed to connect to a broker. @@ -25,21 +24,19 @@ The recommended and simplest approach if you are starting from scratch is to hav :::note - Ensure you do not commit your credentials to source control, such as GitHub. A valid token allows anyone to connect to your broker and publish or subscribe to messages. Treat credentials as secrets. - ::: To generate a single token for a broker named `example-broker` in `your-namespace`, issue a request to the Pub/Sub API. -* By default, the API returns one valid` ` pair but can return up to 100 per API call to simplify issuance for larger deployments. -* You must specify a Topic ACL (Access Control List) for the tokens. 
This defines what topics clients authenticating with these tokens can PUBLISH or SUBSCRIBE to. Currently, the Topic ACL must be `#` all topics — finer-grained ACLs are not yet supported. +- By default, the API returns one valid` ` pair but can return up to 100 per API call to simplify issuance for larger deployments. +- You must specify a Topic ACL (Access Control List) for the tokens. This defines what topics clients authenticating with these tokens can PUBLISH or SUBSCRIBE to. Currently, the Topic ACL must be `#` all topics — finer-grained ACLs are not yet supported. For example, to generate five valid tokens with an automatically generated Client ID for each token: ```sh -$ wrangler pubsub broker issue example-broker --number=5 --expiration=48h +wrangler pubsub broker issue example-broker --number=5 --expiration=48h ``` You should receive a scucess response that resembles the example below, which is a map of Client IDs and their associated tokens. @@ -56,10 +53,10 @@ You should receive a scucess response that resembles the example below, which is To configure an MQTT client to connect to Pub/Sub, you need: -* Your Broker hostname - e.g. `your-broker.your-namespace.cloudflarepubsub.com` and port (`8883` for MQTT) -* A Client ID - this must be either the Client ID associated with your token, or left empty. Some clients require a Client ID, and others generate a random Client ID. **You will not be able to connect if the Client ID is mismatched**. -* A username - Pub/Sub does not require you to specify a username. You can leave this empty, or for clients that require one to be set, the text `PubSub` is typically sufficient. -* A "password" - this is a valid JSON Web Token (JWT) received from the API, *specific to the Broker you are trying to connect to*. +- Your Broker hostname - e.g. `your-broker.your-namespace.cloudflarepubsub.com` and port (`8883` for MQTT) +- A Client ID - this must be either the Client ID associated with your token, or left empty. Some clients require a Client ID, and others generate a random Client ID. **You will not be able to connect if the Client ID is mismatched**. +- A username - Pub/Sub does not require you to specify a username. You can leave this empty, or for clients that require one to be set, the text `PubSub` is typically sufficient. +- A "password" - this is a valid JSON Web Token (JWT) received from the API, _specific to the Broker you are trying to connect to_. The most common failure case is supplying a Client ID that does not match your token. Ensure you are setting this correctly in your client, or (recommended) leaving it empty if your client supports auto-assigning the Client ID when it connects to Pub/Sub. @@ -67,13 +64,13 @@ The most common failure case is supplying a Client ID that does not match your t An JSON Web Token (JWT) issued by Pub/Sub will include the following claims. -| Claims | Details | -| ----------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| iat | A Unix timestamp representing the token's creation time. | -| exp | A Unix timestamp representing the token's expiry time. Only included when the JWT has an optional expiry timestamp. 
| -| sub | The "subject" - the MQTT Client Identifier associated with this token. This is the source of truth for the Client ID. If a Client ID is provided in the CONNECT packet, it must match this ID. Clients that do not specify a Client ID in the CONNECT packet will see this Client ID as the "Assigned Client Identifier" in the CONNACK packet when connecting. | -| jti | JWT ID. An identifier that uniquely identifies this JWT. Used to distinguish multiple JWTs for the same (broker, clientId) apart, and allows revocation of specific tokens. | -| topicAcl | Must be `#` (matches all topics). In the future, ACLs will allow you to express what topics the client can PUBLISH to, SUBSCRIBE to, or both. | +| Claims | Details | +| -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| iat | A Unix timestamp representing the token's creation time. | +| exp | A Unix timestamp representing the token's expiry time. Only included when the JWT has an optional expiry timestamp. | +| sub | The "subject" - the MQTT Client Identifier associated with this token. This is the source of truth for the Client ID. If a Client ID is provided in the CONNECT packet, it must match this ID. Clients that do not specify a Client ID in the CONNECT packet will see this Client ID as the "Assigned Client Identifier" in the CONNACK packet when connecting. | +| jti | JWT ID. An identifier that uniquely identifies this JWT. Used to distinguish multiple JWTs for the same (broker, clientId) apart, and allows revocation of specific tokens. | +| topicAcl | Must be `#` (matches all topics). In the future, ACLs will allow you to express what topics the client can PUBLISH to, SUBSCRIBE to, or both. | ## Revoking Credentials @@ -82,24 +79,24 @@ To revoke a credential, which immediately invalidates it and prevents any client This will add the token to a revocation list. When using JWTs, you can revoke the JWT based on its unique `jti` claim. To revoke multiple tokens at once, provide a list of token identifiers. ```sh -$ wrangler pubsub broker revoke example-broker --namespace=NAMESPACE_NAME --jti=JTI_ONE --jti=JTI_TWO +wrangler pubsub broker revoke example-broker --namespace=NAMESPACE_NAME --jti=JTI_ONE --jti=JTI_TWO ``` You can also list all currently revoked tokens by using `wrangler pubsub broker show-revocations [...]` or by making a GET request to the `/revocations` endpoint. -You can *unrevoke* a token by using `wrangler pubsub broker unrevoke [...]` or by issuing a DELETE request to the `/revocations` endpoint with the `jti` as a query parameter. +You can _unrevoke_ a token by using `wrangler pubsub broker unrevoke [...]` or by issuing a DELETE request to the `/revocations` endpoint with the `jti` as a query parameter. ## Credential Lifetime and Expiration Credentials can be set to expire at a Broker-level that applies to all credentials, and/or at a per-credential level. -* By default, credentials do not expire, in order to simplify credential management. -* Credentials will inherit the shortest of the expirations set, if both the Broker and the issued credential have an expiration set. +- By default, credentials do not expire, in order to simplify credential management. 
+- Credentials will inherit the shortest of the expirations set, if both the Broker and the issued credential have an expiration set.

To set an expiry for each set of credentials issued, set the `expiration` value when requesting credentials. In this case, we specify 1 day (`1d`):

```sh
-$ wrangler pubsub broker issue example-broker --namespace=NAMESPACE_NAME --expiration=1d
+wrangler pubsub broker issue example-broker --namespace=NAMESPACE_NAME --expiration=1d
```

This will return a token that expires 1 day (24 hours) from issuance:

@@ -114,28 +111,26 @@ This will return a token that expires 1 day (24 hours) from issuance:

To set a Broker-level global expiration on an existing Pub/Sub Broker, set the `expiration` field on the Broker to the number of seconds any credentials issued should inherit:

```sh
-$ wrangler pubsub broker update YOUR_BROKER --namespace=NAMESPACE_NAME --expiration=7d
+wrangler pubsub broker update YOUR_BROKER --namespace=NAMESPACE_NAME --expiration=7d
```

-This will cause any token issued by the Broker to have a default expiration of 7 days. You can make this *shorter* by passing the `--expiration` flag to `wrangler pubsub broker issue [...]`. For example:
+This will cause any token issued by the Broker to have a default expiration of 7 days. You can make this _shorter_ by passing the `--expiration` flag to `wrangler pubsub broker issue [...]`. For example:

-* If you set a longer `--expiration` than the Broker itself has, the Broker's expiration will be used instead (shortest wins).
-* Using `wrangler pubsub broker issue [...] --expiration -1` will remove the `exp` claim from the token - essentially returning a non-expiring token - even if a Broker-level expiration has been set.
+- If you set a longer `--expiration` than the Broker itself has, the Broker's expiration will be used instead (shortest wins).
+- Using `wrangler pubsub broker issue [...] --expiration -1` will remove the `exp` claim from the token - essentially returning a non-expiring token - even if a Broker-level expiration has been set.

### Best Practices

-* We strongly recommend setting a per-broker expiration configuration via the **expiration** (integer seconds) field, which will implicitly set an expiration timestamp for all credentials generated for that broker via the `exp` JWT claim.
-* Using short-lived credentials – for example, 7 to 30 days – with an automatic rotation policy can reduce the risk of credential compromise and the need to actively revoke credentials after-the-fact.
-* You can use Pub/Sub itself to issue fresh credentials to clients using [Cron Triggers](/workers/configuration/cron-triggers/) or a separate HTTP endpoint that clients can use to refresh their local token store.
+- We strongly recommend setting a per-broker expiration configuration via the **expiration** (integer seconds) field, which will implicitly set an expiration timestamp for all credentials generated for that broker via the `exp` JWT claim.
+- Using short-lived credentials – for example, 7 to 30 days – with an automatic rotation policy can reduce the risk of credential compromise and the need to actively revoke credentials after-the-fact.
+- You can use Pub/Sub itself to issue fresh credentials to clients using [Cron Triggers](/workers/configuration/cron-triggers/) or a separate HTTP endpoint that clients can use to refresh their local token store.

## Authorization and Access Control

:::note

-
Pub/Sub currently supports `#` (all topics) as an ACL. Finer-grained ACL support is on the roadmap.
- ::: In order to limit what topics a client can PUBLISH or SUBSCRIBE to, you can define an ACL (Access Control List). Topic ACLs are defined in the signed credentials issued to a client and determined when the client connects. diff --git a/src/content/docs/pulumi/installing.mdx b/src/content/docs/pulumi/installing.mdx index 5c1cd55cd42fa6..5c0723b2543682 100644 --- a/src/content/docs/pulumi/installing.mdx +++ b/src/content/docs/pulumi/installing.mdx @@ -3,25 +3,20 @@ title: Get started pcx_content_type: how-to sidebar: order: 1 - --- Follow the recommended steps for your operating system below. For official instructions on installing Pulumi and other install options, refer to [Install Pulumi](https://www.pulumi.com/docs/install/). :::note - Pulumi is free, open source, and optionally pairs with the [Pulumi Cloud](https://www.pulumi.com/product/pulumi-cloud/) to make managing infrastructure secure, reliable, and hassle-free. - ::: :::caution - To avoid resource management conflicts, it’s **always** recommended to manage Pulumi-controlled resources via Pulumi. - ::: ## Installation @@ -31,7 +26,7 @@ To avoid resource management conflicts, it’s **always** recommended to manage Install via Homebrew package manager. ```sh -$ brew install pulumi/tap/pulumi +brew install pulumi/tap/pulumi ``` ### Linux @@ -39,7 +34,7 @@ $ brew install pulumi/tap/pulumi Use the installation script. ```sh -$ curl -fsSL https://get.pulumi.com | sh +curl -fsSL https://get.pulumi.com | sh ``` ### Windows @@ -52,15 +47,13 @@ $ curl -fsSL https://get.pulumi.com | sh To verify your installation, run the following in the terminal: ```sh -$ pulumi version +pulumi version ``` :::note[Note] - For upgrades and installation alternatives, refer to [Install Pulumi](https://www.pulumi.com/docs/install/). - ::: ## Next steps diff --git a/src/content/docs/pulumi/tutorial/add-site.mdx b/src/content/docs/pulumi/tutorial/add-site.mdx index 36f8af4e7b6488..187262dfd6756d 100644 --- a/src/content/docs/pulumi/tutorial/add-site.mdx +++ b/src/content/docs/pulumi/tutorial/add-site.mdx @@ -11,44 +11,35 @@ sidebar: head: - tag: title content: Add a site - --- - - -import { TabItem, Tabs } from "~/components" +import { TabItem, Tabs } from "~/components"; In this tutorial, you will go through step-by-step instructions to bring an existing site to Cloudflare using Pulumi Infrastructure as Code (IaC) so that you can become familiar with the resource management lifecycle. In particular, you will create a Zone and a DNS record to resolve your newly added site. This tutorial adopts the IaC principle to complete the steps listed in the [Add site tutorial](/fundamentals/setup/manage-domains/add-site/). :::note -You will provision resources that qualify under free tier offerings for both Pulumi Cloud and Cloudflare. +You will provision resources that qualify under free tier offerings for both Pulumi Cloud and Cloudflare. ::: - - - Ensure you have: -* A Cloudflare account and API Token with permission to edit the resources in this tutorial. If you need to, sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) before continuing. Your token must have: - * `Zone-Zone-Edit` permission - * `Zone-DNS-Edit` permission - * `include-All zones from an account-` zone resource -* A Pulumi Cloud account. You can sign up for an [always-free, individual tier](https://app.pulumi.com/signup). -* The [Pulumi CLI](/pulumi/installing/) is installed on your machine. 
-* A [Pulumi-supported programming language](https://github.com/pulumi/pulumi?tab=readme-ov-file#languages) is configured. (TypeScript, JavaScript, Python, Go, .NET, Java, or use YAML) -* A domain name. You may use `example.com` to complete the tutorial. - - - +- A Cloudflare account and API Token with permission to edit the resources in this tutorial. If you need to, sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) before continuing. Your token must have: + - `Zone-Zone-Edit` permission + - `Zone-DNS-Edit` permission + - `include-All zones from an account-` zone resource +- A Pulumi Cloud account. You can sign up for an [always-free, individual tier](https://app.pulumi.com/signup). +- The [Pulumi CLI](/pulumi/installing/) is installed on your machine. +- A [Pulumi-supported programming language](https://github.com/pulumi/pulumi?tab=readme-ov-file#languages) is configured. (TypeScript, JavaScript, Python, Go, .NET, Java, or use YAML) +- A domain name. You may use `example.com` to complete the tutorial. ### a. Create a directory Use a new and empty directory for this tutorial. ```sh -$ mkdir addsite-cloudflare -$ cd addsite-cloudflare +mkdir addsite-cloudflare +cd addsite-cloudflare ``` ### b. Login @@ -56,14 +47,14 @@ $ cd addsite-cloudflare At the prompt, press Enter to log into your Pulumi Cloud account via the browser. Alternatively, you may provide a [Pulumi Cloud access token](https://www.pulumi.com/docs/pulumi-cloud/access-management/access-tokens/). ```sh -$ pulumi login +pulumi login ``` ### c. Create a program :::note -A Pulumi program is code written in a [supported programming language](https://github.com/pulumi/pulumi?tab=readme-ov-file#languages) that defines infrastructure resources. +A Pulumi program is code written in a [supported programming language](https://github.com/pulumi/pulumi?tab=readme-ov-file#languages) that defines infrastructure resources. ::: To create a program, select your language of choice and run the `pulumi` command: @@ -71,49 +62,49 @@ To create a program, select your language of choice and run the `pulumi` command ```sh -$ pulumi new javascript --name addsite-cloudflare --yes +pulumi new javascript --name addsite-cloudflare --yes # wait a few seconds while the project is initialized ``` ```sh -$ pulumi new typescript --name addsite-cloudflare --yes +pulumi new typescript --name addsite-cloudflare --yes # wait a few seconds while the project is initialized ``` ```sh -$ pulumi new python --name addsite-cloudflare --yes +pulumi new python --name addsite-cloudflare --yes # wait a few seconds while the project is initialized ``` ```sh -$ pulumi new go --name addsite-cloudflare --yes +pulumi new go --name addsite-cloudflare --yes # wait a few seconds while the project is initialized ``` ```sh -$ pulumi new java --name addsite-cloudflare --yes +pulumi new java --name addsite-cloudflare --yes # wait a few seconds while the project is initialized ``` ```sh -$ pulumi new csharp --name addsite-cloudflare --yes +pulumi new csharp --name addsite-cloudflare --yes # wait a few seconds while the project is initialized ``` ```sh -$ pulumi new yaml --name addsite-cloudflare --yes +pulumi new yaml --name addsite-cloudflare --yes ``` @@ -122,33 +113,33 @@ $ pulumi new yaml --name addsite-cloudflare --yes You will need: -* Your Cloudflare [account ID](/fundamentals/setup/find-account-and-zone-ids/). -* A valid Cloudflare API [token](/fundamentals/api/get-started/create-token/). -* A domain. For instance, `example.com`. 
+- Your Cloudflare [account ID](/fundamentals/setup/find-account-and-zone-ids/). +- A valid Cloudflare API [token](/fundamentals/api/get-started/create-token/). +- A domain. For instance, `example.com`. :::note -A Pulumi [ESC Environment](https://www.pulumi.com/docs/esc/environments/) is a YAML file containing configurations and secrets that pertain to your application and infrastructure. These can be accessed in several ways, including a Pulumi program. All ESC Environments reside in your Pulumi Cloud account. +A Pulumi [ESC Environment](https://www.pulumi.com/docs/esc/environments/) is a YAML file containing configurations and secrets that pertain to your application and infrastructure. These can be accessed in several ways, including a Pulumi program. All ESC Environments reside in your Pulumi Cloud account. ::: ```sh # Define an ESC Environment name -$ E=my-dev-env +E=my-dev-env # Create a new Pulumi ESC Environment -$ pulumi config env init --env $E --yes --stack dev +pulumi config env init --env $E --yes --stack dev # Replace API_TOKEN with your Cloudflare API Token -$ pulumi env set $E --secret pulumiConfig.cloudflare:apiToken API_TOKEN +pulumi env set $E --secret pulumiConfig.cloudflare:apiToken API_TOKEN # Replace abc123 with your Cloudflare Account ID -$ pulumi env set $E --plaintext pulumiConfig.accountId abc123 +pulumi env set $E --plaintext pulumiConfig.accountId abc123 # Replace example.com with your registered domain, or leave as is -$ pulumi env set $E --plaintext pulumiConfig.domain example.com +pulumi env set $E --plaintext pulumiConfig.domain example.com # Review your ESC Environment -$ pulumi env open $E +pulumi env open $E { "pulumiConfig": { "accountId": "111222333", @@ -162,22 +153,21 @@ $ pulumi env open $E :::note -A Pulumi [stack](https://www.pulumi.com/docs/concepts/stack/) is an instance of a Pulumi program. Stacks are independently configurable and may represent different environments (development, staging, production) or feature branches. For this tutorial, you'll use the `dev` stack. +A Pulumi [stack](https://www.pulumi.com/docs/concepts/stack/) is an instance of a Pulumi program. Stacks are independently configurable and may represent different environments (development, staging, production) or feature branches. For this tutorial, you'll use the `dev` stack. ::: To instantiate your `dev` stack, run: ```sh -$ pulumi up --yes --stack dev +pulumi up --yes --stack dev # wait a few seconds for the stack to be instantiated. ``` At this point, you have not defined any resources so you'll have an empty stack. +:::note - :::note - -A domain, or site, is known as a Zone in Cloudflare. +A domain, or site, is known as a Zone in Cloudflare. ::: You will now add the Pulumi Cloudflare package and a Cloudflare Zone resource to your Pulumi program. @@ -187,23 +177,32 @@ You will now add the Pulumi Cloudflare package and a Cloudflare Zone resource to ```sh -$ npm install @pulumi/cloudflare +npm install @pulumi/cloudflare +``` + +```sh output added 1 package ... ``` ```sh -$ npm install @pulumi/cloudflare +npm install @pulumi/cloudflare +``` + +```sh output added 1 package ... ``` ```sh -$ echo "pulumi_cloudflare>=5.35,<6.0.0" >> requirements.txt -$ source venv/bin/activate -$ pip install -r requirements.txt +echo "pulumi_cloudflare>=5.35,<6.0.0" >> requirements.txt +source venv/bin/activate +pip install -r requirements.txt +``` + +```sh output ... Collecting pulumi-cloudflare ... 
@@ -212,7 +211,10 @@ Collecting pulumi-cloudflare ```sh -$ go get github.com/pulumi/pulumi-cloudflare/sdk/v3/go/cloudflare +go get github.com/pulumi/pulumi-cloudflare/sdk/v3/go/cloudflare +``` + +```sh output go: downloading github.com/pulumi/pulumi-cloudflare ... ``` @@ -234,7 +236,10 @@ Below are Apache Maven instructions. For other Java project managers such as Gra 1. Run: ```sh -$ mvn clean install +mvn clean install +``` + +```sh output ... [INFO] BUILD SUCCESS ... @@ -243,7 +248,10 @@ $ mvn clean install ```sh -$ dotnet add package Pulumi.Cloudflare +dotnet add package Pulumi.Cloudflare +``` + +```sh output ... info : Adding PackageReference for package 'Pulumi.Cloudflare' into project ... @@ -272,10 +280,10 @@ const domain = config.require("domain"); // Create a Cloudflare resource (Zone) const zone = new cloudflare.Zone("my-zone", { - zone: domain, - accountId: accountId, - plan: "free", - jumpStart: true, + zone: domain, + accountId: accountId, + plan: "free", + jumpStart: true, }); // Export the zone ID @@ -290,14 +298,14 @@ import * as cloudflare from "@pulumi/cloudflare"; const config = new pulumi.Config(); const accountId = config.require("accountId"); -const domain = config.require("domain") +const domain = config.require("domain"); // Create a Cloudflare resource (Zone) const zone = new cloudflare.Zone("my-zone", { - zone: domain, - accountId: accountId, - plan: "free", - jumpStart: true, + zone: domain, + accountId: accountId, + plan: "free", + jumpStart: true, }); // Export the zone ID @@ -449,7 +457,7 @@ outputs: ### c. Apply the changes ```sh -$ pulumi up --yes --stack dev +pulumi up --yes --stack dev # wait a few seconds while the changes take effect ``` @@ -458,12 +466,12 @@ $ pulumi up --yes --stack dev Review the value of `zoneId` to confirm the Zone creation. ```sh -$ pulumi stack output zoneId -d8fcb6d731fe1c2d75e2e8d6ad63fad5 +pulumi stack output zoneId ``` - - +```sh output +d8fcb6d731fe1c2d75e2e8d6ad63fad5 +``` Once you have added a domain to Cloudflare, that domain will receive two assigned authoritative nameservers. @@ -471,7 +479,7 @@ Once you have added a domain to Cloudflare, that domain will receive two assigne This process makes Cloudflare your authoritative DNS provider, allowing your DNS queries and web traffic to be served from and protected by the Cloudflare network. -[Learn more about pending domains](/dns/zone-setups/reference/domain-status/) +[Learn more about pending domains](/dns/zone-setups/reference/domain-status/) ::: ### a. Update the program @@ -542,7 +550,7 @@ status: ${exampleZone.status} ### b. Apply the changes ```sh -$ pulumi up --yes --stack dev +pulumi up --yes --stack dev ``` ### c. Obtain the nameservers @@ -550,21 +558,21 @@ $ pulumi up --yes --stack dev Review the value of `nameservers` to retrieve the assigned nameservers: ```sh -$ pulumi stack output --stack dev +pulumi stack output --stack dev ``` ### d. Update your registrar :::note -If you use `example.com` as your site, skip ahead to [Add a DNS record](#add-a-dns-record). +If you use `example.com` as your site, skip ahead to [Add a DNS record](#add-a-dns-record). ::: Update the nameservers at your registrar to activate Cloudflare services for your domain. Instructions are registrar-specific. You may be able to find guidance under [this consolidated list of common registrars](/dns/zone-setups/full-setup/setup/#update-your-registrar). :::caution -Registrars take up to 24 hours to process nameserver changes. +Registrars take up to 24 hours to process nameserver changes. 
::: ### e. Check your domain status @@ -572,12 +580,9 @@ Registrars take up to 24 hours to process nameserver changes. Once successfully registered, your domain `status` will change to `active`. ```sh -$ pulumi stack output +pulumi stack output ``` - - - You will now add a DNS record to your domain. ### a. Modify your program @@ -598,10 +603,10 @@ const domain = config.require("domain"); // Create a Cloudflare resource (Zone) const zone = new cloudflare.Zone("my-zone", { - zone: domain, - accountId: accountId, - plan: "free", - jumpStart: true, + zone: domain, + accountId: accountId, + plan: "free", + jumpStart: true, }); // Export the zone ID @@ -610,11 +615,11 @@ exports.nameservers = zone.nameServers; exports.status = zone.status; const record = new cloudflare.Record("my-record", { - zoneId: zone.id, - name: domain, - value: "192.0.2.1", - type: "A", - proxied: true, + zoneId: zone.id, + name: domain, + value: "192.0.2.1", + type: "A", + proxied: true, }); ``` @@ -626,14 +631,14 @@ import * as cloudflare from "@pulumi/cloudflare"; const config = new pulumi.Config(); const accountId = config.require("accountId"); -const domain = config.require("domain") +const domain = config.require("domain"); // Create a Cloudflare resource (Zone) const zone = new cloudflare.Zone("my-zone", { - zone: domain, - accountId: accountId, - plan: "free", // Choose the desired plan, e.g., "free", "pro", "business", etc. - jumpStart: true, + zone: domain, + accountId: accountId, + plan: "free", // Choose the desired plan, e.g., "free", "pro", "business", etc. + jumpStart: true, }); // Export the zone ID @@ -647,11 +652,11 @@ export const status = zone.status; // Set up a Record for your site const record = new cloudflare.Record("my-record", { - zoneId: zoneId, - name: domain, - value: "192.0.2.1", - type: "A", - proxied: true, + zoneId: zoneId, + name: domain, + value: "192.0.2.1", + type: "A", + proxied: true, }); ``` @@ -861,50 +866,39 @@ outputs: ### b. Apply the changes ```sh -$ pulumi up --yes --stack dev +pulumi up --yes --stack dev ``` - - - You will run two `nslookup` commands against the Cloudflare-assigned nameservers. To test your site, run: ```sh -$ DOMAIN=$(pulumi config get domain) -$ NS1=$(pulumi stack output nameservers | jq '.[0]' -r) -$ NS2=$(pulumi stack output nameservers | jq '.[1]' -r) -$ nslookup $DOMAIN $NS1 -$ nslookup $DOMAIN $NS2 +DOMAIN=$(pulumi config get domain) +NS1=$(pulumi stack output nameservers | jq '.[0]' -r) +NS2=$(pulumi stack output nameservers | jq '.[1]' -r) +nslookup $DOMAIN $NS1 +nslookup $DOMAIN $NS2 ``` For .NET use `Nameservers` as the Output. Confirm your response returns the IP address(es) for your site. - - - In this last step, you will remove the resources and stack used throughout the tutorial. ### a. Delete the resources ```sh -$ pulumi destroy --yes +pulumi destroy --yes ``` ### b. Remove the stack ```sh -$ pulumi stack rm dev +pulumi stack rm dev ``` - - - You have incrementally defined Cloudflare resources needed to add a site to Cloudflare. After each new resource, you apply the changes to your `dev` stack via the `pulumi up` command. You declare the resources in your programming language of choice and let Pulumi handle the rest. Follow the [Hello World tutorial](/pulumi/tutorial/hello-world/) to deploy a serverless app with Pulumi. 
- - diff --git a/src/content/docs/pulumi/tutorial/hello-world.mdx b/src/content/docs/pulumi/tutorial/hello-world.mdx index 42c47b1395efd9..365950b2f938c5 100644 --- a/src/content/docs/pulumi/tutorial/hello-world.mdx +++ b/src/content/docs/pulumi/tutorial/hello-world.mdx @@ -13,60 +13,45 @@ sidebar: head: - tag: title content: Deploy a Hello World app - --- - - In this tutorial, you will go through step-by-step instructions to deploy a Hello World web application using Cloudflare Workers and Pulumi Infrastructure as Code (IaC) so that you can become familiar with the resource management lifecycle. In particular, you will create a Worker, a Route, and a DNS Record to access the application before cleaning up all the resources. -![alt\_text](~/assets/images/pulumi/hello-world-tutorial/sn2.png "Running Cloudflare Workers application deployed with Pulumi") +![alt_text](~/assets/images/pulumi/hello-world-tutorial/sn2.png "Running Cloudflare Workers application deployed with Pulumi") :::note - You will provision resources that qualify under free tier offerings for both Pulumi Cloud and Cloudflare. - ::: - - - - Ensure you have: -* A Cloudflare account and API Token with permission to edit the resources in this tutorial. If you need to, sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) before continuing. -* A Pulumi Cloud account. You can sign up for an [always-free, individual tier](https://app.pulumi.com/signup). -* [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) and the [Pulumi CLI](/pulumi/installing/) installed on your machine. -* A Cloudflare Zone. Complete the [Add a Site tutorial](/pulumi/tutorial/add-site/) to create one. - - +- A Cloudflare account and API Token with permission to edit the resources in this tutorial. If you need to, sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) before continuing. +- A Pulumi Cloud account. You can sign up for an [always-free, individual tier](https://app.pulumi.com/signup). +- [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) and the [Pulumi CLI](/pulumi/installing/) installed on your machine. +- A Cloudflare Zone. Complete the [Add a Site tutorial](/pulumi/tutorial/add-site/) to create one. :::note[Link to the full solution] - You can find the complete solution of this tutorial under [this Pulumi repo and branch](https://github.com/pulumi/tutorials/tree/cloudflare-typescript-hello-world-end). To deploy the final version, run the following: ```sh -$ mkdir serverless-cloudflare && cd serverless-cloudflare -$ pulumi new https://github.com/pulumi/tutorials/tree/cloudflare-typescript-hello-world-end -$ npm install -$ pulumi up --yes +mkdir serverless-cloudflare && cd serverless-cloudflare +pulumi new https://github.com/pulumi/tutorials/tree/cloudflare-typescript-hello-world-end +npm install +pulumi up --yes ``` - ::: - - ### a. Create a directory You'll use a new and empty directory for this tutorial. ```sh -$ mkdir serverless-cloudflare -$ cd serverless-cloudflare +mkdir serverless-cloudflare +cd serverless-cloudflare ``` ### b. Login @@ -74,60 +59,54 @@ $ cd serverless-cloudflare At the prompt, press Enter to log into your Pulumi Cloud account via the browser. Alternatively, you may provide a [Pulumi Cloud access token](https://www.pulumi.com/docs/pulumi-cloud/access-management/access-tokens/). ```sh -$ pulumi login +pulumi login ``` ### c. 
Create a program :::note - A Pulumi program is code written in a [supported programming language](https://www.pulumi.com/docs/languages-sdks/) that defines infrastructure resources. We'll use TypeScript. - ::: To create a program, run: ```sh -$ pulumi new https://github.com/pulumi/tutorials/tree/cloudflare-typescript-hello-world-begin +pulumi new https://github.com/pulumi/tutorials/tree/cloudflare-typescript-hello-world-begin ``` Complete the prompts with defaults where available; otherwise, provide the requested information. You will need: -* Your Cloudflare [account ID](/fundamentals/setup/find-account-and-zone-ids/). -* Your Cloudflare [Zone ID](/fundamentals/setup/find-account-and-zone-ids/). -* A registered domain. For instance, `example.com` -* A valid Cloudflare API [token](/fundamentals/api/get-started/create-token/). +- Your Cloudflare [account ID](/fundamentals/setup/find-account-and-zone-ids/). +- Your Cloudflare [Zone ID](/fundamentals/setup/find-account-and-zone-ids/). +- A registered domain. For instance, `example.com` +- A valid Cloudflare API [token](/fundamentals/api/get-started/create-token/). ### d. Create a stack :::note - A Pulumi stack is an instance of a Pulumi program. Stacks are independently configurable and may represent different environments (development, staging, production) or feature branches. - ::: To create a stack, run: ```sh -$ pulumi up --yes +pulumi up --yes ``` After the above command completes, review the value of `myFirstOutput` for correctness. ### e. (Optional) Review the stack -From the output above, follow **your** *View in Browser* link to get familiar with the Pulumi stack. +From the output above, follow **your** _View in Browser_ link to get familiar with the Pulumi stack. :::note - You have not yet created any Cloudflare resources but have defined a variable, `myFirstOutput`, and the Pulumi stack. - ::: Example: @@ -137,11 +116,7 @@ View in Browser (Ctrl+O): https://app.pulumi.com/diana-pulumi-corp/serverless-cloudflare/dev/updates/1 ``` -![alt\_text](~/assets/images/pulumi/hello-world-tutorial/sn3.png "Pulumi Cloud stack") - - - - +![alt_text](~/assets/images/pulumi/hello-world-tutorial/sn3.png "Pulumi Cloud stack") You will now add a Cloudflare Worker to the Pulumi stack, `dev`. @@ -159,23 +134,23 @@ const accountId = config.require("accountId"); // A Worker script to invoke export const script = new cloudflare.WorkerScript("hello-world-script", { - accountId: accountId, - name: "hello-world", - // Read the content of the worker from a file - content: fs.readFileSync("./app/worker.ts", "utf8"), + accountId: accountId, + name: "hello-world", + // Read the content of the worker from a file + content: fs.readFileSync("./app/worker.ts", "utf8"), }); ``` ### b. Install dependencies ```sh -$ npm install @pulumi/cloudflare +npm install @pulumi/cloudflare ``` ### c. Apply the changes ```sh -$ pulumi up --yes +pulumi up --yes ``` ### d. (Optional) View the Cloudflare Dashboard @@ -186,11 +161,7 @@ You can view your Cloudflare resource directly in the Cloudflare Dashboard to va 2. Select your account. 3. Go to **Workers & Pages**. 4. Open the "hello-world" application. Example: - ![alt\_text](~/assets/images/pulumi/hello-world-tutorial/sn4.png) - - - - + ![alt_text](~/assets/images/pulumi/hello-world-tutorial/sn4.png) You will now add a Worker Route to the Pulumi stack, `dev` so the script can have an endpoint. 
@@ -206,28 +177,28 @@ import * as fs from "fs"; const config = new pulumi.Config(); const accountId = config.require("accountId"); const zoneId = config.require("zoneId"); -const domain = config.require("domain") +const domain = config.require("domain"); // A Worker script to invoke export const script = new cloudflare.WorkerScript("hello-world-script", { - accountId: accountId, - name: "hello-world", - // Read the content of the worker from a file - content: fs.readFileSync("./app/worker.ts", "utf8"), + accountId: accountId, + name: "hello-world", + // Read the content of the worker from a file + content: fs.readFileSync("./app/worker.ts", "utf8"), }); // A Worker route to serve requests and the Worker script export const route = new cloudflare.WorkerRoute("hello-world-route", { - zoneId: zoneId, - pattern: "hello-world." + domain, - scriptName: script.name, + zoneId: zoneId, + pattern: "hello-world." + domain, + scriptName: script.name, }); ``` ### b. Apply changes ```sh -$ pulumi up --yes +pulumi up --yes ``` ### c. (Optional) View the Cloudflare Worker route in the dashboard @@ -239,11 +210,7 @@ In the Cloudflare Dashboard, the Worker application now contains the previously 3. Go to **Workers & Pages**. 4. Select your application. 5. For **Routes**, select **View** to verify the Worker Route details match your definition. - ![alt\_text](~/assets/images/pulumi/hello-world-tutorial/sn5.png "Cloudflare Dashboard - Worker Route") - - - - + ![alt_text](~/assets/images/pulumi/hello-world-tutorial/sn5.png "Cloudflare Dashboard - Worker Route") You will now add a DNS record to your domain so the previously configured route can be accessed via a URL. @@ -259,47 +226,45 @@ import * as fs from "fs"; const config = new pulumi.Config(); const accountId = config.require("accountId"); const zoneId = config.require("zoneId"); -const domain = config.require("domain") +const domain = config.require("domain"); // A Worker script to invoke export const script = new cloudflare.WorkerScript("hello-world-script", { - accountId: accountId, - name: "hello-world", - // Read the content of the worker from a file - content: fs.readFileSync("./app/worker.ts", "utf8"), + accountId: accountId, + name: "hello-world", + // Read the content of the worker from a file + content: fs.readFileSync("./app/worker.ts", "utf8"), }); // A Worker route to serve requests and the Worker script export const route = new cloudflare.WorkerRoute("hello-world-route", { - zoneId: zoneId, - pattern: "hello-world." + domain, - scriptName: script.name, + zoneId: zoneId, + pattern: "hello-world." + domain, + scriptName: script.name, }); // A DNS record to access the route from the domain export const record = new cloudflare.Record("hello-world-record", { - zoneId: zoneId, - name: script.name, - value: "192.0.2.1", - type: "A", - proxied: true + zoneId: zoneId, + name: script.name, + value: "192.0.2.1", + type: "A", + proxied: true, }); -export const url = route.pattern +export const url = route.pattern; ``` :::note - The last line in the code will create an output with the endpoint for the Hello World app. - ::: ### b. Apply the changes ```sh -$ pulumi up --yes +pulumi up --yes ``` ### c. (Optional) View all the resources in Pulumi Cloud @@ -308,22 +273,27 @@ $ pulumi up --yes 2. Navigate to your stack, `serverless-cloudflare/dev`. 3. Confirm all the defined resources are created and healthy. 
Example: -![alt\_text](~/assets/images/pulumi/hello-world-tutorial/sn6.png "Pulumi Cloud stack") - - - - +![alt_text](~/assets/images/pulumi/hello-world-tutorial/sn6.png "Pulumi Cloud stack") You have incrementally added all the Cloudflare resources needed to run and access your Hello World application. This was done by defining the resources in TypeScript and letting Pulumi handle the rest. You can test your application via the terminal or browser. -* In the terminal +- In the terminal ```sh -$ pulumi stack output url +pulumi stack output url +``` + +```sh output hello-world.atxyall.com -$ curl "https://$(pulumi stack output url)" +``` + +```sh +curl "https://$(pulumi stack output url)" +``` + +```sh output @@ -339,34 +309,26 @@ $ curl "https://$(pulumi stack output url)" :::note - Depending on your domain settings, you may need to use "http" instead. - ::: -* In your browser, open `hello-world.YOUR_DOMAIN.com` +- In your browser, open `hello-world.YOUR_DOMAIN.com` Example: -![alt\_text](~/assets/images/pulumi/hello-world-tutorial/sn2.png "Hello World app browser screenshot") - - - - +![alt_text](~/assets/images/pulumi/hello-world-tutorial/sn2.png "Hello World app browser screenshot") In this last step, you will run a couple of commands to clean up the resources and stack you used throughout the tutorial. ### a. Delete the Cloudflare resources ```sh -$ pulumi destroy +pulumi destroy ``` ### b. Remove the Pulumi stack ```sh -$ pulumi stack rm dev +pulumi stack rm dev ``` - - diff --git a/src/content/docs/queues/configuration/batching-retries.mdx b/src/content/docs/queues/configuration/batching-retries.mdx index 3d36984bb6d4b6..0497d8890af115 100644 --- a/src/content/docs/queues/configuration/batching-retries.mdx +++ b/src/content/docs/queues/configuration/batching-retries.mdx @@ -3,7 +3,6 @@ title: Batching, Retries and Delays pcx_content_type: concept sidebar: order: 2 - --- ## Batching @@ -18,8 +17,8 @@ Batching can: There are two ways to configure how messages are batched. You configure batching when connecting your consumer Worker to a queue. -* `max_batch_size` - The maximum size of a batch delivered to a consumer (defaults to 10 messages). -* `max_batch_timeout` - the *maximum* amount of time the queue will wait before delivering a batch to a consumer (defaults to 5 seconds) +- `max_batch_size` - The maximum size of a batch delivered to a consumer (defaults to 10 messages). +- `max_batch_timeout` - the _maximum_ amount of time the queue will wait before delivering a batch to a consumer (defaults to 5 seconds) Both `max_batch_size` and `max_batch_timeout` work together. Whichever limit is reached first will trigger the delivery of a batch. @@ -27,12 +26,10 @@ For example, a `max_batch_size = 30` and a `max_batch_timeout = 10` means that i :::info[Empty queues] - When a queue is empty, a push-based (Worker) consumer's `queue` handler will not be invoked until there are messages to deliver. A queue does not attempt to push empty batches to a consumer and thus does not invoke unnecessary reads. [Pull-based consumers](/queues/configuration/pull-consumers/) that attempt to pull from a queue, even when empty, will incur a read operation. - ::: When determining what size and timeout settings to configure, you will want to consider latency (how long can you wait to receive messages?), overall batch size (when writing to external systems), and cost (fewer-but-larger batches). 
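+As a brief sketch of how these two settings fit together, both live on the consumer configuration in `wrangler.toml`. The queue name and values below are illustrative, not part of this change:
+
+```toml
+[[queues.consumers]]
+queue = "my-queue"
+# Deliver a batch once 30 messages have accumulated...
+max_batch_size = 30
+# ...or once 10 seconds have passed, whichever happens first.
+max_batch_timeout = 10
+```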
@@ -54,22 +51,20 @@ The following batch-level settings can be configured to adjust how Queues delive

You can acknowledge individual messages within a batch by explicitly acknowledging each message as it is processed. Messages that are explicitly acknowledged will not be re-delivered, even if your queue consumer fails on a subsequent message and/or fails to return successfully when processing a batch.

-* Each message can be acknowledged as you process it within a batch, and avoids the entire batch from being re-delivered if your consumer throws an error during batch processing.
-* Acknowledging individual messages is useful when you are calling external APIs, writing messages to a database, or otherwise performing non-idempotent (state changing) actions on individual messages.
+- Each message can be acknowledged as you process it within a batch, which avoids the entire batch being re-delivered if your consumer throws an error during batch processing.
+- Acknowledging individual messages is useful when you are calling external APIs, writing messages to a database, or otherwise performing non-idempotent (state changing) actions on individual messages.

To explicitly acknowledge a message as delivered, call the `ack()` method on the message.

```ts title="index.js"
export default {
-  async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
-    for (const msg of batch.messages) {
-
-      // TODO: do something with the message
-      // Explicitly acknowledge the message as delivered
-      msg.ack()
-
-    }
-  },
+	async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
+		for (const msg of batch.messages) {
+			// TODO: do something with the message
+			// Explicitly acknowledge the message as delivered
+			msg.ack();
+		}
+	},
};
```

@@ -77,14 +72,12 @@ You can also call `retry()` to explicitly force a message to be redelivered in a

```ts title="index.ts"
export default {
-  async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
-    for (const msg of batch.messages) {
-
-      // TODO: do something with the message that fails
-      msg.retry()
-
-    }
-  },
+	async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
+		for (const msg of batch.messages) {
+			// TODO: do something with the message that fails
+			msg.retry();
+		}
+	},
};
```

@@ -92,9 +85,9 @@ You can also acknowledge or negatively acknowledge messages at a batch level wit

Note that calls to `ack()`, `retry()` and their `ackAll()` / `retryAll` equivalents follow the below precedence rules (see the sketch after this list):

-* If you call `ack()` on a message, subsequent calls to `ack()` or `retry()` are silently ignored.
-* If you call `retry()` on a message and then call `ack()`: the `ack()` is ignored. The first method call wins in all cases.
-* If you call either `ack()` or `retry()` on a single message, and then either/any of `ackAll()` or `retryAll()` on the batch, the call on the single message takes precedence. That is, the batch-level call does not apply to that message (or messages, if multiple calls were made).
+- If you call `ack()` on a message, subsequent calls to `ack()` or `retry()` are silently ignored.
+- If you call `retry()` on a message and then call `ack()`: the `ack()` is ignored. The first method call wins in all cases.
+- If you call either `ack()` or `retry()` on a single message, and then either/any of `ackAll()` or `retryAll()` on the batch, the call on the single message takes precedence. That is, the batch-level call does not apply to that message (or messages, if multiple calls were made).
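+To make these precedence rules concrete, here is a minimal sketch of a consumer that combines per-message and batch-level calls. The `processMessage()` helper is hypothetical and stands in for your own logic:
+
+```ts
+// Hypothetical per-message processing logic - replace with your own.
+async function processMessage(msg: Message): Promise<void> {
+	// ... call an API, write to a database, etc.
+}
+
+export default {
+	async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
+		for (const msg of batch.messages) {
+			try {
+				await processMessage(msg);
+				// First call wins: the retryAll() below will not re-deliver this message.
+				msg.ack();
+			} catch (err) {
+				// No per-message call for failures: leave them to the batch-level call.
+			}
+		}
+		// Applies only to messages that were not explicitly acknowledged above.
+		batch.retryAll();
+	},
+};
+```
+
+Because explicitly acknowledged messages are excluded from the batch-level call, only the failed messages are re-delivered.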
## Delivery failure @@ -104,20 +97,16 @@ Messages that reach the configured maximum retries will be deleted from the queu :::note - Each retry counts as an additional read operation per [Queues pricing](/queues/platform/pricing/). - ::: When a single message within a batch fails to be delivered, the entire batch is retried, unless you have [explicitly acknowledged](#explicit-acknowledgement-and-retries) a message (or messages) within that batch. For example, if a batch of 10 messages is delivered, but the 8th message fails to be delivered, all 10 messages will be retried and thus redelivered to your consumer in full. :::caution[Retried messages and consumer concurrency] - Retrying messages with `retry()` or calling `retryAll()` on a batch will **not** cause the consumer to autoscale down if consumer concurrency is enabled. Refer to [Consumer concurrency](/queues/configuration/consumer-concurrency/) to learn more. - ::: ## Delay messages @@ -130,10 +119,8 @@ Messages can be delayed by upto 12 hours. :::note - Configuring delivery and retry delays via the `wrangler` CLI or when [developing locally](/queues/configuration/local-development/) requires `wrangler` version `3.38.0` or greater. Use `npx wrangler@latest` to always use the latest version of `wrangler`. - ::: ### Delay on send @@ -142,21 +129,21 @@ To delay a message or batch of messages when sending to a queue, you can provide ```ts // Delay a singular message by 600 seconds (10 minutes) -await env.YOUR_QUEUE.send(message, { delaySeconds: 600 }) +await env.YOUR_QUEUE.send(message, { delaySeconds: 600 }); // Delay a batch of messages by 300 seconds (5 minutes) -await env.YOUR_QUEUE.sendBatch(messages, { delaySeconds: 300 }) +await env.YOUR_QUEUE.sendBatch(messages, { delaySeconds: 300 }); // Do not delay this message. // If there is a global delay configured on the queue, ignore it. 
-await env.YOUR_QUEUE.sendBatch(messages, { delaySeconds: 0 }) +await env.YOUR_QUEUE.sendBatch(messages, { delaySeconds: 0 }); ``` You can also configure a default, global delay on a per-queue basis by passing `--delivery-delay-secs` when creating a queue via the `wrangler` CLI: ```sh # Delay all messages by 5 minutes as a default -$ npx wrangler queues create $QUEUE-NAME --delivery-delay-secs=300 +npx wrangler queues create $QUEUE-NAME --delivery-delay-secs=300 ``` ### Delay on retry @@ -167,14 +154,13 @@ To delay an individual message within a batch: ```ts title="index.ts" export default { - async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) { - for (const msg of batch.messages) { - // Mark for retry and delay a singular message - // by 3600 seconds (1 hour) - msg.retry({delaySeconds: 3600}) - - } - }, + async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) { + for (const msg of batch.messages) { + // Mark for retry and delay a singular message + // by 3600 seconds (1 hour) + msg.retry({ delaySeconds: 3600 }); + } + }, }; ``` @@ -182,11 +168,11 @@ To delay a batch of messages: ```ts title="index.ts" export default { - async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) { - // Mark for retry and delay a batch of messages - // by 600 seconds (10 minutes) - batch.retryAll({ delaySeconds: 600 }) - }, + async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) { + // Mark for retry and delay a batch of messages + // by 600 seconds (10 minutes) + batch.retryAll({ delaySeconds: 600 }); + }, }; ``` @@ -197,11 +183,11 @@ Delays can be configured via the `wrangler` CLI: ```sh # Push-based consumers # Delay any messages that are retried by 60 seconds (1 minute) by default. -$ npx wrangler@latest queues consumer worker add $QUEUE-NAME $WORKER_SCRIPT_NAME --retry-delay-secs=60 +npx wrangler@latest queues consumer worker add $QUEUE-NAME $WORKER_SCRIPT_NAME --retry-delay-secs=60 # Pull-based consumers # Delay any messages that are retried by 60 seconds (1 minute) by default. -$ npx wrangler@latest queues consumer http add $QUEUE-NAME --retry-delay-secs=60 +npx wrangler@latest queues consumer http add $QUEUE-NAME --retry-delay-secs=60 ``` Delays can also be configured in [`wrangler.toml`](/workers/wrangler/configuration/#queues) with the `delivery_delay` setting for producers (when sending) and/or the `retry_delay` (when retrying) per-consumer: @@ -225,9 +211,9 @@ Refer to the [Queues REST API documentation](/api/operations/queue-v2-list-queue Messages can be delayed by default at the queue level, or per-message (or batch). -* Per-message/batch delay settings take precedence over queue-level settings. -* Setting `delaySeconds: 0` on a message when sending or retrying will ignore any queue-level delays and cause the message to be delivered in the next batch. -* A message sent or retried with `delaySeconds: ` to a queue with a shorter default delay will still respect the message-level setting. +- Per-message/batch delay settings take precedence over queue-level settings. +- Setting `delaySeconds: 0` on a message when sending or retrying will ignore any queue-level delays and cause the message to be delivered in the next batch. +- A message sent or retried with `delaySeconds: ` to a queue with a shorter default delay will still respect the message-level setting. 
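+As an illustrative sketch of these rules from a producer Worker (the `YOUR_QUEUE` binding and the 300-second queue default are assumptions for this example, not part of this change):
+
+```ts
+// Assume the queue was created with a global default delay, for example:
+//   npx wrangler queues create my-queue --delivery-delay-secs=300
+
+// No message-level delay: the queue-level default (300 seconds) applies.
+await env.YOUR_QUEUE.send(message);
+
+// The message-level setting takes precedence: delivered after ~600 seconds.
+await env.YOUR_QUEUE.send(message, { delaySeconds: 600 });
+
+// delaySeconds: 0 ignores the queue-level delay entirely:
+// the message is delivered in the next batch.
+await env.YOUR_QUEUE.send(message, { delaySeconds: 0 });
+```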
### Apply a backoff algorithm

@@ -238,7 +224,12 @@ Each message delivered to a consumer includes an `attempts` property that tracks

For example, to generate an [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) for a message, you can create a helper function that calculates this for you:

```ts
-const calculateExponentialBackoff = (attempts: number, baseDelaySeconds: number) => { return baseDelaySeconds**attempts }
+const calculateExponentialBackoff = (
+	attempts: number,
+	baseDelaySeconds: number,
+) => {
+	return baseDelaySeconds ** attempts;
+};
```

In your consumer, you then pass the value of `msg.attempts` and your desired delay factor as the argument to `delaySeconds` when calling `retry()` on an individual message:

@@ -247,19 +238,23 @@ In your consumer, you then pass the value of `msg.attempts` and your desired del

```ts title="index.ts"
const BASE_DELAY_SECONDS = 30;

export default {
-  async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
-    for (const msg of batch.messages) {
-      // Mark for retry and delay a singular message
-      // by 3600 seconds (1 hour)
-      msg.retry({delaySeconds: calculateExponentialBackoff(msg.attempts, BASE_DELAY_SECONDS)})
-
-    }
-  },
+	async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
+		for (const msg of batch.messages) {
+			// Mark for retry with an exponentially increasing delay
+			// based on the number of delivery attempts so far
+			msg.retry({
+				delaySeconds: calculateExponentialBackoff(
+					msg.attempts,
+					BASE_DELAY_SECONDS,
+				),
+			});
+		}
+	},
};
```

## Related

-* Review the [JavaScript API](/queues/configuration/javascript-apis/) documentation for Queues.
-* Learn more about [How Queues Works](/queues/reference/how-queues-works/).
-* Understand the [metrics available](/queues/observability/metrics/) for your queues, including backlog and delayed message counts.
+- Review the [JavaScript API](/queues/configuration/javascript-apis/) documentation for Queues.
+- Learn more about [How Queues Works](/queues/reference/how-queues-works/).
+- Understand the [metrics available](/queues/observability/metrics/) for your queues, including backlog and delayed message counts.
diff --git a/src/content/docs/queues/configuration/consumer-concurrency.mdx b/src/content/docs/queues/configuration/consumer-concurrency.mdx
index 850976154bf059..6dd1cb4bd6b4eb 100644
--- a/src/content/docs/queues/configuration/consumer-concurrency.mdx
+++ b/src/content/docs/queues/configuration/consumer-concurrency.mdx
@@ -3,7 +3,6 @@ title: Consumer concurrency
 pcx_content_type: concept
 sidebar:
 order: 5
-
---

Consumer concurrency allows a [consumer Worker](/queues/reference/how-queues-works/#consumers) processing messages from a queue to automatically scale out horizontally to keep up with the rate that messages are being written to a queue.

@@ -20,18 +19,16 @@ By default, all queues have concurrency enabled. Queue consumers will automatica

After processing a batch of messages, Queues will check to see if the number of concurrent consumers should be adjusted. The number of concurrent consumers invoked for a queue will autoscale based on several factors, including:

-* The number of messages in the queue (backlog) and its rate of growth.
-* The ratio of failed (versus successful) invocations.
A failed invocation is when your `queue()` handler returns an uncaught exception instead of `void` (nothing). +- The value of `max_concurrency` set for that consumer. Where possible, Queues will optimize for keeping your backlog from growing exponentially, in order to minimize scenarios where the backlog of messages in a queue grows to the point that they would reach the [message retention limit](/queues/platform/limits/) before being processed. :::note[Consumer concurrency and retried messages] - [Retrying messages with `retry()`](/queues/configuration/batching-retries/#explicit-acknowledgement-and-retries) or calling `retryAll()` on a batch will **not** count as a failed invocation. - ::: ### Example @@ -44,18 +41,16 @@ In this scenario, Queues will notice the growing backlog and will scale the numb If your consumers are not autoscaling, there are a few likely causes: -* `max_concurrency` has been set to 1. -* Your consumer Worker is returning errors rather than processing messages. Inspect your consumer to make sure it is healthy. -* A batch of messages is being processed. Queues checks if it should autoscale consumers only after processing an entire batch of messages, so it will not autoscale while a batch is being processed. Consider reducing batch sizes or refactoring your consumer to process messages faster. +- `max_concurrency` has been set to 1. +- Your consumer Worker is returning errors rather than processing messages. Inspect your consumer to make sure it is healthy. +- A batch of messages is being processed. Queues checks if it should autoscale consumers only after processing an entire batch of messages, so it will not autoscale while a batch is being processed. Consider reducing batch sizes or refactoring your consumer to process messages faster. ## Limit concurrency :::caution[Recommended concurrency setting] - Cloudflare recommends leaving the maximum concurrency unset, which will allow your queue consumer to scale up as much as possible. Setting a fixed number means that your consumer will only ever scale up to that maximum, even as Queues increases the maximum supported invocations over time. - ::: If you have a workflow that is limited by an upstream API and/or system, you may prefer for your backlog to grow, trading off increased overall latency in order to avoid overwhelming an upstream system. @@ -83,10 +78,8 @@ Note that if you are writing messages to a queue faster than you can process the :::note - Ensure you are using the latest version of [wrangler](/workers/wrangler/install-and-update/). Support for configuring the maximum concurrency of a queue consumer is only supported in wrangler [`2.13.0`](https://github.com/cloudflare/workers-sdk/releases/tag/wrangler%402.13.0) or greater. - ::: To set a fixed maximum number of concurrent consumer invocations for a given queue, configure a `max_concurrency` in your `wrangler.toml` file: @@ -99,7 +92,8 @@ To set a fixed maximum number of concurrent consumer invocations for a given que To remove the limit, remove the `max_concurrency` setting from the `[[queues.consumers]]` configuration for a given queue and call `npx wrangler deploy` to push your configuration update. -{/* */} + +--> \*/} ## Billing When multiple consumer Workers are invoked, each Worker invocation incurs [CPU time costs](/workers/platform/pricing/#workers). -* If you intend to process all messages written to a queue, *the effective overall cost is the same*, even with concurrency enabled. 
-* Enabling concurrency simply brings those costs forward, and can help prevent messages from reaching the [message retention limit](/queues/platform/limits/). +- If you intend to process all messages written to a queue, _the effective overall cost is the same_, even with concurrency enabled. +- Enabling concurrency simply brings those costs forward, and can help prevent messages from reaching the [message retention limit](/queues/platform/limits/). Billing for consumers follows the [Workers standard usage model](/workers/platform/pricing/#example-pricing-standard-usage-model) meaning a developer is billed for the request and for CPU time used in the request. diff --git a/src/content/docs/queues/configuration/dead-letter-queues.mdx b/src/content/docs/queues/configuration/dead-letter-queues.mdx index 61a1e340559ba4..ffb5899b29ddbb 100644 --- a/src/content/docs/queues/configuration/dead-letter-queues.mdx +++ b/src/content/docs/queues/configuration/dead-letter-queues.mdx @@ -3,7 +3,6 @@ title: Dead Letter Queues pcx_content_type: concept sidebar: order: 3 - --- A Dead Letter Queue (DLQ) is a common concept in a messaging system, and represents where messages are sent when a delivery failure occurs with a consumer after `max_retries` is reached. A Dead Letter Queue is like any other queue, and can be produced to and consumed from independently. @@ -21,7 +20,7 @@ For example, the following consumer configuration would send messages to our DLQ You can also configure a DLQ when creating a consumer from the command-line using `wrangler`: ```sh -$ wrangler queues consumer add $QUEUE_NAME $SCRIPT_NAME --dead-letter-queue=$NAME_OF_OTHER_QUEUE +wrangler queues consumer add $QUEUE_NAME $SCRIPT_NAME --dead-letter-queue=$NAME_OF_OTHER_QUEUE ``` To process messages placed on your DLQ, you need to [configure a consumer](/queues/configuration/configure-queues/) for that queue as you would with any other queue. diff --git a/src/content/docs/queues/configuration/local-development.mdx b/src/content/docs/queues/configuration/local-development.mdx index c95437dd5c7b86..2f583e0329178c 100644 --- a/src/content/docs/queues/configuration/local-development.mdx +++ b/src/content/docs/queues/configuration/local-development.mdx @@ -3,7 +3,6 @@ pcx_content_type: concept title: Local Development sidebar: order: 7 - --- Queues support local development workflows using [Wrangler](/workers/wrangler/install-and-update/), the command-line interface for Workers. Wrangler runs the same version of Queues as Cloudflare runs globally. @@ -12,11 +11,11 @@ Queues support local development workflows using [Wrangler](/workers/wrangler/in To develop locally with Queues, you will need: -* [Wrangler v3.1.0](https://blog.cloudflare.com/wrangler3/) or later. +- [Wrangler v3.1.0](https://blog.cloudflare.com/wrangler3/) or later. -* Node.js version of `18.0.0` or later. Consider using a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node versions. +- Node.js version of `18.0.0` or later. Consider using a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node versions. -* If you are new to Queues and/or Cloudflare Workers, refer to the [Queues tutorial](/queues/get-started/) to install `wrangler` and deploy their first Queue. 
+- If you are new to Queues and/or Cloudflare Workers, refer to the [Queues tutorial](/queues/get-started/) to install `wrangler` and deploy their first Queue. ## Start a local development session @@ -24,13 +23,21 @@ Open your terminal and run the following commands to start a local development s ```sh # Confirm we are using wrangler v3.1.0+ -$ wrangler --version +wrangler --version +``` + +```sh output ⛅️ wrangler 3.1.0 +``` +Start a local dev session + +```sh # Start a local dev session: -$ npx wrangler dev +npx wrangler dev +``` -# Outputs: +```sh output ------------------ wrangler dev now uses local mode by default, powered by 🔥 Miniflare and 👷 workerd. To run an edge preview session for your Worker, use wrangler dev --remote @@ -38,7 +45,7 @@ To run an edge preview session for your Worker, use wrangler dev --remote [mf:inf] Ready on http://127.0.0.1:8787/ ``` -Local development sessions create a standalone, local-only environment that mirrors the production environment Queues runs in so you can test your Workers *before* you deploy to production. +Local development sessions create a standalone, local-only environment that mirrors the production environment Queues runs in so you can test your Workers _before_ you deploy to production. Refer to the [`wrangler dev` documentation](/workers/wrangler/commands/#dev) to learn more about how to configure a local development session. diff --git a/src/content/docs/queues/configuration/pull-consumers.mdx b/src/content/docs/queues/configuration/pull-consumers.mdx index 082075ce0766d0..aea9ba4978ce34 100644 --- a/src/content/docs/queues/configuration/pull-consumers.mdx +++ b/src/content/docs/queues/configuration/pull-consumers.mdx @@ -6,7 +6,6 @@ sidebar: head: - tag: title content: Cloudflare Queues - Pull consumers - --- A pull-based consumer allows you to pull from a queue over HTTP from any environment and/or programming language outside of Cloudflare Workers. A pull-based consumer can be useful when your message consumption rate is limited by upstream infrastructure or long-running tasks. @@ -15,17 +14,15 @@ A pull-based consumer allows you to pull from a queue over HTTP from any environ Deciding whether to configure a push-based consumer or a pull-based consumer will depend on how you are using your queues, as well as the configuration of infrastructure upstream from your queue consumer. -* **Starting with a [push-based consumer](/queues/reference/how-queues-works/#consumers) is the easiest way to get started and consume from a queue**. A push-based consumer runs on Workers, and by default, will automatically scale up and consume messages as they are written to the queue. -* Use a pull-based consumer if you need to consume messages from existing infrastructure outside of Cloudflare Workers, and/or where you need to carefully control how fast messages are consumed. A pull-based consumer must explicitly make a call to pull (and then acknowledge) messages from the queue, only when it is ready to do so. +- **Starting with a [push-based consumer](/queues/reference/how-queues-works/#consumers) is the easiest way to get started and consume from a queue**. A push-based consumer runs on Workers, and by default, will automatically scale up and consume messages as they are written to the queue. +- Use a pull-based consumer if you need to consume messages from existing infrastructure outside of Cloudflare Workers, and/or where you need to carefully control how fast messages are consumed. 
A pull-based consumer must explicitly make a call to pull (and then acknowledge) messages from the queue, only when it is ready to do so. You can remove and attach a new consumer on a queue at any time, allowing you to change from a pull-based to a push-based consumer if your requirements change. :::note[Retrieve an API bearer token] - To configure a pull-based consumer, create [an API token](/fundamentals/api/get-started/create-token/) with both the `queues#read` and `queues#write` permissions. A consumer must be able to write to a queue to acknowledge messages. - ::: To configure a pull-based consumer and receive messages from a queue, you need to: @@ -61,30 +58,28 @@ Omitting the `type` property will default the queue to push-based. You can enable a pull-based consumer on any existing queue by using the `wrangler queues consumer http` sub-commands and providing a queue name. ```sh -$ npx wrangler queues consumer http add $QUEUE-NAME +npx wrangler queues consumer http add $QUEUE-NAME ``` If you have an existing push-based consumer, you will need to remove that first. `wrangler` will return an error if you attempt to call `consumer http add` on a queue with an existing consumer configuration: ```sh -$ wrangler queues consumer worker remove $QUEUE-NAME $SCRIPT_NAME +wrangler queues consumer worker remove $QUEUE-NAME $SCRIPT_NAME ``` :::note - If you remove the Worker consumer with `wrangler` but do not delete the `[[queues.consumer]]` configuration from `wrangler.toml`, subsequent deployments of your Worker will fail when they attempt to add a conflicting consumer configuration. Ensure you remove the consumer configuration first. - ::: ## 2. Consumer authentication HTTP Pull consumers require an [API token](/fundamentals/api/get-started/create-token/) with the `com.cloudflare.api.account.queues_read` and `com.cloudflare.api.account.queues_write` permissions. -Both read *and* write are required as a pull-based consumer needs to write to the queue state to acknowledge the messages it receives. Consuming messages mutates the queue. +Both read _and_ write are required as a pull-based consumer needs to write to the queue state to acknowledge the messages it receives. Consuming messages mutates the queue. API tokens are presented as Bearer tokens in the `Authorization` header of a HTTP request in the format `Authorization: Bearer $YOUR_TOKEN_HERE`. The following example shows how to pass an API token using the `curl` HTTP client: @@ -118,16 +113,16 @@ To pull a message, make a HTTP POST request to the [Queues REST API](/api/operat ```ts // POST /accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull with the timeout & batch size let resp = await fetch( - `https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull`, - { - method: "POST", - headers: { - "content-type": "application/json", - authorization: `Bearer ${QUEUES_API_TOKEN}`, - }, - // Optional - you can provide an empty object '{}' and the defaults will apply. - body: JSON.stringify({ visibility_timeout: 6000, batch_size: 50 }), - } + `https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull`, + { + method: "POST", + headers: { + "content-type": "application/json", + authorization: `Bearer ${QUEUES_API_TOKEN}`, + }, + // Optional - you can provide an empty object '{}' and the defaults will apply. 
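+			// (Illustrative values: batch_size may range from 1-100; pick a
+			// visibility_timeout long enough for your consumer to process and
+			// acknowledge the whole batch before the lease expires.)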
+ body: JSON.stringify({ visibility_timeout: 6000, batch_size: 50 }), + }, ); ``` @@ -135,27 +130,27 @@ This will return an array of messages (up to the specified `batch_size`) in the ```json { - "success": true, - "errors": [], - "messages": [], - "result": { - "messages": [ - { - "body": "hello", - "id": "1ad27d24c83de78953da635dc2ea208f", - "timestamp_ms": 1689615013586, - "attempts": 2, - "lease_id": "eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2Q0JDLUhTNTEyIn0..NXmbr8h6tnKLsxJ_AuexHQ.cDt8oBb_XTSoKUkVKRD_Jshz3PFXGIyu7H1psTO5UwI.smxSvQ8Ue3-ymfkV6cHp5Va7cyUFPIHuxFJA07i17sc" - }, - { - "body": "world", - "id": "95494c37bb89ba8987af80b5966b71a7", - "timestamp_ms": 1689615013586, - "attempts": 2, - "lease_id": "eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2Q0JDLUhTNTEyIn0..QXPgHfzETsxYQ1Vd-H0hNA.mFALS3lyouNtgJmGSkTzEo_imlur95EkSiH7fIRIn2U.PlwBk14CY_EWtzYB-_5CR1k30bGuPFPUx1Nk5WIipFU" - } - ] - } + "success": true, + "errors": [], + "messages": [], + "result": { + "messages": [ + { + "body": "hello", + "id": "1ad27d24c83de78953da635dc2ea208f", + "timestamp_ms": 1689615013586, + "attempts": 2, + "lease_id": "eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2Q0JDLUhTNTEyIn0..NXmbr8h6tnKLsxJ_AuexHQ.cDt8oBb_XTSoKUkVKRD_Jshz3PFXGIyu7H1psTO5UwI.smxSvQ8Ue3-ymfkV6cHp5Va7cyUFPIHuxFJA07i17sc" + }, + { + "body": "world", + "id": "95494c37bb89ba8987af80b5966b71a7", + "timestamp_ms": 1689615013586, + "attempts": 2, + "lease_id": "eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2Q0JDLUhTNTEyIn0..QXPgHfzETsxYQ1Vd-H0hNA.mFALS3lyouNtgJmGSkTzEo_imlur95EkSiH7fIRIn2U.PlwBk14CY_EWtzYB-_5CR1k30bGuPFPUx1Nk5WIipFU" + } + ] + } } ``` @@ -163,12 +158,10 @@ Pull consumers follow a "short polling" approach: if there are messages availabl :::note - The [`pull`](/api/operations/queue-v2-messages-pull) and [`ack`](/api/operations/queue-v2-messages-ack) endpoints use the new `/queues/queue_id/messages/{action}` API format, as defined in the Queues API documentation. The undocumented `/queues/queue_id/{action}` endpoints are not supported and will be deprecated as of June 30th, 2024. - ::: Each message object has five fields: @@ -183,14 +176,14 @@ The `lease_id` allows your pull consumer to explicitly acknowledge some, none or You can configure both `batch_size` and `visibility_timeout` when pulling from a queue: -* `batch_size` (defaults to 5; max 100) - how many messages are returned to the consumer in each pull. -* `visibility_timeout` (defaults to 30 second; max 12 hours) - defines how long the consumer has to explicitly acknowledge messages delivered in the batch based on their `lease_id`. Once this timeout expires, messages are assumed unacknowledged and queued for re-delivery again. +- `batch_size` (defaults to 5; max 100) - how many messages are returned to the consumer in each pull. +- `visibility_timeout` (defaults to 30 second; max 12 hours) - defines how long the consumer has to explicitly acknowledge messages delivered in the batch based on their `lease_id`. Once this timeout expires, messages are assumed unacknowledged and queued for re-delivery again. ### Concurrent consumers You may have multiple HTTP clients pulling from the same queue concurrently: each client will receive a unique batch of messages and retain the "lease" on those messages up until the `visibility_timeout` expires, or until those messages are marked for retry. -Messages marked for retry will be put back into the queue and can be delivered to any consumer. 
Messages are *not* tied to a specific consumer, as consumers do not have an identity and to avoid a slow or stuck consumer from holding up processing of messages in a queue. +Messages marked for retry will be put back into the queue and can be delivered to any consumer. Messages are _not_ tied to a specific consumer, as consumers do not have an identity and to avoid a slow or stuck consumer from holding up processing of messages in a queue. Multiple consumers can be useful in cases where you have multiple upstream resources (for example, GPU infrastructure), where you want to autoscale based on the [backlog](/queues/observability/metrics/) of a queue, and/or cost. @@ -203,16 +196,23 @@ To acknowledge and/or mark messages to be retried, make a HTTP `POST` request to ```ts // POST /accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/ack with the lease_ids let resp = await fetch( - `https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/ack`, - { - method: "POST", - headers: { - "content-type": "application/json", - authorization: `Bearer ${QUEUES_API_TOKEN}`, - }, - // If you have no messages to retry, you can specify an empty array - retries: [] - body: JSON.stringify({ acks: [{ lease_id: "lease_id1" }, { lease_id: "lease_id2" }, { lease_id: "etc" }], retries: [{ lease_id: "lease_id4" }]}), - } + `https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/ack`, + { + method: "POST", + headers: { + "content-type": "application/json", + authorization: `Bearer ${QUEUES_API_TOKEN}`, + }, + // If you have no messages to retry, you can specify an empty array - retries: [] + body: JSON.stringify({ + acks: [ + { lease_id: "lease_id1" }, + { lease_id: "lease_id2" }, + { lease_id: "etc" }, + ], + retries: [{ lease_id: "lease_id4" }], + }), + }, ); ``` @@ -220,20 +220,24 @@ You may optionally specify the number of seconds to delay a message for when mar ```json { - acks: [{ lease_id: "lease_id1" }, { lease_id: "lease_id2" }, { lease_id: "lease_id3" }], - retries: [{ lease_id: "lease_id4", delay_seconds: 600}] + "acks": [ + { "lease_id": "lease_id1" }, + { "lease_id": "lease_id2" }, + { "lease_id": "lease_id3" } + ], + "retries": [{ "lease_id": "lease_id4", "delay_seconds": 600 }] } ``` Additionally: -* You should provide every `lease_id` in the request to the `/ack` endpoint if you are processing those messages in your consumer. If you do not acknowledge a message, it will be marked for re-delivery (put back in the queue). -* You can optionally mark messages to be retried: for example, if there is an error processing the message or you have upstream resource pressure. Explicitly marking a message for retry will place it back into the queue immediately, instead of waiting for a (potentially long) `visibility_timeout` to be reached. -* You can make multiple calls to the `/ack` endpoint as you make progress through a batch of messages, but we recommend grouping acknowledgements to avoid hitting [API rate limits](/queues/platform/limits/). +- You should provide every `lease_id` in the request to the `/ack` endpoint if you are processing those messages in your consumer. If you do not acknowledge a message, it will be marked for re-delivery (put back in the queue). +- You can optionally mark messages to be retried: for example, if there is an error processing the message or you have upstream resource pressure. 
Explicitly marking a message for retry will place it back into the queue immediately, instead of waiting for a (potentially long) `visibility_timeout` to be reached. +- You can make multiple calls to the `/ack` endpoint as you make progress through a batch of messages, but we recommend grouping acknowledgements to avoid hitting [API rate limits](/queues/platform/limits/). -Queues aims to be permissive when it comes to lease IDs: if a consumer acknowledges a message by its lease ID *after* the visibility timeout is reached, Queues will still accept that acknowledgment. If the message was delivered to another consumer during the intervening period, it will also be able to acknowledge the message without an error. +Queues aims to be permissive when it comes to lease IDs: if a consumer acknowledges a message by its lease ID _after_ the visibility timeout is reached, Queues will still accept that acknowledgment. If the message was delivered to another consumer during the intervening period, it will also be able to acknowledge the message without an error. -{/* */} +--> \*/} ## Content types :::caution - When attaching a pull-based consumer to a queue, you should ensure that messages are sent with only a `text`, `bytes` or `json` [content type](/queues/configuration/javascript-apis/#queuescontenttype). The default content type is `json`. Pull-based consumers cannot decode the `v8` content type as it is specific to the Workers runtime. - ::: When publishing to a queue that has an external consumer, you should be aware that certain content types may be encoded in a way that allows them to be safely serialized within a JSON object. @@ -280,6 +282,6 @@ Your consumer will need to decode the `json` and `bytes` types before operating ## Next steps -* Review the [REST API documentation](/api/operations/queue-v2-create-queue-consumer) and schema for Queues. -* Learn more about [how to make API calls](/fundamentals/api/how-to/make-api-calls/) to the Cloudflare API. -* Understand [what limit apply](/queues/platform/limits/) when consuming and writing to a queue. +- Review the [REST API documentation](/api/operations/queue-v2-create-queue-consumer) and schema for Queues. +- Learn more about [how to make API calls](/fundamentals/api/how-to/make-api-calls/) to the Cloudflare API. +- Understand [what limit apply](/queues/platform/limits/) when consuming and writing to a queue. diff --git a/src/content/docs/queues/examples/publish-to-a-queue-over-http.mdx b/src/content/docs/queues/examples/publish-to-a-queue-over-http.mdx index 1c67634fbac4b2..bd906445d73957 100644 --- a/src/content/docs/queues/examples/publish-to-a-queue-over-http.mdx +++ b/src/content/docs/queues/examples/publish-to-a-queue-over-http.mdx @@ -8,7 +8,6 @@ head: - tag: title content: Queues - Publish Directly via HTTP description: Publish to a Queue directly via HTTP and Workers. - --- The following example shows you how to publish messages to a queue from any HTTP client, using a shared secret to securely authenticate the client. @@ -17,8 +16,8 @@ This allows you to write to a Queue from any service or programming language tha ### Prerequisites -* A [queue created](/queues/get-started/#3-create-a-queue) via the [Cloudflare dashboard](https://dash.cloudflare.com) or the [wrangler CLI](/workers/wrangler/install-and-update/). -* A [configured **producer** binding](/queues/configuration/configure-queues/#producer) in the Cloudflare dashboard or `wrangler.toml` file. 
+- A [queue created](/queues/get-started/#3-create-a-queue) via the [Cloudflare dashboard](https://dash.cloudflare.com) or the [wrangler CLI](/workers/wrangler/install-and-update/). +- A [configured **producer** binding](/queues/configuration/configure-queues/#producer) in the Cloudflare dashboard or `wrangler.toml` file. Configure your `wrangler.toml` file as follows: @@ -37,25 +36,24 @@ Before you deploy the Worker, you need to create a [secret](/workers/configurati :::caution - Do not commit secrets to source control. You should use [`wrangler secret`](/workers/configuration/secrets/) to store API keys and authentication tokens securely. - ::: To generate a cryptographically secure secret, you can use the `openssl` command-line tool and `wrangler secret` to create a hex-encoded string that can be used as the shared secret: ```sh -$ openssl rand -hex 32 +openssl rand -hex 32 # This will output a 65 character long hex string ``` Copy this string and paste it into the prompt for `wrangler secret`: ```sh -$ npx wrangler secret put QUEUE_AUTH_SECRET +npx wrangler secret put QUEUE_AUTH_SECRET +``` -# Outputs: +```sh output ✨ Success! Uploaded secret QUEUE_AUTH_SECRET ``` @@ -79,7 +77,10 @@ export default { async fetch(req, env): Promise { // Authenticate that the client has the correct auth key if (env.QUEUE_AUTH_SECRET == "") { - return Response.json({ err: "application not configured" }, { status: 500 }); + return Response.json( + { err: "application not configured" }, + { status: 500 }, + ); } // Return a HTTP 403 (Forbidden) if the auth key is invalid/incorrect/misconfigured @@ -87,11 +88,22 @@ export default { let encoder = new TextEncoder(); // Securely compare our secret with the auth token provided by the client try { - if (!crypto.subtle.timingSafeEqual(encoder.encode(env.QUEUE_AUTH_SECRET), encoder.encode(authToken))) { - return Response.json({ err: "invalid auth token provided" }, { status: 403 }); + if ( + !crypto.subtle.timingSafeEqual( + encoder.encode(env.QUEUE_AUTH_SECRET), + encoder.encode(authToken), + ) + ) { + return Response.json( + { err: "invalid auth token provided" }, + { status: 403 }, + ); } } catch (e) { - return Response.json({ err: "invalid auth token provided" }, { status: 403 }); + return Response.json( + { err: "invalid auth token provided" }, + { status: 403 }, + ); } // Optional: Validate the payload is JSON @@ -123,7 +135,7 @@ export default { To deploy this Worker: ```sh -$ npx wrangler deploy +npx wrangler deploy ``` ### 3. Send a test message @@ -132,14 +144,16 @@ To make sure you successfully authenticate and write a message to your queue, us ```sh # Make sure to replace the placeholder with your shared secret -$ curl -H "Authorization: pasteyourkeyhere" "https://YOUR_WORKER.YOUR_ACCOUNT.workers.dev" --data '{"messages": [{"msg":"hello world"}]}' -# Outputs: +curl -H "Authorization: pasteyourkeyhere" "https://YOUR_WORKER.YOUR_ACCOUNT.workers.dev" --data '{"messages": [{"msg":"hello world"}]}' +``` + +```sh output {"success":true} ``` This will issue a HTTP POST request, and if successful, return a HTTP 200 with a `success: true` response body. -* If you receive a HTTP 403, this is because the `Authorization` header is invalid, or you did not configure a secret. -* If you receive a HTTP 500, this is either because you did not correctly create a shared secret to your Worker, or you attempted to send an invalid message to your queue. +- If you receive a HTTP 403, this is because the `Authorization` header is invalid, or you did not configure a secret. 
+- If you receive a HTTP 500, this is either because you did not correctly create a shared secret to your Worker, or you attempted to send an invalid message to your queue. You can use [`wrangler tail`](/workers/observability/logging/real-time-logs/) to debug the output of `console.log`. diff --git a/src/content/docs/queues/get-started.mdx b/src/content/docs/queues/get-started.mdx index c715ef1342df80..c10bd529175999 100644 --- a/src/content/docs/queues/get-started.mdx +++ b/src/content/docs/queues/get-started.mdx @@ -6,10 +6,9 @@ sidebar: head: - tag: title content: Get started - --- -import { Render, PackageManagers } from "~/components" +import { Render, PackageManagers } from "~/components"; Cloudflare Queues is a flexible messaging queue that allows you to queue messages for asynchronous processing. By following this guide, you will create your first queue, a Worker to publish messages to that queue, and a consumer Worker to consume messages from that queue. @@ -29,16 +28,28 @@ You will access your queue from a Worker, the producer Worker. You must create a To create a producer Worker, run: - - - + + + This will create a new directory, which will include both a `src/index.ts` Worker script, and a [`wrangler.toml`](/workers/wrangler/configuration/) configuration file. After you create your Worker, you will create a Queue to access. Move into the newly created directory: ```sh -$ cd producer-worker +cd producer-worker ``` ## 3. Create a queue @@ -48,7 +59,7 @@ To use queues, you need to create at least one queue to publish messages to and To create a queue, run: ```sh -$ npx wrangler queues create +npx wrangler queues create ``` Choose a name that is descriptive and relates to the types of messages you intend to use this queue for. Descriptive queue names look like: `debug-logs`, `user-clickstream-data`, or `password-reset-prod`. @@ -114,7 +125,7 @@ In a production application, you would likely use a [`try...catch`](https://deve With your `wrangler.toml` file and `index.ts` file configured, you are ready to publish your producer Worker. To publish your producer Worker, run: ```sh -$ npx wrangler deploy +npx wrangler deploy ``` You should see output that resembles the below, with a `*.workers.dev` URL by default. @@ -137,10 +148,8 @@ In this guide, you will create a consumer Worker and use it to log and inspect t :::note - Queues also supports [pull-based consumers](/queues/configuration/pull-consumers/), which allows any HTTP-based client to consume messages from a queue. This guide creates a push-based consumer using Cloudflare Workers. - ::: To create a consumer Worker, open your `index.ts` file and add the following `queue` handler to your existing `fetch` handler: @@ -201,7 +210,7 @@ In your consumer Worker, you are using queues to auto batch messages using the ` With your `wrangler.toml` file and `index.ts` file configured, publish your consumer Worker by running: ```sh -$ npx wrangler deploy +npx wrangler deploy ``` ## 6. Read messages from your queue @@ -211,7 +220,7 @@ After you set up consumer Worker, you can read messages from the queue. Run `wrangler tail` to start waiting for our consumer to log the messages it receives: ```sh -$ npx wrangler tail +npx wrangler tail ``` With `wrangler tail` running, open the Worker URL you opened in [step 4](/queues/get-started/#4-set-up-your-producer-worker). 
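If you prefer the command line, you can also send a message by requesting your producer Worker's URL with `curl` (a sketch; substitute the `*.workers.dev` URL printed when you deployed):

```sh
curl "https://producer-worker.YOUR_SUBDOMAIN.workers.dev/"
```

Each request publishes one message, which should then show up in the `wrangler tail` output.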
@@ -230,4 +239,4 @@ By completing this guide, you have now created a queue, a producer Worker that p ## Related resources -* Learn more about [Cloudflare Workers](/workers/) and the applications you can build on Cloudflare. +- Learn more about [Cloudflare Workers](/workers/) and the applications you can build on Cloudflare. diff --git a/src/content/docs/queues/tutorials/web-crawler-with-browser-rendering/index.mdx b/src/content/docs/queues/tutorials/web-crawler-with-browser-rendering/index.mdx index 30c639fce435aa..17fc455b100d2c 100644 --- a/src/content/docs/queues/tutorials/web-crawler-with-browser-rendering/index.mdx +++ b/src/content/docs/queues/tutorials/web-crawler-with-browser-rendering/index.mdx @@ -17,12 +17,9 @@ head: - tag: title content: Cloudflare Queues - Queues & Browser Rendering description: Example of how to use Queues and Browser Rendering to power a web crawler. - --- - - -import { Render, PackageManagers } from "~/components" +import { Render, PackageManagers } from "~/components"; This tutorial explains how to build and deploy a web crawler with Queues, [Browser Rendering](/browser-rendering/), and [Puppeteer](/browser-rendering/platform/puppeteer/). @@ -44,14 +41,26 @@ Additionally, you will need access to Queues. To get started, create a Worker application using the [`create-cloudflare` CLI](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare). Open a terminal window and run the following command: - - - + + + Then, move into your newly created directory: ```sh -$ cd queues-web-crawler +cd queues-web-crawler ``` ## 2. Create KV namespace @@ -59,13 +68,11 @@ $ cd queues-web-crawler We need to create a KV store. This can be done through the Cloudflare dashboard or the Wrangler CLI. For this tutorial, we will use the Wrangler CLI. ```sh -$ npx wrangler kv namespace create crawler_links -$ npx wrangler kv namespace create crawler_screenshots +npx wrangler kv namespace create crawler_links +npx wrangler kv namespace create crawler_screenshots ``` -After running the `wrangler kv namespace create` subcommand, you will get the following output. - -```txt title="Output" +```sh output 🌀 Creating namespace with title "web-crawler-crawler-links" ✨ Success! Add the following to your configuration file in your kv_namespaces array: @@ -99,8 +106,8 @@ Now, you need to set up your Worker for Browser Rendering. In your current directory, install Cloudflare’s [fork of Puppeteer](/browser-rendering/platform/puppeteer/) and also [robots-parser](https://www.npmjs.com/package/robots-parser): ```sh -$ npm install @cloudflare/puppeteer --save-dev -$ npm install robots-parser +npm install @cloudflare/puppeteer --save-dev +npm install robots-parser ``` Then, add a Browser Rendering binding. Adding a Browser Rendering binding gives the Worker access to a headless Chromium instance you will control with Puppeteer. @@ -114,7 +121,7 @@ browser = { binding = "CRAWLER_BROWSER" } Now, we need to set up the Queue. 
```sh -$ npx wrangler queues create queues-web-crawler +npx wrangler queues create queues-web-crawler ``` ```txt title="Output" @@ -171,10 +178,10 @@ Add the bindings to the environment interface in `src/index.ts`, so TypeScript c import { BrowserWorker } from "@cloudflare/puppeteer"; export interface Env { - CRAWLER_QUEUE: Queue; - CRAWLER_SCREENSHOTS_KV: KVNamespace; - CRAWLER_LINKS_KV: KVNamespace; - CRAWLER_BROWSER: BrowserWorker; + CRAWLER_QUEUE: Queue; + CRAWLER_SCREENSHOTS_KV: KVNamespace; + CRAWLER_LINKS_KV: KVNamespace; + CRAWLER_BROWSER: BrowserWorker; } ``` @@ -184,19 +191,19 @@ Add a `fetch()` handler to the Worker to submit links to crawl. ```ts type Message = { - url: string; + url: string; }; export interface Env { - CRAWLER_QUEUE: Queue; - // ... etc. + CRAWLER_QUEUE: Queue; + // ... etc. } export default { - async fetch(req, env): Promise { - await env.CRAWLER_QUEUE.send({ url: await req.text() }); - return new Response("Success!"); - }, + async fetch(req, env): Promise { + await env.CRAWLER_QUEUE.send({ url: await req.text() }); + return new Response("Success!"); + }, } satisfies ExportedHandler; ``` @@ -250,39 +257,38 @@ The `puppeteer.launch()` is wrapped in a `try...catch` to allow the whole batch ```ts type Result = { - numCloudflareLinks: number; - screenshot: ArrayBuffer; + numCloudflareLinks: number; + screenshot: ArrayBuffer; }; const crawlPage = async (url: string): Promise => { - const page = await (browser as puppeteer.Browser).newPage(); - - await page.goto(url, { - waitUntil: "load", - }); - - const numCloudflareLinks = await page.$$eval("a", (links) => { - links = links.filter((link) => { - try { - return new URL(link.href).hostname.includes("cloudflare.com"); - } catch { - return false; - } - }); - return links.length; - }); - - await page.setViewport({ - width: 1920, - height: 1080, - deviceScaleFactor: 1, - }); - - return { - numCloudflareLinks, - screenshot: ((await page.screenshot({ fullPage: true })) as Buffer) - .buffer, - }; + const page = await (browser as puppeteer.Browser).newPage(); + + await page.goto(url, { + waitUntil: "load", + }); + + const numCloudflareLinks = await page.$$eval("a", (links) => { + links = links.filter((link) => { + try { + return new URL(link.href).hostname.includes("cloudflare.com"); + } catch { + return false; + } + }); + return links.length; + }); + + await page.setViewport({ + width: 1920, + height: 1080, + deviceScaleFactor: 1, + }); + + return { + numCloudflareLinks, + screenshot: ((await page.screenshot({ fullPage: true })) as Buffer).buffer, + }; }; ``` @@ -296,16 +302,16 @@ To enable recursively crawling links, add a snippet after checking the number of // const numCloudflareLinks = await page.$$eval("a", (links) => { ... await page.$$eval("a", async (links) => { - const urls: MessageSendRequest[] = links.map((link) => { - return { - body: { - url: link.href, - }, - }; - }); - try { - await env.CRAWLER_QUEUE.sendBatch(urls); - } catch {} // do nothing, likely hit subrequest limit + const urls: MessageSendRequest[] = links.map((link) => { + return { + body: { + url: link.href, + }, + }; + }); + try { + await env.CRAWLER_QUEUE.sendBatch(urls); + } catch {} // do nothing, likely hit subrequest limit }); // await page.setViewport({ ... @@ -317,25 +323,23 @@ Then, in the `queue` handler, call `crawlPage` on the URL. // in the `queue` handler: // ... 
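// Acknowledge disallowed URLs instead of retrying them, so they are
// dropped rather than redelivered.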
if (!isAllowed) { - message.ack(); - continue; + message.ack(); + continue; } try { - const { numCloudflareLinks, screenshot } = await crawlPage(url); - const timestamp = new Date().getTime(); - const resultKey = `${encodeURIComponent(url)}-${timestamp}`; - await env.CRAWLER_LINKS_KV.put( - resultKey, - numCloudflareLinks.toString(), - { metadata: { date: timestamp } } - ); - await env.CRAWLER_SCREENSHOTS_KV.put(resultKey, screenshot, { - metadata: { date: timestamp }, - }); - message.ack(); + const { numCloudflareLinks, screenshot } = await crawlPage(url); + const timestamp = new Date().getTime(); + const resultKey = `${encodeURIComponent(url)}-${timestamp}`; + await env.CRAWLER_LINKS_KV.put(resultKey, numCloudflareLinks.toString(), { + metadata: { date: timestamp }, + }); + await env.CRAWLER_SCREENSHOTS_KV.put(resultKey, screenshot, { + metadata: { date: timestamp }, + }); + message.ack(); } catch { - message.retry(); + message.retry(); } // ... @@ -383,52 +387,52 @@ import puppeteer, { BrowserWorker } from "@cloudflare/puppeteer"; import robotsParser from "robots-parser"; type Message = { - url: string; + url: string; }; export interface Env { - CRAWLER_QUEUE: Queue; - CRAWLER_SCREENSHOTS_KV: KVNamespace; - CRAWLER_LINKS_KV: KVNamespace; - CRAWLER_BROWSER: BrowserWorker; + CRAWLER_QUEUE: Queue; + CRAWLER_SCREENSHOTS_KV: KVNamespace; + CRAWLER_LINKS_KV: KVNamespace; + CRAWLER_BROWSER: BrowserWorker; } type Result = { - numCloudflareLinks: number; - screenshot: ArrayBuffer; + numCloudflareLinks: number; + screenshot: ArrayBuffer; }; type KeyMetadata = { - date: number; + date: number; }; export default { - async fetch(req: Request, env: Env): Promise { - // util endpoint for testing purposes - await env.CRAWLER_QUEUE.send({ url: await req.text() }); - return new Response("Success!"); - }, - async queue(batch: MessageBatch, env: Env): Promise { - const crawlPage = async (url: string): Promise => { - const page = await (browser as puppeteer.Browser).newPage(); - - await page.goto(url, { - waitUntil: "load", - }); - - const numCloudflareLinks = await page.$$eval("a", (links) => { - links = links.filter((link) => { - try { - return new URL(link.href).hostname.includes("cloudflare.com"); - } catch { - return false; - } - }); - return links.length; - }); - - // to crawl recursively - uncomment this! - /*await page.$$eval("a", async (links) => { + async fetch(req: Request, env: Env): Promise { + // util endpoint for testing purposes + await env.CRAWLER_QUEUE.send({ url: await req.text() }); + return new Response("Success!"); + }, + async queue(batch: MessageBatch, env: Env): Promise { + const crawlPage = async (url: string): Promise => { + const page = await (browser as puppeteer.Browser).newPage(); + + await page.goto(url, { + waitUntil: "load", + }); + + const numCloudflareLinks = await page.$$eval("a", (links) => { + links = links.filter((link) => { + try { + return new URL(link.href).hostname.includes("cloudflare.com"); + } catch { + return false; + } + }); + return links.length; + }); + + // to crawl recursively - uncomment this! 
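+			// (Note: unbounded recursion can grow the queue very quickly -
+			// consider tracking crawl depth if you enable this.)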
+ /*await page.$$eval("a", async (links) => { const urls: MessageSendRequest[] = links.map((link) => { return { body: { @@ -441,81 +445,81 @@ export default { } catch {} // do nothing, might've hit subrequest limit });*/ - await page.setViewport({ - width: 1920, - height: 1080, - deviceScaleFactor: 1, - }); - - return { - numCloudflareLinks, - screenshot: ((await page.screenshot({ fullPage: true })) as Buffer) - .buffer, - }; - }; - - let browser: puppeteer.Browser | null = null; - try { - browser = await puppeteer.launch(env.CRAWLER_BROWSER); - } catch { - batch.retryAll(); - return; - } - - for (const message of batch.messages) { - const { url } = message.body; - const timestamp = new Date().getTime(); - const resultKey = `${encodeURIComponent(url)}-${timestamp}`; - - const sameUrlCrawls = await env.CRAWLER_LINKS_KV.list({ - prefix: `${encodeURIComponent(url)}`, - }); - - let shouldSkip = false; - for (const key of sameUrlCrawls.keys) { - if (timestamp - (key.metadata as KeyMetadata)?.date < 60 * 60 * 1000) { - // if crawled in last hour, skip - message.ack(); - shouldSkip = true; - break; - } - } - if (shouldSkip) { - continue; - } - - let isAllowed = true; - try { - const robotsTextPath = new URL(url).origin + "/robots.txt"; - const response = await fetch(robotsTextPath); - - const robots = robotsParser(robotsTextPath, await response.text()); - isAllowed = robots.isAllowed(url) ?? true; // respect robots.txt! - } catch {} - - if (!isAllowed) { - message.ack(); - continue; - } - - try { - const { numCloudflareLinks, screenshot } = await crawlPage(url); - await env.CRAWLER_LINKS_KV.put( - resultKey, - numCloudflareLinks.toString(), - { metadata: { date: timestamp } } - ); - await env.CRAWLER_SCREENSHOTS_KV.put(resultKey, screenshot, { - metadata: { date: timestamp }, - }); - message.ack(); - } catch { - message.retry(); - } - } - - await browser.close(); - }, + await page.setViewport({ + width: 1920, + height: 1080, + deviceScaleFactor: 1, + }); + + return { + numCloudflareLinks, + screenshot: ((await page.screenshot({ fullPage: true })) as Buffer) + .buffer, + }; + }; + + let browser: puppeteer.Browser | null = null; + try { + browser = await puppeteer.launch(env.CRAWLER_BROWSER); + } catch { + batch.retryAll(); + return; + } + + for (const message of batch.messages) { + const { url } = message.body; + const timestamp = new Date().getTime(); + const resultKey = `${encodeURIComponent(url)}-${timestamp}`; + + const sameUrlCrawls = await env.CRAWLER_LINKS_KV.list({ + prefix: `${encodeURIComponent(url)}`, + }); + + let shouldSkip = false; + for (const key of sameUrlCrawls.keys) { + if (timestamp - (key.metadata as KeyMetadata)?.date < 60 * 60 * 1000) { + // if crawled in last hour, skip + message.ack(); + shouldSkip = true; + break; + } + } + if (shouldSkip) { + continue; + } + + let isAllowed = true; + try { + const robotsTextPath = new URL(url).origin + "/robots.txt"; + const response = await fetch(robotsTextPath); + + const robots = robotsParser(robotsTextPath, await response.text()); + isAllowed = robots.isAllowed(url) ?? true; // respect robots.txt! 
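+				// If robots.txt cannot be fetched or parsed, the empty catch below
+				// leaves isAllowed as true, so the URL is crawled by default.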
+ } catch {} + + if (!isAllowed) { + message.ack(); + continue; + } + + try { + const { numCloudflareLinks, screenshot } = await crawlPage(url); + await env.CRAWLER_LINKS_KV.put( + resultKey, + numCloudflareLinks.toString(), + { metadata: { date: timestamp } }, + ); + await env.CRAWLER_SCREENSHOTS_KV.put(resultKey, screenshot, { + metadata: { date: timestamp }, + }); + message.ack(); + } catch { + message.retry(); + } + } + + await browser.close(); + }, }; ``` @@ -524,7 +528,7 @@ export default { To deploy your Worker, run the following command: ```sh -$ npx wrangler deploy +npx wrangler deploy ``` You have successfully created a Worker which can submit URLs to a queue for crawling and save results to Workers KV. @@ -542,7 +546,7 @@ Refer to the [GitHub repository for the complete tutorial](https://github.com/cl ## Related resources -* [How Queues works](/queues/reference/how-queues-works/) -* [Queues Batching and Retries](/queues/configuration/batching-retries/) -* [Browser Rendering](/browser-rendering/) -* [Puppeteer Examples](https://github.com/puppeteer/puppeteer/tree/main/examples) +- [How Queues works](/queues/reference/how-queues-works/) +- [Queues Batching and Retries](/queues/configuration/batching-retries/) +- [Browser Rendering](/browser-rendering/) +- [Puppeteer Examples](https://github.com/puppeteer/puppeteer/tree/main/examples) diff --git a/src/content/docs/r2/api/workers/workers-api-usage.mdx b/src/content/docs/r2/api/workers/workers-api-usage.mdx index 965114ddcc39d1..1374d126445d48 100644 --- a/src/content/docs/r2/api/workers/workers-api-usage.mdx +++ b/src/content/docs/r2/api/workers/workers-api-usage.mdx @@ -6,10 +6,9 @@ sidebar: head: - tag: title content: Use R2 from Workers - --- -import { Render, PackageManagers } from "~/components" +import { Render, PackageManagers } from "~/components"; ## 1. Create a new application with C3 @@ -17,14 +16,22 @@ C3 (`create-cloudflare-cli`) is a command-line tool designed to help you set up To get started, open a terminal window and run: - + - + Then, move into your newly created directory: ```sh -$ cd r2-worker +cd r2-worker ``` ## 2. Create your bucket @@ -32,13 +39,13 @@ $ cd r2-worker Create your bucket by running: ```sh -$ wrangler r2 bucket create +wrangler r2 bucket create ``` To check that your bucket was created, run: ```sh -$ wrangler r2 bucket list +wrangler r2 bucket list ``` After running the `list` command, you will see all bucket names, including the one you have just created. @@ -49,12 +56,10 @@ You will need to bind your bucket to a Worker. :::note[Bindings] - A binding is how your Worker interacts with external resources such as [KV Namespaces](/kv/concepts/kv-namespaces/), [Durable Objects](/durable-objects/), or [R2 Buckets](/r2/buckets/). A binding is a runtime variable that the Workers runtime provides to your code. You can declare a variable name in your `wrangler.toml` file that will be bound to these resources at runtime, and interact with them through this variable. Every binding's variable name and behavior is determined by you when deploying the Worker. Refer to the [Environment Variables](/workers/configuration/environment-variables/) documentation for more information. A binding is defined in the `wrangler.toml` file of your Worker project's directory. - ::: To bind your R2 bucket to your Worker, add the following to your `wrangler.toml` file. 
Update the `binding` property to a valid JavaScript variable identifier and `bucket_name` to the `` you used to create your bucket in [step 2](#2-create-your-bucket): @@ -73,63 +78,59 @@ Within your Worker code, your bucket is now available under the `MY_BUCKET` vari :::caution[Local Development mode in Wrangler] - By default `wrangler dev` runs in local development mode. In this mode, all operations performed by your local worker will operate against local storage on your machine. Use `wrangler dev --remote` if you want R2 operations made during development to be performed against a real R2 bucket. - ::: An R2 bucket is able to READ, LIST, WRITE, and DELETE objects. You can see an example of all operations below using the Module Worker syntax. Add the following snippet into your project's `index.js` file: ```js export default { - async fetch(request, env) { - const url = new URL(request.url); - const key = url.pathname.slice(1); - - switch (request.method) { - case 'PUT': - await env.MY_BUCKET.put(key, request.body); - return new Response(`Put ${key} successfully!`); - case 'GET': - const object = await env.MY_BUCKET.get(key); - - if (object === null) { - return new Response('Object Not Found', { status: 404 }); - } - - const headers = new Headers(); - object.writeHttpMetadata(headers); - headers.set('etag', object.httpEtag); - - return new Response(object.body, { - headers, - }); - case 'DELETE': - await env.MY_BUCKET.delete(key); - return new Response('Deleted!'); - - default: - return new Response('Method Not Allowed', { - status: 405, - headers: { - Allow: 'PUT, GET, DELETE', - }, - }); - } - }, + async fetch(request, env) { + const url = new URL(request.url); + const key = url.pathname.slice(1); + + switch (request.method) { + case "PUT": + await env.MY_BUCKET.put(key, request.body); + return new Response(`Put ${key} successfully!`); + case "GET": + const object = await env.MY_BUCKET.get(key); + + if (object === null) { + return new Response("Object Not Found", { status: 404 }); + } + + const headers = new Headers(); + object.writeHttpMetadata(headers); + headers.set("etag", object.httpEtag); + + return new Response(object.body, { + headers, + }); + case "DELETE": + await env.MY_BUCKET.delete(key); + return new Response("Deleted!"); + + default: + return new Response("Method Not Allowed", { + status: 405, + headers: { + Allow: "PUT, GET, DELETE", + }, + }); + } + }, }; ``` :::caution[Prevent potential errors when accessing request.body] - The body of a [Request](https://developer.mozilla.org/en-US/docs/Web/API/Request) can only be accessed once. If you previously used `request.formData()` in the same request, you may encounter a TypeError when attempting to access `request.body`.

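A minimal sketch of the clone-first workaround described next (a hypothetical handler; `MY_BUCKET` is the binding used throughout this guide):

```js
export default {
	async fetch(request, env) {
		// First read happens on a clone, so the original body stays unread.
		const form = await request.clone().formData();
		const key = form.get("key")?.toString() ?? "upload";
		// The original request.body can still be streamed to R2.
		await env.MY_BUCKET.put(key, request.body);
		return new Response(`Put ${key} successfully!`);
	},
};
```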
To avoid errors, create a clone of the Request object with `request.clone()` for each subsequent attempt to access a Request's body. Keep in mind that Workers have a [memory limit of 128MB per Worker](https://developers.cloudflare.com/workers/platform/limits#worker-limits) and loading particularly large files into a Worker's memory multiple times may reach this limit. To ensure memory usage does not reach this limit, consider using [Streams](https://developers.cloudflare.com/workers/runtime-apis/streams/). - ::: ## 5. Bucket access and privacy @@ -150,49 +151,52 @@ For `PUT` and `DELETE` requests, you will make use of a new `AUTH_KEY_SECRET` en For `GET` requests, you will ensure that only a specific file can be requested. All of this custom logic occurs inside of an `authorizeRequest` function, with the `hasValidHeader` function handling the custom header logic. If all validation passes, then the operation is allowed. ```js -const ALLOW_LIST = ['cat-pic.jpg']; +const ALLOW_LIST = ["cat-pic.jpg"]; // Check requests for a pre-shared secret const hasValidHeader = (request, env) => { - return request.headers.get('X-Custom-Auth-Key') === env.AUTH_KEY_SECRET; + return request.headers.get("X-Custom-Auth-Key") === env.AUTH_KEY_SECRET; }; function authorizeRequest(request, env, key) { - switch (request.method) { - case 'PUT': - case 'DELETE': - return hasValidHeader(request, env); - case 'GET': - return ALLOW_LIST.includes(key); - default: - return false; - } + switch (request.method) { + case "PUT": + case "DELETE": + return hasValidHeader(request, env); + case "GET": + return ALLOW_LIST.includes(key); + default: + return false; + } } export default { - async fetch(request, env, ctx) { - const url = new URL(request.url); - const key = url.pathname.slice(1); + async fetch(request, env, ctx) { + const url = new URL(request.url); + const key = url.pathname.slice(1); - if (!authorizeRequest(request, env, key)) { - return new Response('Forbidden', { status: 403 }); - } + if (!authorizeRequest(request, env, key)) { + return new Response("Forbidden", { status: 403 }); + } - // ... - } + // ... 
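		// (If authorization succeeded, handle the request here - for example,
		// the PUT/GET/DELETE switch from the section above.)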
+ }, }; ``` For this to work, you need to create a secret via Wrangler: ```sh -$ wrangler secret put AUTH_KEY_SECRET +wrangler secret put AUTH_KEY_SECRET ``` This command will prompt you to enter a secret in your terminal: ```sh -$ wrangler secret put AUTH_KEY_SECRET +wrangler secret put AUTH_KEY_SECRET +``` + +```sh output Enter the secret text you'd like assigned to the variable AUTH_KEY_SECRET on the script named : ********* 🌀 Creating the secret for script name @@ -206,7 +210,7 @@ This secret is now available as `AUTH_KEY_SECRET` on the `env` parameter in your With your Worker and bucket set up, run the `npx wrangler deploy` [command](/workers/wrangler/commands/#deploy) to deploy to Cloudflare's global network: ```sh -$ npx wrangler deploy +npx wrangler deploy ``` You can verify your authorization logic is working through the following commands, using your deployed Worker endpoint: @@ -218,27 +222,27 @@ When uploading files to R2 via `curl`, ensure you use **[`--data-binary`](https: ```sh # Attempt to write an object without providing the "X-Custom-Auth-Key" header -$ curl https://your-worker.dev/cat-pic.jpg -X PUT --data-binary 'test' +curl https://your-worker.dev/cat-pic.jpg -X PUT --data-binary 'test' #=> Forbidden # Expected because header was missing # Attempt to write an object with the wrong "X-Custom-Auth-Key" header value -$ curl https://your-worker.dev/cat-pic.jpg -X PUT --header "X-Custom-Auth-Key: hotdog" --data-binary 'test' +curl https://your-worker.dev/cat-pic.jpg -X PUT --header "X-Custom-Auth-Key: hotdog" --data-binary 'test' #=> Forbidden # Expected because header value did not match the AUTH_KEY_SECRET value # Attempt to write an object with the correct "X-Custom-Auth-Key" header value # Note: Assume that "*********" is the value of your AUTH_KEY_SECRET Wrangler secret -$ curl https://your-worker.dev/cat-pic.jpg -X PUT --header "X-Custom-Auth-Key: *********" --data-binary 'test' +curl https://your-worker.dev/cat-pic.jpg -X PUT --header "X-Custom-Auth-Key: *********" --data-binary 'test' #=> Put cat-pic.jpg successfully! # Attempt to read object called "foo" -$ curl https://your-worker.dev/foo +curl https://your-worker.dev/foo #=> Forbidden # Expected because "foo" is not in the ALLOW_LIST # Attempt to read an object called "cat-pic.jpg" -$ curl https://your-worker.dev/cat-pic.jpg +curl https://your-worker.dev/cat-pic.jpg #=> test # Note: This is the value that was successfully PUT above ``` diff --git a/src/content/docs/r2/buckets/cors.mdx b/src/content/docs/r2/buckets/cors.mdx index 0b71d4a2e6ad3f..ae6b910d48dc6c 100644 --- a/src/content/docs/r2/buckets/cors.mdx +++ b/src/content/docs/r2/buckets/cors.mdx @@ -3,7 +3,6 @@ pcx_content_type: how-to title: Configure CORS sidebar: order: 3 - --- [Cross-Origin Resource Sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is a standardized method that prevents domain X from accessing the resources of domain Y. It does so by using special headers in HTTP responses from domain Y, that allow your browser to verify that domain Y permits domain X to access these resources. @@ -20,9 +19,9 @@ CORS is used when you interact with a bucket from a web browser, and you have tw Before you configure CORS, you must have: -* An R2 bucket with at least one object. If you need to create a bucket, refer to [Create a public bucket](/r2/buckets/public-buckets/). -* A domain you can use to access the object. This can also be a `localhost`. -* (Optional) Access keys. An access key is only required when creating a presigned URL. 
+- An R2 bucket with at least one object. If you need to create a bucket, refer to [Create a public bucket](/r2/buckets/public-buckets/). +- A domain you can use to access the object. This can also be a `localhost`. +- (Optional) Access keys. An access key is only required when creating a presigned URL. ## Use CORS with a public bucket @@ -60,7 +59,7 @@ const url = await getSignedUrl( }), { expiresIn: 60 * 60 * 24 * 7, // 7d - } + }, ); console.log(url); ``` @@ -70,7 +69,7 @@ console.log(url); Test the presigned URL by uploading an object using cURL. The example below would upload the `123` text to R2 with a `Content-Type` of `text/plain`. ```sh -$ curl --request PUT --header "Content-Type: text/plain" --data "123" +curl --request PUT --header "Content-Type: text/plain" --data "123" ``` ## Add CORS policies from the dashboard @@ -104,31 +103,27 @@ The `AllowedOrigins` specify the web server being used, and `localhost:3000` is ```json [ - { - "AllowedOrigins": [ - "http://localhost:3000" - ], - "AllowedMethods": [ - "GET" - ] - } + { + "AllowedOrigins": ["http://localhost:3000"], + "AllowedMethods": ["GET"] + } ] ``` In general, a good strategy for making sure you have set the correct CORS rules is to look at the network request that is being blocked by your browser. -* Make sure the rule's `AllowedOrigins` includes the origin where the request is being made from. (like `http://localhost:3000` or `https://yourdomain.com`) -* Make sure the rule's `AllowedMethods` includes the blocked request's method. -* Make sure the rule's `AllowedHeaders` includes the blocked request's headers. +- Make sure the rule's `AllowedOrigins` includes the origin where the request is being made from. (like `http://localhost:3000` or `https://yourdomain.com`) +- Make sure the rule's `AllowedMethods` includes the blocked request's method. +- Make sure the rule's `AllowedHeaders` includes the blocked request's headers. Also note that CORS rule propagation can, in rare cases, take up to 30 seconds. ## Common Issues -* Only a cross-origin request will include CORS response headers. - * A cross-origin request is identified by the presence of an `Origin` HTTP request header, with the value of the `Origin` representing a valid, allowed origin as defined by the `AllowedOrigins` field of your CORS policy. - * A request without an `Origin` HTTP request header will *not* return any CORS response headers. Origin values must match exactly. -* The value(s) for `AllowedOrigins` in your CORS policy must be a valid [HTTP Origin header value](https://fetch.spec.whatwg.org/#origin-header). A valid `Origin` header does *not* include a path component and must only be comprised of a `scheme://host[:port]` (where port is optional). - * Valid `AllowedOrigins` value: `https://static.example.com` - includes the scheme and host. A port is optional and implied by the scheme. - * Invalid `AllowedOrigins` value: `https://static.example.com/` or `https://static.example.com/fonts/Calibri.woff2` - incorrectly includes the path component. -* If you need to access specific header values via JavaScript on the origin page, such as when using a video player, ensure you set `Access-Control-Expose-Headers` correctly and include the headers your JavaScript needs access to, such as `Content-Length`. +- Only a cross-origin request will include CORS response headers. 
+ - A cross-origin request is identified by the presence of an `Origin` HTTP request header, with the value of the `Origin` representing a valid, allowed origin as defined by the `AllowedOrigins` field of your CORS policy. + - A request without an `Origin` HTTP request header will _not_ return any CORS response headers. Origin values must match exactly. +- The value(s) for `AllowedOrigins` in your CORS policy must be a valid [HTTP Origin header value](https://fetch.spec.whatwg.org/#origin-header). A valid `Origin` header does _not_ include a path component and must only be comprised of a `scheme://host[:port]` (where port is optional). + - Valid `AllowedOrigins` value: `https://static.example.com` - includes the scheme and host. A port is optional and implied by the scheme. + - Invalid `AllowedOrigins` value: `https://static.example.com/` or `https://static.example.com/fonts/Calibri.woff2` - incorrectly includes the path component. +- If you need to access specific header values via JavaScript on the origin page, such as when using a video player, ensure you set `Access-Control-Expose-Headers` correctly and include the headers your JavaScript needs access to, such as `Content-Length`. diff --git a/src/content/docs/r2/buckets/create-buckets.mdx b/src/content/docs/r2/buckets/create-buckets.mdx index 06f74f0722d77c..2100b8b988f90d 100644 --- a/src/content/docs/r2/buckets/create-buckets.mdx +++ b/src/content/docs/r2/buckets/create-buckets.mdx @@ -3,19 +3,16 @@ pcx_content_type: how-to title: Create new buckets sidebar: order: 1 - --- You can create a bucket from the Cloudflare dashboard or using Wrangler. :::note - Wrangler is [a command-line tool](/workers/wrangler/install-and-update/) for building with Cloudflare's developer products, including R2. The R2 support in Wrangler allows you to manage buckets and perform basic operations against objects in your buckets. For more advanced use-cases, including bulk uploads or mirroring files from legacy object storage providers, we recommend [rclone](/r2/examples/rclone/) or an [S3-compatible](/r2/api/s3/) tool of your choice. - ::: ## Bucket-Level Operations @@ -23,33 +20,31 @@ The R2 support in Wrangler allows you to manage buckets and perform basic operat Create a bucket with the [`r2 bucket create`](/workers/wrangler/commands/#create-4) command: ```sh -$ wrangler r2 bucket create your-bucket-name +wrangler r2 bucket create your-bucket-name ``` :::note - Bucket names can only contain lowercase letters (a-z), numbers (0-9), and hyphens (-). The placeholder text is only for the example. - ::: List buckets in the current account with the [`r2 bucket list`](/workers/wrangler/commands/#list-5) command: ```sh -$ wrangler r2 bucket list +wrangler r2 bucket list ``` Delete a bucket with the [`r2 bucket delete`](/workers/wrangler/commands/#delete-7) command. Note that the bucket must be empty and all objects must be deleted. ```sh -$ wrangler r2 bucket delete BUCKET_TO_DELETE +wrangler r2 bucket delete BUCKET_TO_DELETE ``` ## Notes -* Bucket names and buckets are not public by default. To allow public access to a bucket, [visit the public bucket documentation](/r2/buckets/public-buckets/). -* Invalid (unauthorized) access attempts to private buckets do not incur R2 operations charges against that bucket. Refer to the [R2 pricing FAQ](/r2/pricing/#frequently-asked-questions) to understand what operations are billed vs. not billed. 
-* The TLS (SSL) certificate created for each R2 bucket uses a wildcard certificate of the form `*.r2.cloudflarestorage.com`, which prevents account IDs and names from showing up in Certificate Transparency logs. +- Bucket names and buckets are not public by default. To allow public access to a bucket, [visit the public bucket documentation](/r2/buckets/public-buckets/). +- Invalid (unauthorized) access attempts to private buckets do not incur R2 operations charges against that bucket. Refer to the [R2 pricing FAQ](/r2/pricing/#frequently-asked-questions) to understand what operations are billed vs. not billed. +- The TLS (SSL) certificate created for each R2 bucket uses a wildcard certificate of the form `*.r2.cloudflarestorage.com`, which prevents account IDs and names from showing up in Certificate Transparency logs. diff --git a/src/content/docs/r2/buckets/event-notifications.mdx b/src/content/docs/r2/buckets/event-notifications.mdx index 7313ea2a517239..a5b6a1166c3248 100644 --- a/src/content/docs/r2/buckets/event-notifications.mdx +++ b/src/content/docs/r2/buckets/event-notifications.mdx @@ -1,17 +1,14 @@ --- title: Event notifications pcx_content_type: how-to - --- Event notifications send messages to your [queue](/queues/) when data in your R2 bucket changes. You can consume these messages with a [consumer Worker](/queues/reference/how-queues-works/#create-a-consumer-worker) or [pull over HTTP](/queues/configuration/pull-consumers/) from outside of Cloudflare Workers. :::note[Open Beta] - The event notifications feature is currently in open beta. To report bugs or request features, go to the #r2-storage channel in the [Cloudflare Developer Discord](https://discord.cloudflare.com) or fill out the [feedback form](https://forms.gle/2HBKD9zG9PFiU4v79). - ::: ## Get started with event notifications @@ -20,9 +17,9 @@ The event notifications feature is currently in open beta. To report bugs or req Before getting started, you will need: -* An existing R2 bucket. If you do not already have an existing R2 bucket, refer to [Create buckets](/r2/buckets/create-buckets/). -* An existing queue. If you do not already have a queue, refer to [Create a queue](/queues/get-started/#3-create-a-queue). -* A [consumer Worker](/queues/reference/how-queues-works/#create-a-consumer-worker) or [HTTP pull](/queues/configuration/pull-consumers/) enabled on your Queue. +- An existing R2 bucket. If you do not already have an existing R2 bucket, refer to [Create buckets](/r2/buckets/create-buckets/). +- An existing queue. If you do not already have a queue, refer to [Create a queue](/queues/get-started/#3-create-a-queue). +- A [consumer Worker](/queues/reference/how-queues-works/#create-a-consumer-worker) or [HTTP pull](/queues/configuration/pull-consumers/) enabled on your Queue. ### Set up Wrangler @@ -33,7 +30,7 @@ To begin, refer to [Install/Update Wrangler](/workers/wrangler/install-and-updat To enable event notifications, add an event notification rule to your bucket by running the [`r2 bucket notification create` command](/workers/wrangler/commands/#notification-create). Event notification rules determine the [event types](/r2/buckets/event-notifications/#event-types) that trigger notifications and enable filtering based on object `prefix` and `suffix`. 
```sh
-$ npx wrangler r2 bucket notification create <BUCKET_NAME> --event-type <EVENT_TYPE> --queue <QUEUE_NAME>
+npx wrangler r2 bucket notification create <BUCKET_NAME> --event-type <EVENT_TYPE> --queue <QUEUE_NAME>
```

For a more complete step-by-step example, refer to the [Log and store upload events in R2 with event notifications](/r2/examples/upload-logs-event-notifications/) example.

## Event types

+<table>
+  <tr>
+    <th>Event type</th>
+    <th>Description</th>
+    <th>Trigger actions</th>
+  </tr>
+  <tr>
+    <td><code>object-create</code></td>
+    <td>Triggered when new objects are created or existing objects are overwritten.</td>
+    <td>
+      <ul>
+        <li><code>PutObject</code></li>
+        <li><code>CopyObject</code></li>
+        <li><code>CompleteMultipartUpload</code></li>
+      </ul>
+    </td>
+  </tr>
+  <tr>
+    <td><code>object-delete</code></td>
+    <td>
+      Triggered when an object is explicitly removed from the bucket.
+      <br /><br />
+      Note: During the beta, deletes that occur as a result of object lifecycle policies will not trigger this event.
+    </td>
+    <td>
+      <ul>
+        <li><code>DeleteObject</code></li>
+      </ul>
+    </td>
+  </tr>
+</table>
## Message format

Queue consumers receive notifications as [Messages](/queues/configuration/javascript-apis/#messages). Below is an example of a notification message body:

```json
{
	"account": "3f4b7e3dcab231cbfdaa90a6a28bd548",
	"action": "CopyObject",
	"bucket": "my-bucket",
	"object": {
		"key": "my-new-object",
		"size": 65536,
		"eTag": "c846ff7a18f28c2e262116d6e8719ef0"
	},
	"eventTime": "2024-05-24T19:36:44.379Z",
	"copySource": {
		"bucket": "my-bucket",
		"object": "my-original-object"
	}
}
```

### Properties
+<table>
+  <tr>
+    <th>Property</th>
+    <th>Type</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td><code>account</code></td>
+    <td>String</td>
+    <td>The Cloudflare account ID that the event is associated with.</td>
+  </tr>
+  <tr>
+    <td><code>action</code></td>
+    <td>String</td>
+    <td>The type of action that triggered the event notification. Example actions include: <code>PutObject</code>, <code>CopyObject</code>, <code>CompleteMultipartUpload</code>, <code>DeleteObject</code>.</td>
+  </tr>
+  <tr>
+    <td><code>bucket</code></td>
+    <td>String</td>
+    <td>The name of the bucket where the event occurred.</td>
+  </tr>
+  <tr>
+    <td><code>object</code></td>
+    <td>Object</td>
+    <td>A nested object containing details about the object involved in the event.</td>
+  </tr>
+  <tr>
+    <td><code>object.key</code></td>
+    <td>String</td>
+    <td>The key (or name) of the object within the bucket.</td>
+  </tr>
+  <tr>
+    <td><code>object.size</code></td>
+    <td>Number</td>
+    <td>The size of the object in bytes. Note: not present for object-delete events.</td>
+  </tr>
+  <tr>
+    <td><code>object.eTag</code></td>
+    <td>String</td>
+    <td>The entity tag (eTag) of the object. Note: not present for object-delete events.</td>
+  </tr>
+  <tr>
+    <td><code>eventTime</code></td>
+    <td>String</td>
+    <td>The time when the action that triggered the event occurred.</td>
+  </tr>
+  <tr>
+    <td><code>copySource</code></td>
+    <td>Object</td>
+    <td>A nested object containing details about the source of a copied object. Note: only present for events triggered by <code>CopyObject</code>.</td>
+  </tr>
+  <tr>
+    <td><code>copySource.bucket</code></td>
+    <td>String</td>
+    <td>The bucket that contained the source object.</td>
+  </tr>
+  <tr>
+    <td><code>copySource.object</code></td>
+    <td>String</td>
+    <td>The name of the source object.</td>
+  </tr>
+</table>
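To make the schema concrete, here is a minimal consumer sketch that reads the properties documented above. The `R2EventNotification` interface is an assumption reconstructed from the table, not a published type; the handler shape follows the Workers queue consumer API:

```ts
// Minimal sketch: reading R2 event notification messages in a queue consumer.
// `R2EventNotification` is an assumed shape derived from the properties table above.
interface R2EventNotification {
	account: string;
	action: string; // for example "PutObject", "CopyObject", "DeleteObject"
	bucket: string;
	object?: { key: string; size?: number; eTag?: string };
	eventTime: string;
	copySource?: { bucket: string; object: string };
}

export default {
	async queue(batch: MessageBatch<R2EventNotification>): Promise<void> {
		for (const msg of batch.messages) {
			const { action, bucket, object, eventTime } = msg.body;
			// object.size and object.eTag are not present for object-delete events.
			console.log(`${eventTime}: ${action} on ${bucket}/${object?.key ?? "(unknown)"}`);
			msg.ack(); // explicitly acknowledge so the message is not redelivered
		}
	},
};
```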
## Limitations

During the beta, event notifications have the following limitations:

-* Queues [per-queue message throughput](/queues/platform/limits/) is currently 400 messages per second. If your workload produces more than 400 notifications per second, messages may be dropped.
-* For a given bucket, only one event notification rule can be created per queue.
-* Each bucket can have up to 5 event notification rules.
-* Deletes that occur as a result of object lifecycle policies will not trigger an event notification.
-* Event notifications are not available for buckets with [jursdictional restrictions](/r2/reference/data-location/#jurisdictional-restrictions).
+- Queues [per-queue message throughput](/queues/platform/limits/) is currently 400 messages per second. If your workload produces more than 400 notifications per second, messages may be dropped.
+- For a given bucket, only one event notification rule can be created per queue.
+- Each bucket can have up to 5 event notification rules.
+- Deletes that occur as a result of object lifecycle policies will not trigger an event notification.
+- Event notifications are not available for buckets with [jurisdictional restrictions](/r2/reference/data-location/#jurisdictional-restrictions).
diff --git a/src/content/docs/r2/data-migration/sippy.mdx b/src/content/docs/r2/data-migration/sippy.mdx
index 343d52517b025d..a326e4aba48a15 100644
--- a/src/content/docs/r2/data-migration/sippy.mdx
+++ b/src/content/docs/r2/data-migration/sippy.mdx
@@ -6,21 +6,18 @@ learning_center:
  link: https://www.cloudflare.com/learning/cloud/what-is-data-migration/
sidebar:
  order: 2
-
---

-import { Render } from "~/components"
+import { Render } from "~/components";

Sippy is a data migration service that allows you to copy data from other cloud providers to R2 as the data is requested, without paying unnecessary cloud egress fees typically associated with moving large amounts of data.

:::note[Open Beta]

-
This feature is currently in beta. We do not recommend using Sippy for production traffic while in beta. To report bugs or request features, reach out to us on the [Cloudflare Developer Discord](https://discord.cloudflare.com) in the #r2-storage channel or fill out our [feedback form](https://forms.gle/7WuCsbu5LmWkQVu76).

-
:::

Migration-specific egress fees are reduced by leveraging requests within the flow of your application where you would already be paying egress fees to simultaneously copy objects to R2.

@@ -29,17 +26,17 @@

## How it works

When enabled for an R2 bucket, Sippy implements the following migration strategy across [Workers](/r2/api/workers/), [S3 API](/r2/api/s3/), and [public buckets](/r2/buckets/public-buckets/):

-* When an object is requested, it is served from your R2 bucket if it is found.
-* If the object is not found in R2, the object will simultaneously be returned from your source storage bucket and copied to R2.
-* All other operations, including put and delete, continue to work as usual.
+- When an object is requested, it is served from your R2 bucket if it is found.
+- If the object is not found in R2, the object will simultaneously be returned from your source storage bucket and copied to R2.
+- All other operations, including put and delete, continue to work as usual.

## When is Sippy useful?
Using Sippy as part of your migration strategy can be a good choice when:

-* You want to start migrating your data, but you want to avoid paying upfront egress fees to facilitate the migration of your data all at once.
-* You want to experiment by serving frequently accessed objects from R2 to eliminate egress fees, without investing time in data migration.
-* You have frequently changing data and are looking to conduct a migration while avoiding downtime. Sippy can be used to serve requests while [Super Slurper](/r2/data-migration/super-slurper/) can be used to migrate your remaining data.
+- You want to start migrating your data, but you want to avoid paying upfront egress fees to facilitate the migration of your data all at once.
+- You want to experiment by serving frequently accessed objects from R2 to eliminate egress fees, without investing time in data migration.
+- You have frequently changing data and are looking to conduct a migration while avoiding downtime. Sippy can be used to serve requests while [Super Slurper](/r2/data-migration/super-slurper/) can be used to migrate your remaining data.

If you are looking to migrate all of your data from an existing cloud provider to R2 at one time, we recommend using [Super Slurper](/r2/data-migration/super-slurper/).

@@ -47,9 +44,9 @@ If you are looking to migrate all of your data from an existing cloud provider t

Before getting started, you will need:

-* An existing R2 bucket. If you don't already have one, refer to [Create buckets](/r2/buckets/create-buckets/).
-* [API credentials](/r2/data-migration/sippy/#create-credentials-for-storage-providers) for your source object storage bucket.
-* (Wrangler only) Cloudflare R2 Access Key ID and Secret Access Key with read and write permissions. For more information, refer to [Authentication](/r2/api/s3/tokens/).
+- An existing R2 bucket. If you don't already have one, refer to [Create buckets](/r2/buckets/create-buckets/).
+- [API credentials](/r2/data-migration/sippy/#create-credentials-for-storage-providers) for your source object storage bucket.
+- (Wrangler only) Cloudflare R2 Access Key ID and Secret Access Key with read and write permissions. For more information, refer to [Authentication](/r2/api/s3/tokens/).

### Enable Sippy via the Dashboard

@@ -70,7 +67,7 @@ To begin, install [`npm`](https://docs.npmjs.com/getting-started). Then [install
Log in to Wrangler with the [`wrangler login` command](/workers/wrangler/commands/#login). Then run the [`r2 bucket sippy enable` command](/workers/wrangler/commands/#sippy-enable):

```sh
-$ npx wrangler r2 bucket sippy enable <BUCKET_NAME>
+npx wrangler r2 bucket sippy enable <BUCKET_NAME>
```

This will prompt you to select between supported object storage providers and lead you through setup.

@@ -81,10 +78,8 @@ For information on required parameters and examples of how to enable Sippy, refe

:::note

-
If your bucket is set up with [jurisdictional restrictions](/r2/reference/data-location/#jurisdictional-restrictions), you will need to pass a `cf-r2-jurisdiction` request header with that jurisdiction. For example, `cf-r2-jurisdiction: eu`.
-
:::

## Disable Sippy on your R2 bucket

@@ -101,7 +96,7 @@ If your bucket is setup with [jurisdictional restrictions](/r2/reference/data-lo
To disable Sippy, run the [`r2 bucket sippy disable` command](/workers/wrangler/commands/#sippy-disable):

```sh
-$ npx wrangler r2 bucket sippy disable <BUCKET_NAME>
+npx wrangler r2 bucket sippy disable <BUCKET_NAME>
```

### API

@@ -112,64 +107,89 @@ For more information on required parameters and examples of how to disable Sippy

Cloudflare currently supports copying data from the following cloud object storage providers to R2:

-* Amazon S3
-* Google Cloud Storage (GCS)
+- Amazon S3
+- Google Cloud Storage (GCS)

## R2 API interactions

When Sippy is enabled, it changes the behavior of certain actions on your R2 bucket across [Workers](/r2/api/workers/), [S3 API](/r2/api/s3/), and [public buckets](/r2/buckets/public-buckets/).
+<table>
+  <tr>
+    <th>Action</th>
+    <th>New behavior</th>
+  </tr>
+  <tr>
+    <td><code>GetObject</code></td>
+    <td>
+      Calls to GetObject will first attempt to retrieve the object from your R2 bucket. If the object is not present, the object will be served from the source storage bucket and simultaneously uploaded to the requested R2 bucket.
+      <br /><br />
+      Additional considerations:
+      <ul>
+        <li>Modifications to objects in the source bucket will not be reflected in R2 after the initial copy. Once an object is stored in R2, it will not be re-retrieved and updated.</li>
+        <li>Only user-defined metadata that is prefixed by <code>x-amz-meta-</code> in the HTTP response will be migrated. Remaining metadata will be omitted.</li>
+        <li>For larger objects, multiple GET requests may be required to fully copy the object to R2.</li>
+      </ul>
+    </td>
+  </tr>
+  <tr>
+    <td><code>HeadObject</code></td>
+    <td>Behaves similarly to GetObject, but only retrieves object metadata. Will not copy objects to the requested R2 bucket.</td>
+  </tr>
+  <tr>
+    <td><code>PutObject</code></td>
+    <td>No change to behavior. Calls to PutObject will add objects to the requested R2 bucket.</td>
+  </tr>
+  <tr>
+    <td><code>DeleteObject</code></td>
+    <td>
+      No change to behavior. Calls to DeleteObject will delete objects in the requested R2 bucket.
+      <br /><br />
+      Additional considerations:
+      <ul>
+        <li>If deletes to objects in R2 are not also made in the source storage bucket, subsequent GetObject requests will result in objects being retrieved from the source bucket and copied to R2.</li>
+      </ul>
+    </td>
+  </tr>
+</table>
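One practical consequence of the GetObject behavior above is that Worker code reading from the bucket does not need to change when Sippy is enabled. A short sketch, assuming a hypothetical bucket binding named `MY_BUCKET`:

```ts
// Sketch: ordinary R2 reads, unchanged by Sippy. If the key is not yet in R2,
// the documented behavior is that it is served from the source bucket and
// simultaneously copied into R2; the Worker simply sees a successful `get`.
export default {
	async fetch(request: Request, env: { MY_BUCKET: R2Bucket }): Promise<Response> {
		const key = new URL(request.url).pathname.slice(1);
		const object = await env.MY_BUCKET.get(key);
		if (!object) return new Response("Not found", { status: 404 });
		return new Response(object.body, {
			headers: {
				"Content-Type": object.httpMetadata?.contentType ?? "application/octet-stream",
			},
		});
	},
};
```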
Actions not listed above have no change in behavior. For more information, refer to [Workers API reference](/r2/api/workers/workers-api-reference/) or [S3 API compatibility](/r2/api/s3/api/).

## Create credentials for storage providers

### Amazon S3

To create credentials with the correct permissions:

```json
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": ["s3:Get*", "s3:List*"],
			"Resource": [
				"arn:aws:s3:::<BUCKET_NAME>",
				"arn:aws:s3:::<BUCKET_NAME>/*"
			]
		}
	]
}
```
diff --git a/src/content/docs/r2/examples/aws/aws-cli.mdx b/src/content/docs/r2/examples/aws/aws-cli.mdx
index b9cc102e4ddd40..d5cd96b3236b97 100644
--- a/src/content/docs/r2/examples/aws/aws-cli.mdx
+++ b/src/content/docs/r2/examples/aws/aws-cli.mdx
@@ -1,17 +1,20 @@
---
title: aws CLI
pcx_content_type: configuration
-
---

-import { Render } from "~/components"
+import { Render } from "~/components";

-
With the [`aws`](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) CLI installed, you may run [`aws configure`](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) to configure a new profile. You will be prompted with a series of questions for the new profile's details. ```shell -$ aws configure +aws configure +``` + +```sh output AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: auto @@ -21,7 +24,7 @@ Default output format [None]: json You may then use the `aws` CLI for any of your normal workflows. ```sh -$ aws s3api list-buckets --endpoint-url https://.r2.cloudflarestorage.com +aws s3api list-buckets --endpoint-url https://.r2.cloudflarestorage.com # { # "Buckets": [ # { @@ -35,7 +38,7 @@ $ aws s3api list-buckets --endpoint-url https://.r2.cloudflarestorage # } # } -$ aws s3api list-objects-v2 --endpoint-url https://.r2.cloudflarestorage.com --bucket sdk-example +aws s3api list-objects-v2 --endpoint-url https://.r2.cloudflarestorage.com --bucket sdk-example # { # "Contents": [ # { @@ -55,6 +58,6 @@ You can also generate presigned links which allow you to share public access to ```sh # You can pass the --expires-in flag to determine how long the presigned link is valid. -$ aws s3 presign --endpoint-url https://.r2.cloudflarestorage.com s3://sdk-example/ferriswasm.png --expires-in 3600 -# https://.r2.cloudflarestorage.com/sdk-example/ferriswasm.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= +aws s3 presign --endpoint-url https://.r2.cloudflarestorage.com s3://sdk-example/ferriswasm.png --expires-in 3600 +# https://.r2.cloudflarestorage.com/sdk-example/ferriswasm.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= ``` diff --git a/src/content/docs/r2/examples/aws/aws-sdk-js-v3.mdx b/src/content/docs/r2/examples/aws/aws-sdk-js-v3.mdx index b21e96af58d124..15ffafeb00a8b7 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-js-v3.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-js-v3.mdx @@ -1,38 +1,34 @@ --- title: aws-sdk-js-v3 pcx_content_type: configuration - --- -import { Render } from "~/components" +import { Render } from "~/components"; -
JavaScript or TypeScript users may continue to use the [`@aws-sdk/client-s3`](https://www.npmjs.com/package/@aws-sdk/client-s3) npm package as per normal. You must pass in the R2 configuration credentials when instantiating your `S3` service client: ```ts import { - S3Client, - ListBucketsCommand, - ListObjectsV2Command, - GetObjectCommand, - PutObjectCommand + S3Client, + ListBucketsCommand, + ListObjectsV2Command, + GetObjectCommand, + PutObjectCommand, } from "@aws-sdk/client-s3"; const S3 = new S3Client({ - region: "auto", - endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`, - credentials: { - accessKeyId: ACCESS_KEY_ID, - secretAccessKey: SECRET_ACCESS_KEY, - }, + region: "auto", + endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`, + credentials: { + accessKeyId: ACCESS_KEY_ID, + secretAccessKey: SECRET_ACCESS_KEY, + }, }); -console.log( - await S3.send( - new ListBucketsCommand('') - ) -); +console.log(await S3.send(new ListBucketsCommand(""))); // { // '$metadata': { // httpStatusCode: 200, @@ -53,9 +49,7 @@ console.log( // } console.log( - await S3.send( - new ListObjectsV2Command({ Bucket: 'my-bucket-name' }) - ) + await S3.send(new ListObjectsV2Command({ Bucket: "my-bucket-name" })), ); // { // '$metadata': { @@ -105,22 +99,30 @@ console.log( You can also generate presigned links that can be used to share public read or write access to a bucket temporarily. ```ts -import { getSignedUrl } from '@aws-sdk/s3-request-presigner'; +import { getSignedUrl } from "@aws-sdk/s3-request-presigner"; // Use the expiresIn property to determine how long the presigned link is valid. console.log( - await getSignedUrl(S3, new GetObjectCommand({Bucket: 'my-bucket-name', Key: 'dog.png'}), { expiresIn: 3600 }) -) + await getSignedUrl( + S3, + new GetObjectCommand({ Bucket: "my-bucket-name", Key: "dog.png" }), + { expiresIn: 3600 }, + ), +); // https://my-bucket-name..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-Signature=&X-Amz-SignedHeaders=host&x-id=GetObject // You can also create links for operations such as putObject to allow temporary write access to a specific key. console.log( - await getSignedUrl(S3, new PutObjectCommand({Bucket: 'my-bucket-name', Key: 'dog.png'}), { expiresIn: 3600 }) -) + await getSignedUrl( + S3, + new PutObjectCommand({ Bucket: "my-bucket-name", Key: "dog.png" }), + { expiresIn: 3600 }, + ), +); ``` You can use the link generated by the `putObject` example to upload to the specified bucket and key, until the presigned link expires. 
```sh
-$ curl -X PUT https://my-bucket-name.<ACCOUNT_ID>.r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=<credential>&X-Amz-Date=<timestamp>&X-Amz-Expires=3600&X-Amz-Signature=<signature>&X-Amz-SignedHeaders=host&x-id=PutObject -F "data=@dog.png"
+curl -X PUT https://my-bucket-name.<ACCOUNT_ID>.r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=<credential>&X-Amz-Date=<timestamp>&X-Amz-Expires=3600&X-Amz-Signature=<signature>&X-Amz-SignedHeaders=host&x-id=PutObject -F "data=@dog.png"
```
diff --git a/src/content/docs/r2/examples/aws/aws-sdk-js.mdx b/src/content/docs/r2/examples/aws/aws-sdk-js.mdx
index d43736f3929682..a0bfe5418a59b6 100644
--- a/src/content/docs/r2/examples/aws/aws-sdk-js.mdx
+++ b/src/content/docs/r2/examples/aws/aws-sdk-js.mdx
@@ -1,30 +1,28 @@
---
title: aws-sdk-js
pcx_content_type: configuration
-
---

-import { Render } from "~/components"
+import { Render } from "~/components";

-
If you are interested in the newer version of the AWS JavaScript SDK visit this [dedicated aws-sdk-js-v3 example page](/r2/examples/aws/aws-sdk-js-v3/). JavaScript or TypeScript users may continue to use the [`aws-sdk`](https://www.npmjs.com/package/aws-sdk) npm package as per normal. You must pass in the R2 configuration credentials when instantiating your `S3` service client: ```ts -import S3 from 'aws-sdk/clients/s3.js'; +import S3 from "aws-sdk/clients/s3.js"; const s3 = new S3({ - endpoint: `https://${accountid}.r2.cloudflarestorage.com`, - accessKeyId: `${access_key_id}`, - secretAccessKey: `${access_key_secret}`, - signatureVersion: 'v4', + endpoint: `https://${accountid}.r2.cloudflarestorage.com`, + accessKeyId: `${access_key_id}`, + secretAccessKey: `${access_key_secret}`, + signatureVersion: "v4", }); -console.log( - await s3.listBuckets().promise() -); +console.log(await s3.listBuckets().promise()); //=> { //=> Buckets: [ //=> { Name: 'user-uploads', CreationDate: 2022-04-13T21:23:47.102Z }, @@ -36,9 +34,7 @@ console.log( //=> } //=> } -console.log( - await s3.listObjects({ Bucket: 'my-bucket-name' }).promise() -); +console.log(await s3.listObjects({ Bucket: "my-bucket-name" }).promise()); //=> { //=> IsTruncated: false, //=> Name: 'my-bucket-name', @@ -72,18 +68,26 @@ You can also generate presigned links that can be used to share public read or w ```ts // Use the expires property to determine how long the presigned link is valid. console.log( - await s3.getSignedUrlPromise('getObject', { Bucket: 'my-bucket-name', Key: 'dog.png', Expires: 3600 }) -) + await s3.getSignedUrlPromise("getObject", { + Bucket: "my-bucket-name", + Key: "dog.png", + Expires: 3600, + }), +); // https://my-bucket-name..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-Signature=&X-Amz-SignedHeaders=host // You can also create links for operations such as putObject to allow temporary write access to a specific key. console.log( - await s3.getSignedUrlPromise('putObject', { Bucket: 'my-bucket-name', Key: 'dog.png', Expires: 3600 }) -) + await s3.getSignedUrlPromise("putObject", { + Bucket: "my-bucket-name", + Key: "dog.png", + Expires: 3600, + }), +); ``` You can use the link generated by the `putObject` example to upload to the specified bucket and key, until the presigned link expires. ```sh -$ curl -X PUT https://my-bucket-name..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-Signature=&X-Amz-SignedHeaders=host --data-binary @dog.png +curl -X PUT https://my-bucket-name..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-Signature=&X-Amz-SignedHeaders=host --data-binary @dog.png ``` diff --git a/src/content/docs/r2/examples/aws/aws-sdk-php.mdx b/src/content/docs/r2/examples/aws/aws-sdk-php.mdx index d9d439f31729d7..fceb5260c5fbe2 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-php.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-php.mdx @@ -3,12 +3,12 @@ title: aws-sdk-php summary: Example of how to configure `aws-sdk-php` to use R2. pcx_content_type: configuration description: Example of how to configure `aws-sdk-php` to use R2. - --- -import { Render } from "~/components" +import { Render } from "~/components"; -
This example uses version 3 of the [aws-sdk-php](https://packagist.org/packages/aws/aws-sdk-php) package. You must pass in the R2 configuration credentials when instantiating your `S3` service client: @@ -115,5 +115,5 @@ print_r((string)$request->getUri()) You can use the link generated by the `putObject` example to upload to the specified bucket and key, until the presigned link expires. ```sh -$ curl -X PUT https://sdk-example..r2.cloudflarestorage.com/ferriswasm.png?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature= --data-binary @ferriswasm.png +curl -X PUT https://sdk-example..r2.cloudflarestorage.com/ferriswasm.png?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature= --data-binary @ferriswasm.png ``` diff --git a/src/content/docs/r2/examples/aws/boto3.mdx b/src/content/docs/r2/examples/aws/boto3.mdx index 5296d41bad29e4..888f310b16e7dc 100644 --- a/src/content/docs/r2/examples/aws/boto3.mdx +++ b/src/content/docs/r2/examples/aws/boto3.mdx @@ -1,12 +1,12 @@ --- title: boto3 pcx_content_type: configuration - --- -import { Render } from "~/components" +import { Render } from "~/components"; -
You must configure [`boto3`](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) to use a preconstructed `endpoint_url` value. This can be done through any `boto3` usage that accepts connection arguments; for example: @@ -46,11 +46,14 @@ s3.delete_object(Bucket=, Key=) ``` ```sh -$ python main.py -# Buckets: -# - user-uploads -# - my-bucket-name -# Objects: -# - cat.png -# - todos.txt +python main.py +``` + +```sh output +Buckets: + - user-uploads + - my-bucket-name +Objects: + - cat.png + - todos.txt ``` diff --git a/src/content/docs/r2/examples/rclone.mdx b/src/content/docs/r2/examples/rclone.mdx index 157d59c2bc1280..03050da93bd256 100644 --- a/src/content/docs/r2/examples/rclone.mdx +++ b/src/content/docs/r2/examples/rclone.mdx @@ -1,12 +1,12 @@ --- title: rclone pcx_content_type: configuration - --- -import { Render } from "~/components" +import { Render } from "~/components"; -
With [`rclone`](https://rclone.org/install/) installed, you may run [`rclone config`](https://rclone.org/s3/) to configure a new S3 storage provider. You will be prompted with a series of questions for the new provider details. @@ -14,18 +14,18 @@ With [`rclone`](https://rclone.org/install/) installed, you may run [`rclone con It is recommended that you choose a unique provider name and then rely on all default answers to the prompts. -This will create a `rclone` configuration file, which you can then modify with the preset configuration given below. +This will create a `rclone` configuration file, which you can then modify with the preset configuration given below. ::: :::note -Ensure you are running `rclone` v1.59 or greater ([rclone downloads](https://beta.rclone.org/)). Versions prior to v1.59 may return `HTTP 401: Unauthorized` errors, as earlier versions of `rclone` do not strictly align to the S3 specification in all cases. +Ensure you are running `rclone` v1.59 or greater ([rclone downloads](https://beta.rclone.org/)). Versions prior to v1.59 may return `HTTP 401: Unauthorized` errors, as earlier versions of `rclone` do not strictly align to the S3 specification in all cases. ::: If you have already configured `rclone` in the past, you may run `rclone config file` to print the location of your `rclone` configuration file: ```sh -$ rclone config file +rclone config file # Configuration file is stored at: # ~/.config/rclone/rclone.conf ``` @@ -44,7 +44,7 @@ acl = private :::note -If you are using a token with [Object-level permissions](/r2/api/s3/tokens/#permissions), you will need to add `no_check_bucket = true` to the configuration to avoid errors. +If you are using a token with [Object-level permissions](/r2/api/s3/tokens/#permissions), you will need to add `no_check_bucket = true` to the configuration to avoid errors. ::: You may then use the new `rclone` provider for any of your normal workflows. @@ -54,7 +54,7 @@ You may then use the new `rclone` provider for any of your normal workflows. The [rclone tree](https://rclone.org/commands/rclone_tree/) command can be used to list the contents of the remote, in this case Cloudflare R2. ```sh -$ rclone tree r2demo: +rclone tree r2demo: # / # ├── user-uploads # │ └── foobar.png @@ -62,7 +62,7 @@ $ rclone tree r2demo: # ├── cat.png # └── todos.txt -$ rclone tree r2demo:my-bucket-name +rclone tree r2demo:my-bucket-name # / # ├── cat.png # └── todos.txt @@ -74,14 +74,14 @@ The [rclone copy](https://rclone.org/commands/rclone_copy/) command can be used ```sh # Upload dog.txt to the user-uploads bucket -$ rclone copy dog.txt r2demo:user-uploads/ -$ rclone tree r2demo:user-uploads +rclone copy dog.txt r2demo:user-uploads/ +rclone tree r2demo:user-uploads # / # ├── foobar.png # └── dog.txt # Download dog.txt from the user-uploads bucket -$ rclone copy r2demo:user-uploads/dog.txt . +rclone copy r2demo:user-uploads/dog.txt . ``` ### A note about multipart upload part sizes @@ -94,7 +94,7 @@ Balancing part size depends heavily on your use-case, but these factors can help You can configure rclone's multipart upload part size using the `--s3-chunk-size` CLI argument. Note that you might also have to adjust the `--s3-upload-cutoff` argument to ensure that rclone is using multipart uploads. Both of these can be set in your configuration file as well. Generally, `--s3-upload-cutoff` will be no less than `--s3-chunk-size`. 
```sh -$ rclone copy long-video.mp4 r2demo:user-uploads/ --s3-upload-cutoff=100M --s3-chunk-size=100M +rclone copy long-video.mp4 r2demo:user-uploads/ --s3-upload-cutoff=100M --s3-chunk-size=100M ``` ## Generate presigned URLs @@ -103,6 +103,6 @@ You can also generate presigned links which allow you to share public access to ```sh # You can pass the --expire flag to determine how long the presigned link is valid. The --unlink flag isn't supported by R2. -$ rclone link r2demo:my-bucket-name/cat.png --expire 3600 +rclone link r2demo:my-bucket-name/cat.png --expire 3600 # https://.r2.cloudflarestorage.com/my-bucket-name/cat.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= ``` diff --git a/src/content/docs/r2/examples/upload-logs-event-notifications.mdx b/src/content/docs/r2/examples/upload-logs-event-notifications.mdx index d0bf17ad223343..d4ddfe71a64b7c 100644 --- a/src/content/docs/r2/examples/upload-logs-event-notifications.mdx +++ b/src/content/docs/r2/examples/upload-logs-event-notifications.mdx @@ -9,12 +9,9 @@ difficulty: Beginner updated: 2024-04-02 languages: - TypeScript - --- - - -import { Render, PackageManagers } from "~/components" +import { Render, PackageManagers } from "~/components"; This example provides a step-by-step guide on using [event notifications](/r2/buckets/event-notifications/) to capture and store R2 upload logs in a separate bucket. @@ -22,7 +19,7 @@ This example provides a step-by-step guide on using [event notifications](/r2/bu To continue, you will need: -* A subscription to [Workers Paid](/workers/platform/pricing/#workers), required for using queues. +- A subscription to [Workers Paid](/workers/platform/pricing/#workers), required for using queues. ## 1. Install Wrangler @@ -32,30 +29,28 @@ To begin, refer to [Install/Update Wrangler](/workers/wrangler/install-and-updat You will need to create two R2 buckets: -* `example-upload-bucket`: When new objects are uploaded to this bucket, your [consumer Worker](/queues/get-started/#5-create-your-consumer-worker) will write logs. -* `example-log-sink-bucket`: Upload logs from `example-upload-bucket` will be written to this bucket. +- `example-upload-bucket`: When new objects are uploaded to this bucket, your [consumer Worker](/queues/get-started/#5-create-your-consumer-worker) will write logs. +- `example-log-sink-bucket`: Upload logs from `example-upload-bucket` will be written to this bucket. To create the buckets, run the following Wrangler commands: ```sh -$ npx wrangler r2 bucket create example-upload-bucket -$ npx wrangler r2 bucket create example-log-sink-bucket +npx wrangler r2 bucket create example-upload-bucket +npx wrangler r2 bucket create example-log-sink-bucket ``` ## 3. Create a queue :::note - You will need a [Workers Paid plan](/workers/platform/pricing/) to create and use [Queues](/queues/) and Cloudflare Workers to consume event notifications. - ::: Event notifications capture changes to data in `example-upload-bucket`. You will need to create a new queue to receive notifications: ```sh -$ npx wrangler queues create example-event-notification-queue +npx wrangler queues create example-event-notification-queue ``` ## 4. Create a Worker @@ -64,14 +59,26 @@ Before you enable event notifications for `example-upload-bucket`, you need to c Create a new Worker with C3 (`create-cloudflare` CLI). 
[C3](/pages/get-started/c3/) is a command-line tool designed to help you set up and deploy new applications, including Workers, to Cloudflare.

Then, move into your newly created directory:

```sh
-$ cd consumer-worker
+cd consumer-worker
```

## 5. Configure your Worker

@@ -100,25 +107,27 @@ Add a [`queue` handler](/queues/configuration/javascript-apis/#consumer) to `src

```ts
export interface Env {
	LOG_SINK: R2Bucket;
}

export default {
	async queue(batch, env): Promise<void> {
-		const batchId = new Date().toISOString().replace(/[:.]/g, '-');
+		const batchId = new Date().toISOString().replace(/[:.]/g, "-");
		const fileName = `upload-logs-${batchId}.json`;

		// Serialize the entire batch of messages to JSON
-		const fileContent = new TextEncoder().encode(JSON.stringify(batch.messages));
+		const fileContent = new TextEncoder().encode(
+			JSON.stringify(batch.messages),
+		);

		// Write the batch of messages to R2
		await env.LOG_SINK.put(fileName, fileContent, {
			httpMetadata: {
-				contentType: "application/json"
-			}
+				contentType: "application/json",
+			},
		});
-	}
-} satisfies ExportedHandler<Env>;;
+	},
+} satisfies ExportedHandler<Env>;
```

## 7. Deploy your Worker

To deploy your consumer Worker, run the [`wrangler deploy`](/workers/wrangler/commands/#deploy) command:

```sh
-$ npx wrangler deploy
+npx wrangler deploy
```

## 8. Enable event notifications

Now that you have your consumer Worker ready to handle incoming event notification messages, you need to enable event notifications with the [`wrangler r2 bucket notification create` command](/workers/wrangler/commands/#notification-create) for `example-upload-bucket`:

```sh
-$ npx wrangler r2 bucket notification create example-upload-bucket --event-type object-create --queue example-event-notification-queue
+npx wrangler r2 bucket notification create example-upload-bucket --event-type object-create --queue example-event-notification-queue
```

## 9. Test
diff --git a/src/content/docs/r2/objects/delete-objects.mdx b/src/content/docs/r2/objects/delete-objects.mdx
index d1c1853ab6f70e..149c444ce17013 100644
--- a/src/content/docs/r2/objects/delete-objects.mdx
+++ b/src/content/docs/r2/objects/delete-objects.mdx
@@ -3,7 +3,6 @@ title: Delete objects
pcx_content_type: how-to
sidebar:
  order: 3
-
---

You can delete objects from your bucket from the Cloudflare dashboard or using Wrangler.

@@ -20,10 +19,8 @@ You can delete objects from your bucket from the Cloudflare dashboard or using t

:::caution

-
Deleting objects from a bucket is irreversible.

-
:::

You can delete an object directly by calling `delete` against a `{bucket}/{path/to/object}`.

For example, to delete the object `foo.png` from bucket `test-bucket`:

```sh
-$ wrangler r2 object delete test-bucket/foo.png
+wrangler r2 object delete test-bucket/foo.png
+```
+
+```sh output
Deleting object "foo.png" from bucket "test-bucket".

Delete complete.
diff --git a/src/content/docs/r2/objects/download-objects.mdx b/src/content/docs/r2/objects/download-objects.mdx
index e556f0f8f5d549..d2300996fc1837 100644
--- a/src/content/docs/r2/objects/download-objects.mdx
+++ b/src/content/docs/r2/objects/download-objects.mdx
@@ -3,7 +3,6 @@ title: Download objects
pcx_content_type: how-to
sidebar:
  order: 2
-
---

You can download objects from your bucket from the Cloudflare dashboard or using Wrangler.
@@ -22,8 +21,10 @@ You can download objects from a bucket, including private buckets in your accoun For example, to download `file.bin` from `test-bucket`: ```sh -$ wrangler r2 object get test-bucket/file.bin +wrangler r2 object get test-bucket/file.bin +``` +```sh output Downloading "file.bin" from "test-bucket". Download complete. ``` diff --git a/src/content/docs/r2/objects/upload-objects.mdx b/src/content/docs/r2/objects/upload-objects.mdx index 172446cbd4ed61..878b1c275c4227 100644 --- a/src/content/docs/r2/objects/upload-objects.mdx +++ b/src/content/docs/r2/objects/upload-objects.mdx @@ -3,7 +3,6 @@ title: Upload objects pcx_content_type: how-to sidebar: order: 1 - --- You can upload objects to your bucket from the Cloudflare dashboard or using the Wrangler. @@ -23,17 +22,17 @@ You will receive a confirmation message after a successful upload. :::note - Wrangler only supports uploading files up to 315MB in size. To upload large files, we recommend [rclone](/r2/examples/rclone/) or an [S3-compatible](/r2/api/s3/) tool of your choice. - ::: To upload a file to R2, call `put` and provide a name (key) for the object, as well as the path to the file via `--file`: ```sh -$ wrangler r2 object put test-bucket/dataset.csv --file=dataset.csv +wrangler r2 object put test-bucket/dataset.csv --file=dataset.csv +``` +```sh output Creating object "dataset.csv" in bucket "test-bucket". Upload complete. ``` diff --git a/src/content/docs/radar/investigate/bgp-anomalies.mdx b/src/content/docs/radar/investigate/bgp-anomalies.mdx index a4a84348de74e4..3148040a4c8859 100644 --- a/src/content/docs/radar/investigate/bgp-anomalies.mdx +++ b/src/content/docs/radar/investigate/bgp-anomalies.mdx @@ -5,10 +5,9 @@ sidebar: order: 3 badge: text: Beta - --- -import { Render, PackageManagers } from "~/components" +import { Render, PackageManagers } from "~/components"; To access Cloudflare Radar BGP Anomaly Detection results, you will first need to create an API token that includes a `User:User Details` permission. All the following examples should work with a free-tier Cloudflare account. @@ -80,17 +79,17 @@ The result shows the most recent 10 BGP hijack events that affects `AS64512`. In the response we can learn about the following information about each event: -* `hijack_msg_count`: the number of potential BGP hijack messages observed from all peers. -* `peer_asns`: the AS numbers of the route collector peers who observed the hijack messages. -* `prefixes`: the affected prefixes. -* `hijacker_asn` and `victim_asns`: the potential hijacker ASN and victim ASNs. -* `confidence_score`: a quantitative score describing how confident the system is for this event being a hijack: - * 1-3: low confidence. - * 4-7: medium confidence. - * 8-above: high confidence. -* `tags`: the evidence collected for the events. Each `tag` is also associated with a score that affects the overall confidence score: - * a positive score indicates that the event is *more likely* to be a hijack. - * a negative score indicates that the event is *less likely* to be a hijack. +- `hijack_msg_count`: the number of potential BGP hijack messages observed from all peers. +- `peer_asns`: the AS numbers of the route collector peers who observed the hijack messages. +- `prefixes`: the affected prefixes. +- `hijacker_asn` and `victim_asns`: the potential hijacker ASN and victim ASNs. +- `confidence_score`: a quantitative score describing how confident the system is for this event being a hijack: + - 1-3: low confidence. + - 4-7: medium confidence. 
+  - 8 and above: high confidence.
+- `tags`: the evidence collected for the events. Each `tag` is also associated with a score that affects the overall confidence score:
+  - a positive score indicates that the event is _more likely_ to be a hijack.
+  - a negative score indicates that the event is _less likely_ to be a hijack.

Users can further filter out low-confidence events by attaching a `minConfidence=8` parameter, which will return only events with a `confidence_score` of `8` or higher.

@@ -156,12 +155,12 @@ The result shows the most recent 10 BGP route leak events that affects `AS64512`

In the response, we can learn the following information about each event:

-* `leak_asn`: the AS who potentially caused the leak.
-* `leak_seg`: the AS path segment observed and believed to be a leak.
-* `min_ts` and `max_ts`: the earliest and latest timestamps of the leak announcements.
-* `leak_count`: the total number of BGP route leak announcements observed.
-* `peer_count`: the number of route collector peers observed the leak.
-* `prefix_count` and `origin_count`: the number of prefixes and origin ASes affected by the leak.
+- `leak_asn`: the AS that potentially caused the leak.
+- `leak_seg`: the AS path segment observed and believed to be a leak.
+- `min_ts` and `max_ts`: the earliest and latest timestamps of the leak announcements.
+- `leak_count`: the total number of BGP route leak announcements observed.
+- `peer_count`: the number of route collector peers that observed the leak.
+- `prefix_count` and `origin_count`: the number of prefixes and origin ASes affected by the leak.

## Send alerts for BGP hijacks

@@ -171,9 +170,9 @@ We will use Cloudflare Workers as the platform and use its Cron Triggers to peri

For the app, we would like it to do the following things:

-* Fetch from Cloudflare API with a given API token.
-* Check against Cloudflare KV to know what events are new.
-* Construct messages for new hijacks and send out alerts via webhook triggers.
+- Fetch from Cloudflare API with a given API token.
+- Check against Cloudflare KV to know what events are new.
+- Construct messages for new hijacks and send out alerts via webhook triggers.

### Worker app setup

@@ -181,14 +180,22 @@ We will start with setting up a Cloudflare Worker app.

First, create a new Workers app in a local directory:

To start developing your Worker, `cd` into your new project directory:

```sh
-$ cd hijack-alerts
+cd hijack-alerts
```

In your `wrangler.toml` file, change the default checking frequency (once per hour) to what you like. Here is an example:

@@ -220,19 +227,22 @@
The following `apiFetch(env, paramsStr)` function takes a request parameter string and fetches matching events from the Cloudflare API BGP hijacks endpoint.
```javascript -async function apiFetch (env, paramsStr) { - const config = { - headers:{ - "Authorization": `Bearer ${env.CF_API_TOKEN}`, - } - }; - const res = await fetch(`https://api.cloudflare.com/client/v4/radar/bgp/hijacks/events?${paramsStr}`, config); - - if(!res.ok){ - console.log(JSON.stringify(res)) - return null - } - return await (res).json() +async function apiFetch(env, paramsStr) { + const config = { + headers: { + Authorization: `Bearer ${env.CF_API_TOKEN}`, + }, + }; + const res = await fetch( + `https://api.cloudflare.com/client/v4/radar/bgp/hijacks/events?${paramsStr}`, + config, + ); + + if (!res.ok) { + console.log(JSON.stringify(res)); + return null; + } + return await res.json(); } ``` @@ -268,33 +278,33 @@ The main loop that checks for the most recent events looks like this (some of th ```javascript let new_events = []; let page = 1; -while(true) { - // query for events - const query_params = `per_page=10&page=${page}&involvedAsn=${env.TARGET_ASN}&sortBy=ID&sortOrder=DESC` - const data = await apiFetch(env, query_params); - - // first batch, save KV value only - if(first_batch) { - await env.HIJACKS_KV.put("latest_id", (events[0].id).toString()); - return - } - - // some validation skipped - // ... - - let reached_last = false; - for(const event of data.result.events){ - if(event.id <= kv_latest_id) { - // reached the latest events - reached_last = true; - break - } - new_events.push(event) - } - if(reached_last){ - break - } - page += 1; +while (true) { + // query for events + const query_params = `per_page=10&page=${page}&involvedAsn=${env.TARGET_ASN}&sortBy=ID&sortOrder=DESC`; + const data = await apiFetch(env, query_params); + + // first batch, save KV value only + if (first_batch) { + await env.HIJACKS_KV.put("latest_id", events[0].id.toString()); + return; + } + + // some validation skipped + // ... 
+ + let reached_last = false; + for (const event of data.result.events) { + if (event.id <= kv_latest_id) { + // reached the latest events + reached_last = true; + break; + } + new_events.push(event); + } + if (reached_last) { + break; + } + page += 1; } ``` @@ -302,11 +312,11 @@ Now that we have all the newly detected events saved in `new_events` variable, w ```javascript // sort events by increasing ID order -new_events.sort((a,b)=>a.id - b.id); -const kv_latest_id = new_events[new_events.length-1].id +new_events.sort((a, b) => a.id - b.id); +const kv_latest_id = new_events[new_events.length - 1].id; // push new events -for(const event of new_events) { - await send_alert(env, event); +for (const event of new_events) { + await send_alert(env, event); } // update latest_id KV value await env.HIJACKS_KV.put("latest_id", kv_latest_id.toString()); @@ -323,20 +333,19 @@ async function send_hangout_alert(env, event) { const webhook_url = `${env.WEBHOOK_URL}&threadKey=bgp-hijacks-event-${event.id}`; const data = JSON.stringify({ - 'text': - `Detected BGP hijack event (${event.id}): + text: `Detected BGP hijack event (${event.id}): Detected time: *${event.min_hijack_ts} UTC* Detected ASN: *${event.hijacker_asn}* Expected ASN(s): *${event.victim_asns.join(" ")}* Prefixes: *${event.prefixes.join(" ")}* -Tags: *${event.tags.map((tag)=>tag.name).join(" ")}* +Tags: *${event.tags.map((tag) => tag.name).join(" ")}* Peer Count: *${event.peer_ip_count}* `, }); await fetch(webhook_url, { - method: 'POST', + method: "POST", headers: { - 'Content-Type': 'application/json; charset=UTF-8', + "Content-Type": "application/json; charset=UTF-8", }, body: data, }); @@ -363,36 +372,37 @@ Then, you can create an email-sending function to send alert emails to your conf ```javascript async function send_email_alert(hijacker, prefixes, victims) { - const msg = createMimeMessage(); - msg.setSender({ name: "BGP Hijack Alerter", addr: "@" }); - msg.setRecipient("@example.com"); - msg.setSubject("BGP hijack alert"); - msg.addMessage({ - contentType: 'text/plain', - data: `BGP hijack detected: + const msg = createMimeMessage(); + msg.setSender({ + name: "BGP Hijack Alerter", + addr: "@", + }); + msg.setRecipient("@example.com"); + msg.setSubject("BGP hijack alert"); + msg.addMessage({ + contentType: "text/plain", + data: `BGP hijack detected: Detected origin: ${hijacker} Expected origins: ${victims.join(" ")} Prefixes: ${prefixes.join(" ")} - ` - }) - - var message = new EmailMessage( - "@", - "@example.com", - msg.asRaw() - ); - try { - await env.SEND_EMAIL_BINDING.send(message); - } catch (e) { - return new Response(e.message); - } + `, + }); + + var message = new EmailMessage( + "@", + "@example.com", + msg.asRaw(), + ); + try { + await env.SEND_EMAIL_BINDING.send(message); + } catch (e) { + return new Response(e.message); + } } ``` [email-routing]: /email-routing/ - [email-workers-tutorial]: /email-routing/email-workers/send-email-workers/ - [wrangler-send-email]: /workers/wrangler/configuration/#email-bindings ## Next steps @@ -400,5 +410,4 @@ async function send_email_alert(hijacker, prefixes, victims) { Refer to our API documentation for [BGP route leaks][route-leak-api-doc] and [BGP hijacks][hijack-api-doc] for more information on these topics. 
[route-leak-api-doc]: /api/operations/radar-get-bgp-route-leak-events - [hijack-api-doc]: /api/operations/radar-get-bgp-hijacks-events diff --git a/src/content/docs/speed/optimization/other/signed-exchanges/reference.mdx b/src/content/docs/speed/optimization/other/signed-exchanges/reference.mdx index 137b56a90792c5..7840b038e5dc48 100644 --- a/src/content/docs/speed/optimization/other/signed-exchanges/reference.mdx +++ b/src/content/docs/speed/optimization/other/signed-exchanges/reference.mdx @@ -6,7 +6,6 @@ sidebar: head: - tag: title content: Reference - Signed exchanges - --- ## Verify that signed exchanges are working @@ -16,7 +15,7 @@ Make a request with the signed exchange request header: 1. Open a terminal and run the following command, replacing `https://example.com` with your domain: ```sh -$ curl -svo /dev/null https://example.com -H "Accept: application/signed-exchange;v=b3" +curl -svo /dev/null https://example.com -H "Accept: application/signed-exchange;v=b3" ``` 2. Verify that the `Content-Type` in the response headers is `application/signed-exchange;v=b3` rather than `text/html`. @@ -26,7 +25,10 @@ $ curl -svo /dev/null https://example.com -H "Accept: application/signed-exchang Cloudflare uses [Google for SXGs' certificate issuance](https://web.dev/signed-exchanges/#certificates). Once SXGs is enabled, Cloudflare automatically adds the Certification Authority Authorization records on behalf of the zones. Refer to the following example below: ```bash -$ dig example.com caa +dig example.com caa +``` + +```bash output ;; ANSWER SECTION: example.com. 3600 IN CAA 0 issue "digicert.com; cansignhttpexchanges=yes" example.com. 3600 IN CAA 0 issue "pki.goog; cansignhttpexchanges=yes" diff --git a/src/content/docs/ssl/client-certificates/configure-your-mobile-app-or-iot-device.mdx b/src/content/docs/ssl/client-certificates/configure-your-mobile-app-or-iot-device.mdx index 29ee6752c896f9..4b058b390c0da3 100644 --- a/src/content/docs/ssl/client-certificates/configure-your-mobile-app-or-iot-device.mdx +++ b/src/content/docs/ssl/client-certificates/configure-your-mobile-app-or-iot-device.mdx @@ -3,7 +3,6 @@ pcx_content_type: tutorial title: Configure your mobile app or IoT device sidebar: order: 4 - --- This tutorial demonstrates how to configure your Internet-of-things (IoT) device and mobile application to use client certificates with [API Shield](/api-shield/). @@ -19,77 +18,85 @@ Temperatures are stored in [Workers KV](/kv/concepts/how-kv-works/) using the so The example API code below saves a temperature and timestamp into KV when a POST is made and returns the most recent five temperatures when a GET request is made. ```js -const defaultData = { temperatures: [] } +const defaultData = { temperatures: [] }; -const getCache = key => TEMPERATURES.get(key) -const setCache = (key, data) => TEMPERATURES.put(key, data) +const getCache = (key) => TEMPERATURES.get(key); +const setCache = (key, data) => TEMPERATURES.put(key, data); async function addTemperature(request) { - - // Pull previously recorded temperatures for this client. - const ip = request.headers.get('CF-Connecting-IP') - const cacheKey = `data-${ip}` - let data - const cache = await getCache(cacheKey) - if (!cache) { - await setCache(cacheKey, JSON.stringify(defaultData)) - data = defaultData - } else { - data = JSON.parse(cache) - } - - // Append the recorded temperatures with the submitted reading (assuming it has both temperature and a timestamp). 
- try { - const body = await request.text() - const val = JSON.parse(body) - - if (val.temperature && val.time) { - data.temperatures.push(val) - await setCache(cacheKey, JSON.stringify(data)) - return new Response("", { status: 201 }) - } else { - return new Response("Unable to parse temperature and/or timestamp from JSON POST body", { status: 400 }) - } - } catch (err) { - return new Response(err, { status: 500 }) - } + // Pull previously recorded temperatures for this client. + const ip = request.headers.get("CF-Connecting-IP"); + const cacheKey = `data-${ip}`; + let data; + const cache = await getCache(cacheKey); + if (!cache) { + await setCache(cacheKey, JSON.stringify(defaultData)); + data = defaultData; + } else { + data = JSON.parse(cache); + } + + // Append the recorded temperatures with the submitted reading (assuming it has both temperature and a timestamp). + try { + const body = await request.text(); + const val = JSON.parse(body); + + if (val.temperature && val.time) { + data.temperatures.push(val); + await setCache(cacheKey, JSON.stringify(data)); + return new Response("", { status: 201 }); + } else { + return new Response( + "Unable to parse temperature and/or timestamp from JSON POST body", + { status: 400 }, + ); + } + } catch (err) { + return new Response(err, { status: 500 }); + } } -function compareTimestamps(a,b) { - return -1 * (Date.parse(a.time) - Date.parse(b.time)) +function compareTimestamps(a, b) { + return -1 * (Date.parse(a.time) - Date.parse(b.time)); } // Return the 5 most recent temperature measurements. async function getTemperatures(request) { - const ip = request.headers.get('CF-Connecting-IP') - const cacheKey = `data-${ip}` - - const cache = await getCache(cacheKey) - if (!cache) { - return new Response(JSON.stringify(defaultData), { status: 200, headers: { 'content-type': 'application/json' } }) - } else { - data = JSON.parse(cache) - const retval = JSON.stringify(data.temperatures.sort(compareTimestamps).splice(0,5)) - return new Response(retval, { status: 200, headers: { 'content-type': 'application/json' } }) - } + const ip = request.headers.get("CF-Connecting-IP"); + const cacheKey = `data-${ip}`; + + const cache = await getCache(cacheKey); + if (!cache) { + return new Response(JSON.stringify(defaultData), { + status: 200, + headers: { "content-type": "application/json" }, + }); + } else { + data = JSON.parse(cache); + const retval = JSON.stringify( + data.temperatures.sort(compareTimestamps).splice(0, 5), + ); + return new Response(retval, { + status: 200, + headers: { "content-type": "application/json" }, + }); + } } async function handleRequest(request) { - - if (request.method === 'POST') { - return addTemperature(request) - } else { - return getTemperatures(request) - } - + if (request.method === "POST") { + return addTemperature(request); + } else { + return getTemperatures(request); + } } -addEventListener('fetch', event => { - event.respondWith(handleRequest(event.request)) -}) +addEventListener("fetch", (event) => { + event.respondWith(handleRequest(event.request)); +}); ``` -*** +--- ## Step 1 — Validate API @@ -132,7 +139,7 @@ $ curl --silent https://shield.upinatoms.com/temps | jq . 
] ``` -*** +--- ## Step 2 — Create Cloudflare-issued certificates @@ -253,7 +260,7 @@ $ curl https://api.cloudflare.com/client/v4/zones/{zone_id}/client_certificates --data "$request_body" | perl -npe 's/\\n/\n/g; s/"//g' > sensor.pem ``` -*** +--- ## Step 3 — Embed the client certificate in your mobile app @@ -342,7 +349,7 @@ private OkHttpClient setUpClient() { The above function returns an `OkHttpClient` embedded with the client certificate. You can now use this client to make HTTP requests to your API endpoint protected with mTLS. -*** +--- ## Step 4 — Embed the client certificate on your IoT device @@ -402,13 +409,13 @@ Request body: {"temperature": "36.5", "time": "2020-09-28T15:56:45Z"} Response status code: 201 ``` -*** +--- ## Step 5 — Enable mTLS After creating Cloudflare-issued certificates, the next step is to [enable mTLS](/ssl/client-certificates/enable-mtls/) for the hosts you want to protect with API Shield. -*** +--- ## Step 6 — Configure API Shield to require client certificates diff --git a/src/content/docs/ssl/client-certificates/label-client-certificate.mdx b/src/content/docs/ssl/client-certificates/label-client-certificate.mdx index 8e984b5b3e160c..8d78dd25954f2c 100644 --- a/src/content/docs/ssl/client-certificates/label-client-certificate.mdx +++ b/src/content/docs/ssl/client-certificates/label-client-certificate.mdx @@ -4,7 +4,6 @@ source: https://support.cloudflare.com/hc/en-us/articles/4567119364749-How-to-la title: Label client certificates sidebar: order: 7 - --- After [creating client certificates](/ssl/client-certificates/) at Cloudflare, it may be hard to differentiate the generated certificates. @@ -20,7 +19,7 @@ If you need to differentiate client certificates for your clients on a per-organ For example, if you run the following command (with OpenSSL installed): ```sh -$ openssl req -new -newkey rsa:2048 -nodes -keyout client1.key -out client1.csr +openssl req -new -newkey rsa:2048 -nodes -keyout client1.key -out client1.csr ``` You can then specify: diff --git a/src/content/docs/ssl/client-certificates/troubleshooting.mdx b/src/content/docs/ssl/client-certificates/troubleshooting.mdx index 385c2d14dd5dff..560beda9311e83 100644 --- a/src/content/docs/ssl/client-certificates/troubleshooting.mdx +++ b/src/content/docs/ssl/client-certificates/troubleshooting.mdx @@ -6,30 +6,29 @@ sidebar: head: - tag: title content: Troubleshooting client certificates - --- If your query returns an error even after configuring and embedding a client SSL certificate, check the following settings. -*** +--- ## Check SSL/TLS handshake On your terminal, use the following command to check whether an SSL/TLS connection can be established successfully between the client and the API endpoint. ```sh -$ curl --verbose --cert /path/to/certificate.pem --key /path/to/key.pem https://your-api-endpoint.com +curl --verbose --cert /path/to/certificate.pem --key /path/to/key.pem https://your-api-endpoint.com ``` If the SSL/TLS handshake cannot be completed, check whether the certificate and the private key are correct. -*** +--- ## Check mTLS hosts Check whether [mTLS has been enabled](/ssl/client-certificates/enable-mtls/) for the correct host. The host should match the API endpoint that you want to protect. -*** +--- ## Review mTLS rules @@ -41,8 +40,8 @@ To review mTLS rules: 3. On that rule, check whether: - * The Expression Preview is correct. - * The hostname, if defined, matches your API endpoint. 
For example, for the API endpoint `api.trackers.ninja/time`, the rule should look like: + - The Expression Preview is correct. + - The hostname, if defined, matches your API endpoint. For example, for the API endpoint `api.trackers.ninja/time`, the rule should look like: ```txt (http.host in {"api.trackers.ninja"} and not cf.tls_client_auth.cert_verified) diff --git a/src/content/docs/ssl/edge-certificates/additional-options/minimum-tls.mdx b/src/content/docs/ssl/edge-certificates/additional-options/minimum-tls.mdx index f3f22adf2494ce..af60e6584b83bc 100644 --- a/src/content/docs/ssl/edge-certificates/additional-options/minimum-tls.mdx +++ b/src/content/docs/ssl/edge-certificates/additional-options/minimum-tls.mdx @@ -70,7 +70,7 @@ To test supported TLS versions, attempt a request to your website or application For example, use a `curl` command to test TLS 1.1 (replace `www.example.com` with your Cloudflare domain and hostname): ```sh -$ curl https://www.example.com -svo /dev/null --tls-max 1.1 +curl https://www.example.com -svo /dev/null --tls-max 1.1 ``` If the TLS version you are testing is blocked by Cloudflare, the TLS handshake is not completed and returns an error: diff --git a/src/content/docs/ssl/edge-certificates/changing-dcv-method/methods/delegated-dcv.mdx b/src/content/docs/ssl/edge-certificates/changing-dcv-method/methods/delegated-dcv.mdx index 426ce4e4cdca4e..893684200ad3c0 100644 --- a/src/content/docs/ssl/edge-certificates/changing-dcv-method/methods/delegated-dcv.mdx +++ b/src/content/docs/ssl/edge-certificates/changing-dcv-method/methods/delegated-dcv.mdx @@ -6,10 +6,9 @@ sidebar: head: - tag: title content: Delegated DCV — Domain Control Validation — SSL/TLS - --- -import { Example, FeatureTable } from "~/components" +import { Example, FeatureTable } from "~/components"; Delegated DCV allows zones with [partial DNS setups](/dns/zone-setups/partial-setup/) - meaning authoritative DNS is not provided by Cloudflare - to delegate the DCV process to Cloudflare. @@ -17,7 +16,7 @@ DCV Delegation requires you to place a one-time record that allows Cloudflare to :::note -DCV Delegation will not work with Universal Certificates and requires the use of an [Advanced certificate](/ssl/edge-certificates/advanced-certificate-manager/). +DCV Delegation will not work with Universal Certificates and requires the use of an [Advanced certificate](/ssl/edge-certificates/advanced-certificate-manager/). ::: ## Availability @@ -28,15 +27,15 @@ DCV Delegation will not work with Universal Certificates and requires the use of You should use Delegated DCV when all of the following conditions are true: -* Your zone is using a [partial DNS setup](/dns/zone-setups/partial-setup/). -* Cloudflare is not already [performing DCV automatically](/ssl/edge-certificates/changing-dcv-method/). -* Your zone is using an [Advanced certificate](/ssl/edge-certificates/advanced-certificate-manager/). -* Your zone is not using multiple CDN providers. -* The Certificate Authority is either Google or Let's Encrypt +- Your zone is using a [partial DNS setup](/dns/zone-setups/partial-setup/). +- Cloudflare is not already [performing DCV automatically](/ssl/edge-certificates/changing-dcv-method/). +- Your zone is using an [Advanced certificate](/ssl/edge-certificates/advanced-certificate-manager/). +- Your zone is not using multiple CDN providers. 
+- The Certificate Authority is either Google or Let's Encrypt.

:::note[Delegated DCV and origin certificates]

-As explained in the [announcement blog post](https://blog.cloudflare.com/introducing-dcv-delegation/), currently, you can only delegate DCV to one provider at a time. If you also issue publicly trusted certificates for the same hostname for your [origin server](/ssl/concepts/#origin-certificate), this will no longer be possible. You can use [Cloudflare Origin CA certificates](/ssl/origin-configuration/origin-ca/) instead.
+As explained in the [announcement blog post](https://blog.cloudflare.com/introducing-dcv-delegation/), currently, you can only delegate DCV to one provider at a time. If you also issue publicly trusted certificates for the same hostname for your [origin server](/ssl/concepts/#origin-certificate), this will no longer be possible. You can use [Cloudflare Origin CA certificates](/ssl/origin-configuration/origin-ca/) instead.
:::

## Setup
@@ -48,7 +47,7 @@ To set up Delegated DCV:
3. Copy the Cloudflare validation URL.
4. At your authoritative DNS provider, create `CNAME` record(s) considering the following:

-* If your certificate only covers the apex domain and a wildcard, you only need to create a single `CNAME` record for your apex domain. Any direct subdomains will be covered as well.
+- If your certificate only covers the apex domain and a wildcard, you only need to create a single `CNAME` record for your apex domain. Any direct subdomains will be covered as well.



@@ -58,7 +57,7 @@ _acme-challenge.example.com CNAME example.com..

-* If your certificate also covers subdomains specified by their name, you will need to add multiple `CNAME` records to your authoritative DNS provider, one for each specific subdomain.
+- If your certificate also covers subdomains specified by their name, you will need to add multiple `CNAME` records to your authoritative DNS provider, one for each specific subdomain.



@@ -79,7 +78,6 @@ Existing TXT records for `_acme-challenge` will conflict with the delegated DCV
_acme-challenge.example.com TXT 
```

-
:::

Once the `CNAME` records are in place, Cloudflare will add TXT DCV tokens for every hostname on the Advanced certificate that has a DCV delegation record in place, as long as the zone is [active](/dns/zone-setups/reference/domain-status/) on Cloudflare.
@@ -95,16 +93,16 @@ If you use a `dig` command to test, you should only be able to see the placed token
This is because Cloudflare places the tokens when needed and then cleans them up.

```sh
-$ dig TXT +noadditional +noquestion +nocomments +nocmd +nostats _acme-challenge.example.com. @1.1.1.1_acme-challenge.example.com. 3600 IN CNAME example.com.
+dig TXT +noadditional +noquestion +nocomments +nocmd +nostats _acme-challenge.example.com. @1.1.1.1
+```
+
+```txt output
+_acme-challenge.example.com. 3600 IN CNAME example.com.
```

### Renewal

Currently, at certificate renewal, Cloudflare attempts to automatically perform DCV via HTTP if your certificate matches certain criteria:

-* Hostnames are proxied.
-* Hostnames on the certificate resolve to the IPs assigned to the zone.
-* The certificate does not contain wildcards.
+- Hostnames are proxied.
+- Hostnames on the certificate resolve to the IPs assigned to the zone.
+- The certificate does not contain wildcards.

Note that settings that interfere with the validation URLs can cause issues in this case. Refer to [Troubleshooting](/ssl/edge-certificates/changing-dcv-method/troubleshooting/) for guidance.
@@ -112,7 +110,7 @@ Note that settings that interfere with the validation URLs can cause issues in t If a hostname becomes unreachable during certificate renewal time, the certificate will not be able to be renewed automatically via Delegated DCV. Should you need to renew a certificate for a hostname that is not resolving currently, you can send a PATCH request to [the changing DCV method API endpoint](/api/operations/ssl-verification-edit-ssl-certificate-pack-validation-method) and change the method to TXT to proceed with manual renewal per [the TXT DCV method](/ssl/edge-certificates/changing-dcv-method/methods/txt/). -Once the hostname becomes resolvable again, [Delegated DCV](/ssl/edge-certificates/changing-dcv-method/methods/delegated-dcv/) will resume working as expected. +Once the hostname becomes resolvable again, [Delegated DCV](/ssl/edge-certificates/changing-dcv-method/methods/delegated-dcv/) will resume working as expected. ::: ### Moved domains diff --git a/src/content/docs/ssl/edge-certificates/custom-certificates/remove-file-key-password.mdx b/src/content/docs/ssl/edge-certificates/custom-certificates/remove-file-key-password.mdx index cebd8da408a792..36472ba9975ba9 100644 --- a/src/content/docs/ssl/edge-certificates/custom-certificates/remove-file-key-password.mdx +++ b/src/content/docs/ssl/edge-certificates/custom-certificates/remove-file-key-password.mdx @@ -3,16 +3,14 @@ pcx_content_type: how-to title: Remove key file password sidebar: order: 7 - --- -import { Details } from "~/components" +import { Details } from "~/components"; You cannot upload a custom certificate with a password-protected key file. The process for removing the password depends on your operating system. The following examples remove the password from `example.com.key`. -
1. Open a command console. @@ -22,26 +20,24 @@ The process for removing the password depends on your operating system. The foll 3. Copy the original key. ```sh - $ cp example.com.key temp.key + cp example.com.key temp.key ``` 4. Run the following command (if using an ECDSA certificate, replace `rsa` with `ec`). ```sh - $ openssl rsa -in temp.key -out example.com.key + openssl rsa -in temp.key -out example.com.key ``` 5. When prompted in the console window, enter the original key password. 6. [Upload the file contents](/ssl/edge-certificates/custom-certificates/uploading/#upload-a-custom-certificate) to Cloudflare. -
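As a quick check before uploading (assuming the `example.com.key` produced by the steps above; use `ec` instead of `rsa` for an ECDSA key), you can confirm the password was removed:

```sh
# If the passphrase was removed, this prints the key details without prompting for a password.
openssl rsa -in example.com.key -noout -text | head -n 1
```

If `openssl` still prompts for a passphrase, the original protected key was not overwritten.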
-
-1. Go to [https://indy.fulgan.com/SSL/](https://indy.fulgan.com/SSL/) and download the latest version of OpenSSL for your x86 or x86\_64 operating system. +1. Go to [https://indy.fulgan.com/SSL/](https://indy.fulgan.com/SSL/) and download the latest version of OpenSSL for your x86 or x86_64 operating system. 2. Open the `.zip` file and extract it. @@ -50,12 +46,11 @@ The process for removing the password depends on your operating system. The foll 4. In the command window that appears, run: ```sh - $ rsa -in C:\Path\To\example.com.key -out key.pem + rsa -in C:\Path\To\example.com.key -out key.pem ``` 5. Enter the original key password when prompted by the **openssl.exe** command window. 6. [Upload](/ssl/edge-certificates/custom-certificates/uploading/#upload-a-custom-certificate) the contents of the `key.pem` file to Cloudflare. -
diff --git a/src/content/docs/ssl/edge-certificates/custom-certificates/troubleshooting.mdx b/src/content/docs/ssl/edge-certificates/custom-certificates/troubleshooting.mdx index 1db5873fc299e6..7f188a5fc96787 100644 --- a/src/content/docs/ssl/edge-certificates/custom-certificates/troubleshooting.mdx +++ b/src/content/docs/ssl/edge-certificates/custom-certificates/troubleshooting.mdx @@ -5,7 +5,6 @@ title: Troubleshooting head: - tag: title content: Troubleshooting | Custom certificates - --- ## Generic troubleshooting @@ -19,7 +18,7 @@ You can use an external tool such as the [SSLShopper Certificate Key Matcher](ht You can use `openssl` to check all the details of your certificate: ```bash -$ openssl x509 -in certificate.crt -noout -text +openssl x509 -in certificate.crt -noout -text ``` Then, make sure all the information is correct before uploading. @@ -35,7 +34,7 @@ The certificate you are trying to upload is invalid. For example, there might be Carefully check the content of the certificate. You may use `openssl` to check all the details of your certificate: ```bash -$ openssl x509 -in certificate.crt -noout -text +openssl x509 -in certificate.crt -noout -text ``` ## You have reached the maximum number of custom certificates. (Code: 1212) @@ -66,7 +65,7 @@ You are trying to upload a custom certificate that does not support any cipher t **Solution** -Modify the certificate so that it supports chromium-supported ciphers and try again. +Modify the certificate so that it supports chromium-supported ciphers and try again. ## You have reached your quota for the requested resource. (Code: 2005) @@ -113,7 +112,7 @@ Cloudflare verifies that uploaded custom certificates include a hostname for the Make sure your certificate contains a Subject Alternative Name (SAN) specifying a hostname in your zone. You can use the `openssl` command below and look for `Subject Alternative Name` in the output. ```bash -$ openssl x509 -in certificateFile.pem -noout -text +openssl x509 -in certificateFile.pem -noout -text ``` If it does not exist, you will need to request a new certificate. diff --git a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/aws-cloud-hsm.mdx b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/aws-cloud-hsm.mdx index 112b7826edf202..ecd124f7e2d102 100644 --- a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/aws-cloud-hsm.mdx +++ b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/aws-cloud-hsm.mdx @@ -3,27 +3,24 @@ pcx_content_type: tutorial title: AWS cloud HSM sidebar: order: 2 - --- :::note[Note] - This example imports an existing key pair, but you may prefer to [generate your key on the HSM](https://docs.aws.amazon.com/cloudhsm/latest/userguide/manage-keys.html). - ::: -*** +--- ## Before you start Make sure you have: -* Provisioned an [AWS CloudHSM cluster](https://docs.aws.amazon.com/cloudhsm/latest/userguide/getting-started.html) . -* Installed the [appropriate software library for PKCS#11](https://docs.aws.amazon.com/cloudhsm/latest/userguide/pkcs11-library-install.html). +- Provisioned an [AWS CloudHSM cluster](https://docs.aws.amazon.com/cloudhsm/latest/userguide/getting-started.html) . +- Installed the [appropriate software library for PKCS#11](https://docs.aws.amazon.com/cloudhsm/latest/userguide/pkcs11-library-install.html). -*** +--- ## 1. Import the public and private key to the HSM @@ -66,7 +63,7 @@ Command: logoutHSM Command: exit ``` -*** +--- ## 2. 
Modify the gokeyless config file and restart the service @@ -88,6 +85,6 @@ add: With the config file saved, restart `gokeyless` and verify it started successfully. ```sh -$ sudo systemctl restart gokeyless.service -$ sudo systemctl status gokeyless.service -l +sudo systemctl restart gokeyless.service +sudo systemctl status gokeyless.service -l ``` diff --git a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/azure-dedicated-hsm.mdx b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/azure-dedicated-hsm.mdx index 26400369ae5208..d4bb036808e038 100644 --- a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/azure-dedicated-hsm.mdx +++ b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/azure-dedicated-hsm.mdx @@ -3,21 +3,20 @@ pcx_content_type: tutorial title: Azure Dedicated HSM sidebar: order: 3 - --- This tutorial uses [Azure Dedicated HSM](https://azure.microsoft.com/en-us/services/azure-dedicated-hsm/) — a FIPS 140-2 Level 3 certified implementation based on the Gemalto SafeNet Luna a790. -*** +--- ## Before you start Make sure you have: -* Followed Microsoft's [tutorial](https://docs.microsoft.com/en-us/azure/dedicated-hsm/tutorial-deploy-hsm-powershell) for deploying HSMs into an existing virtual network using PowerShell -* Installed the [SafeNet client software](https://cpl.thalesgroup.com/node/11350) +- Followed Microsoft's [tutorial](https://docs.microsoft.com/en-us/azure/dedicated-hsm/tutorial-deploy-hsm-powershell) for deploying HSMs into an existing virtual network using PowerShell +- Installed the [SafeNet client software](https://cpl.thalesgroup.com/node/11350) -*** +--- ## 1. Create, assign, and initialize a new partition @@ -96,7 +95,7 @@ lunacm:>partition init -label KeylessSSL -domain cloudflare Command Result : No Error ``` -*** +--- ## 2. Generate a RSA key pair and certificate signing request (CSR) @@ -123,13 +122,13 @@ Please enter password for token in slot 0 : ******** Using "CKM_SHA256_RSA_PKCS" Mechanism ``` -*** +--- ## 3. Obtain and upload a signed certificate from your Certificate Authority (CA) Provide the CSR created in the previous step to your organization’s preferred CA, demonstrate control of your domain as requested, and then download the signed SSL certificates. Follow the instructions provided in [Uploading “Keyless” SSL Certificates](/ssl/keyless-ssl/configuration/cloudflare-tunnel/#step-3---upload-keyless-ssl-certificates). -*** +--- ## 4. Modify your gokeyless config file and restart the service @@ -151,6 +150,6 @@ add: With the config file saved, restart `gokeyless` and verify it started successfully. ```sh -$ sudo systemctl restart gokeyless.service -$ sudo systemctl status gokeyless.service -l +sudo systemctl restart gokeyless.service +sudo systemctl status gokeyless.service -l ``` diff --git a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/azure-managed-hsm.mdx b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/azure-managed-hsm.mdx index 06402af65f9799..cc54711d3587ad 100644 --- a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/azure-managed-hsm.mdx +++ b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/azure-managed-hsm.mdx @@ -3,33 +3,32 @@ pcx_content_type: tutorial title: Azure Managed HSM sidebar: order: 4 - --- This tutorial uses [Microsoft Azure’s Managed HSM](https://azure.microsoft.com/en-us/updates/akv-managed-hsm-public-preview/) — a FIPS 140-2 Level 3 certified implementation — to deploy a VM with the Keyless SSL daemon. 
-*** +--- ## Before you start Make sure you have: -* Followed Microsoft's [tutorial](https://docs.microsoft.com/en-us/azure/key-vault/managed-hsm/quick-create-cli) for provisioning and activating the managed HSM -* Set up a VM for your key server +- Followed Microsoft's [tutorial](https://docs.microsoft.com/en-us/azure/key-vault/managed-hsm/quick-create-cli) for provisioning and activating the managed HSM +- Set up a VM for your key server -*** +--- ## 1. Create a VM Create a VM where you will deploy the keyless daemon. -*** +--- ## 2. Deploy the keyless server Follow [these instructions](/ssl/keyless-ssl/configuration/cloudflare-tunnel/#step-4---set-up-and-activate-key-server) to deploy your keyless server. -*** +--- ## 3. Set up the Azure CLI @@ -41,19 +40,19 @@ For example, if you were using macOS: brew install azure-cli ``` -*** +--- ## 4. Set up the Managed HSM 1. Log in through the Azure CLI and create a resource group for the Managed HSM in one of the supported regions: ```sh - $ az login - $ az group create --name HSMgroup --location southcentralus + az login + az group create --name HSMgroup --location southcentralus ``` :::note - For a list of supported regions, see the [Microsoft documentation](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=key-vault). + For a list of supported regions, see the [Microsoft documentation](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=key-vault). ::: 2. [Create, provision, and activate](https://docs.microsoft.com/en-us/azure/key-vault/managed-hsm/quick-create-cli) the HSM. @@ -61,7 +60,7 @@ brew install azure-cli 3. Add your private key to the `keyvault`, which returns the URI you need for **Step 4**: ``` - $ az keyvault key import --hsm-name "KeylessHSM" --name "hsm-pub-keyless" --pem-file server.key + az keyvault key import --hsm-name "KeylessHSM" --name "hsm-pub-keyless" --pem-file server.key ``` 4. If the key server is running in an Azure VM in the same account, use **Managed services** for authorization: @@ -70,7 +69,7 @@ brew install azure-cli 2. Give your service user (associated with your VM) HSM sign permissions ``` - $ az keyvault role assignment create --hsm-name KeylessHSM --assignee $(az vm identity show --name "hsmtestvm" --resource-group "HSMgroup" --query principalId -o tsv) --scope / --role "Managed HSM Crypto User" + az keyvault role assignment create --hsm-name KeylessHSM --assignee $(az vm identity show --name "hsmtestvm" --resource-group "HSMgroup" --query principalId -o tsv) --scope / --role "Managed HSM Crypto User" ``` 5. In the `gokeyless` YAML file, add the URI from **Step 2** under `private_key_stores`. See our [README](https://github.com/cloudflare/gokeyless/blob/master/README.md) for an example. 
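If you need to retrieve the key URI again after step 3, the Azure CLI can print it. This is a sketch that assumes the HSM and key names used above:

```sh
# Print the key identifier (kid) URI of the imported key; this is the value
# referenced under private_key_stores in the gokeyless config file.
az keyvault key show --hsm-name "KeylessHSM" --name "hsm-pub-keyless" --query "key.kid" --output tsv
```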
@@ -80,6 +79,6 @@ brew install azure-cli Once you save the config file, restart `gokeyless` and verify that it started successfully: ``` -$ sudo systemctl restart gokeyless.service -$ sudo systemctl status gokeyless.service -l +sudo systemctl restart gokeyless.service +sudo systemctl status gokeyless.service -l ``` diff --git a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/configuration.mdx b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/configuration.mdx index e4b6faf5630a74..27eef2735dbcf0 100644 --- a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/configuration.mdx +++ b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/configuration.mdx @@ -3,27 +3,24 @@ pcx_content_type: reference title: Configuration sidebar: order: 1 - --- :::caution[Important] - Carefully review the manufacturer documentation for your HSM and properly restrict access to the key server. - ::: To get started with your PKCS#11 token you will need to initialize it with a private key, PIN, and token label. The instructions to do this will be specific to each hardware device, and you should follow the instructions provided by your vendor. You will also need to find the path to your `module`, a shared object file (`.so`). Having initialized your device, you can query it to check your token label with: ```sh -$ pkcs11-tool --module --list-token-slots +pkcs11-tool --module --list-token-slots ``` You will also want to check the label of the private key you imported (or generated). Run the following command and look for a `Private Key Object`: ```bash -$ pkcs11-tool --module --pin \ +pkcs11-tool --module --pin \ --list-token-slots --login --list-objects ``` @@ -41,9 +38,9 @@ The URI path component contains attributes that identify a resource. The query c Keyless requires the following three attributes be specified: -* **Module**: use `module-path` to locate the PKCS#11 module library. -* **Token**: use `serial`, `slot-id`, or `token` to specify the PKCS#11 token. -* **Slot**: use `id` or `object` to specify the PKCS#11 key pair. +- **Module**: use `module-path` to locate the PKCS#11 module library. +- **Token**: use `serial`, `slot-id`, or `token` to specify the PKCS#11 token. +- **Slot**: use `id` or `object` to specify the PKCS#11 key pair. For certain modules, a query attribute `max-sessions` is required in order to prevent opening too many sessions to the module. Certain additional attributes, such as `pin-value`, may be necessary depending on the situation. Refer to the documentation for your PKCS#11 module for more details. diff --git a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/entrust-nshield-connect.mdx b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/entrust-nshield-connect.mdx index 8abe994861041c..777322035e1c38 100644 --- a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/entrust-nshield-connect.mdx +++ b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/entrust-nshield-connect.mdx @@ -3,15 +3,12 @@ pcx_content_type: tutorial title: Entrust nShield Connect sidebar: order: 6 - --- :::note[Note] - This example assumes you have already configured the nShield Connect device and generated or imported your private keys. - ::: Since the keys are already in place, we merely need to build the configuration file that the key server will read on startup. In this example the device contains a single RSA key pair. 
@@ -19,7 +16,10 @@ Since the keys are already in place, we merely need to build the configuration f We ask `pkcs11-tool` (provided by the `opensc` package) to display the objects stored in the token: ```txt -$ pkcs11-tool --module /opt/nfast/toolkits/pkcs11/libcknfast.so -O +pkcs11-tool --module /opt/nfast/toolkits/pkcs11/libcknfast.so -O +``` + +```txt output Using slot 0 with a present token (0x1d622495) Private Key Object; RSA label: rsa-privkey @@ -49,6 +49,6 @@ add Save the config file, restart `gokeyless`, and verify it started successfully. ```sh -$ sudo systemctl restart gokeyless.service -$ sudo systemctl status gokeyless.service -l +sudo systemctl restart gokeyless.service +sudo systemctl status gokeyless.service -l ``` diff --git a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/google-cloud-hsm.mdx b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/google-cloud-hsm.mdx index 0cd902eb110500..23b97d10e887f8 100644 --- a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/google-cloud-hsm.mdx +++ b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/google-cloud-hsm.mdx @@ -3,20 +3,19 @@ pcx_content_type: tutorial title: Google Cloud HSM sidebar: order: 8 - --- This tutorial uses [Google Cloud HSM](https://cloud.google.com/kms/docs/hsm) — a FIPS 140-2 Level 3 certified implementation. -*** +--- ## Before you start Make sure that you have: -* Set up your [Google Cloud project](https://cloud.google.com/kms/docs/quickstart#before-you-begin) +- Set up your [Google Cloud project](https://cloud.google.com/kms/docs/quickstart#before-you-begin) -*** +--- ## 1. Create a key ring @@ -24,44 +23,42 @@ To set up the Google Cloud HSM, [create a key ring](https://cloud.google.com/kms :::note[Note:] - Only [certain locations](https://cloud.google.com/kms/docs/locations#hsm-regions) support Google Cloud HSM. - ::: -*** +--- ## 2. Create a key Create a key, including the following information: - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + +
FieldValue
Key ring - The key ring you created in Step 2 -
Protection levelHSM
PurposeAsymmetric Encrypt
FieldValue
Key ring + The key ring you created in Step 2 +
Protection levelHSM
PurposeAsymmetric Encrypt
-*** +--- ## 3. Import the private key @@ -69,13 +66,11 @@ After creating a key ring and key, [import the private key](https://cloud.google :::note[Note:] - You need to [convert your key](https://cloud.google.com/kms/docs/formatting-keys-for-import#formatting_asymmetric_keys) from a PEM to DER format. - ::: -*** +--- ## 4. Modify your gokeyless config file and restart the service @@ -83,7 +78,7 @@ Once you’ve imported the key, copy the **Resource name** from the UI. Then, ad With the config file saved, restart `gokeyless` and verify it started successfully. -``` -$ sudo systemctl restart gokeyless.service -$ sudo systemctl status gokeyless.service -l +```sh +sudo systemctl restart gokeyless.service +sudo systemctl status gokeyless.service -l ``` diff --git a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/ibm-cloud-hsm.mdx b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/ibm-cloud-hsm.mdx index f0d59610ab64b2..1509eb96de8393 100644 --- a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/ibm-cloud-hsm.mdx +++ b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/ibm-cloud-hsm.mdx @@ -3,21 +3,20 @@ pcx_content_type: tutorial title: IBM cloud HSM sidebar: order: 7 - --- The example below was tested using [IBM Cloud HSM 7.0](https://console.bluemix.net/docs/infrastructure/hardware-security-modules/about.html#about-ibm-cloud-hsm), a FIPS 140-2 Level 3 certified implementation based on the Gemalto SafeNet Luna a750. -*** +--- ## Before you start Make sure that you have: -* Initialized [your device](https://console.bluemix.net/docs/infrastructure/hardware-security-modules/initialize_hsm.html#initializing-the-ibm-cloud-hsm) -* Installed the [SafeNet client software](https://cpl.thalesgroup.com/node/11350) +- Initialized [your device](https://console.bluemix.net/docs/infrastructure/hardware-security-modules/initialize_hsm.html#initializing-the-ibm-cloud-hsm) +- Installed the [SafeNet client software](https://cpl.thalesgroup.com/node/11350) -*** +--- ## 1. Create, assign, and initialize a new partition @@ -85,7 +84,7 @@ lunacm:>partition init -label KeylessSSL -domain cloudflare Command Result : No Error ``` -*** +--- ## 2. Generate RSA and ECDSA key pairs and certificate signing requests (CSRs) @@ -120,13 +119,13 @@ Please enter password for token in slot 0 : ******** Using "CKM_ECDSA_SHA256" Mechanism ``` -*** +--- ## 3. Obtain and upload signed certificates from your Certificate Authority (CA) Provide the CSRs created in the previous step to your organization’s preferred CA, demonstrate control of your domain as requested, and then download the signed SSL certificates. Follow the instructions provided in [Uploading “Keyless” SSL Certificates](/ssl/keyless-ssl/configuration/cloudflare-tunnel/#step-3---upload-keyless-ssl-certificates). -*** +--- ## 4. Modify your gokeyless config file and restart the service @@ -149,6 +148,6 @@ add: With the config file saved, restart `gokeyless` and verify it started successfully. 
```sh -$ sudo systemctl restart gokeyless.service -$ sudo systemctl status gokeyless.service -l +sudo systemctl restart gokeyless.service +sudo systemctl status gokeyless.service -l ``` diff --git a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/softhsmv2.mdx b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/softhsmv2.mdx index 307be8e4cf86a2..11b54456794486 100644 --- a/src/content/docs/ssl/keyless-ssl/hardware-security-modules/softhsmv2.mdx +++ b/src/content/docs/ssl/keyless-ssl/hardware-security-modules/softhsmv2.mdx @@ -3,60 +3,60 @@ pcx_content_type: tutorial title: SoftHSMv2 sidebar: order: 5 - --- :::caution[Important] - SoftHSMv2 should not be considered any more secure than storing private keys directly on disk. No attempt is made below to secure this installation; it is provided simply for demonstration purposes. - ::: -*** +--- ## 1. Install and configure SoftHSMv2 First, we install SoftHSMv2 and configure it to store tokens in the default location `/var/lib/softhsm/tokens`. We also need to give the `softhsm` group permission to this directory as this is how the `keyless` user will access this directory. ```bash -$ sudo apt-get install -y softhsm2 opensc +sudo apt-get install -y softhsm2 opensc #... -$ cat < -s 0 -w keyless-$(date +%s).pcap port 2407` + `sudo tcpdump -nni -s 0 -w keyless-$(date +%s).pcap port 2407` ## Clients are connecting, but immediately aborting @@ -49,8 +48,10 @@ If you run `gokeyless` with debug logging enabled, and you see logs like this: These logs likely indicate that the key server is not using an appropriate server or .`PEM` file and the client is aborting the connection after the certificate exchange. The certificate must be signed by the keyless CA and the SANs must include the hostname of the keyless server. Here is a valid example for a keyless server located at `11aa40b4a5db06d4889e48e2f.example.com` (note the Subject Alternative Name and Authority Key Identifier): ```bash -$ openssl x509 -in server.pem -noout -text -certopt no_subject,no_header,no_version,no_serial,no_signame,no_validity,no_subject,no_issuer,no_pubkey,no_sigdump,no_aux | sed -e 's/^ //' +openssl x509 -in server.pem -noout -text -certopt no_subject,no_header,no_version,no_serial,no_signame,no_validity,no_subject,no_issuer,no_pubkey,no_sigdump,no_aux | sed -e 's/^ //' +``` +```bash output X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment diff --git a/src/content/docs/ssl/reference/certificate-statuses.mdx b/src/content/docs/ssl/reference/certificate-statuses.mdx index f9ccdf70ffa003..ffd5d76908bfc3 100644 --- a/src/content/docs/ssl/reference/certificate-statuses.mdx +++ b/src/content/docs/ssl/reference/certificate-statuses.mdx @@ -3,7 +3,6 @@ pcx_content_type: reference title: Certificate statuses sidebar: order: 8 - --- Certificates statuses show which stage of the issuance process each certificate is in. @@ -34,21 +33,21 @@ If your zone is already active when you upload a custom certificate, you will no When you create certificates in your [staging environment](/ssl/edge-certificates/staging-environment/), those staging certificates have their own set of statuses: -* **Staging deployment**: Similar to **Pending Deployment**, but for staging certificates. -* **Staging active**: Similar to **Active**, but for staging certificates. -* **Deactivating**: Your staging certificate is in the process of becoming **Inactive**. -* **Inactive**: Your staging certificate is not at the edge, but you can deploy it if needed. 
+- **Staging deployment**: Similar to **Pending Deployment**, but for staging certificates.
+- **Staging active**: Similar to **Active**, but for staging certificates.
+- **Deactivating**: Your staging certificate is in the process of becoming **Inactive**.
+- **Inactive**: Your staging certificate is not at the edge, but you can deploy it if needed.

## Client certificates

When you use [client certificates](/ssl/client-certificates/), those client certificates have their own set of statuses:

-* **Active**: The client certificate is active.
-* **Revoked**: The client certificate is revoked.
-* **Pending Reactivation**: The client certificate was revoked, but it is being restored.
-* **Pending Revocation**: The client certificate was active, but it is being revoked.
+- **Active**: The client certificate is active.
+- **Revoked**: The client certificate is revoked.
+- **Pending Reactivation**: The client certificate was revoked, but it is being restored.
+- **Pending Revocation**: The client certificate was active, but it is being revoked.

-***
+---

## Monitor certificate statuses

@@ -69,5 +68,5 @@ For more details on certificate validation, refer to [Issue and validate certifi

To view certificates, use `openssl` or your browser. The command below can be used in advance of your customer pointing the `app.example.com` hostname to the edge ([provided validation was completed](/cloudflare-for-platforms/cloudflare-for-saas/security/certificate-management/issue-and-validate/)).

```sh
-$ openssl s_client -servername app.example.com -connect $CNAME_TARGET:443 </dev/null | openssl x509 -noout -text | grep app.example.com
+openssl s_client -servername app.example.com -connect $CNAME_TARGET:443 </dev/null | openssl x509 -noout -text | grep app.example.com
```
diff --git a/src/content/docs/ssl/troubleshooting/general-ssl-errors.mdx b/src/content/docs/ssl/troubleshooting/general-ssl-errors.mdx
index 47fa7d9b86403d..48f291b1d1663a 100644
--- a/src/content/docs/ssl/troubleshooting/general-ssl-errors.mdx
+++ b/src/content/docs/ssl/troubleshooting/general-ssl-errors.mdx
@@ -4,10 +4,9 @@ source: https://support.cloudflare.com/hc/en-us/articles/200170566-Troubleshooti
 title: General SSL errors
 head: []
 description: Learn how to troubleshoot various SSL/TLS errors with Cloudflare.
-
---

-import { GlossaryTooltip } from "~/components"
+import { GlossaryTooltip } from "~/components";

## Outdated browsers

@@ -15,26 +14,24 @@ import { GlossaryTooltip } from "~/components"

Until Cloudflare provides an SSL certificate for your domain, the following errors may appear in various browsers for HTTPS traffic:

-* **Firefox**: `_ssl_error_bad_cert_domain` / `This connection is untrusted`
-* **Chrome**: `Your connection is not private`
-* **Safari**: `Safari can't verify the identity of the website`
-* **Edge / Internet Explorer**: `There is a problem with this website's security certificate`
+- **Firefox**: `_ssl_error_bad_cert_domain` / `This connection is untrusted`
+- **Chrome**: `Your connection is not private`
+- **Safari**: `Safari can't verify the identity of the website`
+- **Edge / Internet Explorer**: `There is a problem with this website's security certificate`

### Resolution

-Even with a Cloudflare SSL certificate provisioned for your domain, older browsers display errors about untrusted SSL certificates because they do not [support the Server Name Indication (SNI) protocol](https://en.wikipedia.org/wiki/Server_Name_Indication#Support) used by Cloudflare Universal SSL certificates. 
+Even with a Cloudflare SSL certificate provisioned for your domain, older browsers display errors about untrusted SSL certificates because they do not [support the Server Name Indication (SNI) protocol](https://en.wikipedia.org/wiki/Server_Name_Indication#Support) used by Cloudflare Universal SSL certificates. To solve, [determine if the browser supports SNI](https://caniuse.com/#feat=sni). If not, upgrade your browser. :::note - It is possible for [Cloudflare Support](/support/contacting-cloudflare-support/) to enable non-SNI support for paid plans using any certificate. - ::: -*** +--- ## Only some of your subdomains return SSL errors @@ -44,12 +41,12 @@ It is possible for [Cloudflare Support](/support/contacting-cloudflare-support/ ### Resolution -* Purchase an [advanced certificate](/ssl/edge-certificates/advanced-certificate-manager) that covers `dev.www.example.com`. -* Upload a [Custom SSL certificate](/ssl/edge-certificates/custom-certificates) that covers `dev.www.example.com`. -* Enable [Total TLS](/ssl/edge-certificates/additional-options/total-tls). -* If you have a valid certificate for the second level subdomains at your origin web server, change the DNS record for `dev.www` to [DNS Only (grey cloud)](/dns/manage-dns-records/reference/proxied-dns-records/). +- Purchase an [advanced certificate](/ssl/edge-certificates/advanced-certificate-manager) that covers `dev.www.example.com`. +- Upload a [Custom SSL certificate](/ssl/edge-certificates/custom-certificates) that covers `dev.www.example.com`. +- Enable [Total TLS](/ssl/edge-certificates/additional-options/total-tls). +- If you have a valid certificate for the second level subdomains at your origin web server, change the DNS record for `dev.www` to [DNS Only (grey cloud)](/dns/manage-dns-records/reference/proxied-dns-records/). -*** +--- ## Your Cloudflare Universal SSL certificate is not active @@ -65,8 +62,8 @@ Our SSL vendors verify each SSL certificate request before Cloudflare can issue If your Cloudflare SSL certificate is not issued within 24 hours of Cloudflare domain activation: -* If your origin web server has a valid SSL certificate, [temporarily pause Cloudflare](/fundamentals/setup/manage-domains/pause-cloudflare/), and -* [Contact Support](/support/contacting-cloudflare-support/) and provide a screenshot of the errors. +- If your origin web server has a valid SSL certificate, [temporarily pause Cloudflare](/fundamentals/setup/manage-domains/pause-cloudflare/), and +- [Contact Support](/support/contacting-cloudflare-support/) and provide a screenshot of the errors. Temporarily pausing Cloudflare will allow the HTTPS traffic to be served properly from your origin web server while the support team investigates the issue. @@ -80,7 +77,7 @@ Cloudflare SSL/TLS certificates only apply for traffic [proxied through Cloudfla If your domain is on a partial setup, confirm whether you have CAA DNS records enabled at your current hosting provider. If so, ensure you [specify the Certificate Authorities that Cloudflare uses](/ssl/edge-certificates/caa-records/) to provision certificates for your domain. -*** +--- ## OCSP response error @@ -95,7 +92,7 @@ This error is either caused by the browser version or an issue requiring attenti 1. The output from [https://aboutmybrowser.com/](https://aboutmybrowser.com/)  . 2. The output of `https:///cdn-cgi/trace` from the visitor’s browser. -*** +--- ## Incorrect HSTS headers @@ -112,7 +109,7 @@ You may have configured [HTTP Response Header Modification Rules](/rules/transfo 3. 
Delete (or edit) the rule so that the HSTS configuration settings defined in the **SSL/TLS** app are applied. 4. Repeat this procedure for the other HSTS header. -*** +--- ## Other errors @@ -122,28 +119,26 @@ You are getting the error `NET::ERR_CERT_COMMON_NAME_INVALID` in your browser. ### Resolution -* Make sure that you are using a browser that supports [SNI (Server Name Indication)](https://www.cloudflare.com/learning/ssl/what-is-sni/). Refer to [Browser compatibility](/ssl/reference/browser-compatibility/) for more details. -* Ensure that the hostname you are accessing is set to [proxied (orange cloud)](/dns/manage-dns-records/reference/proxied-dns-records/) in the DNS tab of your Cloudflare Dashboard. -* If the hostname you are accessing is a second level subdomain (such as `dev.www.example.com`), you'll need to either: - * Purchase an [advanced certificate](/ssl/edge-certificates/advanced-certificate-manager) that covers `dev.www.example.com`. - * Upload a [Custom SSL certificate](/ssl/edge-certificates/custom-certificates) that covers `dev.www.example.com`. - * Enable [Total TLS](/ssl/edge-certificates/additional-options/total-tls) +- Make sure that you are using a browser that supports [SNI (Server Name Indication)](https://www.cloudflare.com/learning/ssl/what-is-sni/). Refer to [Browser compatibility](/ssl/reference/browser-compatibility/) for more details. +- Ensure that the hostname you are accessing is set to [proxied (orange cloud)](/dns/manage-dns-records/reference/proxied-dns-records/) in the DNS tab of your Cloudflare Dashboard. +- If the hostname you are accessing is a second level subdomain (such as `dev.www.example.com`), you'll need to either: + - Purchase an [advanced certificate](/ssl/edge-certificates/advanced-certificate-manager) that covers `dev.www.example.com`. + - Upload a [Custom SSL certificate](/ssl/edge-certificates/custom-certificates) that covers `dev.www.example.com`. 
+ - Enable [Total TLS](/ssl/edge-certificates/additional-options/total-tls) :::note - The following [`openssl`](https://www.openssl.org/) command might help troubleshooting TLS handshake between the client and the Cloudflare network edge: -```txt +```sh -$ openssl s_client -connect example.com:443 -servername example.com version +openssl s_client -connect example.com:443 -servername example.com version ``` - ::: -*** +--- ## Kaspersky Antivirus diff --git a/src/content/docs/stream/examples/rtmps_playback.mdx b/src/content/docs/stream/examples/rtmps_playback.mdx index 7d7760c8b9d12a..8e1b06dbdee759 100644 --- a/src/content/docs/stream/examples/rtmps_playback.mdx +++ b/src/content/docs/stream/examples/rtmps_playback.mdx @@ -8,17 +8,16 @@ title: RTMPS playback sidebar: order: 8 description: Example of sub 1s latency video playback using RTMPS and ffplay - --- -import { Render } from "~/components" +import { Render } from "~/components"; -Copy the RTMPS *playback* key for your live input from the [Stream Dashboard](https://dash.cloudflare.com/?to=/:account/stream/inputs) or the [Stream API](/stream/stream-live/start-stream-live/#use-the-api), and paste it into the URL below, replacing ``: +Copy the RTMPS _playback_ key for your live input from the [Stream Dashboard](https://dash.cloudflare.com/?to=/:account/stream/inputs) or the [Stream API](/stream/stream-live/start-stream-live/#use-the-api), and paste it into the URL below, replacing ``: ```sh title="RTMPS playback with ffplay" -$ ffplay -analyzeduration 1 -fflags -nobuffer -sync ext 'rtmps://live.cloudflare.com:443/live/' +ffplay -analyzeduration 1 -fflags -nobuffer -sync ext 'rtmps://live.cloudflare.com:443/live/' ``` For more, refer to [Play live video in native apps with less than one second latency](/stream/viewing-videos/using-own-player/#play-live-video-in-native-apps-with-less-than-1-second-latency). diff --git a/src/content/docs/stream/examples/srt_playback.mdx b/src/content/docs/stream/examples/srt_playback.mdx index 3f1fa6694a9093..81286444b140d3 100644 --- a/src/content/docs/stream/examples/srt_playback.mdx +++ b/src/content/docs/stream/examples/srt_playback.mdx @@ -8,17 +8,16 @@ title: SRT playback sidebar: order: 9 description: Example of sub 1s latency video playback using SRT and ffplay - --- -import { Render } from "~/components" +import { Render } from "~/components"; Copy the **SRT Playback URL** for your live input from the [Stream Dashboard](https://dash.cloudflare.com/?to=/:account/stream/inputs) or the [Stream API](/stream/stream-live/start-stream-live/#use-the-api), and paste it into the URL below, replacing ``: ```sh title="SRT playback with ffplay" -$ ffplay -analyzeduration 1 -fflags -nobuffer -probesize 32 -sync ext '' +ffplay -analyzeduration 1 -fflags -nobuffer -probesize 32 -sync ext '' ``` For more, refer to [Play live video in native apps with less than one second latency](/stream/viewing-videos/using-own-player/#play-live-video-in-native-apps-with-less-than-1-second-latency). 
diff --git a/src/content/docs/stream/stream-live/start-stream-live.mdx b/src/content/docs/stream/stream-live/start-stream-live.mdx index 0974c6794640dd..54b65ad6aa6af5 100644 --- a/src/content/docs/stream/stream-live/start-stream-live.mdx +++ b/src/content/docs/stream/stream-live/start-stream-live.mdx @@ -6,10 +6,9 @@ learning_center: link: https://www.cloudflare.com/learning/video/what-is-live-streaming/ sidebar: order: 1 - --- -import { InlineBadge, Render, Badge } from "~/components" +import { InlineBadge, Render, Badge } from "~/components"; After you subscribe to Stream, you can create Live Inputs in Dash or via the API. Broadcast to your new Live Input using RTMPS or SRT. SRT supports newer video codecs and makes using accessibility features, such as captions and multiple audio tracks, easier. @@ -44,25 +43,25 @@ https://api.cloudflare.com/client/v4/accounts/{account_id}/stream/live_inputs ```json title="Response" { - "uid": "f256e6ea9341d51eea64c9454659e576", - "rtmps": { - "url": "rtmps://live.cloudflare.com:443/live/", - "streamKey": "MTQ0MTcjM3MjI1NDE3ODIyNTI1MjYyMjE4NTI2ODI1NDcxMzUyMzcf256e6ea9351d51eea64c9454659e576" - }, - "created": "2021-09-23T05:05:53.451415Z", - "modified": "2021-09-23T05:05:53.451415Z", - "meta": { - "name": "test stream" - }, - "status": null, - "recording": { - "mode": "automatic", - "requireSignedURLs": false, - "allowedOrigins": null, - "hideLiveViewerCount": false - }, - "deleteRecordingAfterDays": null, - "preferLowLatency": false + "uid": "f256e6ea9341d51eea64c9454659e576", + "rtmps": { + "url": "rtmps://live.cloudflare.com:443/live/", + "streamKey": "MTQ0MTcjM3MjI1NDE3ODIyNTI1MjYyMjE4NTI2ODI1NDcxMzUyMzcf256e6ea9351d51eea64c9454659e576" + }, + "created": "2021-09-23T05:05:53.451415Z", + "modified": "2021-09-23T05:05:53.451415Z", + "meta": { + "name": "test stream" + }, + "status": null, + "recording": { + "mode": "automatic", + "requireSignedURLs": false, + "allowedOrigins": null, + "hideLiveViewerCount": false + }, + "deleteRecordingAfterDays": null, + "preferLowLatency": false } ``` @@ -70,47 +69,46 @@ https://api.cloudflare.com/client/v4/accounts/{account_id}/stream/live_inputs [API Reference Docs for `/live_inputs`](/api/operations/stream-live-inputs-create-a-live-input) -* `preferLowLatency` boolean default: `false` +- `preferLowLatency` boolean default: `false` - * When set to true, this live input will be enabled for the beta Low-Latency HLS pipeline. The Stream built-in player will automatically use LL-HLS when possible. (Recording `mode` property must also be set to `automatic`.) + - When set to true, this live input will be enabled for the beta Low-Latency HLS pipeline. The Stream built-in player will automatically use LL-HLS when possible. (Recording `mode` property must also be set to `automatic`.) -* `deleteRecordingAfterDays` integer default: `null` (any) +- `deleteRecordingAfterDays` integer default: `null` (any) - * Specifies a date and time when the recording, not the input, will be deleted. This property applies from the time the recording is made available and ready to stream. After the recording is deleted, it is no longer viewable and no longer counts towards storage for billing. Minimum value is `30`, maximum value is `1096`. + - Specifies a date and time when the recording, not the input, will be deleted. This property applies from the time the recording is made available and ready to stream. After the recording is deleted, it is no longer viewable and no longer counts towards storage for billing. 
Minimum value is `30`, maximum value is `1096`.

  When the stream ends, a `scheduledDeletion` timestamp is calculated using the `deleteRecordingAfterDays` value if present. Note that if the value is added to a live input while a stream is live, the property will only apply to future streams.

-* `timeoutSeconds` integer default: `0`
+- `timeoutSeconds` integer default: `0`

-  * The `timeoutSeconds` property specifies how long a live feed can be disconnected before it results in a new video being created.
+  - The `timeoutSeconds` property specifies how long a live feed can be disconnected before it results in a new video being created.

-The following four properties are nested under the `recoring` object.
+The following four properties are nested under the `recording` object.

-* `mode` string default: `off`
-
-  * When the mode property is set to `automatic`, the live stream will be automatically available for viewing using HLS/DASH. In addition, the live stream will be automatically recorded for later replays. By default, recording mode is set to `off`, and the input will not be recorded or available for playback.
+- `mode` string default: `off`

-* `requireSignedURLs` boolean default: `false`
+  - When the mode property is set to `automatic`, the live stream will be automatically available for viewing using HLS/DASH. In addition, the live stream will be automatically recorded for later replays. By default, recording mode is set to `off`, and the input will not be recorded or available for playback.

-  * The `requireSignedURLs` property indicates if signed URLs are required to view the video. This setting is applied by default to all videos recorded from the input. In addition, if viewing a video via the live input ID, this field takes effect over any video-level settings.
+- `requireSignedURLs` boolean default: `false`

-* `allowedOrigins` integer default: `null` (any)
+  - The `requireSignedURLs` property indicates if signed URLs are required to view the video. This setting is applied by default to all videos recorded from the input. In addition, if viewing a video via the live input ID, this field takes effect over any video-level settings.

-  * The `allowedOrigins` property can optionally be invoked to provide a list of allowed origins. This setting is applied by default to all videos recorded from the input. In addition, if viewing a video via the live input ID, this field takes effect over any video-level settings.
+- `allowedOrigins` array default: `null` (any)

-* `hideLiveViewerCount` boolean default: `false`
+  - The `allowedOrigins` property can optionally be invoked to provide a list of allowed origins. This setting is applied by default to all videos recorded from the input. In addition, if viewing a video via the live input ID, this field takes effect over any video-level settings.

-  * Restrict access to the live viewer count and remove the value from the player.
+- `hideLiveViewerCount` boolean default: `false`

+  - Restrict access to the live viewer count and remove the value from the player.
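As an illustration, the request below combines several of these options when creating a live input. It is a sketch only: the account ID and token are placeholders, and any subset of the properties can be sent.

```sh
# Create a live input that records automatically, requires signed playback URLs,
# tolerates 30-second disconnects, and deletes recordings after 45 days.
curl https://api.cloudflare.com/client/v4/accounts/{account_id}/stream/live_inputs \
  --header "Authorization: Bearer <API_TOKEN>" \
  --data '{"meta": {"name": "test stream"}, "recording": {"mode": "automatic", "requireSignedURLs": true, "timeoutSeconds": 30}, "deleteRecordingAfterDays": 45, "preferLowLatency": true}'
```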
## Manage live inputs You can update live inputs by making a `PUT` request: ```bash title="Request" -$ curl --request PUT \ +curl --request PUT \ https://api.cloudflare.com/client/v4/accounts/{account_id}/stream/live_inputs/{input_id} \ --header "Authorization: Bearer " \ --data '{"meta": {"name":"test stream 1"},"recording": { "mode": "automatic", "timeoutSeconds": 10 }}' @@ -119,7 +117,7 @@ https://api.cloudflare.com/client/v4/accounts/{account_id}/stream/live_inputs/{i Delete a live input by making a `DELETE` request: ```bash title="Request" -$ curl --request DELETE \ +curl --request DELETE \ https://api.cloudflare.com/client/v4/accounts/{account_id}/stream/live_inputs/{input_id} \ --header "Authorization: Bearer " ``` @@ -128,23 +126,23 @@ https://api.cloudflare.com/client/v4/accounts/{account_id}/stream/live_inputs/{i ### Recommendations -* Your creators should use an appropriate bitrate for their live streams, typically well under 12Mbps (12000Kbps). High motion, high frame rate content typically should use a higher bitrate, while low motion content like slide presentations should use a lower bitrate. -* Your creators should use a [GOP duration](https://en.wikipedia.org/wiki/Group_of_pictures) (keyframe interval) of between 2 to 8 seconds. The default in most encoding software and hardware, including Open Broadcaster Software (OBS), is within this range. Setting a lower GOP duration will reduce latency for viewers, while also reducing encoding efficiency. Setting a higher GOP duration will improve encoding efficiency, while increasing latency for viewers. This is a tradeoff inherent to video encoding, and not a limitation of Cloudflare Stream. -* When possible, select CBR (constant bitrate) instead of VBR (variable bitrate) as CBR helps to ensure a stable streaming experience while preventing buffering and interruptions. +- Your creators should use an appropriate bitrate for their live streams, typically well under 12Mbps (12000Kbps). High motion, high frame rate content typically should use a higher bitrate, while low motion content like slide presentations should use a lower bitrate. +- Your creators should use a [GOP duration](https://en.wikipedia.org/wiki/Group_of_pictures) (keyframe interval) of between 2 to 8 seconds. The default in most encoding software and hardware, including Open Broadcaster Software (OBS), is within this range. Setting a lower GOP duration will reduce latency for viewers, while also reducing encoding efficiency. Setting a higher GOP duration will improve encoding efficiency, while increasing latency for viewers. This is a tradeoff inherent to video encoding, and not a limitation of Cloudflare Stream. +- When possible, select CBR (constant bitrate) instead of VBR (variable bitrate) as CBR helps to ensure a stable streaming experience while preventing buffering and interruptions. #### Low-Latency HLS broadcast recommendations -* For lowest latency, use a GOP size (keyframe interval) of 1 or 2 seconds. -* Broadcast to the RTMP endpoint if possible. -* If using OBS, select the "ultra low" latency profile. +- For lowest latency, use a GOP size (keyframe interval) of 1 or 2 seconds. +- Broadcast to the RTMP endpoint if possible. +- If using OBS, select the "ultra low" latency profile. ### Requirements -* Closed GOPs are required. This means that if there are any B frames in the video, they should always refer to frames within the same GOP. This setting is the default in most encoding software and hardware, including [OBS Studio](https://obsproject.com/). 
-* Stream Live only supports H.264 video and AAC audio codecs as inputs. This requirement does not apply to inputs that are relayed to Stream Connect outputs. Stream Live supports ADTS but does not presently support LATM.
-* Clients must be configured to reconnect when a disconnection occurs. Stream Live is designed to handle reconnection gracefully by continuing the live stream.
+- Closed GOPs are required. This means that if there are any B frames in the video, they should always refer to frames within the same GOP. This setting is the default in most encoding software and hardware, including [OBS Studio](https://obsproject.com/).
+- Stream Live only supports H.264 video and AAC audio codecs as inputs. This requirement does not apply to inputs that are relayed to Stream Connect outputs. Stream Live supports ADTS but does not presently support LATM.
+- Clients must be configured to reconnect when a disconnection occurs. Stream Live is designed to handle reconnection gracefully by continuing the live stream.

### Limitations

-* Watermarks cannot yet be used with live videos.
-* If a live video exceeds seven days in length, the recording will be truncated to seven days. Only the first seven days of live video content will be recorded.
+- Watermarks cannot yet be used with live videos.
+- If a live video exceeds seven days in length, the recording will be truncated to seven days. Only the first seven days of live video content will be recorded.
diff --git a/src/content/docs/stream/uploading-videos/upload-video-file.mdx b/src/content/docs/stream/uploading-videos/upload-video-file.mdx
index ad5e4225b419f0..c230db34c58e5f 100644
--- a/src/content/docs/stream/uploading-videos/upload-video-file.mdx
+++ b/src/content/docs/stream/uploading-videos/upload-video-file.mdx
@@ -3,7 +3,6 @@ pcx_content_type: how-to
 title: Upload a video file
 sidebar:
   order: 3
-
---

## Basic Uploads (for small videos)
@@ -23,10 +22,8 @@ https://api.cloudflare.com/client/v4/accounts//stream

-:::undefined
+:::note

-
Note that cURL `-F` flag automatically configures the content-type header and maps `skiing.mp4` to a form input called `file`.

-
:::

## Resumable uploads with tus (for large files)
@@ -39,24 +36,20 @@ Note that cURL `-F` flag automatically configures the content-type header and ma

-:::undefined
+:::note

-
Important: Cloudflare Stream requires a minimum chunk size of 5,242,880 bytes when using TUS, unless the entire file is less than this amount. We recommend increasing the chunk size to 52,428,800 bytes for better performance when the client connection is expected to be reliable. Maximum chunk size can be 209,715,200 bytes.

-
:::

-:::undefined
+:::note

-
Important: Cloudflare Stream requires a chunk size divisible by 256KiB (256x1024 bytes). Please round your desired chunk size to the nearest multiple of 256KiB. The final chunk of an upload or uploads that fit within a single chunk are exempt from this requirement.

-
:::

### Specifying upload options
@@ -67,33 +60,29 @@ The tus protocol allows you to add optional parameters [in the `Upload-Metadata`

Setting arbitrary metadata values in the `Upload-Metadata` header sets values the [meta key in Stream API](/api/operations/stream-videos-list-videos).

+- `name`

+  - Setting this key will set `meta.name` in the API and display the value as the name of the video in the dashboard.

-* `name`

-  * Setting this key will set `meta.name` in the API and display the value as the name of the video in the dashboard.
-
-* `requiresignedurls`
-
-  * If this key is present, the video playback for this video will be required to use signed urls after upload.
-
-* `scheduleddeletion`
-  * Specifies a date and time when a video will be deleted. After a video is deleted, it is no longer viewable and no longer counts towards storage for billing. The specified date and time cannot be earlier than 30 days or later than 1096 days from the video's created timestamp.
-* `allowedorigins`
-  * An array of strings listing origins allowed to display the video. This will set the [allowed origins setting](/stream/viewing-videos/securing-your-stream/#security-considerations) for the video.
-* `thumbnailtimestamppct`
-  * Specify the default thumbnail [timestamp percentage](/stream/viewing-videos/displaying-thumbnails/). Note that percentage is a floating point value between 0.0 and 1.0.
-* `watermark`
-  * The watermark profile UID.
+- `requiresignedurls`

+  - If this key is present, playback of this video will require signed URLs after upload.

+- `scheduleddeletion`

+  - Specifies a date and time when a video will be deleted. After a video is deleted, it is no longer viewable and no longer counts towards storage for billing. The specified date and time cannot be earlier than 30 days or later than 1096 days from the video's created timestamp.

+- `allowedorigins`

+  - An array of strings listing origins allowed to display the video. This will set the [allowed origins setting](/stream/viewing-videos/securing-your-stream/#security-considerations) for the video.

+- `thumbnailtimestamppct`

+  - Specify the default thumbnail [timestamp percentage](/stream/viewing-videos/displaying-thumbnails/). Note that percentage is a floating point value between 0.0 and 1.0.

+- `watermark`
+  - The watermark profile UID.

### Set creator property

@@ -118,11 +107,11 @@ stream-media-id: cab807e0c477d01baq20f66c3d1dfc26cf

You will also need to download a tus client. This tutorial will use the [tus Python client](https://github.com/tus/tus-py-client), available through pip, Python's package manager.

```sh
-$ pip install -U tus.py
+pip install -U tus.py
```

```sh
-$ tus-upload --chunk-size 52428800 --header Authorization "Bearer " https://api.cloudflare.com/client/v4/accounts//stream
+tus-upload --chunk-size 52428800 --header Authorization "Bearer " https://api.cloudflare.com/client/v4/accounts//stream
```

At the beginning of the response from tus, you’ll see the endpoint for getting information about your newly uploaded video.

@@ -197,57 +186,57 @@ Please see [go-tus](https://github.com/eventials/go-tus) on GitHub for functiona

1. Install tus-js-client

```sh
-$ npm install tus-js-client
+npm install tus-js-client
```

2. Set up an index.js and configure:

-* API endpoint with your Cloudflare Account ID
-* Request headers to include a API token
+- API endpoint with your Cloudflare Account ID
+- Request headers to include an API token

```js
-var fs = require('fs');
-var tus = require('tus-js-client');
+var fs = require("fs");
+var tus = require("tus-js-client");

// specify location of file you'd like to upload below
-var path = __dirname + '/test.mp4';
+var path = __dirname + "/test.mp4";

var file = fs.createReadStream(path);
var size = fs.statSync(path).size;
-var mediaId = '';
+var mediaId = "";

var options = {
-  endpoint: 'https://api.cloudflare.com/client/v4/accounts//stream',
-  headers: {
-    Authorization: 'Bearer ',
-  },
-  chunkSize: 50 * 1024 * 1024, // Required a minimum chunk size of 5MB, here we use 50MB.
- retryDelays: [0, 3000, 5000, 10000, 20000], // Indicates to tus-js-client the delays after which it will retry if the upload fails - metadata: { - name: 'test.mp4', - filetype: 'video/mp4', - // Optional if you want to include a watermark - // watermark: '', - }, - uploadSize: size, - onError: function (error) { - throw error; - }, - onProgress: function (bytesUploaded, bytesTotal) { - var percentage = ((bytesUploaded / bytesTotal) * 100).toFixed(2); - console.log(bytesUploaded, bytesTotal, percentage + '%'); - }, - onSuccess: function () { - console.log('Upload finished'); - }, - onAfterResponse: function (req, res) { - return new Promise(resolve => { - var mediaIdHeader = res.getHeader('stream-media-id'); - if (mediaIdHeader) { - mediaId = mediaIdHeader; - } - resolve(); - }); - }, + endpoint: "https://api.cloudflare.com/client/v4/accounts//stream", + headers: { + Authorization: "Bearer ", + }, + chunkSize: 50 * 1024 * 1024, // Required a minimum chunk size of 5MB, here we use 50MB. + retryDelays: [0, 3000, 5000, 10000, 20000], // Indicates to tus-js-client the delays after which it will retry if the upload fails + metadata: { + name: "test.mp4", + filetype: "video/mp4", + // Optional if you want to include a watermark + // watermark: '', + }, + uploadSize: size, + onError: function (error) { + throw error; + }, + onProgress: function (bytesUploaded, bytesTotal) { + var percentage = ((bytesUploaded / bytesTotal) * 100).toFixed(2); + console.log(bytesUploaded, bytesTotal, percentage + "%"); + }, + onSuccess: function () { + console.log("Upload finished"); + }, + onAfterResponse: function (req, res) { + return new Promise((resolve) => { + var mediaIdHeader = res.getHeader("stream-media-id"); + if (mediaIdHeader) { + mediaId = mediaIdHeader; + } + resolve(); + }); + }, }; var upload = new tus.Upload(file, options); diff --git a/src/content/docs/stream/viewing-videos/download-videos.mdx b/src/content/docs/stream/viewing-videos/download-videos.mdx index c55e771a8d4cb6..14550c98b04927 100644 --- a/src/content/docs/stream/viewing-videos/download-videos.mdx +++ b/src/content/docs/stream/viewing-videos/download-videos.mdx @@ -3,7 +3,6 @@ title: Download videos pcx_content_type: how-to sidebar: order: 6 - --- When you upload a video to Stream, it can be streamed using HLS/DASH. However, for certain use-cases (such as offline viewing), you may want to download the MP4. 
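The steps below walk through the individual curl calls; as a combined sketch, the whole flow might look like the following. This is a rough illustration only: `<ACCOUNT_ID>`, `<VIDEO_UID>`, `<CODE>`, and `<API_TOKEN>` are placeholders you must supply, and the `jq` dependency and the five-second polling interval are assumptions of this sketch, not requirements of the API.

```bash
# Enable MP4 downloads for a video; this starts generating the file.
curl --request POST \
  "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/stream/<VIDEO_UID>/downloads" \
  --header "Authorization: Bearer <API_TOKEN>"

# Poll the same endpoint until the default download reports "ready".
until curl --silent \
  "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/stream/<VIDEO_UID>/downloads" \
  --header "Authorization: Bearer <API_TOKEN>" |
  jq --exit-status '.result.default.status == "ready"' > /dev/null; do
  sleep 5
done

# Fetch the finished MP4 from the URL reported in the response.
curl -L "https://customer-<CODE>.cloudflarestream.com/<VIDEO_UID>/downloads/default.mp4" --output download.mp4
```

Polling is needed because MP4 generation is asynchronous; the `percentComplete` field in the same response can be used for progress reporting instead.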
You can enable MP4 support on a per video basis by following the steps below: @@ -28,16 +27,16 @@ https://api.cloudflare.com/client/v4/accounts//stream//do ```json title="Response" { - "result": { - "default": { - "status": "inprogress", - "url": "https://customer-.cloudflarestream.com//downloads/default.mp4", - "percentComplete": 75.0 - } - }, - "success": true, - "errors": [], - "messages": [] + "result": { + "default": { + "status": "inprogress", + "url": "https://customer-.cloudflarestream.com//downloads/default.mp4", + "percentComplete": 75.0 + } + }, + "success": true, + "errors": [], + "messages": [] } ``` @@ -53,16 +52,16 @@ https://api.cloudflare.com/client/v4/accounts//stream//do ```json title="Response" { - "result": { - "default": { - "status": "ready", - "url": "https://customer-.cloudflarestream.com//downloads/default.mp4", - "percentComplete": 100.0 - } - }, - "success": true, - "errors": [], - "messages": [] + "result": { + "default": { + "status": "ready", + "url": "https://customer-.cloudflarestream.com//downloads/default.mp4", + "percentComplete": 100.0 + } + }, + "success": true, + "errors": [], + "messages": [] } ``` @@ -81,7 +80,7 @@ The `filename` can be a maximum of 120 characters long and composed of `abcdefgh The generated MP4 download files can be retrieved via the link in the download API response. ```sh -$ curl -L https://customer-.cloudflarestream.com//downloads/default.mp4 > download.mp4 +curl -L https://customer-.cloudflarestream.com//downloads/default.mp4 > download.mp4 ``` ## Secure video downloads diff --git a/src/content/docs/style-guide/documentation-content-strategy/content-types/tutorial.mdx b/src/content/docs/style-guide/documentation-content-strategy/content-types/tutorial.mdx index aa6269386425e6..94b65af07905b7 100644 --- a/src/content/docs/style-guide/documentation-content-strategy/content-types/tutorial.mdx +++ b/src/content/docs/style-guide/documentation-content-strategy/content-types/tutorial.mdx @@ -1,10 +1,9 @@ --- pcx_content_type: concept title: Tutorial - --- -import { GlossaryDefinition, Render } from "~/components" +import { GlossaryDefinition, Render } from "~/components"; @@ -17,41 +16,41 @@ We have a [pre-built template](https://github.com/cloudflare/cloudflare-docs/blo You can copy the file directly or - if you have [Hugo](https://github.com/cloudflare/cloudflare-docs?tab=readme-ov-file#setup) installed on your local machine - you can run the following command: ```sh -$ hugo new content --kind tutorial {new_file_location} +hugo new content --kind tutorial {new_file_location} ``` In practice, that might look like: ```sh -$ hugo new content --kind tutorial content/workers/tutorials/new-tutorial.md +hugo new content --kind tutorial content/workers/tutorials/new-tutorial.md ``` ## Guidelines **A tutorial is:** -* User-focused -* Aligned to a user's goal or job-to-be-done -* Descriptive and guiding +- User-focused +- Aligned to a user's goal or job-to-be-done +- Descriptive and guiding **A tutorial can:** -* Describe how to integrate with a third party -* Be delivered in the Cloudflare dashboard -* Describe how to set up multiple products to complete a single job-to-be-done +- Describe how to integrate with a third party +- Be delivered in the Cloudflare dashboard +- Describe how to set up multiple products to complete a single job-to-be-done **A tutorial is not:** -* Product configuration information, how-to (or any of the other content types) -* How to complete a task in the UI or API -* A dumping ground for screenshots -* Content with 
no end goal or job-to-be-done +- Product configuration information, how-to (or any of the other content types) +- How to complete a task in the UI or API +- A dumping ground for screenshots +- Content with no end goal or job-to-be-done ### Tone Guiding, straightforward, educational, authoritative -### content\_type +### content_type `tutorial` diff --git a/src/content/docs/style-guide/formatting/code-block-guidelines.mdx b/src/content/docs/style-guide/formatting/code-block-guidelines.mdx index f37560cc9561e1..480e8a97b793eb 100644 --- a/src/content/docs/style-guide/formatting/code-block-guidelines.mdx +++ b/src/content/docs/style-guide/formatting/code-block-guidelines.mdx @@ -1,13 +1,12 @@ --- pcx_content_type: concept title: Code block guidelines - --- You can create code blocks by: -* Using triple-acute characters as a "fence" around the code block. (Recommended) -* Indenting lines by four spaces or one tab. +- Using triple-acute characters as a "fence" around the code block. (Recommended) +- Indenting lines by four spaces or one tab. To define the syntax highlighting language used for the code block, enter a language name after the first fence. Refer to the [List of languages used in Cloudflare developer documentation](#list-of-languages-used-in-cloudflare-developer-documentation) for a list of supported languages. @@ -29,71 +28,70 @@ The rendered output looks like this: ```json { - "firstName": "John", - "lastName": "Smith", - "age": 25 + "firstName": "John", + "lastName": "Smith", + "age": 25 } ``` ## Displaying terminal commands -* Use the `sh` language for **one-line commands** executed in the Linux/macOS terminal (each command must be in a single line). - - Each line containing a command that the user should enter *must* start with a `$` sign. The reader will be able to select these prefixed lines with commands, but no other lines in the code block (which should be command output). +- Use the `sh` language for **one-line commands** executed in the Linux/macOS terminal (each command must be in a single line). :::note The **Copy to clipboard** button (top-right corner of the code block) will copy the entire content, not just what the reader can select. ::: -* Use the `bash` language for other **Linux/macOS/generic commands**. For example: - * Commands that span multiple lines (usually each line ends with a `\`) and may include one or more lines of JSON content. - * Commands for specific shells (for example, a command specifically for the zsh shell, where the prompt is usually `%`). +- Use the `bash` language for other **Linux/macOS/generic commands**. For example: + + - Commands that span multiple lines (usually each line ends with a `\`) and may include one or more lines of JSON content. + - Commands for specific shells (for example, a command specifically for the zsh shell, where the prompt is usually `%`). -* Use the `powershell` language for Windows PowerShell commands. +- Use the `powershell` language for Windows PowerShell commands. -* Use the `txt` language for Windows console commands. +- Use the `txt` language for Windows console commands. ## Terminal prompts ### For "sh" blocks -Use "**`$`** "(dollar sign, space) or "**FOLDER\_NAME $** " (folder name, space, dollar sign, space). +Use "**`$`** "(dollar sign, space) or "**FOLDER_NAME $** " (folder name, space, dollar sign, space). Examples: -* **`$`** command-to-run -* **\~/my-folder `$`** command-to-run (where `~` means the home folder of the current user). 
+- **`$`** command-to-run
+- **\~/my-folder `$`** command-to-run (where `~` means the home folder of the current user).

### For "bash" blocks

Blocks containing **Linux/macOS/generic** commands:

- If a code block contains only one (multi-line) command, do not include a `$` prefix so that the user can run the command immediately after copying and pasting without having to remove the prefix.
- If a code block includes several commands or it includes output, consider including a prefix before each command to help differentiate between commands and their output. Use the same prefixes as described for `sh` blocks.
- For zsh-specific instructions, you can use a `%` command prefix instead of `$`.

### For "powershell" blocks

Use "**PS FOLDER_NAME>** " (the `>` is part of the prompt, and there is a space after it).

Examples:

- **PS C:\\>** command-to-run.exe
- **PS C:\Users\JohnDoe>** command-to-run.exe

### For Windows console ("txt") blocks

Use "**FOLDER_NAME>**" (folder name, greater-than symbol, no space after).

Alternatively, do not include any prompt and start the line with the command the user must enter (knowing that it will be harder to understand what must be entered and what is example output).

Examples:

- C:\\>command-to-run.exe
- C:\Program Files>command-to-run.exe
- C:\Users\JohnDoe>command-to-run.exe

---

## For JSON code blocks

Use the `json` language for **JSON code blocks** or **JSON fragments.**

Multi-line curl commands with a JSON body should use `bash` syntax highlighting, as stated in [Displaying terminal commands](#displaying-terminal-commands).

:::note
JSON fragments may appear with a red background in GitHub because they are not valid JSON. Make it clear in the documentation that it is a fragment and not an entire piece of valid JSON content.
::: ## List of languages used in Cloudflare developer documentation -* `bash` (alias: `curl`) -* `c` -* `diff` -* `go` -* `graphql` -* `hcl` (alias: `tf`) -* `html` -* `ini` -* `java` -* `js` (alias: `javascript`) -* `json` -* `kotlin` -* `php` -* `powershell` -* `python` (alias: `py`) -* `ruby` (alias: `rb`) -* `rust` (alias: `rs`) -* `sh` (alias: `shell`) -* `sql` -* `swift` -* `toml` -* `ts` (alias: `typescript`) -* `txt` (aliases: `text`, `plaintext`) -* `xml` -* `yaml` (alias: `yml`) +- `bash` (alias: `curl`) +- `c` +- `diff` +- `go` +- `graphql` +- `hcl` (alias: `tf`) +- `html` +- `ini` +- `java` +- `js` (alias: `javascript`) +- `json` +- `kotlin` +- `php` +- `powershell` +- `python` (alias: `py`) +- `ruby` (alias: `rb`) +- `rust` (alias: `rs`) +- `sh` (alias: `shell`) +- `sql` +- `swift` +- `toml` +- `ts` (alias: `typescript`) +- `txt` (aliases: `text`, `plaintext`) +- `xml` +- `yaml` (alias: `yml`) Different capitalizations of these languages are also supported (but not recommended). For example, `JavaScript` will use the `javascript` language, and `HTML` will use the `html` language. diff --git a/src/content/docs/support/third-party-software/content-management-system-cms/wordpress.com-and-cloudflare.mdx b/src/content/docs/support/third-party-software/content-management-system-cms/wordpress.com-and-cloudflare.mdx index ad4f098dc7475d..080d5c71674be2 100644 --- a/src/content/docs/support/third-party-software/content-management-system-cms/wordpress.com-and-cloudflare.mdx +++ b/src/content/docs/support/third-party-software/content-management-system-cms/wordpress.com-and-cloudflare.mdx @@ -2,7 +2,6 @@ pcx_content_type: troubleshooting source: https://support.cloudflare.com/hc/en-us/articles/360058639551-WordPress-com-and-Cloudflare title: WordPress.com and Cloudflare - --- ## Getting started with WordPress.com and Cloudflare @@ -13,8 +12,8 @@ Cloudflare and WordPress.com are partnering to offer customers Cloudflare's perf During this process, Cloudflare scans your existing WordPress.com DNS records and displays them. The records will look similar to the examples below. -* `A example.com 192.0.78.12` -* `A example.com 192.0.78.13` +- `A example.com 192.0.78.12` +- `A example.com 192.0.78.13` WordPress.com does not guarantee the IP address will never change. For maximum uptime, you should complete the following: @@ -30,7 +29,7 @@ WordPress.com does not guarantee the IP address will never change. For maximum u Congratulations! Your site is now accelerated and protected by Cloudflare. -*** +--- ## Enabling additional Cloudflare products @@ -76,11 +75,11 @@ The [Automatic Platform Optimization (APO)](https://www.cloudflare.com/automatic-platform-optimization/wordpress/) feature requires that you be on a [Full Setup](/dns/zone-setups/full-setup/) -using Cloudflare nameservers. +using Cloudflare nameservers. ::: -* Cloudflare free plan + $5/month APO add-on or a Pro or Business plan subscription (includes APO) -* WordPress.com Business plan or above (requires plugins) +- Cloudflare free plan + $5/month APO add-on or a Pro or Business plan subscription (includes APO) +- WordPress.com Business plan or above (requires plugins) ### **Install and enable APO** @@ -92,7 +91,7 @@ using Cloudflare nameservers. For more details, refer to [Understanding Automatic Platform Optimization (APO) with WordPress](/automatic-platform-optimization/). 
-*** +--- ## Troubleshooting @@ -111,7 +110,10 @@ For more details, refer to [Understanding Automatic Platform Optimization (APO) In a terminal, use the following cURL. The header `'accept: text/html'` is important ```sh -$ curl -svo /dev/null -A "CF" 'https://example.com/' -H 'accept: text/html' 2>&1 | grep 'cf-cache-status\|cf-edge\|cf-apo-via' +curl -svo /dev/null -A "CF" 'https://example.com/' -H 'accept: text/html' 2>&1 | grep 'cf-cache-status\|cf-edge\|cf-apo-via' +``` + +```sh output < cf-cache-status: HIT < cf-apo-via: cache < cf-edge-cache: cache,platform=wordpress @@ -119,8 +121,8 @@ $ curl -svo /dev/null -A "CF" 'https://example.com/' -H 'accept: text/html' 2>&1 As always, `cf-cache-status` displays if the asset hit the cache or was considered dynamic and served from the origin. -* The `cf-apo-via` header returns the APO status for the given request. -* The `cf-edge-cache` header means the WordPress plugin is installed and enabled. +- The `cf-apo-via` header returns the APO status for the given request. +- The `cf-edge-cache` header means the WordPress plugin is installed and enabled. ### How can I verify APO and the WordPress.com integration works? diff --git a/src/content/docs/support/third-party-software/others/configure-cloudflare-and-heroku-over-https.mdx b/src/content/docs/support/third-party-software/others/configure-cloudflare-and-heroku-over-https.mdx index ccc6333bd4df3f..b262fffeef035b 100644 --- a/src/content/docs/support/third-party-software/others/configure-cloudflare-and-heroku-over-https.mdx +++ b/src/content/docs/support/third-party-software/others/configure-cloudflare-and-heroku-over-https.mdx @@ -2,10 +2,9 @@ pcx_content_type: troubleshooting source: https://support.cloudflare.com/hc/en-us/articles/205893698-Configure-Cloudflare-and-Heroku-over-HTTPS title: Configure Cloudflare and Heroku over HTTPS - --- -import { Example } from "~/components" +import { Example } from "~/components"; ## Overview @@ -13,13 +12,13 @@ Heroku is a cloud PaaS that supports several pre-configured programming language This article describes how to configure Heroku with Cloudflare to serve your traffic over HTTPS. For this article, we'll assume that you already have an [active domain on Cloudflare](https://support.cloudflare.com/hc/en-us/sections/200820158-CloudFlare-101), as well as a running Heroku app. -*** +--- ## Step 1 - Add a custom domain to your Heroku app Follow Heroku's instructions: [Custom Domain Names for Apps](https://devcenter.heroku.com/articles/custom-domains). -*** +--- ## Step 2 - Add a subdomain in Cloudflare DNS @@ -53,14 +52,17 @@ Add a CNAME record for your root and point it to DNS target you obtained in Step
-***
+---

## Step 3 - Confirm that your domain is routed through Cloudflare

The easiest way to confirm that Cloudflare is working for your domain is to issue a cURL command.

```sh
-$ curl -I www.example.com
+curl -I www.example.com
+```
+
+```sh output
HTTP/1.1 200 OK
Date: Tue, 23 Jan 2018 18:51:30 GMT
Content-Type: text/html; charset=UTF-8
```

You can identify Cloudflare-proxied requests by the *CF-Ray* response header.

You can repeat the above cURL command for any of the subdomains that you have configured within your DNS settings.

-***
+---

## Step 4 - Configure your domain for SSL

Cloudflare provides a SANs wildcard certificate with all paid plans, and a SNI wildcard certificate with free plans.

If you don't know what this means, navigate to the **Overview** tab of the **SSL/TLS** app in your Cloudflare dashboard. Select *Flexible* mode to serve your site over HTTPS to all public visitors.

Once the certificate status changes to **• Active Certificate**, incoming traffic will be served to your site over HTTPS (e.g., visitors will see HTTPS prefixed to your domain name in the browser bar).

### Step 4b - Force all traffic over HTTPS

To ensure all traffic to your site is encrypted, Cloudflare lets you force an automatic redirect from HTTP to HTTPS.

You can then use a cURL command to verify that all requests are being forced over HTTPS.

```sh
-$ curl -I -L example.com
+curl -I -L example.com
+```
+
+```sh output
HTTP/1.1 301 Moved Permanently
Date: Tue, 23 Jan 2018 23:17:44 GMT
Connection: keep-alive
```

From ddb8b5ff75a9456b09b9f17bb48055c2e2cbb7a2 Mon Sep 17 00:00:00 2001
From: kodster28
Date: Tue, 20 Aug 2024 14:36:36 -0500
Subject: [PATCH 2/3] Remake comment

---
 .../configuration/consumer-concurrency.mdx    | 28 ++++++++++----------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/src/content/docs/queues/configuration/consumer-concurrency.mdx b/src/content/docs/queues/configuration/consumer-concurrency.mdx
index 6dd1cb4bd6b4eb..86260c70897cd0 100644
--- a/src/content/docs/queues/configuration/consumer-concurrency.mdx
+++ b/src/content/docs/queues/configuration/consumer-concurrency.mdx
@@ -92,23 +92,21 @@ To set a fixed maximum number of concurrent consumer invocations for a given que

To remove the limit, remove the `max_concurrency` setting from the `[[queues.consumers]]` configuration for a given queue and call `npx wrangler deploy` to push your configuration update.

-{/* */}
+{/* ```sh
+# Call update without passing a flag to allow concurrency to scale to the maximum
+wrangler queues consumer update
+``` */}

## Billing

Billing for consumers follows the [Workers standard usage model](/workers/platform/pricing/).

### Example

A consumer Worker that takes 2 seconds to process a batch of messages will incur the same overall costs to process 50 million (50,000,000) messages, whether it does so concurrently (faster) or individually (slower).
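As a back-of-the-envelope check on that example, assume (purely for illustration) batches of 100 messages; only the 2-second duration comes from the example above:

```sh
# 50,000,000 messages / 100 messages per batch = 500,000 invocations
# 500,000 invocations x 2 seconds each        = 1,000,000 seconds of billed duration
# One consumer running 1,000,000 seconds sequentially and 20 consumers running
# 50,000 seconds each in parallel bill the same 1,000,000-second total.
echo $((50000000 / 100 * 2)) # prints 1000000
```

Concurrency changes only the wall-clock time needed to drain the backlog, not the duration that is billed.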
From 21830dd470619e6ddcc33bd6a0893eee9f5cb58b Mon Sep 17 00:00:00 2001
From: kodster28
Date: Tue, 20 Aug 2024 14:44:53 -0500
Subject: [PATCH 3/3] other comment

---
 .../queues/configuration/pull-consumers.mdx | 24 +++++++++----------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/src/content/docs/queues/configuration/pull-consumers.mdx b/src/content/docs/queues/configuration/pull-consumers.mdx
index aea9ba4978ce34..441518afd26160 100644
--- a/src/content/docs/queues/configuration/pull-consumers.mdx
+++ b/src/content/docs/queues/configuration/pull-consumers.mdx
@@ -237,30 +237,30 @@ Additionally:

Queues aims to be permissive when it comes to lease IDs: if a consumer acknowledges a message by its lease ID _after_ the visibility timeout is reached, Queues will still accept that acknowledgment. If the message was delivered to another consumer during the intervening period, it will also be able to acknowledge the message without an error.

-{/* */}
+--> */}

## Content types