Doc: Enhance and expand DLQ docs 7.14 (#13100)
Backports: #12959
Fixes: #12923
Related: #10493
karenzone authored Jul 22, 2021
1 parent bb40218 commit 29d52f1
Showing 3 changed files with 53 additions and 10 deletions.
50 changes: 48 additions & 2 deletions docs/static/dead-letter-queues.asciidoc
@@ -1,7 +1,18 @@
[[dead-letter-queues]]
=== Dead Letter Queues (DLQ)
=== Dead letter queues (DLQ)

The dead letter queue (DLQ) can provide another layer of data resilience.
The dead letter queue (DLQ) is designed as a place to temporarily write events that cannot be processed.
The DLQ gives you flexibility to investigate problematic events without blocking the pipeline or losing the events.
Your pipeline keeps flowing, and the immediate problem is averted.
But those events still need to be addressed.

You can <<es-proc-dlq,process events from the DLQ>> with the <<plugins-inputs-dead_letter_queue,`dead_letter_queue` input plugin>>.
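
For example, a minimal pipeline that reads events back from the DLQ might look like the sketch below.
The `path` value and the `stdout` output are illustrative assumptions; point `path` at the `dead_letter_queue` directory under your own `path.data`.

[source,txt]
-----
input {
  dead_letter_queue {
    # Assumed default data directory for a package install; adjust to your path.data
    path => "/var/lib/logstash/dead_letter_queue"
    pipeline_id => "main"
    commit_offsets => true
  }
}
output {
  # Print events, including DLQ metadata, for inspection
  stdout { codec => rubydebug { metadata => true } }
}
-----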

Processing events does not delete them from the queue, so the DLQ grows over time and needs periodic attention.
See <<dlq-size>> and <<dlq-clear>> for more info.

[[dead-letter-how]]
==== How the dead letter queue works

By default, when Logstash encounters an event that it cannot process because the
data contains a mapping error or some other issue, the Logstash pipeline
@@ -45,6 +56,9 @@ actions in the bulk request could not be performed, along with an HTTP-style
status code per entry to indicate why the action could not be performed.
If the DLQ is configured, individual indexing failures are routed there.
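
When you later reprocess those events with the `dead_letter_queue` input plugin, the reason for the failure travels with the event as plugin metadata.
The sketch below shows one way to surface that reason; the `failure_reason` field name is made up for illustration, and the metadata path is the one documented for the input plugin (verify it against your plugin version).

[source,txt]
-----
filter {
  mutate {
    # Copy the DLQ failure reason into a regular field so it is easy to inspect or index
    add_field => { "failure_reason" => "%{[@metadata][dead_letter_queue][reason]}" }
  }
}
-----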

Even if you process events regularly, they remain in the dead letter queue.
Clearing the dead letter queue requires <<dlq-clear,manual intervention>>.

[[configuring-dlq]]
==== Configuring {ls} to use dead letter queues

@@ -250,3 +264,35 @@ output {
<3> The clean event is sent to Elasticsearch, where it can be indexed because
the mapping issue is resolved.

[[dlq-size]]
==== Track dead letter queue size

Monitor the size of the dead letter queue before it becomes a problem.
By checking it periodically, you can determine the maximum queue size that makes sense for each pipeline.

The size of the DLQ for each pipeline is available in the node stats API.

[source,txt]
-----
pipelines.${pipeline_id}.dead_letter_queue.queue_size_in_bytes
-----

Where `${pipeline_id}` is the name of a pipeline with DLQ enabled.
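
For example, assuming the monitoring API is listening on its default address of `localhost:9600`, you can read the per-pipeline stats, including DLQ size, with:

[source,txt]
-----
curl -s 'localhost:9600/_node/stats/pipelines?pretty'
-----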


[[dlq-clear]]
==== Clear the dead letter queue

The dead letter queue cannot be cleared with the upstream pipeline running.

The dead letter queue is a directory of pages.
To clear it, stop the pipeline and delete the queue directory at the following location:

[source,txt]
-----
${path.data}/dead_letter_queue/${pipeline_id}
-----

Where `${pipeline_id}` is the name of a pipeline with DLQ enabled.
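
As a sketch only, assuming a package install with the default `path.data` of `/var/lib/logstash`, a pipeline named `main`, and systemd service management, clearing the queue might look like:

[source,txt]
-----
# Stop the upstream pipeline (here, the whole Logstash service)
sudo systemctl stop logstash
# Delete the DLQ pages for the `main` pipeline
sudo rm -rf /var/lib/logstash/dead_letter_queue/main
sudo systemctl start logstash
-----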

The pipeline creates a new dead letter queue when it starts again.
2 changes: 1 addition & 1 deletion docs/static/persistent-queues.asciidoc
@@ -1,5 +1,5 @@
[[persistent-queues]]
=== Persistent Queues
=== Persistent queues (PQ)

By default, Logstash uses in-memory bounded queues between pipeline stages
(inputs → pipeline workers) to buffer events. The size of these in-memory
11 changes: 4 additions & 7 deletions docs/static/resiliency.asciidoc
@@ -1,23 +1,20 @@
[[resiliency]]
== Data Resiliency
== Data resiliency

As data flows through the event processing pipeline, Logstash may encounter
situations that prevent it from delivering events to the configured
output. For example, the data might contain unexpected data types, or
Logstash might terminate abnormally.

To guard against data loss and ensure that events flow through the
pipeline without interruption, Logstash provides the following data resiliency
pipeline without interruption, Logstash provides data resiliency
features.

* <<persistent-queues>> protect against data loss by storing events in an
internal queue on disk.

* <<dead-letter-queues>> provide on-disk storage for events that Logstash is
unable to process. You can easily reprocess events in the dead letter queue by
using the `dead_letter_queue` input plugin.

//TODO: Make dead_letter_queue an active link after the plugin docs are published.
* <<dead-letter-queues>> provide on-disk storage for events that Logstash is unable to process so that you can evaluate them.
You can easily reprocess events in the dead letter queue by using the `dead_letter_queue` input plugin.

These resiliency features are disabled by default. To turn on these features,
you must explicitly enable them in the Logstash <<logstash-settings-file,settings file>>.
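
For example, a minimal `logstash.yml` sketch that enables both features, with all other settings left at their defaults:

[source,txt]
-----
queue.type: persisted
dead_letter_queue.enable: true
-----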
