[Fleet/EA] Logstash & Kafka Outputs refresh (#1306) (#1357)
* Update output-logstash.asciidoc

* Update output-kafka.asciidoc

* Update fleet-settings-output-kafka.asciidoc

* Update fleet-settings-output-logstash.asciidoc

* Update output-logstash.asciidoc

* Update docs/en/ingest-management/fleet/fleet-settings-output-kafka.asciidoc

Co-authored-by: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>

* Update docs/en/ingest-management/elastic-agent/configuration/outputs/output-kafka.asciidoc

Co-authored-by: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>

* Update docs/en/ingest-management/elastic-agent/configuration/outputs/output-logstash.asciidoc

Co-authored-by: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>

* Update docs/en/ingest-management/elastic-agent/configuration/outputs/output-logstash.asciidoc

Co-authored-by: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>

* Update docs/en/ingest-management/elastic-agent/configuration/outputs/output-kafka.asciidoc

Co-authored-by: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>

* Update docs/en/ingest-management/fleet/fleet-settings-output-kafka.asciidoc

Co-authored-by: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>

---------

Co-authored-by: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>
(cherry picked from commit 801d74b)

Co-authored-by: Luca Belluccini <luca.belluccini@elastic.co>
Co-authored-by: Julien Lind <julien.lind@elastic.co>
3 people authored Oct 7, 2024
1 parent 9d13a19 commit c781ba4
Showing 4 changed files with 100 additions and 6 deletions.
docs/en/ingest-management/elastic-agent/configuration/outputs/output-kafka.asciidoc
@@ -44,6 +44,29 @@ outputs:
verification_mode: full
----

== Kafka output and using {ls} to index data to {es}

If you are considering using {ls} to ship the data from `kafka` to {es}, be
aware that Elastic does not currently test this kind of setup.

The structure of the documents sent from {agent} to `kafka` must not be modified by {ls}.
We suggest disabling `ecs_compatibility` on both the `kafka` input and the `json` codec.

Refer to the <<logstash-output,{ls} output for {agent}>> documentation for more details.

[source,text]
----
input {
  kafka {
    ...
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
    ...
  }
}
...
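
For illustration, here is a sketch of a complete pipeline that reads {agent} events from a Kafka topic and writes them to {es} data streams. The topic name, broker address, and {es} endpoint are assumptions, not values from this commit; adjust them for your environment:

[source,text]
----
input {
  kafka {
    # Assumed broker address and topic name -- replace with your own
    bootstrap_servers => "localhost:9092"
    topics => ["elastic-agent"]
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    api_key => "<api_key>"
    data_stream => true
  }
}
----

Note the absence of any `filter` section: the point of disabling `ecs_compatibility` is to pass the {agent} documents through unmodified.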

== Kafka output configuration settings

The `kafka` output supports the following settings, grouped by category.
@@ -502,4 +525,4 @@ Note: If set to 0, no ACKs are returned by Kafka. Messages might be lost silentl

// =============================================================================

|===
docs/en/ingest-management/elastic-agent/configuration/outputs/output-logstash.asciidoc
@@ -32,28 +32,38 @@
To receive the events in {ls}, you also need to create a {ls} configuration pipeline.
The {ls} configuration pipeline listens for incoming {agent} connections,
processes received events, and then sends the events to {es}.

The following example {ls} pipeline definition listens on port `5044` for
incoming {agent} connections and routes received events to {es}.


[source,yaml]
----
input {
  elastic_agent {
    port => 5044
    enrich => none # don't modify the events' schema at all
    # or minimal change, add only ssl and source metadata
    # enrich => [ssl_peer_metadata, source_metadata]
    ssl => true
    ssl_certificate_authorities => ["<ca_path>"]
    ssl_certificate => "<server_cert_path>"
    ssl_key => "<server_cert_key_in_pkcs8>"
    ssl_verify_mode => "force_peer"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"] <1>
    # cloud_id => "..."
    api_key => "<api_key>" <2>
    data_stream => true
    ssl => true
    # cacert => "<elasticsearch_ca_path>"
  }
}
----
<1> The {es} server and the port (`9200`) where {es} is running.
<2> The API Key used by {ls} to ship data to the destination data streams.
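
On the {agent} side, a standalone agent policy points at this pipeline through a `logstash` output. A minimal sketch, in which the hostname and certificate paths are placeholders for your own values:

[source,yaml]
----
outputs:
  default:
    type: logstash
    hosts: ["my-logstash-host:5044"]
    ssl.certificate_authorities: ["<ca_path>"]
    ssl.certificate: "<client_cert_path>"
    ssl.key: "<client_cert_key_path>"
----

The client certificate and key here are the agent's, presented to {ls} because the pipeline above sets `ssl_verify_mode => "force_peer"`.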

For more information about configuring {ls}, refer to
{logstash-ref}/configuration.html[Configuring {ls}] and
docs/en/ingest-management/fleet/fleet-settings-output-kafka.asciidoc
@@ -5,6 +5,29 @@

Specify these settings to send data over a secure connection to Kafka. In the {fleet} <<output-settings,Output settings>>, make sure that the Kafka output type is selected.

== Kafka output and using {ls} to index data to {es}

If you are considering using {ls} to ship the data from `kafka` to {es}, be
aware that Elastic does not currently test this kind of setup.

The structure of the documents sent from {agent} to `kafka` must not be modified by {ls}.
We suggest disabling `ecs_compatibility` on both the `kafka` input and the `json` codec.

Refer to the <<ls-output-settings,{ls} output for {agent}>> documentation for more details.

[source,text]
----
input {
  kafka {
    ...
    ecs_compatibility => "disabled"
    codec => json { ecs_compatibility => "disabled" }
    ...
  }
}
...
----

[discrete]
== General settings

docs/en/ingest-management/fleet/fleet-settings-output-logstash.asciidoc
@@ -13,6 +13,44 @@ Before using the {ls} output, you need to make sure that for any integrations th

To learn how to generate certificates, refer to <<secure-logstash-connections>>.

To receive the events in {ls}, you also need to create a {ls} configuration pipeline.
The {ls} configuration pipeline listens for incoming {agent} connections,
processes received events, and then sends the events to {es}.

The following example configures a {ls} pipeline that listens on port `5044` for
incoming {agent} connections and routes received events to {es}.

The {ls} pipeline definition below is only an example. When you create the {ls}
output on the Fleet outputs page, follow the `Additional Logstash configuration
required` steps shown there.

[source,yaml]
----
input {
  elastic_agent {
    port => 5044
    enrich => none # don't modify the events' schema at all
    ssl => true
    ssl_certificate_authorities => ["<ca_path>"]
    ssl_certificate => "<server_cert_path>"
    ssl_key => "<server_cert_key_in_pkcs8>"
    ssl_verify_mode => "force_peer"
  }
}
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"] <1>
    # cloud_id => "..."
    api_key => "<api_key>" <2>
    data_stream => true
    ssl => true
    # cacert => "<elasticsearch_ca_path>"
  }
}
----
<1> The {es} server and the port (`9200`) where {es} is running.
<2> The API Key obtained from the {ls} output creation steps in Fleet.
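
The `ssl_key` setting in the pipeline above expects the server's private key in PKCS#8 format. If your key is not already in that format, the OpenSSL CLI can convert it. A sketch, with placeholder file names (it generates a throwaway key purely for demonstration; convert your real server key instead):

[source,sh]
----
# Generate a throwaway RSA key for demonstration purposes only
openssl genrsa -out server.key 2048
# Convert the key to unencrypted PKCS#8, suitable for ssl_key
openssl pkcs8 -topk8 -nocrypt -in server.key -out server.pkcs8.key
----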

[cols="2*<a"]
|===
|
@@ -196,4 +234,4 @@ include::../elastic-agent/configuration/outputs/output-shared-settings.asciidoc[

|===

:type!:
