Update log destination docs (#8242)

- Rename "Osquery logs" page to "Log destinations"
- Use exact product names in the log destination docs
- Move anchor links to the top of the page so that fleetdm.com/docs/log-destinations renders a sidebar
Noah Talerman 2022-10-18 13:18:15 -04:00 committed by GitHub
parent 564a4a4ee9
commit c576b9de20
5 changed files with 32 additions and 42 deletions


@@ -41,7 +41,7 @@ We've added clarification on the performance impact bubbles that appear on the
- Improved performance of the osquery query used to collect software inventory for Linux hosts
- Host status on the summary page has been improved
- Improved tooltips in Fleet UI
-- Update [Kinesis](https://fleetdm.com/docs/using-fleet/osquery-logs#kinesis) logging plugin to append newline characters to raw message bytes to properly format the Newline Delimited JSON (NDJSON) for downstream consumers
+- Updated [Kinesis](https://fleetdm.com/docs/using-fleet/log-destinations#amazon-kinesis-data-streams) logging plugin to append newline characters to raw message bytes to properly format the Newline Delimited JSON (NDJSON) for downstream consumers
- Query packs are able to be applied to specific teams using fleetctl
- Added instructions for using plain osquery to add hosts to Fleet in the Fleet docs. View these instructions by heading to **Hosts > Add hosts > Advanced**


@@ -967,7 +967,7 @@ Valid time units are `s`, `m`, `h`.
##### osquery_status_log_plugin
-Which log output plugin should be used for osquery status logs received from clients. Check out the reference documentation for osquery logging options [here in the Fleet documentation](../Using-Fleet/Osquery-logs.md).
+This is the log output plugin that should be used for osquery status logs received from clients. Check out the [reference documentation for log destinations](../Using-Fleet/Log-destinations.md).
Options are `filesystem`, `firehose`, `kinesis`, `lambda`, `pubsub`, `kafkarest`, and `stdout`.
@@ -982,7 +982,7 @@ Options are `filesystem`, `firehose`, `kinesis`, `lambda`, `pubsub`, `kafkarest`
##### osquery_result_log_plugin
-Which log output plugin should be used for osquery result logs received from clients. Check out the reference documentation for osquery logging options [here in the Fleet documentation](../Using-Fleet/Osquery-logs.md).
+This is the log output plugin that should be used for osquery result logs received from clients. Check out the [reference documentation for log destinations](../Using-Fleet/Log-destinations.md).
Options are `filesystem`, `firehose`, `kinesis`, `lambda`, `pubsub`, `kafkarest`, and `stdout`.
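Putting the two options above together, a Fleet server configuration file that selects a plugin for both log types might look like this (a minimal sketch; `firehose` is just an example choice, and the key names follow the `osquery` section of the configuration reference):

```yaml
# Sketch: select the same log destination plugin for both status and
# result logs. Any of the plugin names listed above is valid here.
osquery:
  status_log_plugin: firehose
  result_log_plugin: firehose
```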


@@ -46,7 +46,7 @@ The query may take several seconds to complete because Fleet has to wait for the
Fleet allows you to schedule queries. Scheduled queries will send data to your log destination automatically.
-The default log destination, **filesystem**, is good to start. With this set, data is sent to the `/var/log/osquery/osqueryd.snapshots.log` file on each host's filesystem. To see which log destinations are available in Fleet, head to the [osquery logs guide](../Using-Fleet/Osquery-logs.md).
+The default log destination, **filesystem**, is good to start. With this set, data is sent to the `/var/log/osquery/osqueryd.snapshots.log` file on each host's filesystem. To see which log destinations are available in Fleet, head to the [log destinations page](../Using-Fleet/Log-destinations.md).
How to schedule a query:


@@ -1,33 +1,33 @@
-# Osquery logs
+# Log destinations
-This document provides instructions for working with each of the following log destinations in Fleet.
-To configure each log destination, you must set the correct osquery logging configuration options in Fleet. Check out the reference documentation for osquery logging configuration options [here in the Fleet documentation](../Deploying/Configuration.md#osquery-status-log-plugin).
-- [Firehose](#firehose)
+- [Amazon Kinesis Data Firehose](#amazon-kinesis-data-firehose)
- [Snowflake](#snowflake)
- [Splunk](#splunk)
-- [Kinesis](#kinesis)
-- [Lambda](#lambda)
-- [PubSub](#pubsub)
-- [Kafka REST Proxy](#kafka)
+- [Amazon Kinesis Data Streams](#amazon-kinesis-data-streams)
+- [AWS Lambda](#aws-lambda)
+- [Google Cloud Pub/Sub](#google-cloud-pubsub)
+- [Apache Kafka](#apache-kafka)
- [Stdout](#stdout)
- [Filesystem](#filesystem)
-### Firehose
-Logs are written to AWS Firehose streams.
+This document provides a list of the supported log destinations in Fleet.
+To configure each log destination, you must set the correct osquery logging configuration options in Fleet. Check out the reference documentation for [osquery logging configuration options](../Deploying/Configuration.md#osquery-status-log-plugin).
+### Amazon Kinesis Data Firehose
+Logs are written to [Amazon Kinesis Data Firehose (Firehose)](https://aws.amazon.com/kinesis/data-firehose/).
- Plugin name: `firehose`
- Flag namespace: [firehose](../Deploying/Configuration.md#firehose)
-With the Firehose plugin, osquery result and/or status logs are written to [Amazon Kinesis Data Firehose](https://aws.amazon.com/kinesis/data-firehose/). This is a very good method for aggregating osquery logs into AWS S3 storage.
+This is a very good method for aggregating osquery logs into [Amazon S3](https://aws.amazon.com/s3/).
Note that Firehose logging has limits [discussed in the documentation](https://docs.aws.amazon.com/firehose/latest/dev/limits.html). When Fleet encounters logs that are too big for Firehose, notifications will be output in the Fleet logs and those logs _will not_ be sent to Firehose.
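As a sketch of how the `firehose` flag namespace ties together with the plugin selection (the region and stream names here are hypothetical examples, not defaults; see the flag reference linked above for the full option list):

```yaml
osquery:
  status_log_plugin: firehose
  result_log_plugin: firehose
firehose:
  region: us-east-1                # hypothetical example region
  status_stream: osquery_status    # hypothetical delivery stream name
  result_stream: osquery_result    # hypothetical delivery stream name
```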
### Snowflake
-To send logs to Snowflake, you must first configure Fleet to send logs to [Firehose](#firehose). This is because you'll use the Snowflake Snowpipe integration to direct logs to Snowflake.
+To send logs to Snowflake, you must first configure Fleet to send logs to [Amazon Kinesis Data Firehose (Firehose)](#amazon-kinesis-data-firehose). This is because you'll use the Snowflake Snowpipe integration to direct logs to Snowflake.
If you're using Fleet's [terraform reference architecture](https://github.com/fleetdm/fleet/blob/main/infrastructure/dogfood/terraform/aws/firehose.tf), Firehose is already configured as your log destination.
@@ -37,7 +37,7 @@ Snowflake provides instructions on setting up the destination tables and IAM rol
### Splunk
-To send logs to Splunk, you must first configure Fleet to send logs to [Firehose](#firehose). This is because you'll enable Firehose to forward logs directly to Splunk.
+To send logs to Splunk, you must first configure Fleet to send logs to [Amazon Kinesis Data Firehose (Firehose)](#amazon-kinesis-data-firehose). This is because you'll enable Firehose to forward logs directly to Splunk.
With Fleet configured to send logs to Firehose, you then want to load the data from Firehose into Splunk. AWS provides instructions on how to enable Firehose to forward directly to Splunk [here in the AWS documentation](https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html#create-destination-splunk).
@@ -45,31 +45,25 @@ If you're using Fleet's [terraform reference architecture](https://github.com/fl
Splunk provides instructions on how to prepare the Splunk platform for Firehose data [here in the Splunk documentation](https://docs.splunk.com/Documentation/AddOns/latest/Firehose/ConfigureFirehose).
-### Kinesis
+### Amazon Kinesis Data Streams
-Logs are written to AWS Kinesis streams.
+Logs are written to [Amazon Kinesis Data Streams (Kinesis)](https://aws.amazon.com/kinesis/data-streams).
- Plugin name: `kinesis`
- Flag namespace: [kinesis](../Deploying/Configuration.md#kinesis)
-With the Kinesis plugin, osquery result and/or status logs are written to [Amazon Kinesis Data Streams](https://aws.amazon.com/kinesis/data-streams).
Note that Kinesis logging has limits [discussed in the
documentation](https://docs.aws.amazon.com/kinesis/latest/dev/limits.html).
-When Fleet encounters logs that are too big for Kinesis, notifications will be output in the Fleet logs and those logs _will not_ be sent to Kinesis.
+When Fleet encounters osquery logs that are too big for Kinesis, notifications appear in the Fleet server logs. Those osquery logs **will not** be sent to Kinesis.
-### Lambda
+### AWS Lambda
-Logs are written to AWS Lambda functions.
+Logs are written to [AWS Lambda (Lambda)](https://aws.amazon.com/lambda/).
- Plugin name: `lambda`
- Flag namespace: [lambda](../Deploying/Configuration.md#lambda)
-With the Lambda plugin, osquery result and/or status logs are written to [AWS Lambda](https://aws.amazon.com/lambda/) functions.
Lambda processes logs from Fleet synchronously, so the Lambda function used must not take enough processing time that the osquery client times out while writing logs. If there is heavy processing to be done, use Lambda to store the logs in another datastore/queue before performing the long-running process.
Note that Lambda logging has limits [discussed in the
@@ -83,26 +77,22 @@ Lambda is executed once per log line. As a result, queries with `differential` r
Keep this in mind when using Lambda, as you're charged based on the number of requests for your functions and their duration (the time it takes for your code to execute).
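Since Lambda is invoked synchronously once per log line, a handler should do as little work as possible before acknowledging. A minimal sketch of the recommendation above (the queue hand-off via `enqueue` is hypothetical wiring, not part of Fleet's plugin):

```python
import json

def make_handler(enqueue):
    """Build a Lambda handler that hands each osquery log line to `enqueue`.

    In a real deployment, `enqueue` might wrap an SQS or Kinesis client;
    it is injected here so the hand-off stays fast and testable.
    """
    def handler(event, context):
        # Do the minimum synchronous work: serialize and enqueue, so the
        # osquery client writing logs through Fleet never times out.
        enqueue(json.dumps(event))
        return {"queued": True}
    return handler
```

The long-running processing then happens in whatever consumes the queue, matching the datastore/queue recommendation above.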
-### PubSub
+### Google Cloud Pub/Sub
-Logs are written to Google Cloud PubSub topics.
+Logs are written to [Google Cloud Pub/Sub (Pub/Sub)](https://cloud.google.com/pubsub).
- Plugin name: `pubsub`
- Flag namespace: [pubsub](../Deploying/Configuration.md#pubsub)
-With the PubSub plugin, osquery result and/or status logs are written to [PubSub](https://cloud.google.com/pubsub/) topics.
-Note that messages over 10MB will be dropped, with a notification sent to the fleet logs, as these can never be processed by PubSub.
+Messages over 10MB will be dropped, with a notification sent to the Fleet logs, as these can never be processed by Pub/Sub.
-### Kafka
+### Apache Kafka
-Logs are written to Apache Kafka topics.
+Logs are written to [Apache Kafka (Kafka)](https://kafka.apache.org/) using the [Kafka REST proxy](https://github.com/confluentinc/kafka-rest).
- Plugin name: `kafkarest`
- Flag namespace: [kafka](../Deploying/Configuration.md#kafka)
-With the Kafka REST plugin, osquery result and/or status logs are written to [Kafka](https://kafka.apache.org/) topics using the [Kafka REST proxy](https://github.com/confluentinc/kafka-rest).
Note that the REST proxy must be in place in order to send osquery logs to Kafka topics.
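A sketch of the matching configuration, assuming the `kafkarest` flag namespace exposes the proxy address and topic names (the host and topic values below are hypothetical examples; check the flag reference for the authoritative option list):

```yaml
osquery:
  status_log_plugin: kafkarest
  result_log_plugin: kafkarest
kafkarest:
  proxyhost: https://kafka-rest.example.com:8082   # hypothetical proxy address
  status_topic: osquery_status                     # hypothetical topic name
  result_topic: osquery_result                     # hypothetical topic name
```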
### Stdout


@@ -12,8 +12,8 @@ Provides resources for working with Fleet's API and includes example code for en
### [Adding hosts](./Adding-hosts.md)
Provides resources for enrolling your hosts to Fleet
-### [Osquery logs](./Osquery-logs.md)
-Includes documentation on the plugin options for working with osquery logs
+### [Log destinations](./Log-destinations.md)
+Includes documentation on the log destinations for sending osquery logs
### [Osquery processes](./Osquery-process.md)
Includes documentation about osquery child processes and the conditions under which they are terminated