Part 2 of documentation restructure. Using Fleet section. (#148)
This PR includes the Using Fleet section of the documentation restructure #144. It shouldn't be merged until changes are approved for the entire restructuring (part 1, part 2, and part 3). It also updates the file naming convention to use number prefixes.
# Fleet UI
|
||||
- [Running queries](#running-queries)
|
||||
- [Scheduling queries](#scheduling-queries)
|
||||
|
||||
## Running queries
|
||||
|
||||
The Fleet application allows you to query hosts which you have installed osquery on. To run a new query, use the "Query" sidebar and select "New Query". From this page, you can compose your query, view SQL table documentation via the sidebar, select arbitrary hosts (or groups of hosts), and execute your query. As results are returned, they will populate the interface in real time. You can use the integrated filtering tool to perform useful initial analytics and easily export the entire dataset for offline analysis.
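For example, a simple starting query (the same one used in the `fleetctl` examples later in this documentation) returns basic information about the osquery process on each targeted host:

```
SELECT * FROM osquery_info;
```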
|
||||
|
||||
![Distributed new query with local filter](../images/distributed-new-query-with-local-filter.png)
|
||||
|
||||
After you've composed a query that returns the information you were looking for, you may choose to save the query. You can still continue to execute the query on whatever set of hosts you would like after you have saved the query.
|
||||
|
||||
![Distributed saved query with local filter](../images/distributed-saved-query-with-local-filter.png)
|
||||
|
||||
Saved queries can be accessed if you select "Manage Queries" from the "Query" section of the sidebar. Here, you will find all of the queries you've ever saved. You can filter the queries by query name, so name your queries something memorable!
|
||||
|
||||
![Manage Queries](../images/manage-queries.png)
|
||||
|
||||
To learn more about scheduling queries so that they run on an ongoing basis, see the [Scheduling queries](#scheduling-queries) section below.
|
||||
|
||||
|
||||
## Scheduling Queries
|
||||
|
||||
As discussed in [Running queries](#running-queries) above, you can use the Fleet application to create, execute, and save osquery queries. You can organize these queries into "Query Packs". To view all saved packs and perhaps create a new pack, select "Manage Packs" from the "Packs" sidebar. Packs are usually organized by the general class of instrumentation that you're trying to perform.
|
||||
|
||||
![Manage Packs](../images/manage-packs.png)
|
||||
|
||||
If you select a pack from the list, you can quickly enable and disable the entire pack, or you can configure it further.
|
||||
|
||||
![Manage Packs With Pack Selected](../images/manage-packs-with-pack-selected.png)
|
||||
|
||||
When you edit a pack, you can decide which targets you would like the pack to execute on. This selection experience is similar to the target selection process that you use when running a new query.
|
||||
|
||||
![Edit Pack Targets](../images/edit-pack-targets.png)
|
||||
|
||||
To add queries to a pack, use the right-hand sidebar. You can take an existing scheduled query and add it to the pack. You must also define a few key details such as:
|
||||
|
||||
- interval: how often should the query be executed?
|
||||
- logging: which osquery logging format would you like to use?
|
||||
- platform: which operating system platforms should execute this query?
|
||||
- minimum osquery version: if the table was introduced in a newer version of osquery, you may want to ensure that only sufficiently recent versions of osquery execute the query.
|
||||
- shard: from 0 to 100, what percent of hosts should execute this query?
|
||||
|
||||
![Schedule Query Sidebar](../images/schedule-query-sidebar.png)
|
||||
|
||||
|
||||
Once you've scheduled queries and curated your packs, you can read our guide to [Working With Osquery Logs](../infrastructure/working-with-osquery-logs.md).
# fleetctl CLI
|
||||
- [Setting Up Fleet via the CLI](#setting-up-fleet-via-the-cli)
|
||||
- [Running Fleet](#running-fleet)
|
||||
- [`fleetctl config`](#fleetctl-config)
|
||||
- [`fleetctl setup`](#fleetctl-setup)
|
||||
- [Connecting a host](#connecting-a-host)
|
||||
- [Query hosts](#query-hosts)
|
||||
- [Update osquery options](#update-osquery-options)
|
||||
- [Logging in to an existing Fleet instance](#logging-in-to-an-existing-fleet-instance)
|
||||
- [Logging in with SAML (SSO) authentication](#logging-in-with-saml-sso-authentication)
|
||||
- [Using fleetctl for configuration](#using-fleetctl-for-configuration)
|
||||
- [Convert osquery JSON](#convert-osquery-json)
|
||||
- [Osquery queries](#osquery-queries)
|
||||
- [Query packs](#query-packs)
|
||||
- [Host labels](#host-labels)
|
||||
- [Osquery configuration options](#osquery-configuration-options)
|
||||
- [Auto table construction](#auto-table-construction)
|
||||
- [Fleet configuration options](#fleet-configuration-options)
|
||||
- [Enroll secrets](#enroll-secrets)
|
||||
- [File carving with Fleet](#file-carving-with-fleet)
|
||||
- [Configuration](#configuration)
|
||||
- [Usage](#usage)
|
||||
- [Troubleshooting](#troubleshooting)
|
||||
|
||||
## Setting up Fleet via the CLI
|
||||
|
||||
This document walks through setting up and configuring Fleet via the CLI. If you already have a running Fleet instance, skip ahead to [Logging in to an existing Fleet instance](#logging-in-to-an-existing-fleet-instance) to configure the `fleetctl` CLI.
|
||||
|
||||
This guide illustrates:
|
||||
|
||||
- A minimal CLI workflow for managing an osquery fleet
|
||||
- The set of API interactions that are required if you want to perform remote, automated management of a Fleet instance
|
||||
|
||||
### Running Fleet
|
||||
|
||||
For the sake of this tutorial, I will be using the local development Docker Compose infrastructure to run Fleet locally. This is documented in some detail in the [developer documentation](../3-Contribution-guide/(a)-Building-Fleet.md#development-infrastructure), but the following are the minimal set of commands that you can run from the root of the repository (assuming that you have a working Go/JavaScript toolchain installed along with Docker Compose):
|
||||
|
||||
```
|
||||
# Start the local development dependencies (MySQL, Redis) in the background
docker-compose up -d
# Install Go and JavaScript dependencies
make deps
# Generate bundled frontend assets
make generate
# Build the fleet and fleetctl binaries
make
# Initialize the database schema
./build/fleet prepare db
# Run the Fleet server (this is the long-running process)
./build/fleet serve --auth_jwt_key="insecure"
|
||||
```
|
||||
|
||||
The `fleet serve` command is the long-running command that runs the Fleet server.
|
||||
|
||||
### `fleetctl config`
|
||||
|
||||
At this point, the MySQL database doesn't have any users in it. Because of this, Fleet is exposing a one-time setup endpoint. Before we can hit that endpoint (by running `fleetctl setup`), we have to first configure the local `fleetctl` context.
|
||||
|
||||
Now, since our Fleet instance is local in this tutorial, we didn't get a valid TLS certificate, so we need to run the following to configure our Fleet context:
|
||||
|
||||
```
|
||||
fleetctl config set --address https://localhost:8080 --tls-skip-verify
|
||||
[+] Set the address config key to "https://localhost:8080" in the "default" context
|
||||
[+] Set the tls-skip-verify config key to "true" in the "default" context
|
||||
```
|
||||
|
||||
Now, if you were connecting to a Fleet instance for real, you wouldn't want to skip TLS certificate verification, so you might run something like:
|
||||
|
||||
```
|
||||
fleetctl config set --address https://fleet.corp.example.com
|
||||
[+] Set the address config key to "https://fleet.corp.example.com" in the "default" context
|
||||
```
|
||||
|
||||
### `fleetctl setup`
|
||||
|
||||
Now that we've configured our local CLI context, let's go ahead and create our admin account:
|
||||
|
||||
```
|
||||
fleetctl setup --email mike@arpaia.co
|
||||
Password:
|
||||
[+] Fleet setup successful and context configured!
|
||||
```
|
||||
|
||||
It's possible to specify the password via the `--password` flag or the `$PASSWORD` environment variable, but be cautious of the security implications of such an action. For local use, the interactive mode above is the most secure.
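For non-interactive use (for example, in a provisioning script), the password can instead be supplied through the environment variable mentioned above. A minimal sketch, assuming the same local instance:

```
# Supply the password via the environment instead of typing it interactively
PASSWORD='your-password-here' fleetctl setup --email mike@arpaia.co
```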
|
||||
|
||||
### Connecting a host
|
||||
|
||||
For the sake of this tutorial, I'm going to be using Kolide's osquery launcher to start osquery locally and connect it to Fleet. To learn more about connecting osquery to Fleet, see the [Adding Hosts to Fleet](../1-Deployment/(c)-Adding-hosts.md) documentation.
|
||||
|
||||
To get your osquery enroll secret, run the following:
|
||||
|
||||
```
|
||||
fleetctl get enroll-secret
|
||||
E7P6zs9D0mvY7ct08weZ7xvLtQfGYrdC
|
||||
```
|
||||
|
||||
You need to use this secret to connect a host. If you're running Fleet locally, you'd run:
|
||||
|
||||
```
|
||||
launcher \
|
||||
--hostname localhost:8080 \
|
||||
--enroll_secret E7P6zs9D0mvY7ct08weZ7xvLtQfGYrdC \
|
||||
--root_directory=$(mktemp -d) \
|
||||
--insecure
|
||||
```
|
||||
|
||||
### Query hosts
|
||||
|
||||
To run a simple query against all hosts, you might run something like the following:
|
||||
|
||||
```
|
||||
fleetctl query --query 'select * from osquery_info;' --labels='All Hosts' > results.json
|
||||
⠂ 100% responded (100% online) | 1/1 targeted hosts (1/1 online)
|
||||
^C
|
||||
```
|
||||
|
||||
When the query is done (or you have enough results), press CTRL-C and look at the `results.json` file:
|
||||
|
||||
```json
|
||||
{
|
||||
"host": "marpaia",
|
||||
"rows": [
|
||||
{
|
||||
"build_distro": "10.13",
|
||||
"build_platform": "darwin",
|
||||
"config_hash": "d7cafcd183cc50c686b4c128263bd4eace5d89e1",
|
||||
"config_valid": "1",
|
||||
"extensions": "active",
|
||||
"host_hostname": "marpaia",
|
||||
"instance_id": "37840766-7182-4a68-a204-c7f577bd71e1",
|
||||
"pid": "22984",
|
||||
"start_time": "1527031727",
|
||||
"uuid": "B312055D-9209-5C89-9DDB-987299518FF7",
|
||||
"version": "3.2.3",
|
||||
"watcher": "-1"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Update osquery options
|
||||
|
||||
By default, each osquery node will check in with Fleet every 10 seconds. Let's say, for testing, you want to increase this to every 2 seconds. If this is the first time you've ever modified osquery options, let's download them locally:
|
||||
|
||||
```
|
||||
fleetctl get options > options.yaml
|
||||
```
|
||||
|
||||
The `options.yaml` file will look something like this:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: options
|
||||
spec:
|
||||
config:
|
||||
decorators:
|
||||
load:
|
||||
- SELECT uuid AS host_uuid FROM system_info;
|
||||
- SELECT hostname AS hostname FROM system_info;
|
||||
options:
|
||||
disable_distributed: false
|
||||
distributed_interval: 10
|
||||
distributed_plugin: tls
|
||||
distributed_tls_max_attempts: 3
|
||||
distributed_tls_read_endpoint: /api/v1/osquery/distributed/read
|
||||
distributed_tls_write_endpoint: /api/v1/osquery/distributed/write
|
||||
logger_plugin: tls
|
||||
logger_tls_endpoint: /api/v1/osquery/log
|
||||
logger_tls_period: 10
|
||||
pack_delimiter: /
|
||||
overrides: {}
|
||||
```
|
||||
|
||||
Let's edit the file so that the `distributed_interval` option is 2 instead of 10. Save the file and run:
|
||||
|
||||
```
|
||||
fleetctl apply -f ./options.yaml
|
||||
```
|
||||
|
||||
Now run a live query again. You should notice results coming back more quickly.
|
||||
|
||||
## Logging in to an existing Fleet instance
|
||||
|
||||
If you have an existing Fleet instance (version 2.0.0 or above), then simply run `fleetctl login` (after configuring your local CLI context):
|
||||
|
||||
```
|
||||
fleetctl config set --address https://fleet.corp.example.com
|
||||
[+] Set the address config key to "https://fleet.corp.example.com" in the "default" context
|
||||
|
||||
fleetctl login
|
||||
Log in using the standard Fleet credentials.
|
||||
Email: mike@arpaia.co
|
||||
Password:
|
||||
[+] Fleet login successful and context configured!
|
||||
```
|
||||
|
||||
Once your local context is configured, you can use the above `fleetctl` normally. See `fleetctl --help` for more information.
|
||||
|
||||
### Logging in with SAML (SSO) authentication
|
||||
|
||||
Users that authenticate to Fleet via SSO should retrieve their API token from the UI and set it manually in their `fleetctl` configuration (instead of logging in via `fleetctl login`).
|
||||
|
||||
1. Go to the "Account Settings" page in Fleet (https://fleet.corp.example.com/settings). Click the "Get API Token" button to bring up a modal with the API token.
|
||||
|
||||
2. Set the API token in the `~/.fleet/config` file. The file should look like the following:
|
||||
|
||||
```
|
||||
contexts:
|
||||
default:
|
||||
address: https://fleet.corp.example.com
|
||||
email: example@example.com
|
||||
token: your_token_here
|
||||
```
|
||||
|
||||
Note the token can also be set with `fleetctl config set --token`, but this may leak the token into a user's shell history.
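As a sketch of that alternative, using the token retrieved from the UI:

```
fleetctl config set --token your_token_here
```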
|
||||
|
||||
## Using fleetctl for configuration
|
||||
|
||||
A Fleet configuration is defined using one or more declarative "messages" in yaml syntax. Each message can live in its own file, or multiple messages can share a single file, separated by `---`. Each file/message contains a few required top-level keys:
|
||||
|
||||
- `apiVersion` - the API version of the file/request
|
||||
- `spec` - the "data" of the request
|
||||
- `kind` - the type of file/object (e.g. pack, query, config)
|
||||
|
||||
The file may optionally also include some `metadata` for more complex data types (e.g. packs).
|
||||
|
||||
When you reason about how to manage these config files, consider following the [General Config Tips](https://kubernetes.io/docs/concepts/configuration/overview/#general-config-tips) published by the Kubernetes project. Some of the especially relevant tips are included here as well:
|
||||
|
||||
- When defining configurations, specify the latest stable API version.
|
||||
- Configuration files should be stored in version control before being pushed to the cluster. This allows quick roll-back of a configuration if needed. It also aids with cluster re-creation and restoration if necessary.
|
||||
- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [config-single-file.yml](../../examples/config-single-file.yml) file as an example of this syntax.
|
||||
- Don’t specify default values unnecessarily – simple and minimal configs will reduce errors.
|
||||
|
||||
All of these files can be concatenated together into [one file](../../examples/config-single-file.yml) (separated by `---`), or they can be in [individual files with a directory structure](../../examples/config-many-files) like the following:
|
||||
|
||||
```
|
||||
|-- config.yml
|
||||
|-- labels.yml
|
||||
|-- packs
|
||||
| `-- osquery-monitoring.yml
|
||||
`-- queries.yml
|
||||
```
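With either layout, the files are applied with `fleetctl apply`. A minimal sketch for the directory structure above (apply queries before the packs that reference them):

```
fleetctl apply -f config.yml
fleetctl apply -f labels.yml
fleetctl apply -f queries.yml
fleetctl apply -f packs/osquery-monitoring.yml
```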
|
||||
|
||||
### Convert osquery JSON
|
||||
|
||||
`fleetctl` includes easy tooling to convert osquery pack JSON into the `fleetctl` format. Use `fleetctl convert` with a path to the pack file:
|
||||
|
||||
```
|
||||
fleetctl convert -f test.json
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: pack
|
||||
spec:
|
||||
name: test
|
||||
queries:
|
||||
- description: "this is a test query"
|
||||
interval: 10
|
||||
name: processes
|
||||
query: processes
|
||||
removed: false
|
||||
targets:
|
||||
labels: null
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: query
|
||||
spec:
|
||||
name: processes
|
||||
query: select * from processes
|
||||
```
|
||||
|
||||
### Osquery queries
|
||||
|
||||
For especially long or complex queries, you may want to define one query in one file. Continued edits and applications to this file will update the query as long as the `metadata.name` does not change. If you want to change the name of a query, you must first create a new query with the new name and then delete the query with the old name. Make sure the old query name is not defined in any packs before deleting it or an error will occur.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: query
|
||||
spec:
|
||||
name: docker_processes
|
||||
description: The docker containers processes that are running on a system.
|
||||
query: select * from docker_container_processes;
|
||||
support:
|
||||
osquery: 2.9.0
|
||||
platforms:
|
||||
- linux
|
||||
- darwin
|
||||
```
|
||||
|
||||
To define multiple queries in a file, concatenate multiple `query` resources together in a single file with `---`. For example, consider a file that you might store at `queries/osquery_monitoring.yml`:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: query
|
||||
spec:
|
||||
name: osquery_version
|
||||
description: The version of the Launcher and Osquery process
|
||||
query: select launcher.version, osquery.version from kolide_launcher_info launcher, osquery_info osquery;
|
||||
support:
|
||||
launcher: 0.3.0
|
||||
osquery: 2.9.0
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: query
|
||||
spec:
|
||||
name: osquery_schedule
|
||||
description: Report performance stats for each file in the query schedule.
|
||||
query: select name, interval, executions, output_size, wall_time, (user_time/executions) as avg_user_time, (system_time/executions) as avg_system_time, average_memory, last_executed from osquery_schedule;
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: query
|
||||
spec:
|
||||
name: osquery_info
|
||||
description: A heartbeat counter that reports general performance (CPU, memory) and version.
|
||||
query: select i.*, p.resident_size, p.user_time, p.system_time, time.minutes as counter from osquery_info i, processes p, time where p.pid = i.pid;
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: query
|
||||
spec:
|
||||
name: osquery_events
|
||||
description: Report event publisher health and track event counters.
|
||||
query: select name, publisher, type, subscriptions, events, active from osquery_events;
|
||||
```
|
||||
|
||||
### Query packs
|
||||
|
||||
To define query packs, reference queries defined elsewhere by name. This is why the "name" of a query is so important. You can define many of these packs in many files.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: pack
|
||||
spec:
|
||||
name: osquery_monitoring
|
||||
disabled: false
|
||||
targets:
|
||||
labels:
|
||||
- All Hosts
|
||||
queries:
|
||||
- query: osquery_version
|
||||
name: osquery_version_differential
|
||||
interval: 7200
|
||||
- query: osquery_version
|
||||
name: osquery_version_snapshot
|
||||
interval: 7200
|
||||
snapshot: true
|
||||
- query: osquery_schedule
|
||||
interval: 7200
|
||||
removed: false
|
||||
- query: osquery_events
|
||||
interval: 86400
|
||||
removed: false
|
||||
- query: osquery_info
|
||||
interval: 600
|
||||
removed: false
|
||||
```
|
||||
|
||||
### Host labels
|
||||
|
||||
The following file describes the labels which hosts should be automatically grouped into. The label resource should include the actual SQL query so that the label is self-contained:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: label
|
||||
spec:
|
||||
name: slack_not_running
|
||||
query: >
|
||||
SELECT * from system_info
|
||||
WHERE NOT EXISTS (
|
||||
SELECT *
|
||||
FROM processes
|
||||
WHERE name LIKE "%Slack%"
|
||||
);
|
||||
```
|
||||
|
||||
Labels can also be "manually managed". When defining the label, reference hosts by hostname:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: label
|
||||
spec:
|
||||
name: Manually Managed Example
|
||||
label_membership_type: manual
|
||||
hosts:
|
||||
- hostname1
|
||||
- hostname2
|
||||
- hostname3
|
||||
```
|
||||
|
||||
|
||||
### Osquery configuration options
|
||||
|
||||
The following file describes options returned to osqueryd when it checks for configuration. See the [osquery documentation](https://osquery.readthedocs.io/en/stable/deployment/configuration/#options) for the available options. Existing options will be overwritten when this file is applied.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: options
|
||||
spec:
|
||||
config:
|
||||
options:
|
||||
distributed_interval: 3
|
||||
distributed_tls_max_attempts: 3
|
||||
logger_plugin: tls
|
||||
logger_tls_endpoint: /api/v1/osquery/log
|
||||
logger_tls_period: 10
|
||||
decorators:
|
||||
load:
|
||||
- "SELECT version FROM osquery_info"
|
||||
- "SELECT uuid AS host_uuid FROM system_info"
|
||||
always:
|
||||
- "SELECT user AS username FROM logged_in_users WHERE user <> '' ORDER BY time LIMIT 1"
|
||||
interval:
|
||||
3600: "SELECT total_seconds AS uptime FROM uptime"
|
||||
overrides:
|
||||
# Note configs in overrides take precedence over the default config defined
|
||||
# under the config key above. Hosts receive overrides based on the platform
|
||||
# returned by `SELECT platform FROM os_version`. In this example, the base
|
||||
# config would be used for Windows and CentOS hosts, while Mac and Ubuntu
|
||||
# hosts would receive their respective overrides. Note, these overrides are
|
||||
# NOT merged with the top level configuration.
|
||||
platforms:
|
||||
darwin:
|
||||
options:
|
||||
distributed_interval: 10
|
||||
distributed_tls_max_attempts: 10
|
||||
logger_plugin: tls
|
||||
logger_tls_endpoint: /api/v1/osquery/log
|
||||
logger_tls_period: 300
|
||||
disable_tables: chrome_extensions
|
||||
docker_socket: /var/run/docker.sock
|
||||
file_paths:
|
||||
users:
|
||||
- /Users/%/Library/%%
|
||||
- /Users/%/Documents/%%
|
||||
etc:
|
||||
- /etc/%%
|
||||
|
||||
ubuntu:
|
||||
options:
|
||||
distributed_interval: 10
|
||||
distributed_tls_max_attempts: 3
|
||||
logger_plugin: tls
|
||||
logger_tls_endpoint: /api/v1/osquery/log
|
||||
logger_tls_period: 60
|
||||
schedule_timeout: 60
|
||||
docker_socket: /etc/run/docker.sock
|
||||
file_paths:
|
||||
homes:
|
||||
- /root/.ssh/%%
|
||||
- /home/%/.ssh/%%
|
||||
etc:
|
||||
- /etc/%%
|
||||
tmp:
|
||||
- /tmp/%%
|
||||
exclude_paths:
|
||||
homes:
|
||||
- /home/not_to_monitor/.ssh/%%
|
||||
tmp:
|
||||
- /tmp/too_many_events/
|
||||
decorators:
|
||||
load:
|
||||
- "SELECT * FROM cpuid"
|
||||
- "SELECT * FROM docker_info"
|
||||
interval:
|
||||
3600: "SELECT total_seconds AS uptime FROM uptime"
|
||||
```
|
||||
|
||||
### Auto table construction
|
||||
|
||||
You can use Fleet to query local SQLite databases as tables. For more information on creating ATC configuration from a SQLite database, see the [Osquery Automatic Table Construction documentation](https://osquery.readthedocs.io/en/stable/deployment/configuration/#automatic-table-construction).
|
||||
|
||||
If you already know what your ATC configuration needs to look like, you can add it to an options config file:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: options
|
||||
spec:
|
||||
overrides:
|
||||
platforms:
|
||||
darwin:
|
||||
auto_table_construction:
|
||||
tcc_system_entries:
|
||||
query: "select service, client, allowed, prompt_count, last_modified from access"
|
||||
path: "/Library/Application Support/com.apple.TCC/TCC.db"
|
||||
columns:
|
||||
- "service"
|
||||
- "client"
|
||||
- "allowed"
|
||||
- "prompt_count"
|
||||
- "last_modified"
|
||||
```
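Once osquery has picked up the new configuration, the generated table can be queried like any other. A sketch using a live query against the table defined above:

```
fleetctl query --labels 'All Hosts' --query 'SELECT * FROM tcc_system_entries'
```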
|
||||
|
||||
### Fleet configuration options
|
||||
The following file describes configuration options applied to the Fleet server.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: config
|
||||
spec:
|
||||
host_expiry_settings:
|
||||
host_expiry_enabled: true
|
||||
host_expiry_window: 10
|
||||
host_settings:
|
||||
# "additional" information to collect from hosts along with the host
|
||||
# details. This information will be updated at the same time as other host
|
||||
# details and is returned by the API when host objects are returned. Users
|
||||
# must take care to keep the data returned by these queries small in
|
||||
# order to mitigate potential performance impacts on the Fleet server.
|
||||
additional_queries:
|
||||
time: select * from time
|
||||
macs: select mac from interface_details
|
||||
org_info:
|
||||
org_logo_url: "https://example.org/logo.png"
|
||||
org_name: Example Org
|
||||
server_settings:
|
||||
kolide_server_url: https://fleet.example.org:8080
|
||||
smtp_settings:
|
||||
authentication_method: authmethod_plain
|
||||
authentication_type: authtype_username_password
|
||||
domain: example.org
|
||||
enable_smtp: true
|
||||
enable_ssl_tls: true
|
||||
enable_start_tls: true
|
||||
password: supersekretsmtppass
|
||||
port: 587
|
||||
sender_address: fleet@example.org
|
||||
server: mail.example.org
|
||||
user_name: test_user
|
||||
verify_ssl_certs: true
|
||||
sso_settings:
|
||||
enable_sso: false
|
||||
entity_id: 1234567890
|
||||
idp_image_url: https://idp.example.org/logo.png
|
||||
idp_name: IDP Vendor 1
|
||||
issuer_uri: https://idp.example.org/SAML2/SSO/POST
|
||||
metadata: "<md:EntityDescriptor entityID="https://idp.example.org/SAML2"> ... /md:EntityDescriptor>"
|
||||
metadata_url: https://idp.example.org/idp-meta.xml
|
||||
```
|
||||
#### SMTP authentication
|
||||
|
||||
**Warning:** Be careful not to store your SMTP credentials in source control. It is recommended to set the password through the web UI or `fleetctl` and then remove the line from the checked-in version. Fleet will leave the password as-is if the field is missing from the applied configuration.
|
||||
|
||||
The following options are available when configuring SMTP authentication:
|
||||
|
||||
- `smtp_settings.authentication_type`
|
||||
- `authtype_none` - use this if your SMTP server is open
|
||||
- `authtype_username_password` - use this if your SMTP server requires authentication with a username and password
|
||||
- `smtp_settings.authentication_method` - required with authentication type `authtype_username_password`
|
||||
- `authmethod_cram_md5`
|
||||
- `authmethod_login`
|
||||
- `authmethod_plain`
|
||||
|
||||
### Enroll secrets
|
||||
|
||||
The following file shows how to configure enroll secrets. Note that secrets can be changed or made inactive, but not deleted. Hosts may not enroll with inactive secrets.
|
||||
|
||||
The name of the enroll secret used to authenticate is stored with the host and is included with API results.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: enroll_secret
|
||||
spec:
|
||||
secrets:
|
||||
- active: true
|
||||
name: default
|
||||
secret: RzTlxPvugG4o4O5IKS/HqEDJUmI1hwBoffff
|
||||
- active: true
|
||||
name: new_one
|
||||
secret: reallyworks
|
||||
- active: false
|
||||
name: inactive_secret
|
||||
secret: thissecretwontwork!
|
||||
```
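A sketch of applying this file and then confirming the result, using the same commands shown earlier in this guide (the file name is assumed):

```
fleetctl apply -f enroll-secrets.yml
fleetctl get enroll-secret
```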
|
||||
|
||||
## File carving with Fleet
|
||||
|
||||
Fleet supports osquery's file carving functionality as of Fleet 3.3.0. This allows the Fleet server to request files (and sets of files) from osquery agents, returning the full contents to Fleet.
|
||||
|
||||
File carving data can be stored either in Fleet's database or in an external S3 bucket. For information on how to configure the latter, consult the [configuration docs](https://github.com/fleetdm/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md#s3-file-carving-backend).
|
||||
|
||||
### Configuration
|
||||
|
||||
Given a working flagfile for connecting osquery agents to Fleet, add the following flags to enable carving:
|
||||
|
||||
```
|
||||
--disable_carver=false
|
||||
--carver_start_endpoint=/api/v1/osquery/carve/begin
|
||||
--carver_continue_endpoint=/api/v1/osquery/carve/block
|
||||
--carver_block_size=2000000
|
||||
```
|
||||
|
||||
The default flagfile provided in the "Add New Host" dialog also includes this configuration.
|
||||
|
||||
#### Carver block size
|
||||
|
||||
The `carver_block_size` flag should be configured in osquery. 2MB (`2000000`) is a good starting value.
|
||||
|
||||
The configured value must be less than the value of `max_allowed_packet` in the MySQL connection, allowing for some overhead. The default for MySQL 5.7 is 4MB and for MySQL 8 it is 64MB.
|
||||
|
||||
If S3 is used as the storage backend, this value must instead be set to at least 5MB due to the [constraints of S3's multipart uploads](https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html).
|
||||
|
||||
Using a smaller value for `carver_block_size` will lead to more HTTP requests during the carving process, resulting in longer carve times and higher load on the Fleet server. If the value is too high, HTTP requests may run long enough to cause server timeouts.
|
||||
|
||||
#### Compression
|
||||
|
||||
Compression of the carve contents can be enabled with the `carver_compression` flag in osquery. When used, the carve results will be compressed with [Zstandard](https://facebook.github.io/zstd/) compression.
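As a sketch, enabling it in the same flagfile used in the [Configuration](#configuration) section above would look like:

```
--carver_compression=true
```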
|
||||
|
||||
### Usage
|
||||
|
||||
File carves are initiated with osquery queries. Issue a query to the `carves` table, providing `carve = 1` along with the desired path(s) as constraints.
|
||||
|
||||
For example, to extract the `/etc/hosts` file on a host with hostname `mac-workstation`:
|
||||
|
||||
```
|
||||
fleetctl query --hosts mac-workstation --query 'SELECT * FROM carves WHERE carve = 1 AND path = "/etc/hosts"'
|
||||
```
|
||||
|
||||
The standard osquery file globbing syntax is also supported to carve entire directories or more:
|
||||
```
|
||||
fleetctl query --hosts mac-workstation --query 'SELECT * FROM carves WHERE carve = 1 AND path LIKE "/etc/%%"'
|
||||
```
|
||||
|
||||
#### Retrieving carves
|
||||
|
||||
List the non-expired (see below) carves with `fleetctl get carves`. Note that carves will not be available through this command until osquery checks in to the Fleet server with the first of the carve contents. This can take some time from initiation of the carve.
|
||||
|
||||
To also retrieve expired carves, use `fleetctl get carves --expired`.
|
||||
|
||||
Contents of carves are returned as .tar archives, and compressed if that option is configured.
|
||||
|
||||
To download the contents of a carve with ID 3, use
|
||||
|
||||
```
|
||||
fleetctl get carve 3 --outfile carve.tar
|
||||
```
|
||||
|
||||
It can also be useful to pipe the results directly into the tar command for unarchiving:
|
||||
|
||||
```
|
||||
fleetctl get carve 3 --stdout | tar -x
|
||||
```
|
||||
|
||||
#### Expiration
|
||||
|
||||
Carve contents remain available for 24 hours after the first data is provided from the osquery client. After this time, the carve contents are cleaned from the database and the carve is marked as "expired".
|
||||
|
||||
The same is not true if S3 is used as the storage backend. In that scenario, it is suggested to set up a [bucket lifecycle configuration](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html) to avoid retaining data in excess. Fleet will keep the carve metadata in sync with what is actually available in the bucket in an eventually consistent manner (i.e. by periodically performing comparisons).
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
#### Check carve status in osquery
|
||||
|
||||
Osquery can report on the status of carves through queries to the `carves` table.
|
||||
|
||||
The details provided by
|
||||
|
||||
```
|
||||
fleetctl query --labels 'All Hosts' --query 'SELECT * FROM carves'
|
||||
```
|
||||
|
||||
can be helpful to debug carving problems.
|
||||
|
||||
#### Ensure `carver_block_size` is set appropriately
|
||||
|
||||
This value must be less than the `max_allowed_packet` setting in MySQL. If it is too large, MySQL will reject the writes.
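To check the limit on the MySQL side, a query such as the following can be run against the Fleet database:

```
SELECT @@max_allowed_packet;
```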
|
||||
|
||||
The value must be small enough that HTTP requests do not time out.
# REST API
|
||||
- [Overview](#overview)
|
||||
- [fleetctl](#fleetctl)
|
||||
- [Current API](#current-api)
|
||||
- [Authentication](#authentication)
|
||||
- [Log in](#log-in)
|
||||
- [Log out](#log-out)
|
||||
- [Forgot password](#forgot-password)
|
||||
- [Change password](#change-password)
|
||||
- [Me](#me)
|
||||
- [SSO config](#sso-config)
|
||||
- [Initiate SSO](#initiate-sso)
|
||||
- [Hosts](#hosts)
|
||||
- [List hosts](#list-hosts)
|
||||
- [Users](#users)
|
||||
- [List all users](#list-all-users)
|
||||
- [Create a user account with an invitation](#create-a-user-account-with-an-invitation)
|
||||
- [Create a user account without an invitation](#create-a-user-account-without-an-invitation)
|
||||
- [Get user information](#get-user-information)
|
||||
|
||||
## Overview
|
||||
|
||||
Fleet is powered by a Go API server which serves three types of endpoints:
|
||||
|
||||
- Endpoints starting with `/api/v1/osquery/` are osquery TLS server API endpoints. These endpoints are used exclusively for communicating with osqueryd agents.
|
||||
- Endpoints starting with `/api/v1/kolide/` are endpoints to interact with the Fleet data model (packs, queries, scheduled queries, labels, hosts, etc) as well as application endpoints (configuring settings, logging in, session management, etc).
|
||||
- All other endpoints serve the React single-page application bundle. The React app uses React Router to determine whether the URI is a valid route and what to do.
|
||||
|
||||
Only osquery agents should interact with the osquery API, but we'd like to support extensive use of the Fleet API eventually. The API is not well documented right now, but we have plans to:
|
||||
|
||||
- Generate and publish detailed documentation via a tool built using [test2doc](https://github.com/adams-sarah/test2doc) (or similar).
|
||||
- Release a JavaScript Fleet API client library (which would be derived from the [current](https://github.com/fleetdm/fleet/blob/master/frontend/kolide/index.js) JavaScript API client).
|
||||
- Commit to a stable, standardized API format.
|
||||
|
||||
### fleetctl
|
||||
|
||||
Many of the operations that a user may wish to perform with an API are currently best performed via the [fleetctl](./(b)-fleetctl-CLI.md) tooling. These CLI tools allow updating of the osquery configuration entities, as well as performing live queries.
|
||||
|
||||
### Current API
|
||||
|
||||
The general idea with the current API is that there are many entities throughout the Fleet application, such as:
|
||||
|
||||
- Queries
|
||||
- Packs
|
||||
- Labels
|
||||
- Hosts
|
||||
|
||||
Each set of objects follows a similar REST access pattern.
|
||||
|
||||
- You can `GET /api/v1/kolide/packs` to get all packs
|
||||
- You can `GET /api/v1/kolide/packs/1` to get a specific pack.
|
||||
- You can `DELETE /api/v1/kolide/packs/1` to delete a specific pack.
|
||||
- You can `POST /api/v1/kolide/packs` (with a valid body) to create a new pack.
|
||||
- You can `PATCH /api/v1/kolide/packs/1` (with a valid body) to modify a specific pack.
|
||||
|
||||
Queries, packs, scheduled queries, labels, invites, users, and sessions all behave this way. Some objects, like invites, have additional HTTP methods for additional functionality. Some objects, such as scheduled queries, are merely a relationship between two other objects (in this case, a query and a pack) with some details attached.
|
||||
|
||||
All of these objects are put together and distributed to the appropriate osquery agents at the appropriate time. At this time, the best source of truth for the API is the [HTTP handler file](https://github.com/fleetdm/fleet/blob/master/server/service/handler.go) in the Go application. The REST API is exposed via a transport layer on top of an RPC service which is implemented using a micro-service library called [Go Kit](https://github.com/go-kit/kit). If using the Fleet API is important to you right now, being familiar with Go Kit would definitely be helpful.
|
||||
|
||||
|
||||
|
||||
## Authentication
|
||||
|
||||
Making authenticated requests to the Fleet server requires that you are granted permission to access data. The Fleet Authentication API enables you to receive an authorization token.
|
||||
|
||||
All Fleet API requests are authenticated unless noted in the documentation. This means that almost all Fleet API requests will require sending the API token in the request header.
|
||||
|
||||
The typical steps to making an authenticated API request are outlined below.
|
||||
|
||||
First, utilize the `/login` endpoint to receive an API token. For SSO users, username/password login is disabled and the API token can be retrieved from the "Settings" page in the UI.
|
||||
|
||||
`POST /api/v1/kolide/login`
|
||||
|
||||
Request body
|
||||
|
||||
```
|
||||
{
|
||||
"username": "janedoe@example.com",
|
||||
"passsword": "VArCjNW7CfsxGp67"
|
||||
}
|
||||
```
|
||||
|
||||
Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "2020-11-13T22:57:12Z",
|
||||
"updated_at": "2020-11-13T22:57:12Z",
|
||||
"id": 1,
|
||||
"username": "jane",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
},
|
||||
"token": "{your token}"
|
||||
}
|
||||
```
|
||||
|
||||
Then, use the token returned from the `/login` endpoint to authenticate further API requests. The example below utilizes the `/hosts` endpoint.
|
||||
|
||||
`GET /api/v1/kolide/hosts`
|
||||
|
||||
Request header
|
||||
|
||||
```
|
||||
Authorization: Bearer <your token>
|
||||
```
|
||||
|
||||
Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"hosts": [
|
||||
{
|
||||
"created_at": "2020-11-05T05:09:44Z",
|
||||
"updated_at": "2020-11-05T06:03:39Z",
|
||||
"id": 1,
|
||||
"detail_updated_at": "2020-11-05T05:09:45Z",
|
||||
"label_updated_at": "2020-11-05T05:14:51Z",
|
||||
"seen_time": "2020-11-05T06:03:39Z",
|
||||
"hostname": "2ceca32fe484",
|
||||
"uuid": "392547dc-0000-0000-a87a-d701ff75bc65",
|
||||
"platform": "centos",
|
||||
"osquery_version": "2.7.0",
|
||||
"os_version": "CentOS Linux 7",
|
||||
"build": "",
|
||||
"platform_like": "rhel fedora",
|
||||
"code_name": "",
|
||||
"uptime": 8305000000000,
|
||||
"memory": 2084032512,
|
||||
"cpu_type": "6",
|
||||
"cpu_subtype": "142",
|
||||
"cpu_brand": "Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz",
|
||||
"cpu_physical_cores": 4,
|
||||
"cpu_logical_cores": 4,
|
||||
"hardware_vendor": "",
|
||||
"hardware_model": "",
|
||||
"hardware_version": "",
|
||||
"hardware_serial": "",
|
||||
"computer_name": "2ceca32fe484",
|
||||
"primary_ip": "",
|
||||
"primary_mac": "",
|
||||
"distributed_interval": 10,
|
||||
"config_tls_refresh": 10,
|
||||
"logger_tls_period": 8,
|
||||
"additional": {},
|
||||
"enroll_secret_name": "default",
|
||||
"status": "offline",
|
||||
"display_text": "2ceca32fe484"
|
||||
},
|
||||
{
|
||||
"created_at": "2020-11-05T05:09:44Z",
|
||||
"updated_at": "2020-11-05T06:03:39Z",
|
||||
"id": 2,
|
||||
"detail_updated_at": "2020-11-05T05:09:45Z",
|
||||
"label_updated_at": "2020-11-05T05:14:52Z",
|
||||
"seen_time": "2020-11-05T06:03:40Z",
|
||||
"hostname": "4cc885c20110",
|
||||
"uuid": "392547dc-0000-0000-a87a-d701ff75bc65",
|
||||
"platform": "centos",
|
||||
"osquery_version": "2.7.0",
|
||||
"os_version": "CentOS 6.8.0",
|
||||
"build": "",
|
||||
"platform_like": "rhel",
|
||||
"code_name": "",
|
||||
"uptime": 8305000000000,
|
||||
"memory": 2084032512,
|
||||
"cpu_type": "6",
|
||||
"cpu_subtype": "142",
|
||||
"cpu_brand": "Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz",
|
||||
"cpu_physical_cores": 4,
|
||||
"cpu_logical_cores": 4,
|
||||
"hardware_vendor": "",
|
||||
"hardware_model": "",
|
||||
"hardware_version": "",
|
||||
"hardware_serial": "",
|
||||
"computer_name": "4cc885c20110",
|
||||
"primary_ip": "",
|
||||
"primary_mac": "",
|
||||
"distributed_interval": 10,
|
||||
"config_tls_refresh": 10,
|
||||
"logger_tls_period": 8,
|
||||
"additional": {},
|
||||
"enroll_secret_name": "default",
|
||||
"status": "offline",
|
||||
"display_text": "4cc885c20110"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
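The same authenticated request can be made with any HTTP client. A minimal sketch using curl, assuming the server address from the earlier examples:

```
curl -H "Authorization: Bearer $TOKEN" \
  https://fleet.corp.example.com/api/v1/kolide/hosts
```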
|
||||
|
||||
### Log in
|
||||
|
||||
Authenticates the user with the specified credentials. Use the token returned from this endpoint to authenticate further API requests.
|
||||
|
||||
`POST /api/v1/kolide/login`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| -------- | ------ | ---- | --------------------------------------------- |
|
||||
| username | string | body | **Required**. The user's email. |
|
||||
| password | string | body | **Required**. The user's plain text password. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/login`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"username": "janedoe@example.com",
|
||||
"passsword": "VArCjNW7CfsxGp67"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "2020-11-13T22:57:12Z",
|
||||
"updated_at": "2020-11-13T22:57:12Z",
|
||||
"id": 1,
|
||||
"username": "jane",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
},
|
||||
"token": "{your token}"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Log out
|
||||
|
||||
Logs out the authenticated user.
|
||||
|
||||
`POST /api/v1/kolide/logout`
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/logout`
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
---
|
||||
|
||||
### Forgot password
|
||||
|
||||
Sends a password reset email to the specified email. Requires that SMTP is configured for your Fleet server.
|
||||
|
||||
`POST /api/v1/kolide/forgot_password`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| ----- | ------ | ---- | ----------------------------------------------------------------------- |
|
||||
| email | string | body | **Required**. The email of the user requesting the reset password link. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/forgot_password`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"email": "janedoe@example.com"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
##### Unknown error
|
||||
|
||||
`Status: 500`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Unknown Error",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "email not configured",
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Change password
|
||||
|
||||
`POST /api/v1/kolide/change_password`
|
||||
|
||||
Changes the password for the authenticated user.
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| ------------ | ------ | ---- | -------------------------------------- |
|
||||
| old_password | string | body | **Required**. The user's old password. |
|
||||
| new_password | string | body | **Required**. The user's new password. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/change_password`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"old_password": "VArCjNW7CfsxGp67",
|
||||
"new_password": "zGq7mCLA6z4PzArC",
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
##### Validation failed
|
||||
|
||||
`Status: 422 Unprocessable entity`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Validation Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "old_password",
|
||||
"reason": "old password does not match"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Me
|
||||
|
||||
Retrieves the user data for the authenticated user.
|
||||
|
||||
`POST /api/v1/kolide/me`
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/me`
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "2020-11-13T22:57:12Z",
|
||||
"updated_at": "2020-11-16T23:49:41Z",
|
||||
"id": 1,
|
||||
"username": "jane",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Perform required password reset
|
||||
|
||||
Resets the password of the authenticated user. Requires that `force_password_reset` is set to `true` prior to the request.
|
||||
|
||||
`POST /api/v1/kolide/perform_required_password_reset`
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/perform_required_password_reset`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"new_password": "sdPz8CV5YhzH47nK"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "2020-11-13T22:57:12Z",
|
||||
"updated_at": "2020-11-17T00:09:23Z",
|
||||
"id": 1,
|
||||
"username": "jane",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### SSO config
|
||||
|
||||
Gets the current SSO configuration.
|
||||
|
||||
`GET /api/v1/kolide/sso`
|
||||
|
||||
#### Example
|
||||
|
||||
`GET /api/v1/kolide/sso`
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"settings": {
|
||||
"idp_name": "IDP Vendor 1",
|
||||
"idp_image_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Initiate SSO
|
||||
|
||||
`POST /api/v1/kolide/sso`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| --------- | ------ | ---- | -------------------------------------------------------------------------- |
|
||||
| relay_url | string | body | **Required**. The relative url to be navigated to after successful sign in. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/sso`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"relay_url": "/hosts/manage"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
##### Unknown error
|
||||
|
||||
`Status: 500`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Unknown Error",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "InitiateSSO getting metadata: Get \"https://idp.example.org/idp-meta.xml\": dial tcp: lookup idp.example.org on [2001:558:feed::1]:53: no such host"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Hosts
|
||||
|
||||
### List hosts
|
||||
|
||||
`GET /api/v1/kolide/hosts`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| ----------------------- | ------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| page | integer | query | Page number of the results to fetch. |
|
||||
| per_page | integer | query | Results per page. |
|
||||
| order_key | string | query | What to order results by. Can be any column in the hosts table. |
|
||||
| status | string | query | Indicates the status of the hosts to return. Can be one of `new`, `online`, `offline`, or `mia`. |
|
||||
| additional_info_filters | string | query | A comma-delimited list of fields to include in each host's additional information object. See [Fleet Configuration Options](https://github.com/fleetdm/fleet/blob/master/docs/cli/file-format.md#fleet-configuration-options) for an example configuration with hosts' additional information. |
|
||||
|
||||
#### Example
|
||||
|
||||
`GET /api/v1/kolide/hosts?page=0&per_page=100&order_key=host_name`
|
||||
|
||||
##### Request query parameters
|
||||
|
||||
```
|
||||
{
|
||||
"page": 0,
|
||||
"per_page": 100,
|
||||
"order_key": "host_name",
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"hosts": [
|
||||
{
|
||||
"created_at": "2020-11-05T05:09:44Z",
|
||||
"updated_at": "2020-11-05T06:03:39Z",
|
||||
"id": 1,
|
||||
"detail_updated_at": "2020-11-05T05:09:45Z",
|
||||
"label_updated_at": "2020-11-05T05:14:51Z",
|
||||
"seen_time": "2020-11-05T06:03:39Z",
|
||||
"hostname": "2ceca32fe484",
|
||||
"uuid": "392547dc-0000-0000-a87a-d701ff75bc65",
|
||||
"platform": "centos",
|
||||
"osquery_version": "2.7.0",
|
||||
"os_version": "CentOS Linux 7",
|
||||
"build": "",
|
||||
"platform_like": "rhel fedora",
|
||||
"code_name": "",
|
||||
"uptime": 8305000000000,
|
||||
"memory": 2084032512,
|
||||
"cpu_type": "6",
|
||||
"cpu_subtype": "142",
|
||||
"cpu_brand": "Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz",
|
||||
"cpu_physical_cores": 4,
|
||||
"cpu_logical_cores": 4,
|
||||
"hardware_vendor": "",
|
||||
"hardware_model": "",
|
||||
"hardware_version": "",
|
||||
"hardware_serial": "",
|
||||
"computer_name": "2ceca32fe484",
|
||||
"primary_ip": "",
|
||||
"primary_mac": "",
|
||||
"distributed_interval": 10,
|
||||
"config_tls_refresh": 10,
|
||||
"logger_tls_period": 8,
|
||||
"additional": {},
|
||||
"enroll_secret_name": "default",
|
||||
"status": "offline",
|
||||
"display_text": "2ceca32fe484"
|
||||
},
|
||||
{
|
||||
"created_at": "2020-11-05T05:09:44Z",
|
||||
"updated_at": "2020-11-05T06:03:39Z",
|
||||
"id": 2,
|
||||
"detail_updated_at": "2020-11-05T05:09:45Z",
|
||||
"label_updated_at": "2020-11-05T05:14:52Z",
|
||||
"seen_time": "2020-11-05T06:03:40Z",
|
||||
"hostname": "4cc885c20110",
|
||||
"uuid": "392547dc-0000-0000-a87a-d701ff75bc65",
|
||||
"platform": "centos",
|
||||
"osquery_version": "2.7.0",
|
||||
"os_version": "CentOS 6.8.0",
|
||||
"build": "",
|
||||
"platform_like": "rhel",
|
||||
"code_name": "",
|
||||
"uptime": 8305000000000,
|
||||
"memory": 2084032512,
|
||||
"cpu_type": "6",
|
||||
"cpu_subtype": "142",
|
||||
"cpu_brand": "Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz",
|
||||
"cpu_physical_cores": 4,
|
||||
"cpu_logical_cores": 4,
|
||||
"hardware_vendor": "",
|
||||
"hardware_model": "",
|
||||
"hardware_version": "",
|
||||
"hardware_serial": "",
|
||||
"computer_name": "4cc885c20110",
|
||||
"primary_ip": "",
|
||||
"primary_mac": "",
|
||||
"distributed_interval": 10,
|
||||
"config_tls_refresh": 10,
|
||||
"logger_tls_period": 8,
|
||||
"additional": {},
|
||||
"enroll_secret_name": "default",
|
||||
"status": "offline",
|
||||
"display_text": "4cc885c20110"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Users
|
||||
|
||||
The Fleet server exposes a handful of API endpoints that handle common user management operations. All the following endpoints require prior authentication, meaning you must first log in successfully before calling any of the endpoints documented below.
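Continuing the curl sketch from the [Authentication](#authentication) section, listing users looks like:

```
curl -H "Authorization: Bearer $TOKEN" \
  https://fleet.corp.example.com/api/v1/kolide/users
```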
|
||||
|
||||
### List all users
|
||||
|
||||
Returns a list of all enabled users.
|
||||
|
||||
`GET /api/v1/kolide/users`
|
||||
|
||||
#### Parameters
|
||||
|
||||
None.
|
||||
|
||||
#### Example
|
||||
|
||||
`GET /api/v1/kolide/users`
|
||||
|
||||
##### Request query parameters
|
||||
|
||||
None.
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"users": [
|
||||
{
|
||||
"created_at": "2020-12-10T03:52:53Z",
|
||||
"updated_at": "2020-12-10T03:52:53Z",
|
||||
"id": 1,
|
||||
"username": "janedoe",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
##### Failed authentication
|
||||
|
||||
`Status: 401 Authentication Failed`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Authentication Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "username or email and password do not match"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Create a user account with an invitation
|
||||
|
||||
Creates a user account after an invited user provides registration information and submits the form.
|
||||
|
||||
`POST /api/v1/kolide/users`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| --------------------- | ------ | ---- | --------------------------------------------------------------- |
|
||||
| email | string | body | **Required**. The email address of the user. |
|
||||
| invite_token | string | body | **Required**. Token provided to the user in the invitation email. |
|
||||
| name | string | body | The name of the user. |
|
||||
| username | string | body | **Required**. The username chosen by the user. |
|
||||
| password | string | body | **Required**. The password chosen by the user. |
|
||||
| password_confirmation | string | body | **Required**. Confirmation of the password chosen by the user. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/users`
|
||||
|
||||
##### Request query parameters
|
||||
|
||||
```
|
||||
{
|
||||
"email": "janedoe@example.com",
|
||||
"invite_token": "SjdReDNuZW5jd3dCbTJtQTQ5WjJTc2txWWlEcGpiM3c=",
|
||||
"name": "janedoe",
|
||||
"username": "janedoe",
|
||||
"password": "test-123",
|
||||
"password_confirmation": "test-123"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "0001-01-01T00:00:00Z",
|
||||
"updated_at": "0001-01-01T00:00:00Z",
|
||||
"id": 2,
|
||||
"username": "janedoe",
|
||||
"name": "janedoe",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": false,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
##### Failed authentication
|
||||
|
||||
`Status: 401 Authentication Failed`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Authentication Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "username or email and password do not match"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
##### Expired or used invite code
|
||||
|
||||
`Status: 404 Resource Not Found`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Resource Not Found",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "Invite with token SjdReDNuZW5jd3dCbTJtQTQ5WjJTc2txWWlEcGpiM3c= was not found in the datastore"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
##### Validation failed
|
||||
|
||||
`Status: 422 Validation Failed`
|
||||
|
||||
The same error will be returned whenever one of the required parameters fails the validation.
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Validation Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "username",
|
||||
"reason": "cannot be empty"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Create a user account without an invitation
|
||||
|
||||
Creates a user account without requiring an invitation; the user is enabled immediately.
|
||||
|
||||
`POST /api/v1/kolide/users/admin`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| ---------- | ------- | ---- | ------------------------------------------------ |
|
||||
| username | string | body | **Required**. The user's username. |
|
||||
| email | string | body | **Required**. The user's email address. |
|
||||
| password | string | body | **Required**. The user's password. |
|
||||
| invited_by | integer | body | **Required**. ID of the admin creating the user. |
|
||||
| admin | boolean | body | **Required**. Whether the user has admin privileges. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/users/admin`
|
||||
|
||||
##### Request query parameters
|
||||
|
||||
```
|
||||
{
|
||||
"username": "janedoe",
|
||||
"email": "janedoe@example.com",
|
||||
"password": "test-123",
|
||||
"invited_by":1,
|
||||
"admin":true
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "0001-01-01T00:00:00Z",
|
||||
"updated_at": "0001-01-01T00:00:00Z",
|
||||
"id": 5,
|
||||
"username": "janedoe",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": false,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
##### Failed authentication
|
||||
|
||||
`Status: 401 Authentication Failed`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Authentication Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "username or email and password do not match"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
##### User doesn't exist
|
||||
|
||||
`Status: 404 Resource Not Found`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Resource Not Found",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "User with id=1 was not found in the datastore"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Get user information
|
||||
|
||||
Returns all information about a specific user.
|
||||
|
||||
`GET /api/v1/kolide/users/{id}`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| ---- | ------- | ----- | ---------------------------- |
|
||||
| id | integer | query | **Required**. The user's id. |
|
||||
|
||||
#### Example
|
||||
|
||||
`GET /api/v1/kolide/users/2`
|
||||
|
||||
##### Request query parameters
|
||||
|
||||
```
|
||||
{
|
||||
"id": 1
|
||||
}
|
||||
```
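For illustration, a `curl` version of this request might look like the following; the hostname is a placeholder and `$TOKEN` is assumed to hold a valid API token.

```sh
# Illustrative only: the hostname and $TOKEN are placeholders/assumptions.
curl -H "Authorization: Bearer $TOKEN" https://fleet.example.com/api/v1/kolide/users/2
```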
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "2020-12-10T05:20:25Z",
|
||||
"updated_at": "2020-12-10T05:24:27Z",
|
||||
"id": 2,
|
||||
"username": "janedoe",
|
||||
"name": "janedoe",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
##### Failed authentication
|
||||
|
||||
`Status: 401 Authentication Failed`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Authentication Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "username or email and password do not match"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
##### User doesn't exist
|
||||
|
||||
`Status: 404 Resource Not Found`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Resource Not Found",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "User with id=5 was not found in the datastore"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
@ -0,0 +1,74 @@
|
||||
# Osquery Logs
|
||||
- [Osquery logging plugins](#osquery-logging-plugins)
|
||||
- [Filesystem](#filesystem)
|
||||
- [Firehose](#firehose)
|
||||
- [Kinesis](#kinesis)
|
||||
- [PubSub](#pubsub)
|
||||
- [Stdout](#stdout)
|
||||
|
||||
Osquery agents are typically configured to send logs to the Fleet server (`--logger_plugin=tls`). This is not a requirement, and any other logger plugin can be used even when osquery clients are connecting to the Fleet server to retrieve configuration or run live queries. See the [osquery logging documentation](https://osquery.readthedocs.io/en/stable/deployment/logging/) for more about configuring logging on the agent.
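As a rough sketch, an osqueryd invocation that ships logs to Fleet over TLS might look like the following; the hostname, certificate path, and endpoint path are placeholders/assumptions to adapt to your own enrollment setup.

```sh
# Sketch: ship osquery result/status logs to Fleet with the tls logger plugin.
# Hostname, certificate path, and endpoint path are placeholders.
osqueryd \
  --tls_hostname=fleet.example.com \
  --tls_server_certs=/etc/osquery/fleet.pem \
  --logger_plugin=tls \
  --logger_tls_endpoint=/api/v1/osquery/log \
  --logger_tls_period=10
```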
|
||||
|
||||
If `--logger_plugin=tls` is used with osquery clients, the following configuration can be applied on the Fleet server for handling the incoming logs.
|
||||
|
||||
## Osquery logging plugins
|
||||
|
||||
Fleet supports the following logging plugins for osquery logs:
|
||||
|
||||
- [Filesystem](#filesystem) - Logs are written to the local Fleet server filesystem.
|
||||
- [Firehose](#firehose) - Logs are written to AWS Firehose streams.
|
||||
- [Kinesis](#kinesis) - Logs are written to AWS Kinesis streams.
|
||||
- [PubSub](#pubsub) - Logs are written to Google Cloud PubSub topics.
|
||||
- [Stdout](#stdout) - Logs are written to stdout.
|
||||
|
||||
To set the osquery logging plugins, use the `--osquery_result_log_plugin` and `--osquery_status_log_plugin` flags (or [equivalents for environment variables or configuration files](../1-Deployment/(b)-Configuration.md#options)).
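As a minimal sketch (assuming the `fleet serve` invocation used in your deployment, with MySQL, Redis, and TLS flags omitted), plugin selection could look like this:

```sh
# Minimal sketch: select result and status logging plugins at server start.
# Other flags required for a real deployment are omitted.
fleet serve \
  --osquery_result_log_plugin=firehose \
  --osquery_status_log_plugin=filesystem
```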
|
||||
|
||||
### Filesystem
|
||||
|
||||
The default logging plugin.
|
||||
|
||||
- Plugin name: `filesystem`
|
||||
- Flag namespace: [filesystem](../1-Deployment/(b)-Configuration.md#filesystem)
|
||||
|
||||
With the filesystem plugin, osquery result and/or status logs are written to the local filesystem on the Fleet server. This is typically used with a log forwarding agent on the Fleet server that will push the logs into a logging pipeline. Note that if multiple load-balanced Fleet servers are used, the logs will be load-balanced across those servers (not duplicated).
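A hedged example combining the filesystem plugin with explicit log destinations; the flag names follow the filesystem flag namespace linked above, and the paths are placeholders.

```sh
# Sketch: write result and status logs to specific files on the Fleet server.
fleet serve \
  --osquery_result_log_plugin=filesystem \
  --osquery_status_log_plugin=filesystem \
  --filesystem_result_log_file=/var/log/osquery/result.log \
  --filesystem_status_log_file=/var/log/osquery/status.log
```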
|
||||
|
||||
### Firehose
|
||||
|
||||
- Plugin name: `firehose`
|
||||
- Flag namespace: [firehose](../1-Deployment/(b)-Configuration.md#firehose)
|
||||
|
||||
With the Firehose plugin, osquery result and/or status logs are written to [AWS Firehose](https://aws.amazon.com/kinesis/data-firehose/) streams. This is a very good method for aggregating osquery logs into AWS S3 storage.
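A sketch of what a Firehose setup might look like. The flag names are assumptions based on the firehose flag namespace linked above; AWS credentials (supplied via flags, environment, or an IAM role) are omitted, and the region and stream names are placeholders.

```sh
# Sketch only: flag names assumed from the firehose namespace; values are placeholders.
fleet serve \
  --osquery_result_log_plugin=firehose \
  --osquery_status_log_plugin=firehose \
  --firehose_region=us-east-1 \
  --firehose_result_stream=osquery_result \
  --firehose_status_stream=osquery_status
```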
|
||||
|
||||
Note that Firehose logging has limits [discussed in the documentation](https://docs.aws.amazon.com/firehose/latest/dev/limits.html). When Fleet encounters logs that are too big for Firehose, notifications will be output in the Fleet logs and those logs _will not_ be sent to Firehose.
|
||||
|
||||
### Kinesis
|
||||
|
||||
- Plugin name: `kinesis`
|
||||
- Flag namespace: [kinesis](../1-Deployment/(b)-Configuration.md#kinesis)
|
||||
|
||||
With the Kinesis plugin, osquery result and/or status logs are written to
|
||||
[AWS Kinesis](https://aws.amazon.com/kinesis/data-streams) streams.
|
||||
|
||||
Note that Kinesis logging has limits [discussed in the
|
||||
documentation](https://docs.aws.amazon.com/kinesis/latest/dev/limits.html).
|
||||
When Fleet encounters logs that are too big for Kinesis, notifications will be
|
||||
output in the Fleet logs and those logs _will not_ be sent to Kinesis.
|
||||
|
||||
### PubSub
|
||||
|
||||
- Plugin name: `pubsub`
|
||||
- Flag namespace: [pubsub](../1-Deployment/(b)-Configuration.md#pubsub)
|
||||
|
||||
With the PubSub plugin, osquery result and/or status logs are written to [PubSub](https://cloud.google.com/pubsub/) topics.
|
||||
|
||||
Note that messages over 10MB will be dropped, with a notification sent to the Fleet logs, as they can never be processed by PubSub.
|
||||
|
||||
### Stdout
|
||||
|
||||
- Plugin name: `stdout`
|
||||
- Flag namespace: [stdout](../1-Deployment/(b)-Configuration.md#stdout)
|
||||
|
||||
With the stdout plugin, osquery result and/or status logs are written to stdout
|
||||
on the Fleet server. This is typically used for debugging or with a log
|
||||
forwarding setup that will capture and forward stdout logs into a logging
|
||||
pipeline. Note that if multiple load-balanced Fleet servers are used, the logs
|
||||
will be load-balanced across those servers (not duplicated).
|
@ -1,38 +1,82 @@
|
||||
# Fleet Server Performance
|
||||
# Monitoring Fleet
|
||||
- [Health checks](#health-checks)
|
||||
- [Metrics](#metrics)
|
||||
- [Alerting](#alerting)
|
||||
- [Graphing](#graphing)
|
||||
- [Fleet server performance](#fleet-server-performance)
|
||||
- [Horizontal scaling](#horizontal-scaling)
|
||||
- [Availability](#availability)
|
||||
- [Monitoring](#monitoring)
|
||||
- [Debugging performance issues](#debugging-performance-issues)
|
||||
- [MySQL & Redis](#mysql-&-redis)
|
||||
- [Fleet server](#fleet-server)
|
||||
|
||||
## Health checks
|
||||
|
||||
Fleet exposes a basic health check at the `/healthz` endpoint. This is the interface to use for simple monitoring and load-balancer health checks.
|
||||
|
||||
The `/healthz` endpoint will return an `HTTP 200` status if the server is running and has healthy connections to MySQL and Redis. If there are any problems, the endpoint will return an `HTTP 500` status.
|
||||
|
||||
## Metrics
|
||||
|
||||
Fleet exposes server metrics in a format compatible with [Prometheus](https://prometheus.io/). A simple example Prometheus configuration is available in [tools/app/prometheus.yml](/tools/app/prometheus.yml).
|
||||
|
||||
Prometheus can be configured to use a wide range of service discovery mechanisms within AWS, GCP, Azure, Kubernetes, and more. See the Prometheus [configuration documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) for more information on configuring these service discovery mechanisms.
|
||||
|
||||
### Alerting
|
||||
|
||||
Prometheus has built-in support for alerting through [Alertmanager](https://prometheus.io/docs/alerting/latest/overview/).
|
||||
|
||||
Consider building alerts for:
|
||||
|
||||
- Changes from expected levels of host enrollment
|
||||
- Increased latency on HTTP endpoints
|
||||
- Increased error levels on HTTP endpoints
|
||||
|
||||
```
|
||||
TODO (Seeking Contributors)
|
||||
Add example alerting configurations
|
||||
```
|
||||
|
||||
### Graphing
|
||||
|
||||
Prometheus provides basic graphing capabilities, and integrates tightly with [Grafana](https://prometheus.io/docs/visualization/grafana/) for sophisticated visualizations.
|
||||
|
||||
## Fleet server performance
|
||||
|
||||
Fleet is designed to scale to hundreds of thousands of online hosts. The Fleet server scales horizontally to support higher load.
|
||||
|
||||
## Horizontal Scaling
|
||||
### Horizontal scaling
|
||||
|
||||
Scaling Fleet horizontally is as simple as running more Fleet server processes connected to the same MySQL and Redis backing stores. Typically, operators front Fleet server nodes with a load balancer that will distribute requests to the servers. All APIs in Fleet are designed to work in this arrangement by simply configuring clients to connect to the load balancer.
|
||||
|
||||
## Availability
|
||||
### Availability
|
||||
|
||||
The Fleet/osquery system is resilient to loss of availability. Osquery agents will continue executing the existing configuration and buffering result logs during downtime due to lack of network connectivity, server maintenance, or any other reason. Buffering in osquery can be configured with the `--buffered_log_max` flag.
|
||||
|
||||
Note that short downtimes are expected during [Fleet server upgrades](./updating-fleet.md) that require database migrations.
|
||||
Note that short downtimes are expected during [Fleet server upgrades](./(g)-Updating-Fleet.md) that require database migrations.
|
||||
|
||||
## Monitoring
|
||||
### Monitoring
|
||||
|
||||
More information on monitoring Fleet servers with Prometheus and other tools is available in the [Monitoring Fleet](./monitoring-alerting.md) documentation.
|
||||
More information on monitoring Fleet servers with Prometheus and other tools is available in the [Monitoring Fleet](./(e)-Monitoring-Fleet.md) documentation.
|
||||
|
||||
## Debugging Performance Issues
|
||||
### Debugging performance issues
|
||||
|
||||
### MySQL & Redis
|
||||
#### MySQL & Redis
|
||||
|
||||
If performance issues are encountered with the MySQL and Redis servers, use the extensive resources available online to optimize and understand these problems. Please [file an issue](https://github.com/fleetdm/fleet/issues/new/choose) with details about the problem so that Fleet developers can work to fix them.
|
||||
|
||||
### Fleet Server
|
||||
#### Fleet server
|
||||
|
||||
For performance issues in the Fleet server process, please [file an issue](https://github.com/fleetdm/fleet/issues/new/choose) with details about the scenario, and attach a debug archive. Debug archives can also be submitted confidentially through other support channels.
|
||||
|
||||
#### Generate Debug Archive (Fleet 3.4.0+)
|
||||
##### Generate debug archive (Fleet 3.4.0+)
|
||||
|
||||
Use the `fleetctl archive` command to generate an archive of Fleet's full suite of debug profiles. See the [fleetctl setup guide](../cli/setup-guide.md) for details on configuring `fleetctl`.
|
||||
Use the `fleetctl debug archive` command to generate an archive of Fleet's full suite of debug profiles. See the [fleetctl setup guide](./(b)-fleetctl-CLI.md) for details on configuring `fleetctl`.
|
||||
|
||||
The generated `.tar.gz` archive will be available in the current directory.
|
||||
|
||||
##### Targeting Individual Servers
|
||||
###### Targeting individual servers
|
||||
|
||||
In most configurations, the `fleetctl` client is configured to make requests to a load balancer that will proxy the requests to each server instance. This can be problematic when trying to debug a performance issue on a specific server. To target an individual server, create a new `fleetctl` context that uses the direct address of the server.
|
||||
|
||||
@ -44,6 +88,6 @@ fleetctl login --context server-a
|
||||
fleetctl debug archive --context server-a
|
||||
```
|
||||
|
||||
##### Confidential Information
|
||||
###### Confidential information
|
||||
|
||||
The `fleetctl debug archive` command retrieves information generated by Go's [`net/http/pprof`](https://golang.org/pkg/net/http/pprof/) package. In most scenarios this should not include sensitive information; however, it does include the command line arguments passed to the Fleet server. If the Fleet server receives sensitive credentials via CLI argument (rather than via environment variables or a config file), this information should be scrubbed from the `cmdline` file in the archive.
|
@ -0,0 +1,93 @@
|
||||
# Monitoring Fleet
|
||||
- [Health checks](#health-checks)
|
||||
- [Metrics](#metrics)
|
||||
- [Alerting](#alerting)
|
||||
- [Graphing](#graphing)
|
||||
- [Fleet server performance](#fleet-server-performance)
|
||||
- [Horizontal scaling](#horizontal-scaling)
|
||||
- [Availability](#availability)
|
||||
- [Monitoring](#monitoring)
|
||||
- [Debugging performance issues](#debugging-performance-issues)
|
||||
- [MySQL & Redis](#mysql-&-redis)
|
||||
- [Fleet server](#fleet-server)
|
||||
|
||||
## Health checks
|
||||
|
||||
Fleet exposes a basic health check at the `/healthz` endpoint. This is the interface to use for simple monitoring and load-balancer health checks.
|
||||
|
||||
The `/healthz` endpoint will return an `HTTP 200` status if the server is running and has healthy connections to MySQL and Redis. If there are any problems, the endpoint will return an `HTTP 500` status.
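For example, a load balancer or cron-based check can be as simple as the following; the hostname is a placeholder.

```sh
# Simple health check; -f makes curl exit non-zero on an HTTP 500 response.
curl -f https://fleet.example.com/healthz
```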
|
||||
|
||||
## Metrics
|
||||
|
||||
Fleet exposes server metrics in a format compatible with [Prometheus](https://prometheus.io/). A simple example Prometheus configuration is available in [tools/app/prometheus.yml](/tools/app/prometheus.yml).
|
||||
|
||||
Prometheus can be configured to use a wide range of service discovery mechanisms within AWS, GCP, Azure, Kubernetes, and more. See the Prometheus [configuration documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) for more information on configuring these service discovery mechanisms.
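To verify that metrics are being exposed before wiring up Prometheus, a quick manual check might look like the following; this assumes the conventional Prometheus `/metrics` path, and whether the endpoint requires authentication depends on your deployment.

```sh
# Quick manual check that metrics are exposed (path and auth are assumptions).
curl https://fleet.example.com/metrics | head
```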
|
||||
|
||||
### Alerting
|
||||
|
||||
Prometheus has built-in support for alerting through [Alertmanager](https://prometheus.io/docs/alerting/latest/overview/).
|
||||
|
||||
Consider building alerts for:
|
||||
|
||||
- Changes from expected levels of host enrollment
|
||||
- Increased latency on HTTP endpoints
|
||||
- Increased error levels on HTTP endpoints
|
||||
|
||||
```
|
||||
TODO (Seeking Contributors)
|
||||
Add example alerting configurations
|
||||
```
|
||||
|
||||
### Graphing
|
||||
|
||||
Prometheus provides basic graphing capabilities, and integrates tightly with [Grafana](https://prometheus.io/docs/visualization/grafana/) for sophisticated visualizations.
|
||||
|
||||
## Fleet server performance
|
||||
|
||||
Fleet is designed to scale to hundreds of thousands of online hosts. The Fleet server scales horizontally to support higher load.
|
||||
|
||||
### Horizontal scaling
|
||||
|
||||
Scaling Fleet horizontally is as simple as running more Fleet server processes connected to the same MySQL and Redis backing stores. Typically, operators front Fleet server nodes with a load balancer that will distribute requests to the servers. All APIs in Fleet are designed to work in this arrangement by simply configuring clients to connect to the load balancer.
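As a sketch, two server processes sharing the same backing stores might be started as shown below; normally each process runs on its own host behind the load balancer, and the addresses are placeholders.

```sh
# Sketch: two Fleet server processes sharing the same MySQL and Redis.
# Addresses are placeholders; run each process on its own host in practice.
fleet serve --mysql_address=mysql.internal:3306 --redis_address=redis.internal:6379 --server_address=0.0.0.0:8080
fleet serve --mysql_address=mysql.internal:3306 --redis_address=redis.internal:6379 --server_address=0.0.0.0:8081
```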
|
||||
|
||||
### Availability
|
||||
|
||||
The Fleet/osquery system is resilient to loss of availability. Osquery agents will continue executing the existing configuration and buffering result logs during downtime due to lack of network connectivity, server maintenance, or any other reason. Buffering in osquery can be configured with the `--buffered_log_max` flag.
|
||||
|
||||
Note that short downtimes are expected during [Fleet server upgrades](./(g)-Updating-Fleet.md) that require database migrations.
|
||||
|
||||
### Monitoring
|
||||
|
||||
More information on monitoring Fleet servers with Prometheus and other tools is available in the [Monitoring Fleet](./(e)-Monitoring-Fleet.md) documentation.
|
||||
|
||||
### Debugging performance issues
|
||||
|
||||
#### MySQL & Redis
|
||||
|
||||
If performance issues are encountered with the MySQL and Redis servers, use the extensive resources available online to optimize and understand these problems. Please [file an issue](https://github.com/fleetdm/fleet/issues/new/choose) with details about the problem so that Fleet developers can work to fix them.
|
||||
|
||||
#### Fleet server
|
||||
|
||||
For performance issues in the Fleet server process, please [file an issue](https://github.com/fleetdm/fleet/issues/new/choose) with details about the scenario, and attach a debug archive. Debug archives can also be submitted confidentially through other support channels.
|
||||
|
||||
##### Generate debug archive (Fleet 3.4.0+)
|
||||
|
||||
Use the `fleetctl debug archive` command to generate an archive of Fleet's full suite of debug profiles. See the [fleetctl setup guide](./(b)-fleetctl-CLI.md) for details on configuring `fleetctl`.
|
||||
|
||||
The generated `.tar.gz` archive will be available in the current directory.
|
||||
|
||||
###### Targeting individual servers
|
||||
|
||||
In most configurations, the `fleetctl` client is configured to make requests to a load balancer that will proxy the requests to each server instance. This can be problematic when trying to debug a performance issue on a specific server. To target an individual server, create a new `fleetctl` context that uses the direct address of the server.
|
||||
|
||||
For example:
|
||||
|
||||
```sh
|
||||
fleetctl config set --context server-a --address https://server-a:8080
|
||||
fleetctl login --context server-a
|
||||
fleetctl debug archive --context server-a
|
||||
```
|
||||
|
||||
###### Confidential information
|
||||
|
||||
The `fleetctl debug archive` command retrieves information generated by Go's [`net/http/pprof`](https://golang.org/pkg/net/http/pprof/) package. In most scenarios this should not include sensitive information; however, it does include the command line arguments passed to the Fleet server. If the Fleet server receives sensitive credentials via CLI argument (rather than via environment variables or a config file), this information should be scrubbed from the `cmdline` file in the archive.
|
@ -0,0 +1,45 @@
|
||||
# Security best practices
|
||||
- [Describe your secure coding practices](#describe-your-secure-coding-practices,-including-code-reviews,-use-of-static/dynamic-security-testing-tools,-3rd-party-scans/reviews)
|
||||
- [SQL injection](#sql-injection)
|
||||
- [Broken authentication](#broken-authentication-–-authentication,-session-management-flaws-that-compromise-passwords,-keys,-session-tokens-etc.)
|
||||
- [Passwords](#passwords)
|
||||
- [Authentication tokens](#authentication-tokens)
|
||||
- [Sensitive data exposure](#sensitive-data-exposure-–-encryption-in-transit,-at-rest,-improperly-implemented-APIs.)
|
||||
- [Cross-site scripting](#cross-site-scripting-–-ensure-an-attacker-can’t-execute-scripts-in-the-user’s-browser)
|
||||
- [Components with known vulnerabilities](#components-with-known-vulnerabilities-–-prevent-the-use-of-libraries,-frameworks,-other-software-with-existing-vulnerabilities.)
|
||||
|
||||
The Fleet community follows best practices when coding. Here are some of the ways we mitigate the OWASP Top 10 issues:
|
||||
|
||||
## Describe your secure coding practices, including code reviews, use of static/dynamic security testing tools, 3rd party scans/reviews.
|
||||
|
||||
Every piece of code that is merged into Fleet is reviewed by at least one other engineer before merging. We don't use any security-specific testing tools.
|
||||
|
||||
The server backend is built in Go, which (aside from language-level vulnerabilities) eliminates buffer overflows and other memory-related attacks.
|
||||
|
||||
We use standard library cryptography wherever possible, and all cryptography uses well-known standards.
|
||||
|
||||
## SQL injection
|
||||
All queries are parameterized with MySQL placeholders, so MySQL itself guards against SQL injection and the Fleet code does not need to perform any escaping.
|
||||
|
||||
## Broken authentication – authentication, session management flaws that compromise passwords, keys, session tokens etc.
|
||||
### Passwords
|
||||
Fleet supports SAML authentication, which means it can be configured so that it never sees passwords.
|
||||
|
||||
Passwords are never stored in plaintext in the database. We store a `bcrypt`ed hash of the password along with a randomly generated salt. The `bcrypt` iteration count and salt key size are admin-configurable.
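A hedged sketch of tuning these parameters at server start; the flag names are assumptions based on Fleet's auth configuration namespace, and the values are only examples.

```sh
# Sketch: flag names assumed from the auth configuration namespace.
fleet serve --auth_bcrypt_cost=12 --auth_salt_key_size=24
```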
|
||||
### Authentication tokens
|
||||
The size and expiration time of session tokens are admin-configurable; see the [session_duration](../1-Deployment/(b)-Configuration.md#session_duration) configuration option.
|
||||
|
||||
It is possible to revoke all session tokens for a user by forcing a password reset.
|
||||
|
||||
|
||||
## Sensitive data exposure – encryption in transit, at rest, improperly implemented APIs.
|
||||
By default, all traffic between user clients (such as the web browser and fleetctl) and the Fleet server is encrypted with TLS. By default, all traffic between osqueryd clients and the Fleet server is encrypted with TLS. Fleet does not encrypt any data at rest (*however a user could separately configure encryption for the MySQL database and logs that Fleet writes*).
|
||||
|
||||
## Broken access controls – how restrictions on what authorized users are allowed to do/access are enforced.
|
||||
Each session is associated with a viewer context that is used to determine the access granted to that user. Access controls can easily be applied as middleware in the routing table, so the access to a route is clearly defined in the same place where the route is attached to the server; see [https://github.com/fleetdm/fleet/blob/master/server/service/handler.go#L114-L189](https://github.com/fleetdm/fleet/blob/master/server/service/handler.go#L114-L189).
|
||||
|
||||
## Cross-site scripting – ensure an attacker can’t execute scripts in the user’s browser
|
||||
We render the frontend with React and benefit from built-in XSS protection in React's rendering. This is not sufficient to prevent all XSS, so we also follow additional best practices as discussed in [https://stackoverflow.com/a/51852579/491710](https://stackoverflow.com/a/51852579/491710).
|
||||
|
||||
## Components with known vulnerabilities – prevent the use of libraries, frameworks, other software with existing vulnerabilities.
|
||||
We rely on GitHub's automated vulnerability checks, community news, and direct reports to discover vulnerabilities in our dependencies. We endeavor to fix these immediately and would almost always do so within a week of a report.
|
@ -1,7 +1,13 @@
|
||||
Updating Fleet
|
||||
==============
|
||||
# Updating Fleet
|
||||
- [Overview](#overview)
|
||||
- [Updating the Fleet binary](#updating-the-fleet-binary)
|
||||
- [Raw binaries](#raw-binaries)
|
||||
- [Docker container](#docker-container)
|
||||
- [Running database migrations](#running-database-migrations)
|
||||
|
||||
This guide explains how to update and run new versions of Fleet. For initial installation instructions, see [Installing Fleet](./installing-fleet.md).
|
||||
## Overview
|
||||
|
||||
This guide explains how to update and run new versions of Fleet. For initial installation instructions, see [Installing Fleet](./1-Deployment/(a)-Installation.md).
|
||||
|
||||
There are two steps to perform a typical Fleet update. If any other steps are required, they will be noted in the release notes.
|
||||
|
||||
@ -14,7 +20,7 @@ As with any enterprise software update, it's a good idea to back up your MySQL d
|
||||
|
||||
Follow the binary update instructions corresponding to the original installation method used to install Fleet.
|
||||
|
||||
#### Raw binaries
|
||||
### Raw binaries
|
||||
|
||||
Download the latest raw Fleet binaries:
|
||||
|
||||
@ -36,7 +42,7 @@ unzip fleet.zip 'linux/*' -d fleet
|
||||
|
||||
Replace the existing Fleet binary with the newly unzipped binary.
|
||||
|
||||
#### Docker container
|
||||
### Docker container
|
||||
|
||||
Pull the latest Fleet docker image:
|
||||
|
@ -0,0 +1,70 @@
|
||||
# Using Fleet FAQ
|
||||
- [Has anyone stress tested Fleet? How many clients can the Fleet server handle?](#has-anyone-stress-tested-fleet-how-many-clients-can-the-fleet-server-handle)
|
||||
- [How often do labels refresh? Is the refresh frequency configurable?](#how-often-do-labels-refresh-is-the-refresh-frequency-configurable)
|
||||
- [How do I revoke the authorization tokens for a user?](#how-do-i-revoke-the-authorization-tokens-for-a-user)
|
||||
- [How do I monitor the performance of my queries?](#how-do-i-monitor-the-performance-of-my-queries)
|
||||
- [How do I monitor a Fleet server?](#how-do-i-monitor-a-fleet-server)
|
||||
- [Why is the “Add User” button disabled?](#why-is-the-add-user-button-disabled)
|
||||
- [Where are my query results?](#where-are-my-query-results)
|
||||
- [Why aren’t my live queries being logged?](#why-arent-my-live-queries-being-logged)
|
||||
|
||||
## Has anyone stress tested Fleet? How many clients can the Fleet server handle?
|
||||
|
||||
Fleet has been stress tested to 150,000 online hosts and 400,000 total enrolled hosts. There are numerous production deployments with thousands or tens of thousands of hosts, including some in the high tens of thousands of hosts.
|
||||
|
||||
It’s standard deployment practice to have multiple Fleet servers behind a load balancer. However, typically the MySQL database is the bottleneck and an individual Fleet server can handle tens of thousands of hosts.
|
||||
|
||||
## How often do labels refresh? Is the refresh frequency configurable?
|
||||
|
||||
The update frequency for labels is configurable with the [--osquery_label_update_interval](https://github.com/fleetdm/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md#osquery_label_update_interval) flag (default 1 hour).
|
||||
|
||||
## How do I revoke the authorization tokens for a user?
|
||||
|
||||
Authorization tokens are revoked when the “require password reset” action is selected for that user. User-initiated password resets do not expire the existing tokens.
|
||||
|
||||
## How do I monitor the performance of my queries?
|
||||
|
||||
Fleet can live query the `osquery_schedule` table. Performing this live query allows you to get the performance data for your scheduled queries. Also consider scheduling a query to the `osquery_schedule` table to get these logs into your logging pipeline.
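For example, a live query against `osquery_schedule` with `fleetctl` might look like the following; the "All Hosts" label is a placeholder for whatever targets you want to inspect.

```sh
# Sketch: inspect scheduled query performance counters with a live query.
fleetctl query \
  --labels "All Hosts" \
  --query "SELECT name, interval, executions, wall_time, average_memory FROM osquery_schedule"
```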
|
||||
|
||||
## How do I monitor a Fleet server?
|
||||
|
||||
Fleet provides standard interfaces for monitoring and alerting. See the [Monitoring Fleet](./(e)-Monitoring-Fleet.md) documentation for details.
|
||||
|
||||
|
||||
## Why is the “Add User” button disabled?
|
||||
|
||||
The “Add User” button is disabled if SMTP (email) has not been configured for the Fleet server. Currently, there is no way to add new users without email capabilities.
|
||||
|
||||
One way to work around this is to use a simulated mail server like [Mailhog](https://github.com/mailhog/MailHog). You can retrieve the email that was “sent” in the Mailhog UI, and provide users with the invite URL manually.
|
||||
|
||||
## Where are my query results?
|
||||
|
||||
### Live Queries
|
||||
|
||||
Live query results (executed in the web UI or `fleetctl query`) are pushed directly to the UI where the query is running. The results never go to a file unless you as the user manually save them.
|
||||
|
||||
### Scheduled Queries
|
||||
|
||||
Scheduled query results (queries that are scheduled to run in Packs) are typically sent to the Fleet server, and will be available on the filesystem of the server at the path configurable by [`--osquery_result_log_file`](../1-Deployment/(b)-Configuration.md#osquery_result_log_file). This defaults to `/tmp/osquery_result`.
|
||||
|
||||
It is possible to configure osqueryd to log query results outside of Fleet. For results to go to Fleet, the `--logger_plugin` flag must be set to `tls`.
|
||||
|
||||
### What are my options for storing the osquery logs?
|
||||
|
||||
Folks typically use Fleet to ship logs to data aggregation systems like Splunk, the ELK stack, and Graylog.
|
||||
|
||||
The [logger configuration options](https://github.com/fleetdm/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md#osquery_status_log_plugin) allow you to select the log output plugin. Using the log outputs you can route the logs to your chosen aggregation system.
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
Expecting results, but not seeing anything in the logs?
|
||||
|
||||
- Try scheduling a query that always returns results (e.g. `SELECT * FROM time`).
|
||||
- Check whether the query is scheduled in differential mode. If so, new results will only be logged when the result set changes.
|
||||
- Ensure that the query is scheduled to run on the intended platforms, and that the tables queried are supported by those platforms.
|
||||
- Use live query to `SELECT * FROM osquery_schedule` to check whether the query has been scheduled on the host.
|
||||
- Look at the status logs provided by osquery. In a standard configuration these are available on the filesystem of the Fleet server at the path configurable by [`--filesystem_status_log_file`](../1-Deployment/(b)-Configuration.md#filesystem_status_log_file). This defaults to `/tmp/osquery_status`. The host will output a status log each time it executes the query.
|
||||
|
||||
## Why aren’t my live queries being logged?
|
||||
|
||||
Live query results are never logged to the filesystem of the Fleet server. See [Where are my query results?](#where-are-my-query-results).
|
@ -1 +1,25 @@
|
||||
Using fleet README
|
||||
# Using Fleet
|
||||
|
||||
### [Fleet UI](./1-Fleet-UI.md)
|
||||
Provides documentation about running and scheduling queries from within the Fleet UI
|
||||
|
||||
### [fleetctl CLI](./2-fleetctl-CLI.md)
|
||||
Includes resources for setting up and configuring Fleet via the fleetctl CLI
|
||||
|
||||
### [REST API](./3-REST-API.md)
|
||||
Provides resources for working with Fleet's API and includes example code for endpoints
|
||||
|
||||
### [Osquery logs](./4-Osquery-logs.md)
|
||||
Includes documentation on the plugin options for working with osquery logs
|
||||
|
||||
### [Monitoring Fleet](./5-Monitoring-Fleet.md)
|
||||
Provides documentation for load balancer health checks and working with Fleet server metrics and performance
|
||||
|
||||
### [Security best practices](./6-Security-best-practices.md)
|
||||
Includes resources for ways to mitigate against the OWASP top 10 issues
|
||||
|
||||
### [Updating Fleet](./7-Updating-Fleet.md)
|
||||
Includes a guide for how to update and run new versions of Fleet
|
||||
|
||||
### [FAQ](./FAQ.md)
|
||||
Includes commonly asked questions and answers about using Fleet from the Fleet community
|
||||
|
@ -2,8 +2,8 @@
|
||||
|
||||
Welcome to the documentation for the Fleet osquery fleet manager.
|
||||
|
||||
- Resources for installing Fleet's infrastructure dependencies, configuring Fleet, deploying osquery to hosts, and viewing example deployment scenarios can all be found in [Deployment](./1-Deployment/README.md).
|
||||
- Resources for using the Fleet UI, fleetctl CLI, and Fleet REST API can all be found in [Using fleet](./2-Using-fleet/README.md).
|
||||
- Finally, if you're interested in interacting with the Fleet source code, you will find information on modifying and building the code in [Contribution guide](./3-Contribution-guide/README.md).
|
||||
- Resources for using the Fleet UI, fleetctl CLI, and Fleet REST API can all be found in [Using Fleet](./1-Using-Fleet/README.md).
|
||||
- Resources for installing Fleet's infrastructure dependencies, configuring Fleet, deploying osquery to hosts, and viewing example deployment scenarios can all be found in [Deployment](./2-Deployment/README.md).
|
||||
- Finally, if you're interested in interacting with the Fleet source code, you will find information on modifying and building the code in [Contribution](./3-Contribution/README.md).
|
||||
|
||||
If you have any questions, please don't hesitate to [File a GitHub issue](https://github.com/fleetdm/fleet/issues) or [join us on Slack](https://osquery.slack.com/join/shared_invite/zt-h29zm0gk-s2DBtGUTW4CFel0f0IjTEw#/). You can find us in the `#fleet` channel.
|
||||
|
@ -1,39 +0,0 @@
|
||||
API Documentation
|
||||
=================
|
||||
|
||||
Fleet is powered by a Go API server which serves three types of endpoints:
|
||||
|
||||
- Endpoints starting with `/api/v1/osquery/` are osquery TLS server API endpoints. All of these endpoints are used for talking to osqueryd agents and that's it.
|
||||
- Endpoints starting with `/api/v1/kolide/` are endpoints to interact with the Fleet data model (packs, queries, scheduled queries, labels, hosts, etc) as well as application endpoints (configuring settings, logging in, session management, etc).
|
||||
- All other endpoints are served the React single page application bundle. The React app uses React Router to determine whether or not the URI is a valid route and what to do.
|
||||
|
||||
Only osquery agents should interact with the osquery API, but we'd like to support the eventual use of the Fleet API extensively. The API is not very well documented at all right now, but we have plans to:
|
||||
|
||||
- Generate and publish detailed documentation via a tool built using [test2doc](https://github.com/adams-sarah/test2doc) (or similar).
|
||||
- Release a JavaScript Fleet API client library (which would be derived from the [current](https://github.com/fleetdm/fleet/blob/master/frontend/kolide/index.js) JavaScript API client).
|
||||
- Commit to a stable, standardized API format.
|
||||
|
||||
## Fleetctl
|
||||
|
||||
Many of the operations that a user may wish to perform with an API are currently best performed via the [fleetctl](../cli/README.md) tooling. These CLI tools allow updating of the osquery configuration entities, as well as performing live queries.
|
||||
|
||||
## Current API
|
||||
|
||||
The general idea with the current API is that there are many entities throughout the Fleet application, such as:
|
||||
|
||||
- Queries
|
||||
- Packs
|
||||
- Labels
|
||||
- Hosts
|
||||
|
||||
Each set of objects follows a similar REST access pattern.
|
||||
|
||||
- You can `GET /api/v1/kolide/packs` to get all packs
|
||||
- You can `GET /api/v1/kolide/packs/1` to get a specific pack.
|
||||
- You can `DELETE /api/v1/kolide/packs/1` to delete a specific pack.
|
||||
- You can `POST /api/v1/kolide/packs` (with a valid body) to create a new pack.
|
||||
- You can `PATCH /api/v1/kolide/packs/1` (with a valid body) to modify a specific pack.
|
||||
|
||||
Queries, packs, scheduled queries, labels, invites, users, sessions all behave this way. Some objects, like invites, have additional HTTP methods for additional functionality. Some objects, such as scheduled queries, are merely a relationship between two other objects (in this case, a query and a pack) with some details attached.
|
||||
|
||||
All of these objects are put together and distributed to the appropriate osquery agents at the appropriate time. At this time, the best source of truth for the API is the [HTTP handler file](https://github.com/fleetdm/fleet/blob/master/server/service/handler.go) in the Go application. The REST API is exposed via a transport layer on top of an RPC service which is implemented using a micro-service library called [Go Kit](https://github.com/go-kit/kit). If using the Fleet API is important to you right now, being familiar with Go Kit would definitely be helpful.
|
@ -1,898 +0,0 @@
|
||||
# Fleet REST API endpoints
|
||||
|
||||
## Authentication
|
||||
|
||||
Making authenticated requests to the Fleet server requires that you are granted permission to access data. The Fleet Authentication API enables you to receive an authorization token.
|
||||
|
||||
All Fleet API requests are authenticated unless noted in the documentation. This means that almost all Fleet API requests will require sending the API token in the request header.
|
||||
|
||||
The typical steps to making an authenticated API request are outlined below.
|
||||
|
||||
First, utilize the `/login` endpoint to receive an API token. For SSO users, username/password login is disabled and the API token can be retrieved from the "Settings" page in the UI.
|
||||
|
||||
`POST /api/v1/kolide/login`
|
||||
|
||||
Request body
|
||||
|
||||
```
|
||||
{
|
||||
"username": "janedoe@example.com",
|
||||
"passsword": "VArCjNW7CfsxGp67"
|
||||
}
|
||||
```
|
||||
|
||||
Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "2020-11-13T22:57:12Z",
|
||||
"updated_at": "2020-11-13T22:57:12Z",
|
||||
"id": 1,
|
||||
"username": "jane",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
},
|
||||
"token": "{your token}"
|
||||
}
|
||||
```
|
||||
|
||||
Then, use the token returned from the `/login` endpoint to authenticate further API requests. The example below utilizes the `/hosts` endpoint.
|
||||
|
||||
`GET /api/v1/kolide/hosts`
|
||||
|
||||
Request header
|
||||
|
||||
```
|
||||
Authorization: Bearer <your token>
|
||||
```
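Putting the two steps together, a hedged shell sketch might look like the following; the hostname is a placeholder, and using `jq` to extract the token is an assumption about available tooling.

```sh
# Illustrative only: log in, capture the token, then call an authenticated endpoint.
TOKEN=$(curl -s -X POST https://fleet.example.com/api/v1/kolide/login \
  -H "Content-Type: application/json" \
  -d '{"username": "janedoe@example.com", "password": "VArCjNW7CfsxGp67"}' | jq -r .token)
curl -H "Authorization: Bearer $TOKEN" https://fleet.example.com/api/v1/kolide/hosts
```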
|
||||
|
||||
Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"hosts": [
|
||||
{
|
||||
"created_at": "2020-11-05T05:09:44Z",
|
||||
"updated_at": "2020-11-05T06:03:39Z",
|
||||
"id": 1,
|
||||
"detail_updated_at": "2020-11-05T05:09:45Z",
|
||||
"label_updated_at": "2020-11-05T05:14:51Z",
|
||||
"seen_time": "2020-11-05T06:03:39Z",
|
||||
"hostname": "2ceca32fe484",
|
||||
"uuid": "392547dc-0000-0000-a87a-d701ff75bc65",
|
||||
"platform": "centos",
|
||||
"osquery_version": "2.7.0",
|
||||
"os_version": "CentOS Linux 7",
|
||||
"build": "",
|
||||
"platform_like": "rhel fedora",
|
||||
"code_name": "",
|
||||
"uptime": 8305000000000,
|
||||
"memory": 2084032512,
|
||||
"cpu_type": "6",
|
||||
"cpu_subtype": "142",
|
||||
"cpu_brand": "Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz",
|
||||
"cpu_physical_cores": 4,
|
||||
"cpu_logical_cores": 4,
|
||||
"hardware_vendor": "",
|
||||
"hardware_model": "",
|
||||
"hardware_version": "",
|
||||
"hardware_serial": "",
|
||||
"computer_name": "2ceca32fe484",
|
||||
"primary_ip": "",
|
||||
"primary_mac": "",
|
||||
"distributed_interval": 10,
|
||||
"config_tls_refresh": 10,
|
||||
"logger_tls_period": 8,
|
||||
"additional": {},
|
||||
"enroll_secret_name": "default",
|
||||
"status": "offline",
|
||||
"display_text": "2ceca32fe484"
|
||||
},
|
||||
{
|
||||
"created_at": "2020-11-05T05:09:44Z",
|
||||
"updated_at": "2020-11-05T06:03:39Z",
|
||||
"id": 2,
|
||||
"detail_updated_at": "2020-11-05T05:09:45Z",
|
||||
"label_updated_at": "2020-11-05T05:14:52Z",
|
||||
"seen_time": "2020-11-05T06:03:40Z",
|
||||
"hostname": "4cc885c20110",
|
||||
"uuid": "392547dc-0000-0000-a87a-d701ff75bc65",
|
||||
"platform": "centos",
|
||||
"osquery_version": "2.7.0",
|
||||
"os_version": "CentOS 6.8.0",
|
||||
"build": "",
|
||||
"platform_like": "rhel",
|
||||
"code_name": "",
|
||||
"uptime": 8305000000000,
|
||||
"memory": 2084032512,
|
||||
"cpu_type": "6",
|
||||
"cpu_subtype": "142",
|
||||
"cpu_brand": "Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz",
|
||||
"cpu_physical_cores": 4,
|
||||
"cpu_logical_cores": 4,
|
||||
"hardware_vendor": "",
|
||||
"hardware_model": "",
|
||||
"hardware_version": "",
|
||||
"hardware_serial": "",
|
||||
"computer_name": "4cc885c20110",
|
||||
"primary_ip": "",
|
||||
"primary_mac": "",
|
||||
"distributed_interval": 10,
|
||||
"config_tls_refresh": 10,
|
||||
"logger_tls_period": 8,
|
||||
"additional": {},
|
||||
"enroll_secret_name": "default",
|
||||
"status": "offline",
|
||||
"display_text": "4cc885c20110"
|
||||
},
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Log in
|
||||
|
||||
Authenticates the user with the specified credentials. Use the token returned from this endpoint to authenticate further API requests.
|
||||
|
||||
`POST /api/v1/kolide/login`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| -------- | ------ | ---- | --------------------------------------------- |
|
||||
| username | string | body | **Required**. The user's email. |
|
||||
| password | string | body | **Required**. The user's plain text password. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/login`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"username": "janedoe@example.com",
|
||||
"passsword": "VArCjNW7CfsxGp67"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "2020-11-13T22:57:12Z",
|
||||
"updated_at": "2020-11-13T22:57:12Z",
|
||||
"id": 1,
|
||||
"username": "jane",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
},
|
||||
"token": "{your token}"
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Log out
|
||||
|
||||
Logs out the authenticated user.
|
||||
|
||||
`POST /api/v1/kolide/logout`
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/logout`
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
---
|
||||
|
||||
### Forgot password
|
||||
|
||||
Sends a password reset email to the specified email. Requires that SMTP is configured for your Fleet server.
|
||||
|
||||
`POST /api/v1/kolide/forgot_password`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| ----- | ------ | ---- | ----------------------------------------------------------------------- |
|
||||
| email | string | body | **Required**. The email of the user requesting the password reset link. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/forgot_password`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"email": "janedoe@example.com"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
##### Unknown error
|
||||
|
||||
`Status: 500`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Unknown Error",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "email not configured",
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Change password
|
||||
|
||||
`POST /api/v1/kolide/change_password`
|
||||
|
||||
Changes the password for the authenticated user.
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| ------------ | ------ | ---- | -------------------------------------- |
|
||||
| old_password | string | body | **Required**. The user's old password. |
|
||||
| new_password | string | body | **Required**. The user's new password. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/change_password`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"old_password": "VArCjNW7CfsxGp67",
|
||||
"new_password": "zGq7mCLA6z4PzArC",
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
##### Validation failed
|
||||
|
||||
`Status: 422 Unprocessable entity`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Validation Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "old_password",
|
||||
"reason": "old password does not match"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Me
|
||||
|
||||
Retrieves the user data for the authenticated user.
|
||||
|
||||
`POST /api/v1/kolide/me`
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/me`
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "2020-11-13T22:57:12Z",
|
||||
"updated_at": "2020-11-16T23:49:41Z",
|
||||
"id": 1,
|
||||
"username": "jane",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Perform required password reset
|
||||
|
||||
Resets the password of the authenticated user. Requires that `force_password_reset` is set to `true` prior to the request.
|
||||
|
||||
`POST /api/v1/kolide/perform_required_password_reset`
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/perform_required_password_reset`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"new_password": "sdPz8CV5YhzH47nK"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "2020-11-13T22:57:12Z",
|
||||
"updated_at": "2020-11-17T00:09:23Z",
|
||||
"id": 1,
|
||||
"username": "jane",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### SSO config
|
||||
|
||||
Gets the current SSO configuration.
|
||||
|
||||
`GET /api/v1/kolide/sso`
|
||||
|
||||
#### Example
|
||||
|
||||
`GET /api/v1/kolide/sso`
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"settings": {
|
||||
"idp_name": "IDP Vendor 1",
|
||||
"idp_image_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Initiate SSO
|
||||
|
||||
`POST /api/v1/kolide/sso`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| --------- | ------ | ---- | -------------------------------------------------------------------------- |
|
||||
| relay_url | string | body | **Required**. The relative URL to navigate to after successful sign in. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/sso`
|
||||
|
||||
##### Request body
|
||||
|
||||
```
|
||||
{
|
||||
"relay_url": "/hosts/manage"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
##### Unknown error
|
||||
|
||||
`Status: 500`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Unknown Error",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "InitiateSSO getting metadata: Get \"https://idp.example.org/idp-meta.xml\": dial tcp: lookup idp.example.org on [2001:558:feed::1]:53: no such host"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Hosts
|
||||
|
||||
### List hosts
|
||||
|
||||
`GET /api/v1/kolide/hosts`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| ----------------------- | ------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| page | integer | query | Page number of the results to fetch. |
|
||||
| per_page | integer | query | Results per page. |
|
||||
| order_key | string | query | What to order results by. Can be any column in the hosts table. |
|
||||
| status | string | query | Indicates the status of the hosts to return. Can either be `new`, `online`, `offline`, or `mia`. |
|
||||
| additional_info_filters | string | query | A comma-delimited list of fields to include in each host's additional information object. See [Fleet Configuration Options](https://github.com/fleetdm/fleet/blob/master/docs/cli/file-format.md#fleet-configuration-options) for an example configuration with hosts' additional information. |
|
||||
|
||||
#### Example
|
||||
|
||||
`GET /api/v1/kolide/hosts?page=0&per_page=100&order_key=host_name`
|
||||
|
||||
##### Request query parameters
|
||||
|
||||
```
|
||||
{
|
||||
"page": 0,
|
||||
"per_page": 100,
|
||||
"order_key": "host_name",
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"hosts": [
|
||||
{
|
||||
"created_at": "2020-11-05T05:09:44Z",
|
||||
"updated_at": "2020-11-05T06:03:39Z",
|
||||
"id": 1,
|
||||
"detail_updated_at": "2020-11-05T05:09:45Z",
|
||||
"label_updated_at": "2020-11-05T05:14:51Z",
|
||||
"seen_time": "2020-11-05T06:03:39Z",
|
||||
"hostname": "2ceca32fe484",
|
||||
"uuid": "392547dc-0000-0000-a87a-d701ff75bc65",
|
||||
"platform": "centos",
|
||||
"osquery_version": "2.7.0",
|
||||
"os_version": "CentOS Linux 7",
|
||||
"build": "",
|
||||
"platform_like": "rhel fedora",
|
||||
"code_name": "",
|
||||
"uptime": 8305000000000,
|
||||
"memory": 2084032512,
|
||||
"cpu_type": "6",
|
||||
"cpu_subtype": "142",
|
||||
"cpu_brand": "Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz",
|
||||
"cpu_physical_cores": 4,
|
||||
"cpu_logical_cores": 4,
|
||||
"hardware_vendor": "",
|
||||
"hardware_model": "",
|
||||
"hardware_version": "",
|
||||
"hardware_serial": "",
|
||||
"computer_name": "2ceca32fe484",
|
||||
"primary_ip": "",
|
||||
"primary_mac": "",
|
||||
"distributed_interval": 10,
|
||||
"config_tls_refresh": 10,
|
||||
"logger_tls_period": 8,
|
||||
"additional": {},
|
||||
"enroll_secret_name": "default",
|
||||
"status": "offline",
|
||||
"display_text": "2ceca32fe484"
|
||||
},
|
||||
{
|
||||
"created_at": "2020-11-05T05:09:44Z",
|
||||
"updated_at": "2020-11-05T06:03:39Z",
|
||||
"id": 2,
|
||||
"detail_updated_at": "2020-11-05T05:09:45Z",
|
||||
"label_updated_at": "2020-11-05T05:14:52Z",
|
||||
"seen_time": "2020-11-05T06:03:40Z",
|
||||
"hostname": "4cc885c20110",
|
||||
"uuid": "392547dc-0000-0000-a87a-d701ff75bc65",
|
||||
"platform": "centos",
|
||||
"osquery_version": "2.7.0",
|
||||
"os_version": "CentOS 6.8.0",
|
||||
"build": "",
|
||||
"platform_like": "rhel",
|
||||
"code_name": "",
|
||||
"uptime": 8305000000000,
|
||||
"memory": 2084032512,
|
||||
"cpu_type": "6",
|
||||
"cpu_subtype": "142",
|
||||
"cpu_brand": "Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz",
|
||||
"cpu_physical_cores": 4,
|
||||
"cpu_logical_cores": 4,
|
||||
"hardware_vendor": "",
|
||||
"hardware_model": "",
|
||||
"hardware_version": "",
|
||||
"hardware_serial": "",
|
||||
"computer_name": "4cc885c20110",
|
||||
"primary_ip": "",
|
||||
"primary_mac": "",
|
||||
"distributed_interval": 10,
|
||||
"config_tls_refresh": 10,
|
||||
"logger_tls_period": 8,
|
||||
"additional": {},
|
||||
"enroll_secret_name": "default",
|
||||
"status": "offline",
|
||||
"display_text": "4cc885c20110"
|
||||
},
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Users
|
||||
|
||||
The Fleet server exposes a handful of API endpoints that handle common user management operations. All the following endpoints require prior authentication, meaning you must first log in successfully before calling any of the endpoints documented below.
|
||||
|
||||
### List All Users
|
||||
|
||||
Returns a list of all enabled users
|
||||
|
||||
`GET /api/v1/kolide/users`
|
||||
|
||||
#### Parameters
|
||||
|
||||
None.
|
||||
|
||||
#### Example
|
||||
|
||||
`GET /api/v1/kolide/users`
|
||||
|
||||
##### Request query parameters
|
||||
|
||||
None.
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"users": [
|
||||
{
|
||||
"created_at": "2020-12-10T03:52:53Z",
|
||||
"updated_at": "2020-12-10T03:52:53Z",
|
||||
"id": 1,
|
||||
"username": "janedoe",
|
||||
"name": "",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": true,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
##### Failed authentication
|
||||
|
||||
`Status: 401 Authentication Failed`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Authentication Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "username or email and password do not match"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Create a user account with an invitation
|
||||
|
||||
Creates a user account after an invited user provides registration information and submits the form.
|
||||
|
||||
`POST /api/v1/kolide/users`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| --------------------- | ------ | ---- | --------------------------------------------------------------- |
|
||||
| email | string | body | **Required**. The email address of the user. |
|
||||
| invite_token | string | body | **Required**. Token provided to the user in the invitation email. |
|
||||
| name | string | body | The name of the user. |
|
||||
| username | string | body | **Required**. The username chosen by the user |
|
||||
| password | string | body | **Required**. The password chosen by the user. |
|
||||
| password_confirmation | string | body | **Required**. Confirmation of the password chosen by the user. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/users`
|
||||
|
||||
##### Request query parameters
|
||||
|
||||
```
|
||||
{
|
||||
"email": "janedoe@example.com",
|
||||
"invite_token": "SjdReDNuZW5jd3dCbTJtQTQ5WjJTc2txWWlEcGpiM3c=",
|
||||
"name": "janedoe",
|
||||
"username": "janedoe",
|
||||
"password": "test-123",
|
||||
"password_confirmation": "test-123"
|
||||
}
|
||||
```
|
||||
|
||||
##### Default response
|
||||
|
||||
`Status: 200`
|
||||
|
||||
```
|
||||
{
|
||||
"user": {
|
||||
"created_at": "0001-01-01T00:00:00Z",
|
||||
"updated_at": "0001-01-01T00:00:00Z",
|
||||
"id": 2,
|
||||
"username": "janedoe",
|
||||
"name": "janedoe",
|
||||
"email": "janedoe@example.com",
|
||||
"admin": false,
|
||||
"enabled": true,
|
||||
"force_password_reset": false,
|
||||
"gravatar_url": "",
|
||||
"sso_enabled": false
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
##### Failed authentication
|
||||
|
||||
`Status: 401 Authentication Failed`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Authentication Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "username or email and password do not match"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
##### Expired or used invite code
|
||||
|
||||
`Status: 404 Resource Not Found`
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Resource Not Found",
|
||||
"errors": [
|
||||
{
|
||||
"name": "base",
|
||||
"reason": "Invite with token SjdReDNuZW5jd3dCbTJtQTQ5WjJTc2txWWlEcGpiM3c= was not found in the datastore"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
##### Validation failed
|
||||
|
||||
`Status: 422 Validation Failed`
|
||||
|
||||
The same error will be returned whenever one of the required parameters fails the validation.
|
||||
|
||||
```
|
||||
{
|
||||
"message": "Validation Failed",
|
||||
"errors": [
|
||||
{
|
||||
"name": "username",
|
||||
"reason": "cannot be empty"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Create a user account without an invitation
|
||||
|
||||
Creates a user account without requiring an invitation. The user is enabled immediately.
|
||||
|
||||
`POST /api/v1/kolide/users/admin`
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Name | Type | In | Description |
|
||||
| ---------- | ------- | ---- | ------------------------------------------------ |
|
||||
| username | string | body | **Required**. The user's username. |
|
||||
| email | string | body | **Required**. The user's email address. |
|
||||
| password | string | body | **Required**. The user's password. |
|
||||
| invited_by | integer | body | **Required**. ID of the admin creating the user. |
|
||||
| admin | boolean | body | **Required**. Whether the user has admin privileges. |
|
||||
|
||||
#### Example
|
||||
|
||||
`POST /api/v1/kolide/users/admin`
|
||||
|
||||
##### Request query parameters
|
||||
|
||||
```
|
||||
{
|
||||
"username": "janedoe",
|
||||
"email": "janedoe@example.com",
|
||||
"password": "test-123",
|
||||
"invited_by":1,
|
||||
"admin":true
|
||||
}
|
||||
```
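This endpoint is typically called by an authenticated admin. As a sketch, the request could be sent with `curl`, assuming your API token is available in the `TOKEN` environment variable and the hostname is a placeholder:

```
curl -X POST https://fleet.example.com/api/v1/kolide/users/admin \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"username": "janedoe", "email": "janedoe@example.com", "password": "test-123", "invited_by": 1, "admin": true}'
```
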
##### Default response

`Status: 200`

```
{
  "user": {
    "created_at": "0001-01-01T00:00:00Z",
    "updated_at": "0001-01-01T00:00:00Z",
    "id": 5,
    "username": "janedoe",
    "name": "",
    "email": "janedoe@example.com",
    "admin": false,
    "enabled": true,
    "force_password_reset": false,
    "gravatar_url": "",
    "sso_enabled": false
  }
}
```

##### Failed authentication

`Status: 401 Authentication Failed`

```
{
  "message": "Authentication Failed",
  "errors": [
    {
      "name": "base",
      "reason": "username or email and password do not match"
    }
  ]
}
```

##### User doesn't exist

`Status: 404 Resource Not Found`

```
{
  "message": "Resource Not Found",
  "errors": [
    {
      "name": "base",
      "reason": "User with id=1 was not found in the datastore"
    }
  ]
}
```

### Get user information

Returns all information about a specific user.

`GET /api/v1/kolide/users/{id}`

#### Parameters

| Name | Type    | In    | Description                  |
| ---- | ------- | ----- | ---------------------------- |
| id   | integer | query | **Required**. The user's id. |

#### Example

`GET /api/v1/kolide/users/2`

##### Request query parameters

```
{
  "id": 2
}
```

##### Default response

`Status: 200`

```
{
  "user": {
    "created_at": "2020-12-10T05:20:25Z",
    "updated_at": "2020-12-10T05:24:27Z",
    "id": 2,
    "username": "janedoe",
    "name": "janedoe",
    "email": "janedoe@example.com",
    "admin": true,
    "enabled": true,
    "force_password_reset": false,
    "gravatar_url": "",
    "sso_enabled": false
  }
}
```

##### Failed authentication

`Status: 401 Authentication Failed`

```
{
  "message": "Authentication Failed",
  "errors": [
    {
      "name": "base",
      "reason": "username or email and password do not match"
    }
  ]
}
```

##### User doesn't exist

`Status: 404 Resource Not Found`

```
{
  "message": "Resource Not Found",
  "errors": [
    {
      "name": "base",
      "reason": "User with id=5 was not found in the datastore"
    }
  ]
}
```

---

@ -1,53 +0,0 @@
CLI Documentation
=================

Fleet provides a server which allows you to manage and orchestrate an osquery deployment across a set of workstations and servers. For certain use-cases, it makes sense to maintain the configuration and data of an osquery deployment in source-controlled files. It is also desirable to be able to manage these files with a familiar command-line tool. To facilitate this, Fleet includes a `fleetctl` CLI for managing osquery fleets in this way.

For more information, see:

- [Documentation on the file format](./file-format.md)
- [The setup guide for new CLI users](./setup-guide.md)

## Inspiration

Inspiration for the `fleetctl` command-line experience as well as the file format has been principally derived from the [Kubernetes](https://kubernetes.io/) container orchestration tool. This is for a few reasons:

- Format Familiarity: At Kolide, we love Kubernetes and we think it is the future of production infrastructure management. We believe that many of the people that use this interface to manage Fleet will also be Kubernetes operators. By using a familiar command-line interface and file format, the cognitive overhead can be reduced since the operator is already familiar with how these tools work and behave.
- Established Best Practices: Kubernetes deployments can easily become very complex. Because of this, Kubernetes operators have an established set of best practices that they often follow when writing and maintaining config files. Some of these best practices and tips are documented on the [official Kubernetes website](https://kubernetes.io/docs/concepts/configuration/overview/#general-config-tips) and some are documented by [the community](https://www.mirantis.com/blog/introduction-to-yaml-creating-a-kubernetes-deployment/). Since the file format and workflow is so similar, we can re-use these best practices when managing Fleet configurations.

### `fleetctl` - The CLI

The `fleetctl` tool is heavily inspired by the [`kubectl`](https://kubernetes.io/docs/user-guide/kubectl-overview/) tool. If you are familiar with `kubectl`, this will all feel very familiar to you. If not, some further explanation would likely be helpful.

Fleet exposes the aspects of an osquery deployment as a set of "objects". Objects may be a query, a pack, a set of configuration options, etc. The documentation for [Declarative Management of Kubernetes Objects Using Configuration Files](https://kubernetes.io/docs/tutorials/object-management-kubectl/declarative-object-management-configuration/) says the following about the object lifecycle:

> Objects can be created, updated, and deleted by storing multiple object configuration files in a directory and using `kubectl apply` to recursively create and update those objects as needed.

Similarly, Fleet objects can be created, updated, and deleted by storing multiple object configuration files in a directory and using `fleetctl apply` to recursively create and update those objects as needed.

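As a minimal sketch, applying a single configuration file looks like the following (the file name is illustrative; see the file format documentation for what such files contain):

```
fleetctl apply -f ./queries.yml
```
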
### Using goquery with `fleetctl`

Fleet and `fleetctl` have built-in support for [goquery](https://github.com/AbGuthrie/goquery).

Use `fleetctl goquery` to open up the goquery console. When used with Fleet, goquery can connect using either a hostname or UUID.

```
./build/fleetctl get hosts
+--------------------------------------+--------------+----------+---------+
| UUID                                 | HOSTNAME     | PLATFORM | STATUS  |
+--------------------------------------+--------------+----------+---------+
| 192343D5-0000-0000-B85B-58F656BED4C7 | 6523f89187f8 | centos   | online  |
+--------------------------------------+--------------+----------+---------+
./build/fleetctl goquery
goquery> .connect 6523f89187f8
Verified Host(6523f89187f8) Exists.
.
goquery | 6523f89187f8:> .query select unix_time from time
...
------------------------------
| host_hostname | unix_time  |
------------------------------
| 6523f89187f8  | 1579842569 |
------------------------------
goquery | 6523f89187f8:>
```

@ -1,357 +0,0 @@
# Configuration File Format

A Fleet configuration is defined using one or more declarative "messages" in YAML syntax. Each message can live in its own file, or multiple can live in one file, each separated by `---`. Each file/message contains a few required top-level keys:

- `apiVersion` - the API version of the file/request
- `spec` - the "data" of the request
- `kind` - the type of file/object (e.g. pack, query, config)

The file may optionally also include some `metadata` for more complex data types (e.g. packs).

When you reason about how to manage these config files, consider following the [General Config Tips](https://kubernetes.io/docs/concepts/configuration/overview/#general-config-tips) published by the Kubernetes project. Some of the especially relevant tips are included here as well:

- When defining configurations, specify the latest stable API version.
- Configuration files should be stored in version control before being pushed to the cluster. This allows quick roll-back of a configuration if needed. It also aids with cluster re-creation and restoration if necessary.
- Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [config-single-file.yml](../../examples/config-single-file.yml) file as an example of this syntax.
- Don’t specify default values unnecessarily – simple and minimal configs will reduce errors.

All of these files can be concatenated together into [one file](../../examples/config-single-file.yml) (separated by `---`), or they can be in [individual files with a directory structure](../../examples/config-many-files) like the following:

```
|-- config.yml
|-- labels.yml
|-- packs
|   `-- osquery-monitoring.yml
`-- queries.yml
```

## Convert Osquery JSON

`fleetctl` includes easy tooling to convert osquery pack JSON into the `fleetctl` format. Use `fleetctl convert` with a path to the pack file:

```
fleetctl convert -f test.json
---
apiVersion: v1
kind: pack
spec:
  name: test
  queries:
  - description: "this is a test query"
    interval: 10
    name: processes
    query: processes
    removed: false
  targets:
    labels: null
---
apiVersion: v1
kind: query
spec:
  name: processes
  query: select * from processes
```

## Osquery Queries

For especially long or complex queries, you may want to define one query in one file. Continued edits and applications to this file will update the query as long as the `metadata.name` does not change. If you want to change the name of a query, you must first create a new query with the new name and then delete the query with the old name. Make sure the old query name is not defined in any packs before deleting it or an error will occur.

```yaml
apiVersion: v1
kind: query
spec:
  name: docker_processes
  description: The docker containers processes that are running on a system.
  query: select * from docker_container_processes;
  support:
    osquery: 2.9.0
    platforms:
      - linux
      - darwin
```

To define multiple queries in a file, concatenate multiple `query` resources together in a single file with `---`. For example, consider a file that you might store at `queries/osquery_monitoring.yml`:

```yaml
apiVersion: v1
kind: query
spec:
  name: osquery_version
  description: The version of the Launcher and Osquery process
  query: select launcher.version, osquery.version from kolide_launcher_info launcher, osquery_info osquery;
  support:
    launcher: 0.3.0
    osquery: 2.9.0
---
apiVersion: v1
kind: query
spec:
  name: osquery_schedule
  description: Report performance stats for each file in the query schedule.
  query: select name, interval, executions, output_size, wall_time, (user_time/executions) as avg_user_time, (system_time/executions) as avg_system_time, average_memory, last_executed from osquery_schedule;
---
apiVersion: v1
kind: query
spec:
  name: osquery_info
  description: A heartbeat counter that reports general performance (CPU, memory) and version.
  query: select i.*, p.resident_size, p.user_time, p.system_time, time.minutes as counter from osquery_info i, processes p, time where p.pid = i.pid;
---
apiVersion: v1
kind: query
spec:
  name: osquery_events
  description: Report event publisher health and track event counters.
  query: select name, publisher, type, subscriptions, events, active from osquery_events;
```

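Once saved, such a file can be applied and the resulting queries listed (a sketch, assuming the path used above):

```
fleetctl apply -f queries/osquery_monitoring.yml
fleetctl get queries
```
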
## Query Packs

To define query packs, reference queries defined elsewhere by name. This is why the "name" of a query is so important. You can define many of these packs in many files.

```yaml
apiVersion: v1
kind: pack
spec:
  name: osquery_monitoring
  disabled: false
  targets:
    labels:
      - All Hosts
  queries:
    - query: osquery_version
      name: osquery_version_differential
      interval: 7200
    - query: osquery_version
      name: osquery_version_snapshot
      interval: 7200
      snapshot: true
    - query: osquery_schedule
      interval: 7200
      removed: false
    - query: osquery_events
      interval: 86400
      removed: false
    - query: osquery_info
      interval: 600
      removed: false
```

## Host Labels

The following file describes the labels which hosts should be automatically grouped into. The label resource should include the actual SQL query so that the label is self-contained:

```yaml
apiVersion: v1
kind: label
spec:
  name: slack_not_running
  query: >
    SELECT * from system_info
    WHERE NOT EXISTS (
      SELECT *
      FROM processes
      WHERE name LIKE "%Slack%"
    );
```

Labels can also be "manually managed". When defining the label, reference hosts by hostname:

```yaml
apiVersion: v1
kind: label
spec:
  name: Manually Managed Example
  label_membership_type: manual
  hosts:
    - hostname1
    - hostname2
    - hostname3
```

## Osquery Configuration Options

The following file describes options returned to osqueryd when it checks for configuration. See the [osquery documentation](https://osquery.readthedocs.io/en/stable/deployment/configuration/#options) for the available options. Existing options will be over-written by the application of this file.

```yaml
apiVersion: v1
kind: options
spec:
  config:
    options:
      distributed_interval: 3
      distributed_tls_max_attempts: 3
      logger_plugin: tls
      logger_tls_endpoint: /api/v1/osquery/log
      logger_tls_period: 10
    decorators:
      load:
        - "SELECT version FROM osquery_info"
        - "SELECT uuid AS host_uuid FROM system_info"
      always:
        - "SELECT user AS username FROM logged_in_users WHERE user <> '' ORDER BY time LIMIT 1"
      interval:
        3600: "SELECT total_seconds AS uptime FROM uptime"
  overrides:
    # Note configs in overrides take precedence over the default config defined
    # under the config key above. Hosts receive overrides based on the platform
    # returned by `SELECT platform FROM os_version`. In this example, the base
    # config would be used for Windows and CentOS hosts, while Mac and Ubuntu
    # hosts would receive their respective overrides. Note, these overrides are
    # NOT merged with the top level configuration.
    platforms:
      darwin:
        options:
          distributed_interval: 10
          distributed_tls_max_attempts: 10
          logger_plugin: tls
          logger_tls_endpoint: /api/v1/osquery/log
          logger_tls_period: 300
          disable_tables: chrome_extensions
          docker_socket: /var/run/docker.sock
        file_paths:
          users:
            - /Users/%/Library/%%
            - /Users/%/Documents/%%
          etc:
            - /etc/%%

      ubuntu:
        options:
          distributed_interval: 10
          distributed_tls_max_attempts: 3
          logger_plugin: tls
          logger_tls_endpoint: /api/v1/osquery/log
          logger_tls_period: 60
          schedule_timeout: 60
          docker_socket: /etc/run/docker.sock
        file_paths:
          homes:
            - /root/.ssh/%%
            - /home/%/.ssh/%%
          etc:
            - /etc/%%
          tmp:
            - /tmp/%%
        exclude_paths:
          homes:
            - /home/not_to_monitor/.ssh/%%
          tmp:
            - /tmp/too_many_events/
        decorators:
          load:
            - "SELECT * FROM cpuid"
            - "SELECT * FROM docker_info"
          interval:
            3600: "SELECT total_seconds AS uptime FROM uptime"
```

### Auto Table Construction

You can use Fleet to query local SQLite databases as tables. For more information on creating ATC configuration from a SQLite database, see the [Osquery Automatic Table Construction documentation](https://osquery.readthedocs.io/en/stable/deployment/configuration/#automatic-table-construction).

If you already know what your ATC configuration needs to look like, you can add it to an options config file:

```yaml
apiVersion: v1
kind: options
spec:
  overrides:
    platforms:
      darwin:
        auto_table_construction:
          tcc_system_entries:
            query: "select service, client, allowed, prompt_count, last_modified from access"
            path: "/Library/Application Support/com.apple.TCC/TCC.db"
            columns:
              - "service"
              - "client"
              - "allowed"
              - "prompt_count"
              - "last_modified"
```

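Once hosts have picked up this configuration, the new table can be queried like any other osquery table. As a sketch, a live query might look like the following (the label is illustrative):

```
fleetctl query --labels 'All Hosts' --query 'SELECT * FROM tcc_system_entries'
```
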
## Fleet Configuration Options

The following file describes configuration options applied to the Fleet server.

```yaml
apiVersion: v1
kind: config
spec:
  host_expiry_settings:
    host_expiry_enabled: true
    host_expiry_window: 10
  host_settings:
    # "additional" information to collect from hosts along with the host
    # details. This information will be updated at the same time as other host
    # details and is returned by the API when host objects are returned. Users
    # must take care to keep the data returned by these queries small in
    # order to mitigate potential performance impacts on the Fleet server.
    additional_queries:
      time: select * from time
      macs: select mac from interface_details
  org_info:
    org_logo_url: "https://example.org/logo.png"
    org_name: Example Org
  server_settings:
    kolide_server_url: https://fleet.example.org:8080
  smtp_settings:
    authentication_method: authmethod_plain
    authentication_type: authtype_username_password
    domain: example.org
    enable_smtp: true
    enable_ssl_tls: true
    enable_start_tls: true
    password: supersekretsmtppass
    port: 587
    sender_address: fleet@example.org
    server: mail.example.org
    user_name: test_user
    verify_ssl_certs: true
  sso_settings:
    enable_sso: false
    entity_id: 1234567890
    idp_image_url: https://idp.example.org/logo.png
    idp_name: IDP Vendor 1
    issuer_uri: https://idp.example.org/SAML2/SSO/POST
    metadata: "<md:EntityDescriptor entityID="https://idp.example.org/SAML2"> ... /md:EntityDescriptor>"
    metadata_url: https://idp.example.org/idp-meta.xml
```

### SMTP Authentication

**Warning:** Be careful not to store your SMTP credentials in source control. It is recommended to set the password through the web UI or `fleetctl` and then remove the line from the checked-in version. Fleet will leave the password as-is if the field is missing from the applied configuration.

The following options are available when configuring SMTP authentication:

- `smtp_settings.authentication_type`
  - `authtype_none` - use this if your SMTP server is open
  - `authtype_username_password` - use this if your SMTP server requires authentication with a username and password
- `smtp_settings.authentication_method` - required with authentication type `authtype_username_password`
  - `authmethod_cram_md5`
  - `authmethod_login`
  - `authmethod_plain`

## Enroll Secrets

The following file shows how to configure enroll secrets. Note that secrets can be changed or made inactive, but not deleted. Hosts may not enroll with inactive secrets.

The name of the enroll secret used to authenticate is stored with the host and is included with API results.

```yaml
apiVersion: v1
kind: enroll_secret
spec:
  secrets:
    - active: true
      name: default
      secret: RzTlxPvugG4o4O5IKS/HqEDJUmI1hwBoffff
    - active: true
      name: new_one
      secret: reallyworks
    - active: false
      name: inactive_secret
      secret: thissecretwontwork!
```

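As a sketch, applying an enroll secret file and confirming the result might look like the following (the file name is illustrative; `fleetctl get enroll-secret`, shown elsewhere in these docs, prints the configured secret):

```
fleetctl apply -f ./secrets.yml
fleetctl get enroll-secret
```
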
@ -1,185 +0,0 @@
# Setting Up Fleet via the CLI

This document walks through setting up and configuring Fleet via the CLI. If you already have a running Fleet instance, skip ahead to [Logging In To An Existing Fleet Instance](#logging-in-to-an-existing-fleet-instance) to configure the `fleetctl` CLI.

This guide illustrates:

- A minimal CLI workflow for managing an osquery fleet
- The set of API interactions that are required if you want to perform remote, automated management of a Fleet instance

## Running Fleet

For the sake of this tutorial, I will be using the local development Docker Compose infrastructure to run Fleet locally. This is documented in some detail in the [developer documentation](../development/development-infrastructure.md), but the following is the minimal set of commands that you can run from the root of the repository (assuming that you have a working Go/JavaScript toolchain installed along with Docker Compose):

```
docker-compose up -d
make deps
make generate
make
./build/fleet prepare db
./build/fleet serve --auth_jwt_key="insecure"
```

The `fleet serve` command will be the long-running command that runs the Fleet server.

## `fleetctl config`

At this point, the MySQL database doesn't have any users in it. Because of this, Fleet is exposing a one-time setup endpoint. Before we can hit that endpoint (by running `fleetctl setup`), we first have to configure the local `fleetctl` context.

Now, since our Fleet instance is local in this tutorial, we didn't get a valid TLS certificate, so we need to run the following to configure our Fleet context:

```
fleetctl config set --address https://localhost:8080 --tls-skip-verify
[+] Set the address config key to "https://localhost:8080" in the "default" context
[+] Set the tls-skip-verify config key to "true" in the "default" context
```

Now, if you were connecting to a Fleet instance for real, you wouldn't want to skip TLS certificate verification, so you might run something like:

```
fleetctl config set --address https://fleet.corp.example.com
[+] Set the address config key to "https://fleet.corp.example.com" in the "default" context
```

## `fleetctl setup`

Now that we've configured our local CLI context, let's go ahead and create our admin account:

```
fleetctl setup --email mike@arpaia.co
Password:
[+] Fleet setup successful and context configured!
```

It's possible to specify the password via the `--password` flag or the `$PASSWORD` environment variable, but be cautious of the security implications of such an action. For local use, the interactive mode above is the most secure.

## Connecting a Host

For the sake of this tutorial, I'm going to be using Kolide's osquery launcher to start osquery locally and connect it to Fleet. To learn more about connecting osquery to Fleet, see the [Adding Hosts to Fleet](../infrastructure/adding-hosts-to-fleet.md) documentation.

To get your osquery enroll secret, run the following:

```
fleetctl get enroll-secret
E7P6zs9D0mvY7ct08weZ7xvLtQfGYrdC
```

You need to use this secret to connect a host. If you're running Fleet locally, you'd run:

```
launcher \
  --hostname localhost:8080 \
  --enroll_secret E7P6zs9D0mvY7ct08weZ7xvLtQfGYrdC \
  --root_directory=$(mktemp -d) \
  --insecure
```

## Query Hosts

To run a simple query against all hosts, you might run something like the following:

```
fleetctl query --query 'select * from osquery_info;' --labels='All Hosts' > results.json
⠂  100% responded (100% online) | 1/1 targeted hosts (1/1 online)
^C
```

When the query is done (or you have enough results), CTRL-C and look at the `results.json` file:

```json
{
  "host": "marpaia",
  "rows": [
    {
      "build_distro": "10.13",
      "build_platform": "darwin",
      "config_hash": "d7cafcd183cc50c686b4c128263bd4eace5d89e1",
      "config_valid": "1",
      "extensions": "active",
      "host_hostname": "marpaia",
      "instance_id": "37840766-7182-4a68-a204-c7f577bd71e1",
      "pid": "22984",
      "start_time": "1527031727",
      "uuid": "B312055D-9209-5C89-9DDB-987299518FF7",
      "version": "3.2.3",
      "watcher": "-1"
    }
  ]
}
```

## Update Osquery Options

By default, each osquery node will check in with Fleet every 10 seconds. Let's say, for testing, you want to change this to every 2 seconds. If this is the first time you've ever modified osquery options, let's download them locally:

```
fleetctl get options > options.yaml
```

The `options.yaml` file will look something like this:

```yaml
apiVersion: v1
kind: options
spec:
  config:
    decorators:
      load:
      - SELECT uuid AS host_uuid FROM system_info;
      - SELECT hostname AS hostname FROM system_info;
    options:
      disable_distributed: false
      distributed_interval: 10
      distributed_plugin: tls
      distributed_tls_max_attempts: 3
      distributed_tls_read_endpoint: /api/v1/osquery/distributed/read
      distributed_tls_write_endpoint: /api/v1/osquery/distributed/write
      logger_plugin: tls
      logger_tls_endpoint: /api/v1/osquery/log
      logger_tls_period: 10
      pack_delimiter: /
  overrides: {}
```

Let's edit the file so that the `distributed_interval` option is 2 instead of 10. Save the file and run:

```
fleetctl apply -f ./options.yaml
```

Now run a live query again. You should notice results coming back more quickly.

# Logging In To An Existing Fleet Instance

If you have an existing Fleet instance (version 2.0.0 or above), then simply run `fleetctl login` (after configuring your local CLI context):

```
fleetctl config set --address https://fleet.corp.example.com
[+] Set the address config key to "https://fleet.corp.example.com" in the "default" context

fleetctl login
Log in using the standard Fleet credentials.
Email: mike@arpaia.co
Password:
[+] Fleet login successful and context configured!
```

Once your local context is configured, you can use `fleetctl` normally as described above. See `fleetctl --help` for more information.

## Logging In with SAML (SSO) Authentication

Users that authenticate to Fleet via SSO should retrieve their API token from the UI and set it manually in their `fleetctl` configuration (instead of logging in via `fleetctl login`).

1. Go to the "Account Settings" page in Fleet (https://fleet.corp.example.com/settings). Click the "Get API Token" button to bring up a modal with the API token.

2. Set the API token in the `~/.fleet/config` file. The file should look like the following:

```
contexts:
  default:
    address: https://fleet.corp.example.com
    email: example@example.com
    token: your_token_here
```

Note that the token can also be set with `fleetctl config set --token`, but this may leak the token into a user's shell history.

@ -1,9 +0,0 @@
Dashboard Documentation
=========================

Fleet is an application that allows you to take advantage of the power of osquery in order to maintain constant insight into the state of your infrastructure (security, health, stability, performance, compliance, etc.). The dashboard documentation contains documents on the following topics:

## Using the Fleet Dashboard

- For information on running osquery queries on hosts in your infrastructure, you can refer to the [Running Queries](./running-queries.md) page.
- For information on configuring SSO for logging in to Fleet, see the guide on [Configuring Single Sign On](./single-sign-on.md).

@ -1,16 +0,0 @@
Running Queries
===============

The Fleet application allows you to query hosts which you have installed osquery on. To run a new query, use the "Query" sidebar and select "New Query". From this page, you can compose your query, view SQL table documentation via the sidebar, select arbitrary hosts (or groups of hosts), and execute your query. As results are returned, they will populate the interface in real time. You can use the integrated filtering tool to perform useful initial analytics and easily export the entire dataset for offline analysis.

![Distributed new query with local filter](../images/distributed-new-query-with-local-filter.png)

After you've composed a query that returns the information you were looking for, you may choose to save the query. You can still continue to execute the query on whatever set of hosts you would like after you have saved the query.

![Distributed saved query with local filter](../images/distributed-saved-query-with-local-filter.png)

Saved queries can be accessed if you select "Manage Queries" from the "Query" section of the sidebar. Here, you will find all of the queries you've ever saved. You can filter the queries by query name, so name your queries something memorable!

![Manage Queries](../images/manage-queries.png)

To learn more about scheduling queries so that they run on an on-going basis, see the [Scheduling Queries](./scheduling-queries.md) guide.

@ -1,28 +0,0 @@
Scheduling Queries
==================

As discussed in the [Running Queries Documentation](./running-queries.md), you can use the Fleet application to create, execute, and save osquery queries. You can organize these queries into "Query Packs". To view all saved packs and perhaps create a new pack, select "Manage Packs" from the "Packs" sidebar. Packs are usually organized by the general class of instrumentation that you're trying to perform.

![Manage Packs](../images/manage-packs.png)

If you select a pack from the list, you can quickly enable and disable the entire pack, or you can configure it further.

![Manage Packs With Pack Selected](../images/manage-packs-with-pack-selected.png)

When you edit a pack, you can decide which targets you would like to execute the pack. This is a similar selection experience to the target selection process that you use to execute a new query.

![Edit Pack Targets](../images/edit-pack-targets.png)

To add queries to a pack, use the right-hand sidebar. You can take an existing scheduled query and add it to the pack. You must also define a few key details such as:

- interval: how often should the query be executed?
- logging: which osquery logging format would you like to use?
- platform: which operating system platforms should execute this query?
- minimum osquery version: if the table was introduced in a newer version of osquery, you may want to ensure that only sufficiently recent version of osquery execute the query.
- shard: from 0 to 100, what percent of hosts should execute this query?

![Schedule Query Sidebar](../images/schedule-query-sidebar.png)

Once you've scheduled queries and curated your packs, you can read our guide to [Working With Osquery Logs](../infrastructure/working-with-osquery-logs.md).

@ -1,40 +0,0 @@
Infrastructure Documentation
============================

Fleet is an infrastructure instrumentation application which has its own infrastructure dependencies and requirements. The infrastructure documentation contains documents on the following topics:

## Deploying and configuring osquery

- For information on installing osquery on hosts that you own, see our [Adding Hosts To Fleet](./adding-hosts-to-fleet.md) document, which complements existing [osquery documentation](https://osquery.readthedocs.io/en/stable/).
- To add hosts to Fleet, you will need to provide a minimum set of configuration to the osquery agent on each host. These configurations are defined in the aforementioned [Adding Hosts To Fleet](./adding-hosts-to-fleet.md) document. If you'd like to further customize the osquery configurations and options, this can be done via `fleetctl`. You can find more documentation on this feature in the [fleetctl documentation](../cli/file-format.md#osquery-configuration-options).
- To manage osquery configurations at your organization, we strongly suggest using some form of configuration management tooling. For more information on configuration management, see the [Managing Osquery Configurations](./managing-osquery-configurations.md) document.

## Installing Fleet and its dependencies

The Fleet server has a few infrastructure dependencies. To learn more about installing the Fleet server and its dependencies, see the [Installing Fleet](./installing-fleet.md) guide.

## Managing a Fleet server

We've prepared a brief guide to help you manage and maintain your Fleet server. Check out the guide for setting up and running [Fleet on Ubuntu](./fleet-on-ubuntu.md) and [Fleet on CentOS](./fleet-on-centos.md).

You can also read the [Configuring The Fleet Binary](./configuring-the-fleet-binary.md) guide for information on how to configure and customize Fleet for your organization.

Once the Fleet server is installed and configured, take a look at the [Monitoring & Alerting](./monitoring-alerting.md) documentation.

## Working with osquery logs

Fleet allows users to schedule queries, curate packs, and generate a lot of osquery logs. For more information on how you can access these logs as well as examples on what you can do with them, see the [Working With Osquery Logs](./working-with-osquery-logs.md) documentation.

## File Carving

Learn how to work with the osquery file carving functionality to extract file contents in the [File Carving](./file-carving.md) documentation.

## Troubleshooting & FAQ

Check out the [Frequently Asked Questions](./faq.md), which include troubleshooting steps for the most common issues experienced by Fleet users.

For performance concerns, see the [performance guide](./performance.md).

## Security

Fleet developers have documented how Fleet handles the [OWASP Top 10](./owasp-top-10.md).

@ -1,123 +0,0 @@
# FAQ for using/operating Fleet

## How do I get support for working with Fleet?

For bug reports, please use the [GitHub issue tracker](https://github.com/fleetdm/fleet/issues).

For questions and discussion, please join us in the #fleet channel of [osquery Slack](https://osquery.slack.com/join/shared_invite/zt-h29zm0gk-s2DBtGUTW4CFel0f0IjTEw#/).

## Can multiple instances of the Fleet server be run behind a load-balancer?

Yes. Fleet scales horizontally out of the box as long as all of the Fleet servers are connected to the same MySQL and Redis instances.

Note that osquery logs will be distributed across the Fleet servers.

Read the [performance docs](./performance.md) for more.

## Where are my query results?

### Live Queries

Live query results (executed in the web UI or `fleetctl query`) are pushed directly to the UI where the query is running. The results never go to a file unless you as the user manually save them.

### Scheduled Queries

Scheduled query results (queries that are scheduled to run in Packs) are typically sent to the Fleet server, and will be available on the filesystem of the server at the path configurable by [`--osquery_result_log_file`](./configuring-the-fleet-binary.md#osquery_result_log_file). This defaults to `/tmp/osquery_result`.

It is possible to configure osqueryd to log query results outside of Fleet. For results to go to Fleet, the `--logger_plugin` flag must be set to `tls`.

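For a quick check that results are arriving on a server that uses the default filesystem logging, you can tail the result log (the path shown is the default mentioned above; adjust it if you have configured a different location):

```
tail -f /tmp/osquery_result
```
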
### What are my options for storing the osquery logs?

Folks typically use Fleet to ship logs to data aggregation systems like Splunk, the ELK stack, and Graylog.

The [logger configuration options](https://github.com/fleetdm/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md#osquery_status_log_plugin) allow you to select the log output plugin. Using the log outputs, you can route the logs to your chosen aggregation system.

### Troubleshooting

Expecting results, but not seeing anything in the logs?

- Try scheduling a query that always returns results (e.g. `SELECT * FROM time`).
- Check whether the query is scheduled in differential mode. If so, new results will only be logged when the result set changes.
- Ensure that the query is scheduled to run on the intended platforms, and that the tables queried are supported by those platforms.
- Use live query to `SELECT * FROM osquery_schedule` to check whether the query has been scheduled on the host.
- Look at the status logs provided by osquery. In a standard configuration these are available on the filesystem of the Fleet server at the path configurable by [`--filesystem_status_log_file`](./configuring-the-fleet-binary.md#filesystem_status_log_file). This defaults to `/tmp/osquery_status`. The host will output a status log each time it executes the query.

## Why aren’t my live queries being logged?

Live query results are never logged to the filesystem of the Fleet server. See [Where are my query results?](#where-are-my-query-results).

## Why aren't my osquery agents connecting to Fleet?

This can be caused by a variety of problems. The best way to debug is usually to add `--verbose --tls_dump` to the arguments provided to `osqueryd` and look at the logs for the server communication.

### Common problems

- `Connection refused`: The server is not running, or is not listening on the address specified. Is the server listening on an address that is available from the host running osquery? Do you have a load balancer that might be blocking connections? Try testing with `curl`.
- `No node key returned`: Typically this indicates that the osquery client sent an incorrect enroll secret that was rejected by the server. Check what osquery is sending by looking in the logs near this error.
- `certificate verify failed`: See [How do I fix "certificate verify failed" errors from osqueryd](#how-do-i-fix-certificate-verify-failed-errors-from-osqueryd).
- `bad record MAC`: When generating your certificate for your Fleet server, ensure you set the hostname to the FQDN or the IP of the server. This error is common when setting up Fleet servers and accepting defaults when generating certificates using `openssl`.

## How do I fix "certificate verify failed" errors from osqueryd?

Osquery requires that all communication between the agent and Fleet is over a secure TLS connection. For the safety of osquery deployments, there is no (convenient) way to circumvent this check.

- Try specifying the path to the full certificate chain used by the server using the `--tls_server_certs` flag in `osqueryd`. This is often unnecessary when using a certificate signed by an authority trusted by the system, but is mandatory when working with self-signed certificates. In all cases it can be a useful debugging step.
- Ensure that the CNAME on the certificate matches the address at which the server is being accessed. If I try to connect osquery via `https://localhost:443`, but my certificate is for `https://fleet.example.com`, the verification will fail.
- Is Fleet behind a load-balancer? Ensure that if the load-balancer is terminating TLS, this is the certificate provided to osquery.
- Does the certificate verify with `curl`? Try `curl -v -X POST https://kolideserver:port/api/v1/osquery/enroll`.

## What do I do about "too many open files" errors?

This error usually indicates that the Fleet server has run out of file descriptors. Fix this by increasing the `ulimit` on the Fleet process. See the `LimitNOFILE` setting in the [example systemd unit file](./systemd.md) for an example of how to do this with systemd.

## I upgraded my database, but Fleet is still running slowly. What could be going on?

This could be caused by a mismatched connection limit between the Fleet server and the MySQL server that prevents Fleet from fully utilizing the database. First [determine how many open connections your MySQL server supports](https://dev.mysql.com/doc/refman/8.0/en/too-many-connections.html). Now set the [`--mysql_max_open_conns`](./configuring-the-fleet-binary.md#mysql_max_open_conns) and [`--mysql_max_idle_conns`](./configuring-the-fleet-binary.md#mysql_max_idle_conns) flags appropriately.

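As a sketch, the flags are passed alongside your other `fleet serve` flags; the values below are only illustrative and should be sized to the limit supported by your MySQL server:

```
fleet serve --mysql_max_open_conns=50 --mysql_max_idle_conns=50
```
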
## How do I monitor the performance of my queries?

Fleet can live query the `osquery_schedule` table. Performing this live query allows you to get the performance data for your scheduled queries. Also consider scheduling a query against the `osquery_schedule` table to get these logs into your logging pipeline.

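As a sketch, such a live query might look like the following (the columns come from the `osquery_schedule` table; the label is illustrative):

```
fleetctl query --labels 'All Hosts' --query 'SELECT name, interval, executions, wall_time, average_memory, last_executed FROM osquery_schedule'
```
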
## Why am I receiving a database connection error when attempting to "prepare" the database?

First, check if you have a version of MySQL installed that is at least 5.7. Then, make sure that you currently have a MySQL server running.

The next step is to make sure the credentials for the database match what is expected. Test your ability to connect to the database with `mysql -u<username> -h<hostname_or_ip> -P<port> -D<database_name> -p`.

If you can connect to the database successfully but still receive a database connection error, you may need to specify your database credentials when running `fleet prepare db`. It's encouraged to put your database credentials in environment variables or a config file.

```
fleet prepare db \
  --mysql_address=<database_address> \
  --mysql_database=<database_name> \
  --mysql_username=<username> \
  --mysql_password=<database_password>
```

## How do I monitor a Fleet server?

Fleet provides standard interfaces for monitoring and alerting. See the [Monitoring & Alerting](./monitoring-alerting.md) documentation for details.

## Why is the "Add User" button disabled?

The "Add User" button is disabled if SMTP (email) has not been configured for the Fleet server. Currently, there is no way to add new users without email capabilities.

One way to work around this is to use a simulated mailserver like [Mailhog](https://github.com/mailhog/MailHog). You can retrieve the email that was "sent" in the Mailhog UI, and provide users with the invite URL manually.

## Is Fleet available as a SaaS product?

No. Currently, Fleet is only available for self-hosting on premises or in the cloud.

## Has anyone stress tested Fleet? How many clients can the Fleet server handle?

Fleet has been stress tested to 150,000 online hosts and 400,000 total enrolled hosts. There are numerous production deployments in the thousands and tens of thousands of hosts range, and there are production deployments in the high tens of thousands of hosts range.

It's standard deployment practice to have multiple Fleet servers behind a load balancer. However, typically the MySQL database is the bottleneck and an individual Fleet server can handle tens of thousands of hosts.

## How often do labels refresh? Is the refresh frequency configurable?

The update frequency for labels is configurable with the [--osquery_label_update_interval](https://github.com/fleetdm/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md#osquery_label_update_interval) flag (default 1 hour).

## How do I revoke the authorization tokens for a user?

Authorization tokens are revoked when the "require password reset" action is selected for that user. User-initiated password resets do not expire the existing tokens.

@ -1,94 +0,0 @@
# File Carving with Fleet

Fleet supports osquery's file carving functionality as of Fleet 3.3.0. This allows the Fleet server to request files (and sets of files) from osquery agents, returning the full contents to Fleet.

File carving data can be stored either in Fleet's database or in an external S3 bucket. For information on how to configure the latter, consult the [configuration docs](https://github.com/fleetdm/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md#s3-file-carving-backend).

## Configuration

Given a working flagfile for connecting osquery agents to Fleet, add the following flags to enable carving:

```
--disable_carver=false
--carver_start_endpoint=/api/v1/osquery/carve/begin
--carver_continue_endpoint=/api/v1/osquery/carve/block
--carver_block_size=2000000
```

The default flagfile provided in the "Add New Host" dialog also includes this configuration.

### Carver Block Size

The `carver_block_size` flag should be configured in osquery. 2MB (`2000000`) is a good starting value.

The configured value must be less than the value of `max_allowed_packet` in the MySQL connection, allowing for some overhead. The default for MySQL 5.7 is 4MB and for MySQL 8 it is 64MB.

In case S3 is used as the storage backend, this value must instead be set to at least 5MB due to the [constraints of S3's multipart uploads](https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html).

Using a smaller value for `carver_block_size` will lead to more HTTP requests during the carving process, resulting in longer carve times and higher load on the Fleet server. If the value is too high, HTTP requests may run long enough to cause server timeouts.

### Compression

Compression of the carve contents can be enabled with the `carver_compression` flag in osquery. When used, the carve results will be compressed with [Zstandard](https://facebook.github.io/zstd/) compression.

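For example, the flagfile shown above could additionally include the following line to turn compression on (a sketch; verify the flag against your osquery version):

```
--carver_compression=true
```
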
## Usage

File carves are initiated with osquery queries. Issue a query to the `carves` table, providing `carve = 1` along with the desired path(s) as constraints.

For example, to extract the `/etc/hosts` file on a host with hostname `mac-workstation`:

```
fleetctl query --hosts mac-workstation --query 'SELECT * FROM carves WHERE carve = 1 AND path = "/etc/hosts"'
```

The standard osquery file globbing syntax is also supported to carve entire directories or more:

```
fleetctl query --hosts mac-workstation --query 'SELECT * FROM carves WHERE carve = 1 AND path LIKE "/etc/%%"'
```

### Retrieving Carves

List the non-expired (see below) carves with `fleetctl get carves`. Note that carves will not be available through this command until osquery checks in to the Fleet server with the first of the carve contents. This can take some time from initiation of the carve.

To also retrieve expired carves, use `fleetctl get carves --expired`.

Contents of carves are returned as .tar archives, and compressed if that option is configured.

To download the contents of a carve with ID 3, use

```
fleetctl get carve 3 --outfile carve.tar
```

It can also be useful to pipe the results directly into the tar command for unarchiving:

```
fleetctl get carve 3 --stdout | tar -x
```

### Expiration

Carve contents remain available for 24 hours after the first data is provided from the osquery client. After this time, the carve contents are cleaned from the database and the carve is marked as "expired".

The same is not true if S3 is used as the storage backend. In that scenario, it is suggested to set up a [bucket lifecycle configuration](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html) to avoid retaining data in excess. Fleet, in an "eventually consistent" manner (i.e. by periodically performing comparisons), will keep the metadata for the file carves in sync with what is actually available in the bucket.

## Troubleshooting

### Check carve status in osquery

Osquery can report on the status of carves through queries to the `carves` table.

The details provided by

```
fleetctl query --labels 'All Hosts' --query 'SELECT * FROM carves'
```

can be helpful to debug carving problems.

### Ensure `carver_block_size` is set appropriately

This value must be less than the `max_allowed_packet` setting in MySQL. If it is too large, MySQL will reject the writes.

The value must be small enough that HTTP requests do not time out.

@ -1,32 +0,0 @@
# Monitoring Fleet

## Health Checks

Fleet exposes a basic health check at the `/healthz` endpoint. This is the interface to use for simple monitoring and load-balancer health checks.

The `/healthz` endpoint will return an `HTTP 200` status if the server is running and has healthy connections to MySQL and Redis. If there are any problems, the endpoint will return an `HTTP 500` status.

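For example, a simple external check can be performed with `curl` (the hostname is a placeholder); the `-f` flag makes `curl` exit with a non-zero status when the endpoint returns an HTTP 500:

```
curl -f https://fleet.example.com/healthz
```
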
## Metrics

Fleet exposes server metrics in a format compatible with [Prometheus](https://prometheus.io/). A simple example Prometheus configuration is available in [tools/app/prometheus.yml](/tools/app/prometheus.yml).

Prometheus can be configured to use a wide range of service discovery mechanisms within AWS, GCP, Azure, Kubernetes, and more. See the Prometheus [configuration documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) for more information on configuring these mechanisms.

### Alerting

Prometheus has built-in support for alerting through [Alertmanager](https://prometheus.io/docs/alerting/latest/overview/).

Consider building alerts for:

- Changes from expected levels of host enrollment
- Increased latency on HTTP endpoints
- Increased error levels on HTTP endpoints

```
TODO (Seeking Contributors)
Add example alerting configurations
```

### Graphing

Prometheus provides basic graphing capabilities, and integrates tightly with [Grafana](https://prometheus.io/docs/visualization/grafana/) for sophisticated visualizations.

@ -1,33 +0,0 @@
# OWASP Top 10

The Fleet community follows best practices when coding. Here are some of the ways we mitigate the OWASP Top 10 issues:

### Describe your secure coding practices, including code reviews, use of static/dynamic security testing tools, 3rd party scans/reviews.

- Every piece of code that is merged into Fleet is reviewed by at least one other engineer before merging. We don't use any security-specific testing tools.
- The server backend is built in Golang, which (aside from language-level vulnerabilities) eliminates buffer overflow and other memory-related attacks.
- We use standard library cryptography wherever possible, and all cryptography uses well-known standards.

### SQL Injection

- All queries are parameterized with MySQL placeholders, so MySQL itself guards against SQL injection and the Fleet code does not need to perform any escaping.

### Broken authentication – authentication, session management flaws that compromise passwords, keys, session tokens etc.

#### Passwords

- Fleet supports SAML auth which means that it can be configured such that it never sees passwords.
- Passwords are never stored in plaintext in the database. We store a `bcrypt`ed hash of the password along with a randomly generated salt. The `bcrypt` iteration count and salt key size are admin-configurable.

#### Authentication tokens

- The size and expiration time of session tokens is admin-configurable. See [https://github.com/fleetdm/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md#session_duration](https://github.com/fleetdm/fleet/blob/master/docs/infrastructure/configuring-the-fleet-binary.md#session_duration).
- It is possible to revoke all session tokens for a user by forcing a password reset.

### Sensitive data exposure – encryption in transit, at rest, improperly implemented APIs.

- By default, all traffic between user clients (such as the web browser and fleetctl) and the Fleet server is encrypted with TLS. By default, all traffic between osqueryd clients and the Fleet server is encrypted with TLS. Fleet does not encrypt any data at rest (*however a user could separately configure encryption for the MySQL database and logs that Fleet writes*).

### Broken access controls – how restrictions on what authorized users are allowed to do/access are enforced.

- Each session is associated with a viewer context that is used to determine the access granted to that user. Access controls can easily be applied as middleware in the routing table, so the access to a route is clearly defined in the same place where the route is attached to the server; see [https://github.com/fleetdm/fleet/blob/master/server/service/handler.go#L114-L189](https://github.com/fleetdm/fleet/blob/master/server/service/handler.go#L114-L189).

### Cross-site scripting – ensure an attacker can’t execute scripts in the user’s browser

- We render the frontend with React and benefit from built-in XSS protection in React's rendering. This is not sufficient to prevent all XSS, so we also follow additional best practices as discussed in [https://stackoverflow.com/a/51852579/491710](https://stackoverflow.com/a/51852579/491710).

### Components with known vulnerabilities – prevent the use of libraries, frameworks, other software with existing vulnerabilities.

- We rely on GitHub's automated vulnerability checks, community news, and direct reports to discover vulnerabilities in our dependencies. We endeavor to fix these immediately and would almost always do so within a week of a report.

@ -1,68 +0,0 @@
|
||||
# Working With Osquery Logs
|
||||
|
||||
Osquery agents are typically configured to send logs to the Fleet server (`--logger_plugin=tls`). This is not a requirement, and any other logger plugin can be used even when osquery clients are connecting to the Fleet server to retrieve configuration or run live queries. See the [osquery logging documentation](https://osquery.readthedocs.io/en/stable/deployment/logging/) for more about configuring logging on the agent.

If `--logger_plugin=tls` is used with osquery clients, the following configuration can be applied on the Fleet server for handling the incoming logs.
## Osquery Logging Plugins
Fleet supports the following logging plugins for osquery logs:

- [Filesystem](#filesystem) - Logs are written to the local Fleet server filesystem.
- [Firehose](#firehose) - Logs are written to AWS Firehose streams.
- [Kinesis](#kinesis) - Logs are written to AWS Kinesis streams.
- [PubSub](#pubsub) - Logs are written to Google Cloud PubSub topics.
- [Stdout](#stdout) - Logs are written to stdout.

To set the osquery logging plugins, use the `--osquery_result_log_plugin` and `--osquery_status_log_plugin` flags (or [equivalents for environment variables or configuration files](../infrastructure/configuring-the-fleet-binary.md#options)).
### Filesystem
The default logging plugin.

- Plugin name: `filesystem`
- Flag namespace: [filesystem](../infrastructure/configuring-the-fleet-binary.md#filesystem)

With the filesystem plugin, osquery result and/or status logs are written to the local filesystem on the Fleet server. This is typically used with a log forwarding agent on the Fleet server that will push the logs into a logging pipeline. Note that if multiple load-balanced Fleet servers are used, the logs will be load-balanced across those servers (not duplicated).
### Firehose
- Plugin name: `firehose`
- Flag namespace: [firehose](../infrastructure/configuring-the-fleet-binary.md#firehose)

With the Firehose plugin, osquery result and/or status logs are written to [AWS Firehose](https://aws.amazon.com/kinesis/data-firehose/) streams. This is a convenient way to aggregate osquery logs into AWS S3 storage.

Note that Firehose logging has limits [discussed in the documentation](https://docs.aws.amazon.com/firehose/latest/dev/limits.html). When Fleet encounters logs that are too big for Firehose, notifications will be output in the Fleet logs and those logs _will not_ be sent to Firehose.
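
As a rough sketch of that behavior (not Fleet's implementation), a sender could filter out entries larger than Firehose's documented per-record limit of roughly 1,000 KiB and emit a notification for each one it drops:

```go
package example

import "log"

// firehoseMaxRecordSize approximates Firehose's documented per-record limit
// (1,000 KiB before base64 encoding).
const firehoseMaxRecordSize = 1000 * 1024

// dropOversized returns the logs that fit within the Firehose record limit,
// writing a notification for each entry that had to be dropped.
func dropOversized(logs [][]byte) [][]byte {
	kept := make([][]byte, 0, len(logs))
	for _, entry := range logs {
		if len(entry) > firehoseMaxRecordSize {
			log.Printf("dropping %d byte osquery log: exceeds Firehose record limit", len(entry))
			continue
		}
		kept = append(kept, entry)
	}
	return kept
}
```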
### Kinesis
- Plugin name: `kinesis`
- Flag namespace: [kinesis](../infrastructure/configuring-the-fleet-binary.md#kinesis)

With the Kinesis plugin, osquery result and/or status logs are written to [AWS Kinesis](https://aws.amazon.com/kinesis/data-streams) streams.

Note that Kinesis logging has limits [discussed in the documentation](https://docs.aws.amazon.com/kinesis/latest/dev/limits.html). When Fleet encounters logs that are too big for Kinesis, notifications will be output in the Fleet logs and those logs _will not_ be sent to Kinesis.
### PubSub
- Plugin name: `pubsub`
- Flag namespace: [pubsub](../infrastructure/configuring-the-fleet-binary.md#pubsub)

With the PubSub plugin, osquery result and/or status logs are written to [PubSub](https://cloud.google.com/pubsub/) topics.

Note that messages over 10MB will be dropped, with a notification sent to the Fleet logs, as these can never be processed by PubSub.
### Stdout
- Plugin name: `stdout`
- Flag namespace: [stdout](../infrastructure/configuring-the-fleet-binary.md#stdout)

With the stdout plugin, osquery result and/or status logs are written to stdout on the Fleet server. This is typically used for debugging or with a log forwarding setup that will capture and forward stdout logs into a logging pipeline. Note that if multiple load-balanced Fleet servers are used, the logs will be load-balanced across those servers (not duplicated).