Documentation: Spelling and grammar fixes (#16403)
---------

Co-authored-by: Rachael Shaw <r@rachael.wtf>
@@ -4,7 +4,7 @@ The Fleet Terraform module is the recommended way to quickly get Fleet up and ru

## Required resources

-Starting at the BYO-VPC level has all of the same initial [requirements](https://fleetdm.com/docs/deploy/deploy-on-aws-with-terraform#bring-your-own-nothing) as the root (BYO-Nothing) Terraform module. We will need to include these in our examle here as well for visibilty:
+Starting at the BYO-VPC level has all of the same initial [requirements](https://fleetdm.com/docs/deploy/deploy-on-aws-with-terraform#bring-your-own-nothing) as the root (BYO-Nothing) Terraform module. We will need to include these in our example here as well for visibility:
```hcl
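# A minimal sketch of the shared prerequisites, not the module's verbatim
# requirements. The provider pin and region below are illustrative
# placeholders, not values mandated by the Fleet module.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # assumption; use the version range the module supports
    }
  }
}

provider "aws" {
  region = "us-east-2" # choose the region Fleet should run in
}
```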

@@ -94,7 +94,7 @@ Since it is likely that an organization wanting to leverage BYO-VPC did not use

- An elasticache subnet group for Redis (optional): `module.vpc.elasticache_subnet_group_name`
- A public subnet for the load balancer: `module.vpc.public_subnets`
-While Fleet recommends that each private subnet be unique as a best practice, it is techincally possible to place Fleet/ECS, RDS, and Redis all in the same private subnet. Just provide the same subnet ID in each of the respective locations below. If an elasticache subnet group is not already created for your VPC, it can be omitted and will be automatically generated by the downstream module.
+While Fleet recommends that each private subnet be unique as a best practice, it is technically possible to place Fleet/ECS, RDS, and Redis all in the same private subnet. Just provide the same subnet ID in each of the respective locations below. If an elasticache subnet group is not already created for your VPC, it can be omitted and will be automatically generated by the downstream module.
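
For illustration only, reusing a single subnet in every slot might look like the sketch below. The argument names are hypothetical stand-ins; check the module's `variables.tf` for the real interface.

```hcl
locals {
  # Hypothetical: one private subnet shared by Fleet/ECS, RDS, and Redis.
  # Unique subnets per tier remain the recommended layout.
  shared_subnets = ["subnet-0123456789abcdef0"] # placeholder subnet ID
}

module "byo-vpc" {
  source = "github.com/fleetdm/fleet//terraform/byo-vpc" # illustrative source

  # Illustrative argument names, not the module's exact inputs:
  fleet_subnets = local.shared_subnets
  rds_subnets   = local.shared_subnets
  redis_subnets = local.shared_subnets
}
```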
## The BYO-VPC Module

@@ -136,12 +136,12 @@ module "byo-vpc" {

```
-Defining the `fleet_image` as a local allows it to be reused by other addon modules that will require the running fleet version to be specified such as the external vulnerability processing addon. For Fleet Premium users, the license can be included in an AWS Secretsmanager Secret and added in as the commented-out example above shows.
+Defining the `fleet_image` as a local allows it to be reused by other add-on modules that require the running Fleet version to be specified, such as the external vulnerability processing add-on. For Fleet Premium users, the license can be included in an AWS Secrets Manager secret and added as the commented-out example above shows.
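
For illustration, the local might look like this (the tag is a placeholder; pin the release you actually run):

```hcl
locals {
  # Reused by the byo-vpc module and by add-ons that must match the running
  # Fleet version, such as external vulnerability processing.
  fleet_image = "fleetdm/fleet:v4.44.0" # placeholder tag
}
```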
-## Addons
+## Add-ons
-Similar to using the root module, it is recommended to at least include the migration addon module to make it easier to upgrade Fleet in the future and to get the initial migrations in place. This adds the following:
+Similar to using the root module, it is recommended to at least include the migration add-on module to make it easier to upgrade Fleet in the future and to get the initial migrations in place. This adds the following:
```hcl

@@ -157,7 +157,7 @@ module "migrations" {

```
-All addons at the time of this writing are compatible with the BYO-VPC module. If examples reference resources the BYO-Nothing/root module in the format of `module.main.byo-vpc...` simply omit `.main` from them so they look like the references above in the migrations example.
+All add-ons at the time of this writing are compatible with the BYO-VPC module. If examples reference resources of the BYO-Nothing/root module in the format of `module.main.byo-vpc...`, simply omit `.main` from them so they look like the references above in the migrations example.
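
Concretely, the translation just drops one path segment:

```hcl
# Reference as written in add-on examples that assume the root module:
#   module.main.byo-vpc...
# The same reference when consuming the BYO-VPC module directly:
#   module.byo-vpc...
```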
## Bringing It All Together

@@ -64,7 +64,7 @@ Next, create a new input:

If everything with the account config works, you should be able to immediately see the results of a
global index search:
![splunk-index](../website/assets/images/articles/mapping-fleet-and-osquery-results-to-the-mitre-attck-framework-via-splunk-global-index-results-1775x915@2x.png)
-Now comes the tough part, or at least it was a bit challenging for me, since I'm no Splunk expert. We’re going tobuild some SPL (Search Processing Language) to translate the observations we've uncovered via osquery into search results in Splunk. After that, we can drop the search results into a dashboard or even build an alert. That being said though, if this was an alerting use case, I would recommend using the built-in Policies from Fleet to trigger alerts via webhooks. Here's what the first query looks like to get the Process Connections from our Fleet scheduled query and push it to a table in Splunk:
+Now comes the tough part, or at least it was a bit challenging for me, since I'm no Splunk expert. We’re going to build some SPL (Search Processing Language) to translate the observations we've uncovered via osquery into search results in Splunk. After that, we can drop the search results into a dashboard or even build an alert. That said, if this were an alerting use case, I would recommend using the built-in Policies from Fleet to trigger alerts via webhooks. Here's what the first query looks like to get the Process Connections from our Fleet scheduled query and push it to a table in Splunk:
```
index="osquery_results" name="pack/Global/ATT&CK® - Process_Network_Conn" |
dedup _time, hostname |

@@ -18,9 +18,9 @@ Running Fleet in ECS consists of two main components the [ECS Service](https://g

#### Fleet migrations
-Migrations in ECS can be achieved (and is recommended) by running [dedicated ECS tasks](https://github.com/fleetdm/fleet/tree/main/infrastructure/dogfood/terraform/aws#migrating-the-db) that run the `fleet prepare --no-prompt=true db` command. See [terraform for more details](https://github.com/fleetdm/fleet/blob/main/infrastructure/dogfood/terraform/aws/ecs.tf#L261)
+Migrations in ECS can be achieved by running [dedicated ECS tasks](https://github.com/fleetdm/fleet/tree/main/infrastructure/dogfood/terraform/aws#migrating-the-db) that run the `fleet prepare --no-prompt=true db` command. See [the terraform](https://github.com/fleetdm/fleet/blob/main/infrastructure/dogfood/terraform/aws/ecs.tf#L261) for more details.
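
As a hedged sketch (the resource below is illustrative; the linked terraform is the authoritative version), a dedicated migration task can reuse the Fleet image and swap in the prepare command:

```hcl
resource "aws_ecs_task_definition" "fleet_migrate" {
  family                   = "fleet-migrate"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 512
  memory                   = 1024
  container_definitions = jsonencode([{
    name      = "fleet-prepare"
    image     = "fleetdm/fleet:v4.44.0" # match your running Fleet version
    essential = true
    # Run migrations instead of serving. DB credentials and IAM roles are
    # omitted here; copy them from your Fleet service's task definition.
    command = ["fleet", "prepare", "--no-prompt=true", "db"]
  }])
}
```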
-Alternatively you can bake the prepare command into the same task definition see [here for a discussion](https://github.com/fleetdm/fleet/pull/1761#discussion_r697599457), but this not recommended for production environments.
+Alternatively, you can bake the prepare command into the same task definition (see [here for a discussion](https://github.com/fleetdm/fleet/pull/1761#discussion_r697599457)), but this is not recommended for production environments.
<meta name="title" value="AWS ECS">
<meta name="pageOrderInSection" value="400">

@@ -18,8 +18,7 @@ possible to dynamically scale read replicas to increase performance and [enable

It is also possible to use [Aurora Global](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html) to
span multiple regions for more advanced configurations (_not included in the [reference terraform](https://github.com/fleetdm/fleet/tree/main/infrastructure/dogfood/terraform/aws)_).
-In some cases adding a read replica can increase database performance for specific access patterns. In scenarios when automating the API or with `fleetctl`
-there can be benefits to read performance.
+In some cases, adding a read replica can increase database performance for specific access patterns. In scenarios when automating the API or with `fleetctl`, there can be benefits to read performance.
**Note: Fleet servers need to talk to a writer in the same datacenter. Cross-region replication can be used for failover, but writes need to be local.**

@@ -17,20 +17,16 @@ Deploying on AWS with Fleet’s reference architecture is an easy way to get a f

### Remote State
-Remote state can be simple (local state) or complicated (S3, state locking, etc.). To keep this guide straightforward we are
-going to leave remote state out of the equation. For more information on how to manage terraform remote state see https://developer.hashicorp.com/terraform/language/state/remote
+Remote state can be simple (local state) or complicated (S3, state locking, etc.). To keep this guide straightforward, we are going to leave remote state out of the equation. For more information on how to manage Terraform remote state, see https://developer.hashicorp.com/terraform/language/state/remote
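
If you do want remote state later, a minimal S3 backend sketch looks like this (the bucket and table names are placeholders you would provision yourself):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # placeholder bucket
    key            = "fleet/terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "terraform-state-lock"    # placeholder; enables state locking
  }
}
```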
### Modules
[Fleet terraform](https://github.com/fleetdm/fleet/tree/main/terraform) is made up of multiple modules. These modules can be used independently, or as a group to stand up an opinionated
set of infrastructure that we have found success with.
-Each module defines the required resource and consumes the next nested module. The root module creates the VPC and then pulls in the `byo-vpc` module
-configuring it as necessary. The `byo-vpc` module creates the database and cache instances that get passed into the `byo-db` module. And finally the `byo-db` module
-creates the ECS cluster and load balancer to be consumed by the `byo-ecs` module.
+Each module defines the required resources and consumes the next nested module. The root module creates the VPC and then pulls in the `byo-vpc` module, configuring it as necessary. The `byo-vpc` module creates the database and cache instances that get passed into the `byo-db` module. Finally, the `byo-db` module creates the ECS cluster and load balancer to be consumed by the `byo-ecs` module.
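
Sketched outermost to innermost, the nesting looks like this:

```hcl
# root          -> creates the VPC
#   byo-vpc     -> creates RDS and ElastiCache
#     byo-db    -> creates the ECS cluster and load balancer
#       byo-ecs -> runs the Fleet service itself
```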
-The modules are made to be flexible allowing you to bring your own infrastructure. For example if you already have an existing VPC
-you'd like to deploy Fleet into, you could opt to use the `byo-vpc` module, supplying the necessary configuration like subnets(database, cache, and application need to communicate) and VPC ID.
+The modules are made to be flexible, allowing you to bring your own infrastructure. For example, if you already have an existing VPC you'd like to deploy Fleet into, you could opt to use the `byo-vpc` module, supplying the necessary configuration like subnets (the database, cache, and application need to communicate) and VPC ID.
#### Examples

@@ -74,8 +70,7 @@ module "fleet_vpcless" {

}
```
-This configuration allows you to bring your own VPC, public & private subnets, and ACM certificate. All of these are required
-to configure the remainder of the infrastructure, like the Database and ECS.
+This configuration allows you to bring your own VPC, public & private subnets, and ACM certificate. All of these are required to configure the remainder of the infrastructure, like the Database and ECS.
##### Bring only Fleet
```hcl

@@ -132,13 +127,10 @@ The infrastructure used in this deployment is available in all regions. The foll

By default, both RDS & Elasticache are encrypted at rest and encrypted in transit. The S3 buckets are also server-side encrypted using AWS managed KMS keys.
### Networking
-For more details on the networking configuration take a look at https://github.com/terraform-aws-modules/terraform-aws-vpc. In the configuration Fleet provides
-we are creating public and private subnets in addition to separate data layer for RDS and Elasticache. The configuration also defaults
-to using a single NAT Gateway.
+For more details on the networking configuration, take a look at https://github.com/terraform-aws-modules/terraform-aws-vpc. In the configuration Fleet provides, we are creating public and private subnets in addition to separate data layers for RDS and Elasticache. The configuration also defaults to using a single NAT Gateway.
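
A minimal sketch of that layout using the module above (the name, CIDRs, and AZs are placeholders):

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "fleet-vpc"
  cidr = "10.10.0.0/16"

  azs                 = ["us-east-2a", "us-east-2b"]
  public_subnets      = ["10.10.0.0/24", "10.10.1.0/24"]   # load balancer
  private_subnets     = ["10.10.10.0/24", "10.10.11.0/24"] # Fleet/ECS
  database_subnets    = ["10.10.20.0/24", "10.10.21.0/24"] # RDS data layer
  elasticache_subnets = ["10.10.30.0/24", "10.10.31.0/24"] # Redis data layer

  enable_nat_gateway = true
  single_nat_gateway = true # one NAT gateway, as in the default configuration
}
```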
### Backups
-RDS daily snapshots are enabled by default and retention is set to 30 days. A snapshot identifier can be supplied via terraform variable (`rds_initial_snapshot`)
-in order to create the database from a previous snapshot.
+RDS daily snapshots are enabled by default, and retention is set to 30 days. A snapshot identifier can be supplied via a terraform variable (`rds_initial_snapshot`) in order to create the database from a previous snapshot.
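
For illustration, supplying that variable when instantiating the module (the source path and snapshot identifier are placeholders):

```hcl
module "fleet" {
  source = "github.com/fleetdm/fleet//terraform" # illustrative source path

  # Create the database from an existing snapshot instead of starting empty.
  rds_initial_snapshot = "fleet-backup-snapshot" # placeholder identifier
}
```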
## Deployment

@@ -29,7 +29,7 @@ Fleet comes with a [built-in query library](https://fleetdm.com/queries) for rep

You can easily write queries yourself with query auto-complete, as well as import query packs for HIDS (host-based intrusion detection) to detect IOCs using Yara or other intrusion detection mechanisms from the community or other vendors. Or, you can import policies to monitor for high-impact vulnerabilities such as a particular TPM chip; for example, a large vehicle manufacturer uses Fleet to do this.
-Customers can build on these built-in policies to monitor ongoing compliance with regulator standards like NIST, PCI, ISO, SOC, and HIPAA.
+Customers can build on these built-in policies to monitor ongoing compliance with regulatory standards like NIST, PCI, ISO, SOC, and HIPAA.
## Has anyone stress-tested Fleet? How many hosts can the Fleet server handle?

@@ -67,7 +67,7 @@ We have different licenses for portions of our software which are noted in the [

- The product will be available for download without leaving an email address or logging in.
-- We will always allow you to benchmark the performance of Fleet. (Fleet also [load tests the platform before every release](https://fleetdm.com/handbook/engineering#rituals), with increasingly ambitious targets. The scale of realtime reporting supported by Fleet has increased 5,000% since 2019. Today, Fleet deployments supports 500,000 devices, and counting. The company is committed to driving this number to 1M+, and beyond.)
+- We will always allow you to benchmark the performance of Fleet. (Fleet also [load tests the platform before every release](https://fleetdm.com/handbook/engineering#rituals), with increasingly ambitious targets. The scale of real-time reporting supported by Fleet has increased 5,000% since 2019. Today, Fleet deployments support 500,000 devices, and counting. The company is committed to driving this number to 1M+, and beyond.)
## How do I contact Fleet for support?

@@ -79,7 +79,7 @@ If your organization has Fleet Premium, you can [access professional support](ht

If you opt not to renew Fleet Premium, you can continue using only the free capabilities of Fleet (same code base, just unconfigure the license key).
-## Can we buy a licence to access premium features with reduced support for a reduced cost?
+## Can we buy a license to access premium features with reduced support for a reduced cost?
We aren’t able to sell licenses and support separately.

@@ -397,7 +397,7 @@ $ fleetctl get hosts --json | jq '.spec .os_version' | sort | uniq -c

No. The agent options set using your software orchestration tool will override the default agent options that appear in the **Settings > Organization settings > Agent options** page. On this page, if you hit the **Save** button, the options that appear in the Fleet UI will override the agent options set using your software orchestration tool.
-### How does Fleet determines online and offline status?
+### How does Fleet determine online and offline status?
#### Online hosts