Using the Fleet Terraform module with an existing VPC
The Fleet Terraform module is the recommended way to quickly get Fleet up and running in AWS. However, some organizations may already have an existing VPC that they would like to leverage to deploy Fleet. This article shows what that looks like, using the module at the bring-your-own-VPC (BYO-VPC) level.
Required resources
Starting at the BYO-VPC level has all of the same initial requirements as the root (BYO-Nothing) Terraform module. We will include these in our example here as well for visibility:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

locals {
  fleet_domain_name = "fleet.<your_domain>.com"
}

module "acm" {
  source  = "terraform-aws-modules/acm/aws"
  version = "4.3.1"

  domain_name         = local.fleet_domain_name
  zone_id             = aws_route53_zone.main.id
  wait_for_validation = true
}

resource "aws_route53_zone" "main" {
  name = local.fleet_domain_name
}

resource "aws_route53_record" "main" {
  zone_id = aws_route53_zone.main.id
  name    = local.fleet_domain_name
  type    = "A"

  alias {
    name                   = module.byo-vpc.byo-db.alb.lb_dns_name
    zone_id                = module.byo-vpc.byo-db.alb.lb_zone_id
    evaluate_target_health = true
  }
}
Additionally, we will need a VPC. The Terraform AWS VPC module is one of the easiest ways to get all the necessary pieces in place quickly, so we'll use it in this example.
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.18.1"
name = "fleet-vpc"
cidr = "10.10.0.0/16"
azs = ["us-east-2a", "us-east-2b", "us-east-2c"]
private_subnets = ["10.10.1.0/24", "10.10.2.0/24", "10.10.3.0/24"]
public_subnets = ["10.10.11.0/24", "10.10.12.0/24", "10.10.13.0/24"]
database_subnets = ["10.10.21.0/24", "10.10.22.0/24", "10.10.23.0/24"]
elasticache_subnets = ["10.10.31.0/24", "10.10.32.0/24", "10.10.33.0/24"]
create_database_subnet_group = false
create_database_subnet_route_table = true
create_elasticache_subnet_group = true
create_elasticache_subnet_route_table = true
enable_vpn_gateway = false
one_nat_gateway_per_az = false
single_nat_gateway = true
enable_nat_gateway = true
enable_flow_log = false
create_flow_log_cloudwatch_log_group = false
create_flow_log_cloudwatch_iam_role = false
flow_log_max_aggregation_interval = null
flow_log_cloudwatch_log_group_name_prefix = null
flow_log_cloudwatch_log_group_name_suffix = null
vpc_flow_log_tags = {}
enable_dns_hostnames = false
enable_dns_support = true
}
Since it is likely that an organization wanting to leverage BYO-VPC did not use the module above, here is the information required from the existing VPC, along with the corresponding output from the VPC module used in this example:
- The VPC ID: module.vpc.vpc_id
- A private subnet for the Fleet ECS containers: module.vpc.private_subnets
- A private subnet for RDS: module.vpc.database_subnets
- A private subnet for Redis: module.vpc.elasticache_subnets
- An ElastiCache subnet group for Redis (optional): module.vpc.elasticache_subnet_group_name
- A public subnet for the load balancer: module.vpc.public_subnets
While Fleet recommends that each private subnet be unique as a best practice, it is technically possible to place Fleet/ECS, RDS, and Redis all in the same private subnet. Just provide the same subnet ID in each of the respective locations below. If an ElastiCache subnet group has not already been created for your VPC, it can be omitted and will be generated automatically by the downstream module.
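If the existing VPC was created outside of Terraform, the values above can come from data sources (or hard-coded IDs) rather than module outputs. The following is a minimal sketch, assuming the existing VPC and subnets are tagged; the names and tag values are placeholders to adapt to your environment:
# Look up the existing VPC by its Name tag (placeholder value).
data "aws_vpc" "existing" {
  tags = {
    Name = "my-existing-vpc"
  }
}

# Look up the private subnets intended for the Fleet ECS containers,
# assuming they carry a "tier" tag (placeholder).
data "aws_subnets" "fleet_private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.existing.id]
  }
  tags = {
    tier = "private"
  }
}
These values would then stand in for the module.vpc.* references in the BYO-VPC module configuration below, for example vpc_id = data.aws_vpc.existing.id and subnets = data.aws_subnets.fleet_private.ids.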
The BYO-VPC module
Next, we'll configure the BYO-VPC module with at least a minimal configuration. While it is not necessary to specify a Fleet image version, it is strongly recommended; this way an organization stays fully in control of the Fleet version deployed, regardless of updates to the Terraform module or new Fleet releases. Below is a minimal configuration that includes the Fleet image:
locals {
  fleet_image = "fleetdm/fleet:v4.36.0"
}

module "byo-vpc" {
  source = "github.com/fleetdm/fleet//terraform/byo-vpc?ref=tf-mod-byo-vpc-v1.4.0"

  vpc_config = {
    vpc_id = module.vpc.vpc_id
    networking = {
      subnets = module.vpc.private_subnets
    }
  }

  rds_config = {
    subnets = module.vpc.database_subnets
  }

  redis_config = {
    subnets                       = module.vpc.elasticache_subnets
    elasticache_subnet_group_name = module.vpc.elasticache_subnet_group_name
    availability_zones            = module.vpc.azs
  }

  alb_config = {
    subnets         = module.vpc.public_subnets
    certificate_arn = module.acm.acm_certificate_arn
  }

  fleet_config = {
    image = local.fleet_image
    extra_secrets = {
      // FLEET_LICENSE_KEY: "secret_manager_license_key_arn"
    }
  }
}
Defining the fleet_image as a local allows it to be reused by other addon modules that require the running Fleet version to be specified, such as the external vulnerability processing addon. For Fleet Premium users, the license key can be stored in an AWS Secrets Manager secret and added in as the commented-out example above shows.
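As an illustration, the license key could be stored in Secrets Manager with something like the following sketch; the resource and variable names are placeholders, not part of the module:
variable "fleet_license_key" {
  type      = string
  sensitive = true
}

# Illustrative secret holding the Fleet Premium license key.
resource "aws_secretsmanager_secret" "fleet_license" {
  name = "fleet-license-key"
}

resource "aws_secretsmanager_secret_version" "fleet_license" {
  secret_id     = aws_secretsmanager_secret.fleet_license.id
  secret_string = var.fleet_license_key
}
The secret's ARN would then replace the placeholder in extra_secrets, e.g. FLEET_LICENSE_KEY = aws_secretsmanager_secret.fleet_license.arn.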
Addons
Similar to using the root module, it is recommended to at least include the migration addon module to make it easier to upgrade Fleet in the future and to get the initial migrations in place. This adds the following:
module "migrations" {
source = "github.com/fleetdm/fleet//terraform/addons/migrations?ref=tf-mod-addon-migrations-v1.0.0"
ecs_cluster = module.byo-vpc.byo-db.byo-ecs.service.cluster
task_definition = module.byo-vpc.byo-db.byo-ecs.task_definition.family
task_definition_revision = module.byo-vpc.byo-db.byo-ecs.task_definition.revision
subnets = module.byo-vpc.byo-db.byo-ecs.service.network_configuration[0].subnets
security_groups = module.byo-vpc.byo-db.byo-ecs.service.network_configuration[0].security_groups
}
All addons at the time of this writing are compatible with the BYO-VPC module. If examples reference resources from the BYO-Nothing/root module in the format of module.main.byo-vpc..., simply omit .main so they look like the references in the migrations example above.
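For example, an ECS cluster reference would be adjusted as follows:
# Reference as written for the root (BYO-Nothing) module:
#   module.main.byo-vpc.byo-db.byo-ecs.service.cluster

# Equivalent reference when using the BYO-VPC module directly:
#   module.byo-vpc.byo-db.byo-ecs.service.cluster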
Bringing it all together
This is what a complete configuration would look like:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

locals {
  fleet_domain_name = "fleet.<your_domain>.com"
  fleet_image       = "fleetdm/fleet:v4.36.0"
}

module "acm" {
  source  = "terraform-aws-modules/acm/aws"
  version = "4.3.1"

  domain_name         = local.fleet_domain_name
  zone_id             = aws_route53_zone.main.id
  wait_for_validation = true
}

resource "aws_route53_zone" "main" {
  name = local.fleet_domain_name
}

resource "aws_route53_record" "main" {
  zone_id = aws_route53_zone.main.id
  name    = local.fleet_domain_name
  type    = "A"

  alias {
    name                   = module.byo-vpc.byo-db.alb.lb_dns_name
    zone_id                = module.byo-vpc.byo-db.alb.lb_zone_id
    evaluate_target_health = true
  }
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.18.1"

  name = "fleet-vpc"
  cidr = "10.10.0.0/16"

  azs                 = ["us-east-2a", "us-east-2b", "us-east-2c"]
  private_subnets     = ["10.10.1.0/24", "10.10.2.0/24", "10.10.3.0/24"]
  public_subnets      = ["10.10.11.0/24", "10.10.12.0/24", "10.10.13.0/24"]
  database_subnets    = ["10.10.21.0/24", "10.10.22.0/24", "10.10.23.0/24"]
  elasticache_subnets = ["10.10.31.0/24", "10.10.32.0/24", "10.10.33.0/24"]

  create_database_subnet_group          = false
  create_database_subnet_route_table    = true
  create_elasticache_subnet_group       = true
  create_elasticache_subnet_route_table = true
  enable_vpn_gateway                    = false
  one_nat_gateway_per_az                = false
  single_nat_gateway                    = true
  enable_nat_gateway                    = true

  enable_flow_log                           = false
  create_flow_log_cloudwatch_log_group      = false
  create_flow_log_cloudwatch_iam_role       = false
  flow_log_max_aggregation_interval         = null
  flow_log_cloudwatch_log_group_name_prefix = null
  flow_log_cloudwatch_log_group_name_suffix = null
  vpc_flow_log_tags                         = {}

  enable_dns_hostnames = false
  enable_dns_support   = true
}

module "byo-vpc" {
  source = "github.com/fleetdm/fleet//terraform/byo-vpc?ref=tf-mod-byo-vpc-v1.4.0"

  vpc_config = {
    vpc_id = module.vpc.vpc_id
    networking = {
      subnets = module.vpc.private_subnets
    }
  }

  rds_config = {
    subnets = module.vpc.database_subnets
  }

  redis_config = {
    subnets                       = module.vpc.elasticache_subnets
    elasticache_subnet_group_name = module.vpc.elasticache_subnet_group_name
    availability_zones            = module.vpc.azs
  }

  alb_config = {
    subnets         = module.vpc.public_subnets
    certificate_arn = module.acm.acm_certificate_arn
  }

  fleet_config = {
    image = local.fleet_image
    extra_secrets = {
      // FLEET_LICENSE_KEY: "secret_manager_license_key_arn"
    }
  }
}

module "migrations" {
  source                   = "github.com/fleetdm/fleet//terraform/addons/migrations?ref=tf-mod-addon-migrations-v1.0.0"
  ecs_cluster              = module.byo-vpc.byo-db.byo-ecs.service.cluster
  task_definition          = module.byo-vpc.byo-db.byo-ecs.task_definition.family
  task_definition_revision = module.byo-vpc.byo-db.byo-ecs.task_definition.revision
  subnets                  = module.byo-vpc.byo-db.byo-ecs.service.network_configuration[0].subnets
  security_groups          = module.byo-vpc.byo-db.byo-ecs.service.network_configuration[0].security_groups
}
Since the VPC must exist before the BYO-VPC module can be applied, and the BYO-VPC module has to create the other resources before the migrations can run, we will need to use targeted applies as follows:
terraform init
terraform apply -target=module.vpc
terraform apply -target=module.byo-vpc
terraform apply
The BYO-VPC configuration can be fully customized, similar to the Terraform root module. See the BYO-VPC reference for a full list of available variables.