# Terraform for the Fleet Demo Environment

This folder holds the infrastructure code for Fleet's demo environment.

This readme is intended for infrastructure developers. If you aren't an infrastructure developer, please see https://sandbox.fleetdm.com/openapi.json for documentation.

## Instance state machine

```
provisioned -> unclaimed -> claimed -> [destroyed]
```

- provisioned means an instance was created with `terraform apply`, but no installers have been generated for it yet.
- unclaimed means it's ready for a customer.
- claimed means it's already in use by a customer.
- [destroyed] isn't a state you'll see in DynamoDB; it means everything has been torn down.
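To see how instances are distributed across these states, the lifecycle table can be scanned directly (a sketch that reuses the `sandbox-prod-lifecycle` table and `jq` patterns from the maintenance commands below):

```sh
# Count Sandbox instances per lifecycle state (provisioned/unclaimed/claimed)
aws dynamodb scan --table-name sandbox-prod-lifecycle \
  | jq -r '.Items[].State.S' | sort | uniq -c
```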

## Bugs

1. module.shared-infrastructure.kubernetes_manifest.targetgroupbinding is sometimes buggy; if it causes issues, just comment it out.
2. On a fresh apply, module.shared-infrastructure.aws_acm_certificate.main has to be targeted first, then a normal apply can follow (see the example after this list).
3. If errors happen, check whether applying again fixes them.
4. There is a secret for Apple signing whose values are not provided by this code. If you destroy/apply this secret, it will have to be filled in manually.
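For bug 2, the fresh-apply sequence looks like this:

```sh
# Create the ACM certificate first, then apply the rest of the environment
terraform apply -target=module.shared-infrastructure.aws_acm_certificate.main
terraform apply
```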

## Environment Access

### AWS SSO Console

1. You will need to be in the group "AWS Sandbox Prod Admins" in the Fleet Google Workspace.
2. From Google Apps, select "AWS SSO".
3. Under "AWS Account", select "Fleet Cloud Sandbox Prod".
4. Choose "Management console" under "SandboxProdAdmins".

### AWS CLI Access

1. Add the following to your `~/.aws/config`:

   ```ini
   [profile sandbox_prod]
   region = us-east-2
   sso_start_url = https://d-9a671703a6.awsapps.com/start
   sso_region = us-east-2
   sso_account_id = 411315989055
   sso_role_name = SandboxProdAdmins
   ```
2. Log in to SSO on the CLI via `aws sso login --profile=sandbox_prod` (you can verify the session with the check below this list).
3. To use this profile automatically, `export AWS_PROFILE=sandbox_prod`.
4. For more help with AWS SSO configuration, see https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html
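To confirm the profile works, a standard STS identity check (not specific to this environment):

```sh
# Should report an assumed SandboxProdAdmins role in account 411315989055
aws sts get-caller-identity --profile sandbox_prod
```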

### VPN Access

You will need to be in the proper group in the Fleet Google Workspace to access this environment. Access will "just work" once you're added.

### Database Access

If you need to access the MySQL database backing Fleet Cloud Sandbox, do the following:

1. Obtain the database hostname:

   ```sh
   aws rds describe-db-clusters --filter Name=db-cluster-id,Values=sandbox-prod --query "DBClusters[0].Endpoint" --output=text
   ```

2. Obtain the database master username:

   ```sh
   aws rds describe-db-clusters --filter Name=db-cluster-id,Values=sandbox-prod --query "DBClusters[0].MasterUsername" --output=text
   ```

3. Obtain the database master password secret name (terraform adds a pet name to the secret, so we can obtain it from state data):

   ```sh
   terraform show -json | jq -r '.values.root_module.child_modules[].resources | flatten | .[] | select(.address == "module.shared-infrastructure.aws_secretsmanager_secret.database_password_secret").values.name'
   ```

4. Obtain the database master password:

   ```sh
   aws secretsmanager get-secret-value --secret-id "$(terraform show -json | jq -r '.values.root_module.child_modules[].resources | flatten | .[] | select(.address == "module.shared-infrastructure.aws_secretsmanager_secret.database_password_secret").values.name')" --query "SecretString" --output text
   ```

5. TL;DR: put it all together to get into MySQL. Just copy-paste the block below if you want the credentials without understanding where they come from:

   ```sh
   DBPASSWORD="$(aws secretsmanager get-secret-value --secret-id "$(terraform show -json | jq -r '.values.root_module.child_modules[].resources | flatten | .[] | select(.address == "module.shared-infrastructure.aws_secretsmanager_secret.database_password_secret").values.name')" --query "SecretString" --output text)"
   # A here-string is used because `... | read DBHOST DBUSER` would run read in a
   # subshell under bash, and the variables would be lost.
   read -r DBHOST DBUSER <<< "$(aws rds describe-db-clusters --filter Name=db-cluster-id,Values=sandbox-prod --query "DBClusters[0].[Endpoint,MasterUsername]" --output=text)"
   mysql -h"${DBHOST}" -u"${DBUSER}" -p"${DBPASSWORD}"
   ```

## Maintenance commands

### Refresh Fleet instances

```sh
# Uninstall each unclaimed release and remove its lifecycle entry
for i in $(aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | select(.State.S == "unclaimed") | .ID.S'); do
  helm uninstall "$i"
  aws dynamodb delete-item --table-name sandbox-prod-lifecycle --key "{\"ID\": {\"S\": \"${i}\"}}"
done
```

### Cleanup instances that are running but not tracked

```sh
# The DynamoDB scan runs twice on purpose: tracked IDs then appear at least twice
# in the combined list, so `uniq -u` keeps only helm releases with no lifecycle entry.
for i in $( (
  aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'
  aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'
  helm list | tail -n +2 | cut -f 1
) | sort | uniq -u); do
  helm uninstall "$i"
done
```

### Cleanup instances that failed to provision

```sh
# Uninstall each release stuck in the provisioned state and remove its lifecycle entry
for i in $(aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | select(.State.S == "provisioned") | .ID.S'); do
  helm uninstall "$i"
  aws dynamodb delete-item --table-name sandbox-prod-lifecycle --key "{\"ID\": {\"S\": \"${i}\"}}"
done
```

### Cleanup untracked instances fully

This needs to be run in the deprovisioner terraform directory!

```sh
# Same double-scan trick as above: `uniq -u` keeps only terraform workspaces that
# have no lifecycle entry. Destroy each one, then delete its workspace.
for i in $( (
  aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'
  aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'
  terraform workspace list | sed 's/ //g' | grep -v '.*default' | sed '/^$/d'
) | sort | uniq -u); do
  (terraform workspace select "$i" && terraform apply -destroy -auto-approve && terraform workspace select default && terraform workspace delete "$i")
  [ $? = 0 ] || break
done
```

## Runbooks

### 5xx errors

If you are seeing 5xx errors, find out which instance they're coming from via the saved query here: https://us-east-2.console.aws.amazon.com/athena/home?region=us-east-2#/query-editor. Make sure you set the workgroup to sandbox-prod-logs, otherwise you won't be able to see the saved query.
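If you prefer the CLI, the saved queries in that workgroup can also be listed with the standard Athena commands (a sketch; only the workgroup name comes from this runbook):

```sh
# List saved (named) query IDs in the workgroup, then fetch one by ID
aws athena list-named-queries --work-group sandbox-prod-logs
aws athena get-named-query --named-query-id <id-from-previous-output>
```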

You can also see errors via the target groups here: https://us-east-2.console.aws.amazon.com/ec2/v2/home?region=us-east-2#TargetGroups:
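Target health can be checked from the CLI as well (a sketch using standard ELBv2 commands; the ARN placeholder is whichever target group you're investigating):

```sh
# Find target group ARNs, then inspect the health of the targets behind one
aws elbv2 describe-target-groups --query "TargetGroups[].TargetGroupArn" --output text
aws elbv2 describe-target-health --target-group-arn <target-group-arn>
```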

### Fleet Logs

Fleet logs can be accessed via kubectl. Set up kubectl by following these instructions: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html#create-kubeconfig-automatically. Examples:

```sh
# Obtain kubeconfig
aws eks update-kubeconfig --region us-east-2 --name sandbox-prod
# List pods (we currently use the default namespace). Search the output for the
# instance you want; there will be 2 Fleet pods plus a migrations pod.
kubectl get pods
# Obtain logs from all pods for the release. You can also use `--previous` to
# obtain logs from a previous pod crash if desired.
kubectl logs -l release=<instance id>
```

We do not use eksctl, since these resources are managed with Terraform.

### Database debugging

Database debugging is done through the RDS console: https://us-east-2.console.aws.amazon.com/rds/home?region=us-east-2#database:id=sandbox-prod;is-cluster=true. Currently only database metrics are available, because Performance Insights is not available for serverless RDS.
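Those same metrics can be pulled from CloudWatch on the CLI (a sketch; `CPUUtilization` is just one example metric, and the GNU `date` flags need adjusting on macOS):

```sh
# Average CPU utilization for the sandbox-prod cluster over the last hour
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name CPUUtilization \
  --dimensions Name=DBClusterIdentifier,Value=sandbox-prod \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 --statistics Average
```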