# Terraform for the Fleet Demo Environment
This folder holds the infrastructure code for Fleet's demo environment.

This readme is intended for infrastructure developers. If you aren't an infrastructure developer, please see https://sandbox.fleetdm.com/openapi.json for API documentation.
## Instance state machine
```
provisioned -> unclaimed -> claimed -> [destroyed]
```
- `provisioned`: the instance was created via `terraform apply`, but no installers were generated yet.
- `unclaimed`: the instance is ready for a customer.
- `claimed`: the instance is already in use by a customer.
- `[destroyed]`: not a state you'll see in DynamoDB; it means everything has been torn down.
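The transitions above can be sketched as a tiny shell helper. This is a minimal illustration only; the real state handling lives in the provisioner code, and `next_state` is a hypothetical name:

```sh
# Sketch of the lifecycle transitions; state names match the values
# stored in the DynamoDB lifecycle table.
next_state() {
  case "$1" in
    provisioned) echo "unclaimed" ;;  # installers generated, ready for a customer
    unclaimed)   echo "claimed" ;;    # handed to a customer
    claimed)     echo "destroyed" ;;  # torn down, row removed from DynamoDB
    *)           echo "unknown" ;;
  esac
}

next_state provisioned   # prints: unclaimed
```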
## Bugs
- `module.shared-infrastructure.kubernetes_manifest.targetgroupbinding` is sometimes buggy; if it causes issues, just comment it out.
- On a fresh apply, `module.shared-infrastructure.aws_acm_certificate.main` will have to be targeted first, then a normal apply can follow.
- If errors happen, check whether applying again fixes them.
- There is a secret for Apple signing whose values are not provided by this code. If you destroy/apply this secret, it will have to be filled in manually.
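On a fresh environment, the targeted-apply workaround from the list above looks like this (standard Terraform `-target` usage, run from this folder):

```sh
# First create just the ACM certificate the rest of the stack depends on,
# then run a normal apply for everything else.
terraform apply -target=module.shared-infrastructure.aws_acm_certificate.main
terraform apply
```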
## Maintenance commands
### Refresh Fleet instances
```sh
for i in $(aws dynamodb scan --table-name sandbox-prod-lifecycle \
    | jq -r '.Items[] | select(.State.S == "unclaimed") | .ID.S'); do
  helm uninstall "$i"
  aws dynamodb delete-item --table-name sandbox-prod-lifecycle \
      --key "{\"ID\": {\"S\": \"${i}\"}}"
done
```
### Clean up instances that are running but not tracked
```sh
# The DynamoDB scan is run twice on purpose: every tracked ID then occurs
# at least twice in the combined output, so `uniq -u` (unique lines only)
# emits just the helm releases that are absent from the lifecycle table.
for i in $( (aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'; \
             aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'; \
             helm list | tail -n +2 | cut -f 1) | sort | uniq -u); do
  helm uninstall "$i"
done
```
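The `sort | uniq -u` trick above can be hard to parse, so here is a self-contained demonstration with canned IDs (no AWS or helm involved). The tracked list is fed in twice, the running list once, so only releases missing from the table survive:

```sh
# Tracked in DynamoDB: a, b.  Running in helm: b, c.
# Feeding the tracked list twice guarantees every tracked ID occurs more
# than once, so `uniq -u` keeps only the running-but-untracked ID.
tracked='a
b'
running='b
c'
printf '%s\n%s\n%s\n' "$tracked" "$tracked" "$running" | sort | uniq -u
# prints: c
```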
### Clean up instances that failed to provision
```sh
for i in $(aws dynamodb scan --table-name sandbox-prod-lifecycle \
    | jq -r '.Items[] | select(.State.S == "provisioned") | .ID.S'); do
  helm uninstall "$i"
  aws dynamodb delete-item --table-name sandbox-prod-lifecycle \
      --key "{\"ID\": {\"S\": \"${i}\"}}"
done
```
## TODOs
- JITProvisioner needs to return proper errors
- Create and use a separate KMS key for installers
- Set sane scale levels for prod
- Allow parallel spinup of sandbox instances (preprovisioner)
- Run `FLUSHDB` (https://redis.io/commands/flushdb/) during the teardown process
- Name state machines something random and track the new name in DynamoDB
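The Redis TODO above would presumably boil down to a call like the following during teardown. The host and database number are placeholders, not values from this code; `FLUSHDB` clears only the selected database, not the whole server:

```sh
# Flush the sandbox instance's Redis database during teardown.
# example-redis-host and the -n 0 database index are hypothetical.
redis-cli -h example-redis-host -n 0 flushdb
```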