* Add sentry
* Fix gosum
* More gosum fixes
* Add missing def for config
* Enrich sentry scope a bit
* Add changes file
* Add goroutine safe scope to errors
* Encapsulate sentry logic
* Add documentation for new flag
* Add sentry capturing to crons and other background tasks
* Only send to sentry when enabled
This PR implements the status/result logger functions necessary to interface with a Kafka REST Proxy service.
Specifically, it is compatible with the [Confluent Kafka REST Proxy](https://docs.confluent.io/1.0/kafka-rest/docs/intro.html).
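A minimal sketch of how this could be wired up in the Fleet config (the `kafkarest` plugin name and its sub-keys below are assumptions for illustration; consult the configuration docs for the final names):

```yaml
osquery:
  status_log_plugin: kafkarest   # assumed plugin name
  result_log_plugin: kafkarest
kafkarest:
  proxyhost: https://kafka-rest.example.com:8082   # hypothetical proxy address
  status_topic: fleet_status_logs                  # hypothetical topic names
  result_topic: fleet_result_logs
```

The logger would then POST batched log lines to the proxy's `/topics/<topic>` endpoint, as described in the Confluent documentation linked above.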
* Add max jitter percent config
* Fix jitter calc
* Remove comment
* Reduce test jitter to make tests less flaky
* Remove jitter entirely
* Document new config
* Fix doc link
* Rename core->free and basic->premium
* Fix lint js
* Comment out portion of test that seems to timeout
* Rename tier to premium if basic is still loaded
* update .gitattributes to be explicit about line endings with regards to the test certs
* update building-fleet guide to include python2 dependency on windows
* update configuration to default to OS specific temporary directories
- Add config option for license key.
- Define license details data structure.
- Include license details in app config API responses.
Currently any non-empty value for `--license_key` behaves as though the
installation is licensed for `basic`. If the license key is empty,
`core` is returned.
Still to come is the appropriate parsing for the license key.
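A minimal sketch of setting the key (assuming the `--license_key` flag maps to a `license.key` YAML setting; the value is a placeholder):

```yaml
license:
  # Any non-empty value currently behaves as a `basic` license;
  # empty/unset yields `core`. Real key parsing is still to come.
  key: <license-key>
```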
This feature enables a new config option (`redis.duplicate_results`). When set to `true`, all Live Query results will be copied to an additional Redis pubsub channel named `LQDuplicate`.
This is useful in scenarios that involve shipping Live Query results outside of Fleet in near real time.
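For example:

```yaml
redis:
  address: 127.0.0.1:6379
  # Copy all Live Query results to the extra `LQDuplicate` pubsub channel
  duplicate_results: true
```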
Add a config setting to allow copying message fields and decorations into Google Pub/Sub attributes, making it possible to use these values for subscription filters.
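A sketch of the shape this setting might take (the `add_attributes` key is an assumed name for illustration; the other keys mirror a typical Pub/Sub logger setup):

```yaml
pubsub:
  project: my-gcp-project        # hypothetical project ID
  status_topic: fleet-status     # hypothetical topic names
  result_topic: fleet-results
  # Assumed option name: copy message fields and decorations into
  # Pub/Sub attributes for use in subscription filters.
  add_attributes: true
```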
The enrollment cooldown period was sometimes causing problems when
osquery (probably unintentionally, see
https://github.com/osquery/osquery/issues/6993) tried to enroll more
than once from the same osqueryd process.
The cooldown now defaults to off and is configurable. With #417, this
feature may be unnecessary for most deployments.
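As a sketch, assuming the cooldown surfaces as an `osquery.enroll_cooldown` duration setting (name assumed for illustration):

```yaml
osquery:
  # 0 disables the cooldown (the new default); a non-zero duration
  # rejects repeated enrollment attempts within that window.
  enroll_cooldown: 0
```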
Osquery now exposes more information during host enrollment than Fleet
previously handled. We can use this to provide more options to users in
problematic enrollment scenarios.
Users can configure `--osquery_host_identifier` in Fleet to set which
identifier is used to determine uniqueness of hosts. The
default (`provided`) replicates existing behavior in Fleet. For many
users, setting this to `instance` will provide better enrollment
stability.
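For example, to determine host uniqueness by osquery instance ID instead of the provided identifier (mapping the flag to its YAML form):

```yaml
osquery:
  # One of: provided (default, replicates existing behavior) or instance
  host_identifier: instance
```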
Closes #373
In #212 these settings were updated and caused connectivity issues for
users in common environment configurations. The new changes are
aggressive (`modern` enforces TLS 1.3), and Mozilla indicates that
`intermediate` is an appropriate default. This will ensure better
compatibility for common deployments while still allowing the option to
use the strictest settings.
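A sketch of the resulting setting (assuming it is exposed as `server.tls_compatibility`, mirroring the flag naming convention):

```yaml
server:
  # intermediate (the default, per Mozilla's recommendation) keeps broad
  # client compatibility; modern enforces TLS 1.3 and is stricter.
  tls_compatibility: intermediate
```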
Document unintentional mismatched yaml key.
Fixes #269
The current implementation of FleetDM doesn't support Docker secrets for supplying the MySQL password and JWT key. This PR adds the ability to read these secrets from file paths, so they don't have to be stored in a static config or in an environment variable.
Example config for Docker:
```yaml
mysql:
  address: mysql:3306
  database: fleet
  username: fleet
  password_path: /run/secrets/mysql-fleetdm-password
redis:
  address: redis:6379
server:
  address: 0.0.0.0:8080
  cert: /run/secrets/fleetdm-tls-cert
  key: /run/secrets/fleetdm-tls-key
auth:
  jwt_key_path: /run/secrets/fleetdm-jwt-key
filesystem:
  status_log_file: /var/log/osquery/status.log
  result_log_file: /var/log/osquery/result.log
  enable_log_rotation: true
logging:
  json: true
```
This adds the option to set up an S3 bucket as the storage backend for file carving (partially solving #111).
It works by using the multipart upload capabilities of S3 to maintain compatibility with the "upload in blocks" protocol that osquery uses. It does this by essentially replacing the carve_blocks table while still maintaining the metadata in its original place (it would probably be possible to rely completely on S3 by using object tagging, at the cost of listing performance). To make this pluggable, I created a new field in the service struct dedicated to the CarveStore: if no S3 configuration is set up, it is simply a reference to the standard datastore; otherwise it points to the S3 one (this separation will make it possible to add more backends in the future).
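A sketch of what enabling the S3 backend might look like (key names and values below are assumptions for illustration):

```yaml
s3:
  bucket: fleet-file-carves    # hypothetical bucket name
  prefix: carves/              # optional key prefix within the bucket
  access_key_id: <access-key-id>       # omit to fall back to the default
  secret_access_key: <secret-key>      # AWS credential resolution chain
```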