Osquery now exposes more information during host enrollment than Fleet
previously handled. We can use this to provide more options to users in
problematic enrollment scenarios.
Users can configure `--osquery_host_identifier` in Fleet to set which
identifier is used to determine uniqueness of hosts. The default
(`provided`) replicates the existing behavior in Fleet. For many users,
setting this to `instance` will provide better enrollment stability.
Closes #373
Since the original logic was implemented, there have been some changes
in the way that config refreshes are configured. This commit reflects
those changes and should be backwards compatible.
Closes #357
Somewhere around osquery 4.4.0 these messages were added to query
responses. We can now expose them to the API clients rather than using
the placeholder text.
Required for #192
Label membership is now stored in the label_membership table. This is
done in preparation for adding "manual" labels, as previously label
membership was associated directly with label query executions.
Label queries are now all executed at the same time, rather than on
separate intervals. This simplifies the calculation of which distributed
queries need to be run when a host checks in.
This change optimizes live queries by pushing the computation of query
targets to the creation time of the query, and efficiently caching the
targets in Redis. This results in a huge performance improvement at both
steady-state, and when running live queries.
- Live query targets are stored using a bitfield in Redis, taking advantage
of bitfield operations to be extremely efficient (see the sketch after this
list)
- Only run Redis live query test when REDIS_TEST is set in environment
- Ensure that live queries are only sent to hosts when there is a client
listening for results. Addresses an existing issue in Fleet along with
appropriate cleanup for the refactored live query backend.
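For illustration, the bitfield idea looks roughly like this (a minimal sketch
using the redigo client; the key naming and the use of the host ID as the bit
offset are assumptions, not Fleet's exact implementation):
```
package main

import (
	"fmt"

	"github.com/gomodule/redigo/redis"
)

// markTarget records that the host with hostID is a target of the given
// campaign by setting a single bit, so a campaign's whole target set costs
// roughly one bit per host.
func markTarget(conn redis.Conn, campaignID, hostID uint) error {
	key := fmt.Sprintf("livequery:%d", campaignID)
	_, err := conn.Do("SETBIT", key, hostID, 1)
	return err
}

// isTarget reports whether the host should receive the campaign's query.
func isTarget(conn redis.Conn, campaignID, hostID uint) (bool, error) {
	key := fmt.Sprintf("livequery:%d", campaignID)
	bit, err := redis.Int(conn.Do("GETBIT", key, hostID))
	return bit == 1, err
}

func main() {
	conn, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	if err := markTarget(conn, 42, 7); err != nil {
		panic(err)
	}
	targeted, err := isTarget(conn, 42, 7)
	if err != nil {
		panic(err)
	}
	fmt.Println(targeted) // true
}
```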
Fleet used significant resources storing the full network interface
information for each host. This data was unused, except to get the
IP and MAC of the primary interface. With these changes, only those
pieces of data are stored.
- Calculate and store primary IP and MAC
- Remove transaction for storing full interfaces
- Update targets search to use new IP and MAC columns
- Update frontend to use the new columns
Additional information is collected when host details are updated using
the queries specified in the Fleet configuration. This additional
information is then available in the host API responses.
- Add toggle to disable live queries in advanced settings
- Add new live query status endpoint (checks for disabled via config and Redis health; sketched after this list)
- Update QueryPage UI to use new live query status endpoint
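A rough sketch of what such a status check could look like (using the redigo
client; the names and wiring here are illustrative, not the actual Fleet
handler):
```
package service

import (
	"errors"
	"fmt"

	"github.com/gomodule/redigo/redis"
)

// liveQueryStatus is a hypothetical health check: live queries are reported
// unavailable when they are disabled in the app config or when Redis does
// not answer a PING.
func liveQueryStatus(disabled bool, pool *redis.Pool) error {
	if disabled {
		return errors.New("live queries are disabled in organization settings")
	}
	conn := pool.Get()
	defer conn.Close()
	if _, err := conn.Do("PING"); err != nil {
		return fmt.Errorf("checking redis health: %w", err)
	}
	return nil
}
```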
Implements #2140
- Add logging for new campaigns
- Add logging for new query creations/modification/deletion
- Add usernames to logs for labels, options, packs, osquery options, queries, and scheduled queries when something is created, modified, or deleted
Adds Google Cloud PubSub logging for status and results.
This also changes the Write interface for logging modules to add a context.Context (only used by pubsub currently).
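For illustration, a writer along these lines might look as follows (a minimal
sketch using cloud.google.com/go/pubsub; the interface name and exact
signature are assumptions, not necessarily Fleet's):
```
package logging

import (
	"context"
	"encoding/json"

	"cloud.google.com/go/pubsub"
)

// Writer is a sketch of a log destination whose Write now takes a context.
type Writer interface {
	Write(ctx context.Context, logs []json.RawMessage) error
}

// pubSubWriter publishes each log line as one Pub/Sub message.
type pubSubWriter struct {
	topic *pubsub.Topic
}

func (w *pubSubWriter) Write(ctx context.Context, logs []json.RawMessage) error {
	for _, log := range logs {
		res := w.topic.Publish(ctx, &pubsub.Message{Data: log})
		if _, err := res.Get(ctx); err != nil {
			return err
		}
	}
	return nil
}
```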
When an osqueryd agent sends an enroll request it automatically sends
some details about the system. We now save these details which helps
ensure we send the correct platform config.
Closes #2065
- Refactor configuration for logging to use separate plugins
- Move existing filesystem logging to filesystem plugin
- Create new AWS firehose plugin
- Update documentation around logging
Almost two years ago, we began referring to the project as Fleet, but there are
many occurrences of the term "Kolide" throughout the UI and documentation. This
PR attempts to clear up those uses where it is easily achievable.
The term "Kolide" is used throughout the code as well, but modifying this would
be more likely to introduce bugs.
Previously decorators were stored in a separate table. Now they are stored
directly with the config so that they can be modified on a per-platform basis.
Delete now unused decorators code.
Instead of trying to decode and re-encode status logs, we now write them directly as they come in.
This change prevents future changes to the osquery status log file format (addition and deletion of fields) from
affecting Fleet. A similar change was implemented in #1636 for result logs.
Closes #1664
Initially Fleet decoded the incoming JSON sent to the log endpoint.
Then the log event would be written to a log writer by calling json.Encoder{}.Encode.
Re-encoding logs is lossy: whenever osqueryd sends a new field, we fail to pass it along.
Instead of caring about the content of the OsqueryResultLog, Fleet will now write all log results
exactly as sent to the server by osqueryd.
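A simplified sketch of the pass-through approach (illustrative, not Fleet's
exact code):
```
package logging

import (
	"encoding/json"
	"io"
)

// writeRawLogs decodes the request body only far enough to split it into
// individual log lines, then writes each line exactly as osqueryd sent it,
// so unknown fields are preserved.
func writeRawLogs(dst io.Writer, body io.Reader) error {
	var logs []json.RawMessage
	if err := json.NewDecoder(body).Decode(&logs); err != nil {
		return err
	}
	for _, log := range logs {
		if _, err := dst.Write(append(log, '\n')); err != nil {
			return err
		}
	}
	return nil
}
```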
Closes #1632, closes #1615
This PR adds support for file integrity monitoring. This is done by providing a simplified API that can be used to PATCH/GET FIM configurations. There is also code to build the FIM configuration to send back to osquery. Each PATCH request, if successful, replaces Fleet's existing FIM configuration. For example:
curl -X "PATCH" "https://localhost:8080/api/v1/kolide/fim" \
-H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzZXNzaW9uX2tleSI6IkVhaFhvZWswMGtWSEdaTTNCWndIMnhpYWxkNWZpcVFDR2hEcW1HK2UySmRNOGVFVE1DeTNTaUlFWmhZNUxhdW1ueFZDV2JiR1Bwdm5TKzdyK3NJUzNnPT0ifQ.SDCHAUA1vTuWGjXtcQds2GZLM27HAAiOUhR4WvgvTNY" \
-H "Content-Type: application/json; charset=utf-8" \
-d $'{
  "interval": 500,
  "file_paths": {
    "etc": [
      "/etc/%%"
    ],
    "users": [
      "/Users/%/Library/%%",
      "/Users/%/Documents/%%"
    ],
    "usr": [
      "/usr/bin/%%"
    ]
  }
}'
Fix for issue where osquery sends empty strings where we expect integers in detail queries. We handle empty strings in these cases by changing them to "0" and then letting the different conversion functions change the "0" string into the appropriate integer type. This has been tested against running osquery hosts.
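A sketch of the substitution (the helper names here are hypothetical):
```
package detail

import "strconv"

// emptyToZero substitutes "0" for the empty strings osquery sometimes
// returns where an integer is expected.
func emptyToZero(val string) string {
	if val == "" {
		return "0"
	}
	return val
}

// parseUint is an example of a conversion that tolerates empty values.
func parseUint(val string) (uint, error) {
	n, err := strconv.ParseUint(emptyToZero(val), 10, 32)
	return uint(n), err
}
```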
Closes #1521
Closes issue #1475
The command-line tool that uses this endpoint: https://github.com/kolide/configimporter
* Added support for atomic imports and dry run imports
* Added code so that imports are idempotent
We now track the `config_tls_refresh`, `distributed_interval` and
`logger_tls_period` flag values for each host. Each value is updated by a
detail query against the `osquery_flags` table, because they may be specified
outside of Kolide. The flags that can be specified within Kolide are also
updated when a config is returned to the host that changes their value.
This will enable us to do a more accurate per-host online status calculation as
discussed in #1419.
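A sketch of what the detail query and its ingestion can look like (the query
text and names are illustrative, not necessarily the exact ones used):
```
package detail

import "strconv"

// flagsQuery asks the agent which intervals it is actually using, since the
// flags may have been set outside of Kolide.
const flagsQuery = `
SELECT name, value FROM osquery_flags
WHERE name IN ('config_tls_refresh', 'distributed_interval', 'logger_tls_period')
`

type hostFlags struct {
	ConfigTLSRefresh    uint
	DistributedInterval uint
	LoggerTLSPeriod     uint
}

// ingestFlags folds the query rows into the tracked per-host values.
func ingestFlags(rows []map[string]string) hostFlags {
	var flags hostFlags
	for _, row := range rows {
		// Empty or malformed values fall back to zero.
		val, _ := strconv.ParseUint(row["value"], 10, 32)
		switch row["name"] {
		case "config_tls_refresh":
			flags.ConfigTLSRefresh = uint(val)
		case "distributed_interval":
			flags.DistributedInterval = uint(val)
		case "logger_tls_period":
			flags.LoggerTLSPeriod = uint(val)
		}
	}
	return flags
}
```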
Made log rotation for osquery result and status logs optional. This required writing the logwriter package, which is a drop-in replacement for lumberjack. We still use lumberjack if the --osquery_enable_log_rotation flag is set. Note that the performance of the default is quite a bit better than lumberjack:
BenchmarkLogger-8 2000000 747 ns/op
BenchmarkLumberjack-8 1000000 1965 ns/op
PASS
BenchmarkLogger-8 2000000 731 ns/op
BenchmarkLumberjack-8 1000000 2040 ns/op
PASS
BenchmarkLogger-8 2000000 741 ns/op
BenchmarkLumberjack-8 1000000 1970 ns/op
PASS
BenchmarkLogger-8 2000000 737 ns/op
BenchmarkLumberjack-8 1000000 1930 ns/op
PASS
The seen time should only be updated once per request from the osquery agent to
the Kolide server. We now do that only in AuthenticateHost (which every request
besides enrollment must go through).
Return `accelerate: 10` with distributed queries if we do not have host
details. This facilitates the host quickly joining all expected labels, as
`platform` gated label queries will not be returned until the detail queries
return with the platform.
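For reference, the distributed read response can carry the accelerate hint
alongside the queries; roughly this shape (an illustrative sketch, not
Fleet's exact type):
```
package service

// distributedReadResponse sketches the response returned to osqueryd on a
// distributed read. Setting Accelerate asks the agent to check in more
// frequently for the given number of seconds.
type distributedReadResponse struct {
	Queries    map[string]string `json:"queries"`
	Accelerate uint              `json:"accelerate,omitempty"`
}
```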
Fixes #1421.
Notable refactoring:
- Use stdlib "context" in place of "golang.org/x/net/context"
- Go-kit no longer wraps errors, so we remove the unwrap in transport_error.go
- Use MakeHandler when setting up endpoint tests (fixes test bug caught during
this refactoring)
Closes #1411.
Ensure that host network interfaces do not disappear when they (unexpectedly)
are returned with no updates from osquery. Add test to verify.
Fixes #1278
Previously we were using `build_platform`, which does not always properly
reflect the platform of the host running osquery. Now we should properly
retrieve the platform.
Fixes #1264
Saving a new detail update time when the host details were not actually updated
caused detail updates to be missed. This PR fixes the existing test to catch
the bug, and fixes the bug.
As of recently, osquery will report when a distributed query fails. We now
expose errors over the results websocket. When a query errored on the host, the
`error` key in the result will be non-null. Note that osquery currently doesn't
provide any details so the error string will always be "failed". I anticipate
that we will fix this and the string is included for future-proofing.
Successful result:
```
{
"type": "result",
"data": {
"distributed_query_execution_id": 15,
"host": {
... omitted ...
},
"rows": [
{
"hour": "1"
}
],
"error": null
}
}
```
Failed result:
```
{
"type": "result",
"data": {
"distributed_query_execution_id": 14,
"host": {
... omitted ...
},
"rows": [
],
"error": "failed"
}
}
```
This PR separates the table migrations from the data population migrations. Table migrations run before data migrations.
Now, we have the ability to create the database tables without populating them with data. This can be useful for running "unit" tests against a MySQL store that doesn't have any pre-populated data. When performing real migrations, or for more "integration" style testing, the data migrations can also be executed.
Note there are some special cases that must be observed with these migrations, and the README is updated to reflect those.
* Initial scaffolding of the host summary endpoint
* inmem datastore implementation of GenerateHostStatusStatistics
* HostSummary docstring
* changing the url of the host summary endpoint
* datastore tests for GenerateHostStatusStatistics
* MySQL datastore implementation of GenerateHostStatusStatistics
* <= and >= to catch exact time edge case
* removing clock interface method
* lowercase error wraps
* removing superfluous whitespace
* use updated_at
* adding a seen_at column to the hosts table
* moving the update of seen_time to the caller
* using db.Get instead of db.Select
A distributed query campaign can be "orphaned" (left in the QueryRunning state)
if the Kolide server restarts while it is running, or other weirdness occurs.
When this happens, no subscribers are waiting to read results written by
osqueryd agents, but the agents continue to receive the query. Previously, this
would cause us to error on ingestion.
The new behavior will instead set the campaign to completed when it detects
that it is orphaned. This should prevent sending queries for which there is no
subscriber.
- New NoSubscriber error interface in pubsub (sketched below)
- Detect NoSubscriber errors and close campaigns
- Tests on pubsub and service methods
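A sketch of the error-detection pattern (the interface and function names
are illustrative):
```
package pubsub

// noSubscriberError sketches an error that identifies results written for a
// campaign nobody is listening to.
type noSubscriberError interface {
	error
	NoSubscriber() bool
}

// isNoSubscriber lets the ingestion path decide whether to close the
// orphaned campaign instead of treating the write as a failure.
func isNoSubscriber(err error) bool {
	ns, ok := err.(noSubscriberError)
	return ok && ns.NoSubscriber()
}
```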
Fixes #695
* Moving query attributes from the query object to the pack-query relationship
* some additional tests
* http request parsing test
* QueryOptions in new test_util code
* initial scaffolding of new request structures
* service and datastore
* test outline
* l2 merge conflict scrub
* service tests for scheduled query service
* service and datastore tests
* most endpoints and transports
* order of values is not deterministic with inmem
* transport tests
* rename PackQuery to ScheduledQuery
* removing existing implementation of adding queries to packs
* accounting for the new argument to NewQuery
* fix alignment in sql query
* removing underscore
* add removed to the datastore
* removed differential from the schema
* Adds created_by attribute to packs
This PR also updates the distributed query code to use the pattern
established here (service checks context)
* add enable/disable state to packs
* add query_count to packs API responses
* add host_count to packs API responses (very, very poorly)
* pack description should not be required
* counting hosts in packs via mysql
* removing extraneous newline in test
* Switch case instead of if/if else
* add description to update query for SavePack method
* change AND to WHERE in query as per @zwass
* add ordering and list options as per @murphybytes' suggestion
Removed Gorm, replaced it with Sqlx
* Added SQL bundling command to Makefile
* Using go-kit logger
* Added soft delete capability
* Changed SearchLabel to accept a variadic param for optional omit list
instead of array
* Gorm removed
* Refactor table structures to use CURRENT_TIMESTAMP mysql function
* Moved Inmem datastore into its own package
* Updated README
* Implemented code review suggestions from @zwass
* Removed reference to Gorm from glide.yaml
- Introduce kolide.ListOptions to store pagination params (in the future it can
also store ordering/filtering params); see the sketch below
- Refactor service/datastore methods to take kolide.ListOptions
- Implement pagination
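The options type is roughly of this shape (a sketch; the field names are
illustrative):
```
package kolide

// ListOptions sketches the pagination parameters threaded through the
// service and datastore list methods.
type ListOptions struct {
	// Page is the zero-indexed page of results to return.
	Page uint
	// PerPage is the number of results per page; zero means no limit.
	PerPage uint
}
```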
- Introduce a new pattern for defining/ingesting detail queries (sketched at the end of this list)
- Add many relevant host details:
- Platform
- osquery Version
- Memory
- Hostname
- UUID
- OS Version
- Uptime
- Primary interface MAC
- Primary interface IP
- Fix parsing for inconsistent JSON schema returned from osquery
- Tests
- Establish a pattern for host authentication
- Establish a pattern for error JSON
- Add transport and make endpoint functions
- Fix discovered bugs + update tests
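To illustrate the detail query pattern referenced above, one way to pair a
query with its ingestion function (a sketch; the Host fields and queries
shown are examples only, not the full set):
```
package detail

import (
	"strconv"
	"time"
)

// Host holds the example fields ingested below.
type Host struct {
	Hostname string
	Uptime   time.Duration
}

// detailQuery pairs osquery SQL with a function that folds the returned
// rows into the host record.
type detailQuery struct {
	Query      string
	IngestFunc func(host *Host, rows []map[string]string) error
}

// detailQueries maps a query name to its definition.
var detailQueries = map[string]detailQuery{
	"uptime": {
		Query: "SELECT total_seconds FROM uptime LIMIT 1",
		IngestFunc: func(host *Host, rows []map[string]string) error {
			if len(rows) == 0 {
				return nil
			}
			seconds, err := strconv.Atoi(rows[0]["total_seconds"])
			if err != nil {
				return err
			}
			host.Uptime = time.Duration(seconds) * time.Second
			return nil
		},
	},
	"hostname": {
		Query: "SELECT hostname FROM system_info LIMIT 1",
		IngestFunc: func(host *Host, rows []map[string]string) error {
			if len(rows) > 0 {
				host.Hostname = rows[0]["hostname"]
			}
			return nil
		},
	},
}
```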