Around osquery 4.4.0, these messages were added to query responses. We can
now expose them to API clients rather than using the placeholder text.
Required for #192
This adds the option to set up an S3 bucket as the storage backend for file carving (partially solving #111).
It works by using the multipart upload capabilities of S3 to maintain compatibility with the "upload in blocks" protocol that osquery uses. It essentially replaces the carve_blocks table while still maintaining the metadata in the original place (it would probably be possible to rely completely on S3 by using object tagging, at the cost of listing performance). To make this pluggable, I created a new field in the service struct dedicated to the CarveStore: if no S3 configuration is set up, it is simply a reference to the standard datastore; otherwise it points to the S3 store. This separation will allow more backends to be added in the future (see the sketch after the list below).
- Add endpoints for osquery to register and continue a carve.
- Implement client functionality for retrieving carve details and contents in fleetctl.
- Add documentation on using file carving with Fleet.
Addresses kolide/fleet#1714
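A minimal sketch of how this pluggable selection could look. The CarveStore interface, the store types, and the constructor here are illustrative assumptions, not the exact Fleet types:

```go
// Sketch: pluggable carve storage selected at service construction time.
package main

import "fmt"

// CarveStore abstracts where carve blocks are written.
type CarveStore interface {
	PutBlock(carveID, blockID int64, data []byte) error
}

// mysqlDatastore stands in for the standard datastore (carve_blocks table).
type mysqlDatastore struct{}

func (m *mysqlDatastore) PutBlock(carveID, blockID int64, data []byte) error {
	fmt.Printf("INSERT INTO carve_blocks ... (carve %d, block %d)\n", carveID, blockID)
	return nil
}

// s3CarveStore stands in for the S3 backend, which would upload each block
// as one part of an S3 multipart upload.
type s3CarveStore struct{ bucket string }

func (s *s3CarveStore) PutBlock(carveID, blockID int64, data []byte) error {
	fmt.Printf("UploadPart to bucket %q (carve %d, part %d)\n", s.bucket, carveID, blockID+1)
	return nil
}

type service struct {
	ds         *mysqlDatastore
	carveStore CarveStore // points at ds unless S3 is configured
}

func newService(s3Bucket string) *service {
	ds := &mysqlDatastore{}
	svc := &service{ds: ds, carveStore: ds}
	if s3Bucket != "" {
		svc.carveStore = &s3CarveStore{bucket: s3Bucket}
	}
	return svc
}

func main() {
	svc := newService("carve-bucket")
	_ = svc.carveStore.PutBlock(1, 0, []byte("block data"))
}
```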
Getting a single host with `fleetctl get host foobar` will look up the
host with the matching hostname, uuid, osquery identifier, or node key,
and provide the full host details along with the labels the host is a
member of.
Label membership is now stored in the label_membership table. This is
done in preparation for adding "manual" labels, as previously label
membership was associated directly with label query executions.
Label queries are now all executed at the same time, rather than on
separate intervals. This simplifies the calculation of which distributed
queries need to be run when a host checks in.
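A sketch of what the label_membership migration could look like, written as a goose-style Go migration; the function names, column set, and index are assumptions for illustration:

```go
// Sketch of a migration creating the label_membership table, decoupling
// membership from label query executions.
package migrations

import "database/sql"

func Up_LabelMembership(tx *sql.Tx) error {
	_, err := tx.Exec(`
		CREATE TABLE IF NOT EXISTS label_membership (
			host_id    INT UNSIGNED NOT NULL,
			label_id   INT UNSIGNED NOT NULL,
			created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
			updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
			           ON UPDATE CURRENT_TIMESTAMP,
			PRIMARY KEY (host_id, label_id),
			KEY idx_label_membership_label_id (label_id)
		)
	`)
	return err
}

func Down_LabelMembership(tx *sql.Tx) error {
	_, err := tx.Exec(`DROP TABLE IF EXISTS label_membership`)
	return err
}
```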
This commit takes advantage of the existing pagination APIs in the Fleet
server, and provides additional APIs to support pagination in the web
UI. Doing this dramatically reduces the response sizes for requests from
the UI, and limits the performance impact of UI clients on the Fleet and
MySQL servers.
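For illustration, a sketch of how list options might translate into a paginated SQL query; the ListOptions shape and its semantics are assumptions modeled on common page/per_page API conventions:

```go
// Sketch: translating list options into LIMIT/OFFSET.
package main

import "fmt"

type ListOptions struct {
	Page    uint // which page of results to return (0-indexed)
	PerPage uint // results per page; 0 means unlimited (previous behavior)
}

func appendListOptions(query string, opts ListOptions) string {
	if opts.PerPage == 0 {
		return query
	}
	return fmt.Sprintf("%s LIMIT %d OFFSET %d", query, opts.PerPage, opts.Page*opts.PerPage)
}

func main() {
	fmt.Println(appendListOptions("SELECT id, hostname FROM hosts", ListOptions{Page: 2, PerPage: 100}))
	// Output: SELECT id, hostname FROM hosts LIMIT 100 OFFSET 200
}
```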
This change optimizes live queries by pushing the computation of query
targets to the creation time of the query, and efficiently caching the
targets in Redis. This results in a huge performance improvement at both
steady-state, and when running live queries.
- Live queries are stored using a bitfield in Redis, taking advantage of
bitfield operations to be extremely efficient (see the sketch below)
- Only run the Redis live query test when REDIS_TEST is set in the environment
- Ensure that live queries are only sent to hosts when there is a client
listening for results. Addresses an existing issue in Fleet along with
appropriate cleanup for the refactored live query backend.
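A runnable sketch of the bitfield approach using the redigo client. The `livequery:<campaign id>` key name is an assumption; the idea is that bit N of the value is set when host ID N is targeted, so both storage and membership checks are compact and O(1):

```go
// Sketch: bitfield-backed live query targets in Redis.
package main

import (
	"fmt"

	"github.com/gomodule/redigo/redis"
)

func storeQueryTargets(conn redis.Conn, campaignID uint, hostIDs []uint) error {
	key := fmt.Sprintf("livequery:%d", campaignID)
	for _, id := range hostIDs {
		// SETBIT marks host `id` as targeted in O(1).
		if _, err := conn.Do("SETBIT", key, id, 1); err != nil {
			return err
		}
	}
	return nil
}

func hostTargeted(conn redis.Conn, campaignID, hostID uint) (bool, error) {
	key := fmt.Sprintf("livequery:%d", campaignID)
	bit, err := redis.Int(conn.Do("GETBIT", key, hostID))
	return bit == 1, err
}

func main() {
	conn, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	if err := storeQueryTargets(conn, 42, []uint{1, 7, 1000}); err != nil {
		panic(err)
	}
	ok, err := hostTargeted(conn, 42, 7)
	if err != nil {
		panic(err)
	}
	fmt.Println("host 7 targeted:", ok) // true
}
```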
This PR removes unused types, code, DB tables, and associated migrations that are unused since Fleet 2.0.
An existing migration was refactored, and should remain compatible with both existing and new Fleet installations.
- Add toggle to disable live queries in advanced settings
- Add new live query status endpoint (checks for disabled via config and Redis health; see the sketch below)
- Update QueryPage UI to use new live query status endpoint
Implements #2140
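A sketch of what the check behind such a status endpoint might do; the config field name is a hypothetical stand-in for the advanced settings toggle:

```go
// Sketch: live query status = not disabled in config, and Redis answers PING.
package main

import (
	"errors"
	"fmt"

	"github.com/gomodule/redigo/redis"
)

type appConfig struct {
	LiveQueryDisabled bool // hypothetical field set by the advanced settings toggle
}

func liveQueryStatus(cfg appConfig, conn redis.Conn) error {
	if cfg.LiveQueryDisabled {
		return errors.New("live queries disabled in settings")
	}
	// Live queries depend on Redis, so an unanswered PING means unavailable.
	if _, err := conn.Do("PING"); err != nil {
		return fmt.Errorf("redis unavailable: %v", err)
	}
	return nil
}

func main() {
	conn, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	if err := liveQueryStatus(appConfig{}, conn); err != nil {
		fmt.Println("live queries unavailable:", err)
		return
	}
	fmt.Println("live queries ready")
}
```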
- Add logging for new campaigns
- Add logging for query creation, modification, and deletion
- Add usernames to logs for labels, options, packs, osquery options, queries, and scheduled queries whenever something is created, modified, or deleted
When an osqueryd agent sends an enroll request it automatically sends
some details about the system. We now save these details which helps
ensure we send the correct platform config.
Closes #2065
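For reference, a sketch of decoding these enroll-time details. The host_details subsections shown mirror the osquery tables they are read from; the struct itself is illustrative:

```go
// Sketch: decoding the details osqueryd sends with an enroll request.
package main

import (
	"encoding/json"
	"fmt"
)

type enrollRequest struct {
	EnrollSecret   string                       `json:"enroll_secret"`
	HostIdentifier string                       `json:"host_identifier"`
	HostDetails    map[string]map[string]string `json:"host_details"`
}

func main() {
	body := []byte(`{
		"enroll_secret": "secret",
		"host_identifier": "web-1.example.com",
		"host_details": {
			"os_version": {"name": "Ubuntu", "platform": "ubuntu", "major": "18"},
			"osquery_info": {"version": "4.4.0"},
			"system_info": {"hostname": "web-1"}
		}
	}`)

	var req enrollRequest
	if err := json.Unmarshal(body, &req); err != nil {
		panic(err)
	}
	// Saving the platform at enroll time lets the server pick the correct
	// config before the first detail query results arrive.
	fmt.Println("platform:", req.HostDetails["os_version"]["platform"])
}
```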
An incorrect authorization check allowed non-admin users to modify the details of other users. We now enforce the appropriate authorization so that unprivileged users can only modify their own details.
Thanks to 'Quikke' for the report.
Replaces the UI endpoints for creating and modifying labels. These were removed
in #1686 because we thought we were killing the UI.
Now labels can be created and edited in the UI again.
Replaces (and appropriately refactors) a number of endpoints that were removed long ago when we decided to kill the UI with the fleetctl release. We turned out not to do this, and now need to restore these missing endpoints.
This is not a straight up replacement of the existing code because of refactoring to the DB schemas that was also done in the migration.
Most of the replaced code was removed in #1670 and #1686.
Fixes #1811, fixes #1810
With the UI, deleting by ID made sense. With fleetctl, we now want to delete
by name. Transition only the methods used for spec related entities, as others
will be removed soon.
Previously decorators were stored in a separate table. Now they are stored
directly with the config so that they can be modified on a per-platform basis.
Delete the now-unused decorators code.
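A sketch of what embedding decorators in the returned config looks like, using osquery's native decorators format (the per-platform merging described above is omitted for brevity):

```go
// Sketch: decorators written directly into the osquery config for a host.
package main

import (
	"encoding/json"
	"fmt"
)

type decorators struct {
	Load     []string            `json:"load,omitempty"`
	Always   []string            `json:"always,omitempty"`
	Interval map[string][]string `json:"interval,omitempty"`
}

func main() {
	cfg := map[string]interface{}{
		"decorators": decorators{
			Load:   []string{"SELECT uuid AS host_uuid FROM system_info;"},
			Always: []string{"SELECT user AS username FROM logged_in_users ORDER BY time DESC LIMIT 1;"},
			Interval: map[string][]string{
				"3600": {"SELECT total_seconds AS uptime FROM uptime;"},
			},
		},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```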
- Add new Apply spec methods for queries and packs
- Remove now-extraneous datastore/service methods
- Remove import service (unused, and had many dependencies that this breaks)
- Refactor tests as appropriate
Instead of trying to decode and re-encode status logs, we now write them directly as they come in.
This change prevents future changes to the osquery status log file format (addition and deletion of fields) from
affecting Fleet. A similar change was implemented in #1636 for result logs.
Closes #1664
Initially, Fleet decoded the incoming JSON sent to the log endpoint.
Then the log event would be written to a log writer by calling json.Encoder{}.Encode.
Re-encoding logs is lossy: whenever osqueryd sends a new field, we fail to keep up with it.
Instead of caring about the content of the OsqueryResultLog, Fleet will now write all log results
exactly as sent to the server by osqueryd.
Closes #1632, closes #1615
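A minimal sketch of the approach: keeping the log entries as json.RawMessage means they are written back out byte-for-byte. The request shape follows osqueryd's /log endpoint body:

```go
// Sketch: write incoming osquery logs verbatim, never decoding them.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type logRequest struct {
	NodeKey string            `json:"node_key"`
	LogType string            `json:"log_type"` // "status" or "result"
	Data    []json.RawMessage `json:"data"`     // entries stay raw JSON
}

func main() {
	body := []byte(`{"node_key":"abc","log_type":"result",
		"data":[{"name":"pack/test","brand_new_field":"preserved"}]}`)

	var req logRequest
	if err := json.Unmarshal(body, &req); err != nil {
		panic(err)
	}
	for _, entry := range req.Data {
		// Each entry is written exactly as osqueryd sent it, one per line,
		// so fields Fleet knows nothing about are preserved.
		fmt.Fprintf(os.Stdout, "%s\n", entry)
	}
}
```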
The launcher service implementation is an adapter around the TLS service.
All launcher methods that have an equivalent in TLS pass the business logic to the
TLS API.
Closes #1565
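A sketch of the adapter pattern described above, with simplified method names and signatures standing in for the real launcher and TLS interfaces:

```go
// Sketch: the launcher service as a thin adapter over the TLS service.
package main

import (
	"context"
	"fmt"
)

// tlsService is the subset of the TLS API the adapter delegates to.
type tlsService interface {
	EnrollAgent(ctx context.Context, enrollSecret, hostIdentifier string) (nodeKey string, err error)
}

// launcherWrapper translates launcher calls into TLS service calls; all
// business logic stays in the TLS implementation.
type launcherWrapper struct {
	tls tlsService
}

func (svc *launcherWrapper) RequestEnrollment(ctx context.Context, enrollSecret, hostIdentifier string) (string, bool, error) {
	nodeKey, err := svc.tls.EnrollAgent(ctx, enrollSecret, hostIdentifier)
	return nodeKey, err != nil, err
}

type fakeTLS struct{}

func (fakeTLS) EnrollAgent(_ context.Context, _, _ string) (string, error) {
	return "node-key", nil
}

func main() {
	w := &launcherWrapper{tls: fakeTLS{}}
	key, invalid, _ := w.RequestEnrollment(context.Background(), "secret", "host-1")
	fmt.Println(key, invalid)
}
```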
This PR adds support for file integrity monitoring. This is done by providing a simplified API that can be used to PATCH/GET FIM configurations. There is also code to build the FIM configuration to send back to osquery. Each PATCH request, if successful, replaces Fleet's existing FIM configuration. For example:
curl -X "PATCH" "https://localhost:8080/api/v1/kolide/fim" \
-H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzZXNzaW9uX2tleSI6IkVhaFhvZWswMGtWSEdaTTNCWndIMnhpYWxkNWZpcVFDR2hEcW1HK2UySmRNOGVFVE1DeTNTaUlFWmhZNUxhdW1ueFZDV2JiR1Bwdm5TKzdyK3NJUzNnPT0ifQ.SDCHAUA1vTuWGjXtcQds2GZLM27HAAiOUhR4WvgvTNY" \
-H "Content-Type: application/json; charset=utf-8" \
-d $'{
"interval": 500,
"file_paths": {
"etc": [
"/etc/%%"
],
"users": [
"/Users/%/Library/%%",
"/Users/%/Documents/%%"
],
"usr": [
"/usr/bin/%%"
]
}
}'
Closes issue #1475
The command line tool that uses this endpoint: https://github.com/kolide/configimporter
* Added support for atomic imports and dry run imports
* Added code so that imports are idempotent
This PR partially addresses #1456, providing SSO SAML support. The flow of the code is as follows.
A Kolide user attempts to access a protected resource and is directed to log in.
If SSO identity providers (IDP) have been configured by an admin, the user is presented with SSO log in.
The user selects SSO, which invokes a call to InitiateSSO, passing the URL of the protected resource that the user was originally trying to access. The Kolide server loads the IDP metadata and caches it along with the URL. We then build an auth request URL for the IDP, which is returned to the front end.
The IDP calls the server, invoking CallbackSSO with the auth response.
We extract the original request ID from the response and use it to fetch the cached metadata and the URL. We check the signature of the response and validate the timestamps. If everything passes, we get the user ID from the IDP response and use it to create a login session. We then build a page which executes some JavaScript that writes the token to web local storage and redirects to the original URL.
I've created a test web page in tools/app/authtest.html that can be used to test and debug new IDPs, and which also illustrates how a front end would interact with the IDP and the server. This page can be loaded by starting Kolide with the environment variable KOLIDE_TEST_PAGE_PATH set to the full path of the page, and then accessed at https://localhost:8080/test
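A sketch of the tail end of CallbackSSO: after validating the SAML response and creating a session (omitted here), the handler returns a page whose JavaScript stores the token and redirects. The local storage key and route are hypothetical names:

```go
// Sketch: CallbackSSO's final step, delivering the token to the front end.
package main

import (
	"html/template"
	"net/http"
)

var ssoPage = template.Must(template.New("sso").Parse(`<!DOCTYPE html>
<html><body><script>
  window.localStorage.setItem("AUTH_TOKEN", "{{.Token}}"); // hypothetical key
  window.location = "{{.RelayURL}}";
</script></body></html>`))

func callbackSSO(w http.ResponseWriter, token, relayURL string) {
	_ = ssoPage.Execute(w, struct{ Token, RelayURL string }{token, relayURL})
}

func main() {
	http.HandleFunc("/api/v1/kolide/sso/callback", func(w http.ResponseWriter, r *http.Request) {
		// ...signature and timestamp validation would happen here...
		callbackSSO(w, "example-session-token", "/hosts/manage")
	})
	_ = http.ListenAndServe(":8080", nil)
}
```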
Replaces the existing calculation that used a global online interval. That method was inaccurate because different hosts may have different check-in intervals set.
The new calculation uses `min(distributed_interval, config_tls_refresh) + 30` as the interval. This is calculated with the stored values for each host.
Closes #1321
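A sketch of the condition this translates to in MySQL, using LEAST() for the min() above plus the 30-second buffer; the column names follow the per-host flags tracked elsewhere in this changelog, though the exact Fleet SQL may differ:

```go
// Sketch: per-host online check pushed into the database.
package main

import "fmt"

// onlineCondition marks a host online when it has checked in within
// min(distributed_interval, config_tls_refresh) + 30 seconds.
func onlineCondition() string {
	return `seen_time > DATE_SUB(NOW(), INTERVAL LEAST(distributed_interval, config_tls_refresh) + 30 SECOND)`
}

func main() {
	fmt.Println("SELECT COUNT(*) FROM hosts WHERE " + onlineCondition())
}
```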
Partially addresses #1456. This PR provides datastore support for SSO by creating a new entity IdentityProvider. This entity is an abstraction of the SAML IdentityProvider and contains the data needed to perform SAML authentication.
We now track the `config_tls_refresh`, `distributed_interval` and
`logger_tls_period` flag values for each host. Each value is updated by a
detail query against the `osquery_flags` table, because they may be specified
outside of Kolide. The flags that can be specified within Kolide are also
updated when a config is returned to the host that changes their value.
This will enable us to do a more accurate per-host online status calculation as
discussed in #1419.
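For illustration, a sketch of such a detail query and its ingest step: the query reads effective values from osquery's own `osquery_flags` table, while the ingest function here is a simplified assumption:

```go
// Sketch: detail query plus ingest for the tracked flag values.
package main

import (
	"fmt"
	"strconv"
)

const flagsDetailQuery = `
SELECT name, value FROM osquery_flags
WHERE name IN ('config_tls_refresh', 'distributed_interval', 'logger_tls_period')`

type host struct {
	ConfigTLSRefresh    uint
	DistributedInterval uint
	LoggerTLSPeriod     uint
}

// ingestFlags applies rows returned by flagsDetailQuery to the host record.
func ingestFlags(h *host, rows []map[string]string) error {
	for _, row := range rows {
		v, err := strconv.ParseUint(row["value"], 10, 32)
		if err != nil {
			return err
		}
		switch row["name"] {
		case "config_tls_refresh":
			h.ConfigTLSRefresh = uint(v)
		case "distributed_interval":
			h.DistributedInterval = uint(v)
		case "logger_tls_period":
			h.LoggerTLSPeriod = uint(v)
		}
	}
	return nil
}

func main() {
	var h host
	if err := ingestFlags(&h, []map[string]string{
		{"name": "distributed_interval", "value": "10"},
		{"name": "config_tls_refresh", "value": "3600"},
	}); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", h)
}
```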
Push the calculation of target counts into the SQL query, rather than loading
all of the targets and then counting them. This provides a dramatic (>100x)
speedup in loading of the manage packs page when large numbers of hosts are
present.
Closes #1426
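A sketch of the idea: let MySQL count the targets instead of materializing them in the application. Table and column names here are assumptions, and the `?` placeholders would be expanded for the selected IDs (e.g. via sqlx.In):

```go
// Sketch: counting targets (hosts selected directly or via label) in SQL.
package main

import "fmt"

const countTargetsSQL = `
SELECT COUNT(DISTINCT h.id)
FROM hosts h
LEFT JOIN label_query_executions lqe ON lqe.host_id = h.id AND lqe.matches
WHERE h.id IN (?) OR lqe.label_id IN (?)`

func main() {
	fmt.Println(countTargetsSQL)
}
```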
* Change email functionality
* Code review changes for @groob
* Name change per @groob
* Code review changes per @marpaia
Also added additional non-happy-path tests to address concerns raised by @groob