* Remove username from UI code
* Remove username from tests
* Remove username from database
* Modify server endpoints for removing username
* Implement backend aspects of removing username
* Update API docs
* Add name to fleetctl
- Add policy.rego file defining authorization policies.
- Add Go integrations to evaluate Rego policies (via OPA; a minimal sketch follows this list).
- Add middleware to ensure requests without an authorization check are rejected (guarding against programmer error).
- Add authorization checks to most service endpoints.
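For illustration, here is a minimal sketch of how a Rego policy can be evaluated from Go using OPA's `rego` package. The policy content, package name, and input fields below are placeholders, not the actual contents of Fleet's policy.rego.

```go
package main

import (
	"context"
	"fmt"

	"github.com/open-policy-agent/opa/rego"
)

// Illustrative policy only; Fleet's real policy.rego defines its own rules.
const policy = `
package authz

default allow = false

allow {
	input.subject.global_role == "admin"
}
`

func main() {
	ctx := context.Background()

	// Prepare the query once and reuse it for every authorization check.
	query, err := rego.New(
		rego.Query("data.authz.allow"),
		rego.Module("policy.rego", policy),
	).PrepareForEval(ctx)
	if err != nil {
		panic(err)
	}

	// Input describing the subject, object, and action being authorized.
	input := map[string]interface{}{
		"subject": map[string]interface{}{"global_role": "admin"},
		"object":  map[string]interface{}{"type": "query"},
		"action":  "read",
	}

	results, err := query.Eval(ctx, rego.EvalInput(input))
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", results.Allowed())
}
```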
- Add config option for license key.
- Define license details data structure.
- Include license details in app config API responses.
Currently, any non-empty value for `--license_key` behaves as though the
installation is licensed for `basic`. If the license key is empty,
`core` is returned. Proper parsing of the license key is still to come.
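As a rough sketch of that placeholder behavior (the function name and tier constants here are illustrative, not Fleet's actual API):

```go
package license

// Tier names as described above; real license key parsing will replace this.
const (
	TierCore  = "core"
	TierBasic = "basic"
)

// TierFromKey returns the license tier for the provided key. Until proper
// license key parsing lands, any non-empty key is treated as "basic".
func TierFromKey(key string) string {
	if key == "" {
		return TierCore
	}
	return TierBasic
}
```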
- Migrate old admins to global admins (see the migration sketch after this list)
- Migrate old non-admins to global maintainers
- Remove old admin column
- Give initial user global admin privilege
- Comment out some tests (to be refactored for new permissions model later)
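A hypothetical sketch of what these migrations could look like; the table and column names are assumptions based on the list above, not the actual Fleet schema:

```go
package migration

import "database/sql"

// Up_migrateUserRoles is an illustrative migration, not Fleet's actual one.
func Up_migrateUserRoles(tx *sql.Tx) error {
	stmts := []string{
		// Add the new global role column.
		`ALTER TABLE users ADD COLUMN global_role VARCHAR(64) DEFAULT NULL`,
		// Old admins become global admins.
		`UPDATE users SET global_role = 'admin' WHERE admin = TRUE`,
		// Old non-admins become global maintainers.
		`UPDATE users SET global_role = 'maintainer' WHERE admin = FALSE`,
		// Drop the old admin column now that roles are global.
		`ALTER TABLE users DROP COLUMN admin`,
	}
	for _, s := range stmts {
		if _, err := tx.Exec(s); err != nil {
			return err
		}
	}
	return nil
}
```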
This adds the option to set up an S3 bucket as the storage backend for file carving (partially solving #111).

It works by using the multipart upload capabilities of S3 to maintain compatibility with the "upload in blocks" protocol that osquery uses. It does this by essentially replacing the carve_blocks table while still maintaining the metadata in its original place (it would probably be possible to rely completely on S3 by using object tagging, at the cost of listing performance). To make this pluggable, I created a new field in the service struct dedicated to the CarveStore: if no S3 configuration is set up, it is just a reference to the standard datastore; otherwise it points to the S3-backed store. Effectively, this separation will allow more backends to be added in the future.
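Here is a minimal, self-contained sketch of that pluggable selection; the interface, types, and names are illustrative assumptions, not Fleet's exact code:

```go
package main

import "fmt"

// CarveStore is the subset of datastore behavior needed for carve blocks.
type CarveStore interface {
	NewBlock(carveID, blockID int64, data []byte) error
}

// mysqlCarveStore stands in for the standard datastore (carve_blocks table).
type mysqlCarveStore struct{}

func (s *mysqlCarveStore) NewBlock(carveID, blockID int64, data []byte) error {
	fmt.Printf("INSERT INTO carve_blocks (carve_id, block_id, data) VALUES (%d, %d, ...)\n", carveID, blockID)
	return nil
}

// s3CarveStore uploads each block as one part of an S3 multipart upload,
// while carve metadata stays in the primary datastore.
type s3CarveStore struct{ bucket string }

func (s *s3CarveStore) NewBlock(carveID, blockID int64, data []byte) error {
	fmt.Printf("S3 UploadPart bucket=%s carve=%d part=%d (%d bytes)\n", s.bucket, carveID, blockID+1, len(data))
	return nil
}

// newCarveStore picks the backend: with no S3 bucket configured, carve
// blocks stay in the standard datastore; otherwise they go to S3.
func newCarveStore(ds CarveStore, s3Bucket string) CarveStore {
	if s3Bucket == "" {
		return ds
	}
	return &s3CarveStore{bucket: s3Bucket}
}

func main() {
	store := newCarveStore(&mysqlCarveStore{}, "carve-results")
	_ = store.NewBlock(1, 0, []byte("block data"))
}
```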
This change optimizes live queries by pushing the computation of query
targets to query creation time and efficiently caching the targets in
Redis. This results in a huge performance improvement both at steady
state and when running live queries.
- Live query targets are stored as a bitfield in Redis, taking
advantage of bitfield operations to be extremely efficient (see the
sketch after this list).
- Only run the Redis live query test when REDIS_TEST is set in the environment
- Ensure that live queries are only sent to hosts when there is a client
listening for results. This addresses an existing issue in Fleet and
includes appropriate cleanup for the refactored live query backend.
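A minimal sketch of the bitfield approach using redigo; the key naming and the use of host IDs as bit offsets are assumptions for illustration, not the exact Fleet implementation:

```go
package main

import (
	"fmt"

	"github.com/gomodule/redigo/redis"
)

func main() {
	conn, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	const key = "livequery:42" // hypothetical key for live query campaign 42

	// Mark hosts 7 and 19 as targets of the live query: one bit per host ID.
	for _, hostID := range []int{7, 19} {
		if _, err := conn.Do("SETBIT", key, hostID, 1); err != nil {
			panic(err)
		}
	}

	// Checking whether a host is targeted is a single GETBIT.
	targeted, err := redis.Int(conn.Do("GETBIT", key, 7))
	if err != nil {
		panic(err)
	}
	fmt.Println("host 7 targeted:", targeted == 1)

	// Counting outstanding targets is a single BITCOUNT.
	remaining, err := redis.Int(conn.Do("BITCOUNT", key))
	if err != nil {
		panic(err)
	}
	fmt.Println("targets remaining:", remaining)

	// When a host reports its results, clear its bit.
	if _, err := conn.Do("SETBIT", key, 7, 0); err != nil {
		panic(err)
	}
}
```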
This PR partially addresses #1456, providing SSO SAML support. The flow of the code is as follows.
A Kolide user attempts to access a protected resource and is directed to log in.
If an SSO identity provider (IDP) has been configured by an admin, the user is presented with the option to log in via SSO.
The user selects SSO, which invokes a call to InitiateSSO, passing the URL of the protected resource that the user was originally trying to access. The Kolide server loads the IDP metadata and caches it along with the URL. We then build an auth request URL for the IDP, which is returned to the front end.
The IDP calls the server, invoking CallbackSSO with the auth response.
We extract the original request ID from the response and use it to fetch the cached metadata and the URL. We check the signature of the response and validate the timestamps. If everything passes, we get the user ID from the IDP response and use it to create a login session. We then build a page that executes JavaScript to write the token to the browser's local storage and redirect to the original URL.
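As an illustration of that last step only (not the actual Kolide handler), the callback might render a page like this; the handler name, storage key, and URLs are assumptions:

```go
package main

import (
	"html/template"
	"net/http"
)

// Page returned at the end of the SSO callback: its JavaScript stores the
// session token in local storage and redirects to the original URL.
var ssoCallbackPage = template.Must(template.New("sso").Parse(`<!DOCTYPE html>
<html>
<body>
<script>
  window.localStorage.setItem("KOLIDE::auth_token", "{{.Token}}");
  window.location = "{{.RedirectURL}}";
</script>
</body>
</html>`))

func callbackSSO(w http.ResponseWriter, r *http.Request) {
	// ... validate the SAML response signature and timestamps, look up the
	// user, and create a login session (elided) ...
	data := struct {
		Token       string
		RedirectURL string
	}{
		Token:       "session-token-from-login",
		RedirectURL: "/hosts/manage",
	}
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	_ = ssoCallbackPage.Execute(w, data)
}

func main() {
	http.HandleFunc("/api/v1/kolide/sso/callback", callbackSSO)
	_ = http.ListenAndServe(":8080", nil)
}
```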
I've created a test web page in tools/app/authtest.html that can be used to test and debug new IDPs, and which also illustrates how a front end would interact with the IDP and the server. The page can be loaded by starting Kolide with the environment variable KOLIDE_TEST_PAGE_PATH set to the full path of the page; it is then accessible at https://localhost:8080/test.