In some MySQL configurations, using a GROUP BY that doesn't refer to every
column in the SELECT will throw errors. Replace the use of GROUP BY with SELECT
DISTINCT, which also makes the intent of the query clearer (see the sketch below).
Fixes #1249
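A minimal before/after sketch of this kind of rewrite; the table, column, and query names are placeholders, not the actual queries touched by this change:
```go
// Hypothetical illustration only; the tables and columns are placeholders.
package main

import "fmt"

// before deduplicates the joined rows with GROUP BY; some MySQL configurations
// (e.g. ONLY_FULL_GROUP_BY on older versions) may reject it because the SELECT
// list isn't fully covered by the GROUP BY clause.
const before = `
SELECT h.id, h.host_name
FROM hosts h
JOIN pack_targets pt ON pt.host_id = h.id
GROUP BY h.id`

// after states the real intent (unique rows) and works in all configurations.
const after = `
SELECT DISTINCT h.id, h.host_name
FROM hosts h
JOIN pack_targets pt ON pt.host_id = h.id`

func main() {
	fmt.Println(before)
	fmt.Println(after)
}
```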
* Change email functionality
* Code review changes for @groob
* Name change per @groob
* Code review changes per @marpaia
Also added additional non-happy-path tests to satisfy concerns by @groob
add endpoint to serve the kolide certificate back to the user
The API will attempt to establish a TLS connection and fetch the certificate from the TLS ConnectionState.
The PEM encoded certificate will be served to the client in a JSON response as a base64 encoded string.
Closes #1012
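A minimal sketch of that flow, assuming a hypothetical `fetchCertificatePEM` helper, address, and JSON field name; none of these are the actual implementation:
```go
// Hypothetical sketch: dial a TLS connection, pull the leaf certificate out of
// the TLS ConnectionState, PEM-encode it, and return it base64 encoded in JSON.
package main

import (
	"crypto/tls"
	"encoding/base64"
	"encoding/json"
	"encoding/pem"
	"fmt"
)

// fetchCertificatePEM dials addr over TLS and returns the PEM encoding of the
// leaf certificate from the connection state. InsecureSkipVerify is set because
// the served certificate may be self-signed.
func fetchCertificatePEM(addr string) ([]byte, error) {
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		return nil, err
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		return nil, fmt.Errorf("no peer certificates presented by %s", addr)
	}
	return pem.EncodeToMemory(&pem.Block{
		Type:  "CERTIFICATE",
		Bytes: state.PeerCertificates[0].Raw,
	}), nil
}

func main() {
	pemBytes, err := fetchCertificatePEM("localhost:8080")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// The JSON response carries the PEM bytes as a base64 encoded string.
	resp, _ := json.Marshal(map[string]string{
		"certificate_chain": base64.StdEncoding.EncodeToString(pemBytes),
	})
	fmt.Println(string(resp))
}
```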
* Simplifying SMTP Logic
This commit breaks the test email sending into its own service method
(thus removing the capability from the API; if we want it back, we can
wire up another endpoint for just that). Additionally, error wrapping is
used through the new ModifyAppConfig service method to ensure that an
error or failed email always results in an error, while the submitted
record still gets committed (unless a serious error happens). A sketch of
this pattern follows the notes below.
* never wrap a nil error
* use err instead of individual errors
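A minimal sketch of the commit-then-wrap pattern, with hypothetical types and helper names; the real ModifyAppConfig signature and error handling differ:
```go
// Hypothetical sketch: commit the submitted config first, then surface any
// test-email failure as a wrapped error. Never wrap a nil error.
package main

import (
	"errors"
	"fmt"
)

type AppConfig struct {
	SMTPServer string
}

func saveAppConfig(c AppConfig) error { return nil }                               // placeholder
func sendTestEmail(c AppConfig) error { return errors.New("connection refused") } // placeholder

// ModifyAppConfig always commits the record, and only returns a non-nil error
// when something actually failed.
func ModifyAppConfig(c AppConfig) error {
	if err := saveAppConfig(c); err != nil {
		return fmt.Errorf("saving app config: %w", err)
	}
	if err := sendTestEmail(c); err != nil {
		// Wrap only when err is non-nil; wrapping nil would turn success into failure.
		return fmt.Errorf("sending test email: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(ModifyAppConfig(AppConfig{SMTPServer: "smtp.example.com:587"}))
}
```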
Recent versions of osquery report when a distributed query fails. We now
expose these errors over the results websocket. When a query errored on the host, the
`error` key in the result will be non-null. Note that osquery currently doesn't
provide any details, so the error string will always be "failed". I anticipate
that this will be fixed, and the string is included for future-proofing.
Successful result:
```json
{
  "type": "result",
  "data": {
    "distributed_query_execution_id": 15,
    "host": {
      ... omitted ...
    },
    "rows": [
      {
        "hour": "1"
      }
    ],
    "error": null
  }
}
```
Failed result:
```json
{
  "type": "result",
  "data": {
    "distributed_query_execution_id": 14,
    "host": {
      ... omitted ...
    },
    "rows": [],
    "error": "failed"
  }
}
```
* add a js validator that makes smtp server port required
* specifying that the InputField should be a number. This doesn't work, but I think that it should.
* casting the port as an int as a stop-gap fix
* email doesn't have to already be enabled in order to be enabled
* don't return the smtp password from the API
* show a fake placeholder password if the username is also set
* error type for @groob
Permissions errors were preventing users from completing this flow
- Add separate endpoint for performing required password reset
- Rewrite frontend reset to use this endpoint
Fixes #792
Add logging middleware for more of the kolide Service interfaces.
This PR was created through code generation; however, it's not likely that the logging middleware can all be continuously regenerated, since we're likely to want to add method-specific key/values to individual methods. Moving forward, logging middleware should be maintained whenever a service interface method changes.
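A minimal sketch of the middleware pattern, assuming a hypothetical single-method UserService; the generated middleware wraps each method of the real kolide service interfaces the same way:
```go
// Hypothetical illustration of the logging middleware pattern; the real
// interfaces have many methods and live in the kolide package.
package main

import (
	"context"
	"os"
	"time"

	"github.com/go-kit/kit/log"
)

type UserService interface {
	User(ctx context.Context, id uint) (string, error)
}

type loggingMiddleware struct {
	UserService
	logger log.Logger
}

func (mw loggingMiddleware) User(ctx context.Context, id uint) (name string, err error) {
	defer func(begin time.Time) {
		// Method-specific key/values can be added here as methods evolve.
		_ = mw.logger.Log("method", "User", "id", id, "err", err, "took", time.Since(begin))
	}(time.Now())
	return mw.UserService.User(ctx, id)
}

type svc struct{}

func (svc) User(ctx context.Context, id uint) (string, error) { return "admin", nil }

func main() {
	logger := log.NewLogfmtLogger(os.Stderr)
	var s UserService = loggingMiddleware{UserService: svc{}, logger: logger}
	_, _ = s.User(context.Background(), 1)
}
```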
- Remove require password reset from ModifyUser and
RequestPasswordReset methods, and UserPayload struct
- Add new RequirePasswordReset method
- Refactor JS for new separate method
This PR separates the table migrations from the data population migrations. Table migrations run before data migrations.
Now, we have the ability to create the database tables without populating them with data. This can be useful for running "unit" tests against a MySQL store that doesn't have any pre-populated data. When performing real migrations, or for more "integration" style testing, the data migrations can also be executed.
Note there are some special cases that must be observed with these migrations, and the README is updated to reflect those.
* Initial scaffolding of the host summary endpoint
* inmem datastore implementation of GenerateHostStatusStatistics
* HostSummary docstring
* changing the url of the host summary endpoint
* datastore tests for GenerateHostStatusStatistics
* MySQL datastore implementation of GenerateHostStatusStatistics (see the sketch after this list)
* <= and >= to catch exact time edge case
* removing clock interface method
* lowercase error wraps
* removing superfluous whitespace
* use updated_at
* adding a seen_at column to the hosts table
* moving the update of seen_time to the caller
* using db.Get instead of db.Select
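A minimal sketch of the bucketing referenced in the list above; the status names, thresholds, and the in-memory bucketing are assumptions for illustration, not the actual MySQL implementation:
```go
// Hypothetical sketch: bucket hosts by how recently they were seen, using
// <= / >= so a host seen exactly on a boundary is still counted.
package main

import (
	"fmt"
	"time"
)

type HostSummary struct {
	OnlineCount  uint
	OfflineCount uint
	MIACount     uint
}

func summarize(seenTimes []time.Time, now time.Time) HostSummary {
	var s HostSummary
	for _, seen := range seenTimes {
		switch {
		case now.Sub(seen) >= 30*24*time.Hour:
			s.MIACount++
		case now.Sub(seen) <= 30*time.Minute:
			s.OnlineCount++
		default:
			s.OfflineCount++
		}
	}
	return s
}

func main() {
	now := time.Now()
	seen := []time.Time{now, now.Add(-2 * time.Hour), now.Add(-31 * 24 * time.Hour)}
	fmt.Printf("%+v\n", summarize(seen, now))
}
```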
This PR adds the `host_ids` and `label_ids` fields to the packs HTTP API so that one can operate on the hosts/labels which a pack is scheduled to be executed on. This replaces (and deletes) the `/api/v1/kolide/packs/123/labels/456` API in favor of `PATCH /api/v1/kolide/packs/123` with the `label_ids` field specified. This also allows for bulk operations.
Consider the following API examples:
## Creating a pack with a known set of hosts and labels
The key addition is the `host_ids` and `label_ids` field in both the request and the response.
### Request
```
POST /api/v1/kolide/packs
```
```json
{
  "name": "My new pack",
  "description": "The newest of the packs",
  "host_ids": [1, 2, 3],
  "label_ids": [1, 3, 5]
}
```
### Response
```json
{
  "pack": {
    "id": 123,
    "name": "My new pack",
    "description": "The newest of the packs",
    "platform": "",
    "created_by": 1,
    "disabled": false,
    "query_count": 0,
    "total_hosts_count": 5,
    "host_ids": [1, 2, 3],
    "label_ids": [1, 3, 5]
  }
}
```
## Modifying the hosts and/or labels that a pack is scheduled to execute on
### Request
```
PATCH /api/v1/kolide/packs/123
```
```json
{
  "host_ids": [1, 2, 3, 4, 5],
  "label_ids": [1, 3, 5, 7]
}
```
### Response
```json
{
  "pack": {
    "id": 123,
    "name": "My new pack",
    "description": "The newest of the packs",
    "platform": "",
    "created_by": 1,
    "disabled": false,
    "query_count": 0,
    "total_hosts_count": 5,
    "host_ids": [1, 2, 3, 4, 5],
    "label_ids": [1, 3, 5, 7]
  }
}
```
Closes #633
Sentinel error checks are replaced with an exposed interface.
Not checking for a specific sentinel error reduces coupling between packages
and allows adding context like the resource ID and resource type.
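A minimal sketch of the exposed-interface approach, with hypothetical names; callers check behavior through an interface rather than comparing against a sentinel error value exported by the datastore package:
```go
// Hypothetical sketch of checking "not found" via an interface instead of a
// sentinel error value.
package main

import "fmt"

// NotFoundError is the behavior callers check for.
type NotFoundError interface {
	error
	IsNotFound() bool
}

// notFound carries context about the missing resource.
type notFound struct {
	ResourceType string
	ID           uint
}

func (e *notFound) Error() string {
	return fmt.Sprintf("%s %d was not found in the datastore", e.ResourceType, e.ID)
}

func (e *notFound) IsNotFound() bool { return true }

// IsNotFound reports whether err indicates a missing resource, without the
// caller importing a sentinel value from the datastore package.
func IsNotFound(err error) bool {
	nf, ok := err.(NotFoundError)
	return ok && nf.IsNotFound()
}

func main() {
	err := error(&notFound{ResourceType: "pack", ID: 123})
	fmt.Println(IsNotFound(err), err)
}
```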
* Moving query attributes from the query object to the pack-query relationship
* some additional tests
* http request parsing test
* QueryOptions in new test_util code
* initial scaffolding of new request structures
* service and datastore
* test outline
* l2 merge conflict scrub
* service tests for scheduled query service
* service and datastore tests
* most endpoints and transports
* order of values is not deterministic with inmem
* transport tests
* rename PackQuery to ScheduledQuery
* removing existing implementation of adding queries to packs
* accounting for the new argument to NewQuery
* fix alignment in sql query
* removing underscore
* add removed to the datastore
* removed differential from the schema
- New datastore method for bulk deletion
- New service method calling this datastore method
- Endpoint, transport and handler connections for service method
Closes #389
- Add saved state to query (to differentiate queries explicitly saved from
those just run as distributed queries)
- Remove unique constraint on query name
Closes #390
- New route `/api/v1/kolide/results/{id}` with upgrade to websocket connection
- Query results pushed over websocket as they are received from pubsub (see the sketch after this list)
- Target totals updates pushed over websocket every second
- New datastore method to support retrieving target totals
- Websocket package includes helpers and patterns for communicating over websockets
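A minimal sketch of the push goroutine referenced above; the message shapes, helper names, and the jsonWriter stand-in for the websocket connection are assumptions, not the actual websocket package:
```go
// Hypothetical sketch of the goroutine that forwards query results as they
// arrive and pushes target totals every second.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// jsonWriter stands in for a websocket connection's WriteJSON method.
type jsonWriter interface {
	WriteJSON(v interface{}) error
}

type stdoutConn struct{}

func (stdoutConn) WriteJSON(v interface{}) error {
	b, err := json.Marshal(v)
	if err != nil {
		return err
	}
	_, err = fmt.Fprintln(os.Stdout, string(b))
	return err
}

type message struct {
	Type string      `json:"type"`
	Data interface{} `json:"data"`
}

// pushResults selects on the results channel and a one-second ticker, so
// results are forwarded as they arrive and totals are sent periodically,
// until the results channel is closed.
func pushResults(conn jsonWriter, results <-chan map[string]interface{}, totals func() interface{}) error {
	ticker := time.NewTicker(1 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case res, ok := <-results:
			if !ok {
				return nil
			}
			if err := conn.WriteJSON(message{Type: "result", Data: res}); err != nil {
				return err
			}
		case <-ticker.C:
			if err := conn.WriteJSON(message{Type: "totals", Data: totals()}); err != nil {
				return err
			}
		}
	}
}

func main() {
	results := make(chan map[string]interface{}, 1)
	results <- map[string]interface{}{"distributed_query_execution_id": 15, "rows": []map[string]string{{"hour": "1"}}}
	close(results)
	_ = pushResults(stdoutConn{}, results, func() interface{} { return map[string]int{"total": 1, "online": 1} })
}
```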
* Adds created_by attribute to packs
This PR also updated the distributed query code to use the pattern
established here (service checks context)
* add enable/disable state to packs
* add query_count to packs API responses
* add host_count to packs API responses (very, very poorly)
* pack description should not be required
* counting hosts in packs via mysql
* removing extraneous newline in test
* Switch case instead of if/if else
* add description to update query for SavePack method
* change AND to WHERE in query as per @zwass
* add ordering and list options as per @murphybytes' suggestion
* initial scaffolding
* pack info sidebar
* fixing the merge of the routes
* Remove radium from pack info sidepanel
* lint
* cards!
* redux entity config
* pack interface
* wiring up redux with fake dev data
* Add description attribute to packs
* move redux to top level page component to isolate data fetching
* initial scaffolding of all packs table
* adding redux entities back
* minimal
* alpha order in packs.js
* no newlines in HTML
* onclick handler to function on component class
* alpha order in router
* alpha order in paths.js
* no newline in side panel
* removing input field
* lint fixes
Removed Gorm, replaced it with sqlx
* Added SQL bundling command to Makefile
* Using go-kit logger
* Added soft delete capability
* Changed SearchLabel to accept a variadic param for the optional omit list
instead of an array (see the sketch after this list)
* Gorm removed
* Refactor table structures to use CURRENT_TIMESTAMP mysql function
* Moved Inmem datastore into its own package
* Updated README
* Implemented code review suggestions from @zwass
* Removed reference to Gorm from glide.yaml
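A minimal sketch of the variadic omit-list change, using a hypothetical in-memory SearchLabel; the real method queries MySQL and takes additional arguments:
```go
// Hypothetical sketch: the omit list is a variadic parameter, so callers with
// nothing to omit simply pass no extra arguments.
package main

import (
	"fmt"
	"strings"
)

type Label struct {
	ID   uint
	Name string
}

var labels = []Label{{1, "All Hosts"}, {2, "macOS"}, {3, "Ubuntu"}}

// SearchLabel returns labels matching query, skipping any IDs in omit.
func SearchLabel(query string, omit ...uint) []Label {
	omitted := make(map[uint]bool, len(omit))
	for _, id := range omit {
		omitted[id] = true
	}
	var matches []Label
	for _, l := range labels {
		if omitted[l.ID] {
			continue
		}
		if query == "" || strings.Contains(strings.ToLower(l.Name), strings.ToLower(query)) {
			matches = append(matches, l)
		}
	}
	return matches
}

func main() {
	fmt.Println(SearchLabel(""))       // no omit list needed
	fmt.Println(SearchLabel("", 1, 3)) // omit labels 1 and 3
}
```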
New datastore methods are introduced for creating/updating
distributed query campaigns, as well as determining the active
distributed queries for a given host.
The endpoint is only active if there are no users in the datastore.
While the endpoint is active, all other API endpoints are disabled, and /config returns `{"require_setup":true}`
for #378
A new datastore interface is needed for buffering incoming distributed query results to be sent to the client. This PR attempts to define and implement that interface.
It is intended that the ReadChannel() method be used by the goroutine that will push query results down a websocket to the client. Passing the results through this channel will allow that goroutine to perform a select on both the channel and the websocket, in order to properly handle IO.
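A minimal sketch of what such an interface might look like, with hypothetical method names and a trivial channel-backed implementation; the actual interface and its pubsub-backed implementation differ:
```go
// Hypothetical sketch of a query result store that buffers incoming results
// and hands readers a channel to select on alongside the websocket.
package main

import "fmt"

type DistributedQueryResult struct {
	DistributedQueryCampaignID uint
	Rows                       []map[string]string
}

type QueryResultStore interface {
	// WriteResult buffers a result received from a host.
	WriteResult(res DistributedQueryResult) error
	// ReadChannel returns a channel delivering buffered results, suitable for
	// use in a select together with websocket IO.
	ReadChannel(campaignID uint) (<-chan DistributedQueryResult, error)
}

// inmemStore is a trivial channel-backed implementation for illustration.
type inmemStore struct {
	ch chan DistributedQueryResult
}

func newInmemStore() *inmemStore {
	return &inmemStore{ch: make(chan DistributedQueryResult, 100)}
}

func (s *inmemStore) WriteResult(res DistributedQueryResult) error {
	s.ch <- res
	return nil
}

func (s *inmemStore) ReadChannel(campaignID uint) (<-chan DistributedQueryResult, error) {
	return s.ch, nil
}

func main() {
	var store QueryResultStore = newInmemStore()
	_ = store.WriteResult(DistributedQueryResult{DistributedQueryCampaignID: 14})
	ch, _ := store.ReadChannel(14)
	fmt.Println(<-ch)
}
```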
- Introduce kolide.ListOptions to store pagination params (in the future it can
also store ordering/filtering params)
- Refactor service/datastore methods to take kolide.ListOptions
- Implement pagination (see the sketch after this list)
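A minimal sketch of how ListOptions might translate to SQL, with hypothetical field names and a plain LIMIT/OFFSET rendering:
```go
// Hypothetical sketch: ListOptions carries pagination params and is translated
// into a LIMIT/OFFSET clause by the datastore.
package main

import "fmt"

type ListOptions struct {
	Page    uint // zero-based page number
	PerPage uint // results per page; 0 means no limit
}

// limitOffset renders the pagination portion of a SQL query.
func limitOffset(opt ListOptions) string {
	if opt.PerPage == 0 {
		return ""
	}
	return fmt.Sprintf(" LIMIT %d OFFSET %d", opt.PerPage, opt.Page*opt.PerPage)
}

func main() {
	query := "SELECT id, host_name FROM hosts" + limitOffset(ListOptions{Page: 2, PerPage: 25})
	fmt.Println(query) // SELECT id, host_name FROM hosts LIMIT 25 OFFSET 50
}
```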
- Introduce a new pattern for defining/ingesting detail queries (see the sketch after this list)
- Add many relevant host details:
  - Platform
  - osquery Version
  - Memory
  - Hostname
  - UUID
  - OS Version
  - Uptime
  - Primary interface MAC
  - Primary interface IP
- Fix parsing for inconsistent JSON schema returned from osquery
- Tests
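A minimal sketch of the define/ingest pattern, with hypothetical type and field names; each detail query pairs the osquery SQL with a function that ingests the returned rows onto the host:
```go
// Hypothetical sketch of the define/ingest pattern for host detail queries.
package main

import "fmt"

type Host struct {
	Platform       string
	OsqueryVersion string
	UUID           string
}

// detailQuery pairs the SQL sent to the host with the function that ingests
// the returned rows into the Host record.
type detailQuery struct {
	Query  string
	Ingest func(host *Host, rows []map[string]string) error
}

var detailQueries = map[string]detailQuery{
	"osquery_info": {
		Query: "SELECT * FROM osquery_info LIMIT 1",
		Ingest: func(host *Host, rows []map[string]string) error {
			if len(rows) != 1 {
				return fmt.Errorf("expected 1 row from osquery_info, got %d", len(rows))
			}
			host.OsqueryVersion = rows[0]["version"]
			host.UUID = rows[0]["uuid"]
			return nil
		},
	},
	"os_version": {
		Query: "SELECT * FROM os_version LIMIT 1",
		Ingest: func(host *Host, rows []map[string]string) error {
			if len(rows) != 1 {
				return fmt.Errorf("expected 1 row from os_version, got %d", len(rows))
			}
			host.Platform = rows[0]["platform"]
			return nil
		},
	},
}

func main() {
	host := &Host{}
	rows := []map[string]string{{"version": "2.1.2", "uuid": "uuid-1", "platform": "darwin"}}
	_ = detailQueries["osquery_info"].Ingest(host, rows)
	fmt.Printf("%+v\n", *host)
}
```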
- Establish a pattern for host authentication
- Establish a pattern for error JSON
- Add transport and make endpoint functions
- Fix discovered bugs + update tests