Introduce decorator queries
This commit is contained in:
parent c2a364c573
commit 2379493721
@@ -68,7 +68,7 @@ This config tells osqueryd to schedule two queries, **macosx_kextstat** and

**foobar**:

* the schedule keys must be unique
* the "interval" specifies query frequency (in seconds)
* the `interval` specifies query frequency (in seconds)

The first query will document changes to the OS X host's kernel extensions,
with a query interval of 10 seconds. Consider using osquery's [performance

@@ -81,8 +81,354 @@ stored in RocksDB. On subsequent runs, only result-set changes are logged to

RocksDB.

Scheduled queries can also set: `"removed":false` and `"snapshot":true`. See
the next section on [logging](logging.md) to learn how query options affect the
output.
the next section on [logging](../deployment/logging.md), and the below configuration specification to learn how query options affect the output.
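
For example, a minimal sketch of both options in a schedule (the query names and intervals here are illustrative): a `snapshot` query logs its full result set on every run, while `"removed": false` suppresses "removed" rows from a differential query.

```json
{
  "schedule": {
    "bin_hashes_snapshot": {
      "query": "SELECT path, sha256 FROM hash WHERE directory = '/usr/bin/';",
      "interval": 3600,
      "snapshot": true
    },
    "kextstat_additions_only": {
      "query": "SELECT * FROM kernel_extensions;",
      "interval": 10,
      "removed": false
    }
  }
}
```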

## Query Packs

Configuration supports sets, called packs, of queries that help define your
schedule. Packs are distributed with osquery and labeled based on broad
categories of information and visibility. For example, a "compliance" pack will
include queries that check for changes in locked down operating system features
and user settings. A "vulnerability management" pack may perform general asset
management queries that build event logs around package and software install
changes.

In an osquery configuration JSON, packs are defined as a top-level key and
consist of pack name to pack content JSON data structures.

```json
{
  "schedule": {...},
  "packs": {
    "internal_stuff": {
      "discovery": [
        "select pid from processes where name = 'ldap';"
      ],
      "platform": "linux",
      "version": "1.5.2",
      "queries": {
        "active_directory": {
          "query": "select * from ad_config;",
          "interval": "1200",
          "description": "Check each user's active directory cached settings."
        }
      }
    },
    "testing": {
      "shard": "10",
      "queries": {
        "suid_bins": {
          "query": "select * from suid_bins;",
          "interval": "3600"
        }
      }
    }
  }
}
```

The pack value may also be a string, such as:

```json
{
  "packs": {
    "external_pack": "/path/to/external_pack.conf",
    "internal_stuff": {
      [...]
    }
  }
}
```

If using a string instead of an inline JSON dictionary, the configuration plugin will be asked to "generate" that resource. In the case of the default **filesystem** plugin, these strings are considered paths.

Queries added to the schedule from packs inherit the pack name as part of the scheduled query name identifier. For example, consider the embedded `active_directory` query above: it is in the `internal_stuff` pack, so the scheduled query name becomes `pack_internal_stuff_active_directory`. The delimiter can be changed using `--pack_delimiter` (for example, `--pack_delimiter=_`); see the [CLI Options](../installation/cli-flags.md) for more details.

### Discovery queries

Discovery queries are a feature of query packs that makes it much easier to monitor services at scale. Consider that there are some groups of scheduled
queries which should only be run on a host when a condition is true. For
example, perhaps you want to write some queries to monitor MySQL. You've made a
pack called "mysql" and now you only want the queries in that pack to execute
if the `mysqld` program is running on the host.
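
A minimal sketch of what such a pack could look like (the pack body below is illustrative):

```json
{
  "packs": {
    "mysql": {
      "discovery": [
        "select pid from processes where name = 'mysqld';"
      ],
      "queries": {
        "mysql_process": {
          "query": "select * from processes where name = 'mysqld';",
          "interval": 3600
        }
      }
    }
  }
}
```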

Without discovery queries, you could have your configuration management write a
different configuration file for your MySQL tier. Unfortunately, however, this
requires you to know the complete set of hosts in your environment which are
running MySQL. This is problematic, especially if engineers in your environment
can install arbitrary software on arbitrary hosts. If MySQL is installed on a
non-standard host, you have no way to know. Therefore, you cannot schedule your MySQL pack on those hosts through configuration management logic.

One solution to this problem is discovery queries.

Query packs allow you to define a set of osquery queries which control whether
or not the pack will execute. Discovery queries are represented by the
top-level "discovery" key-word in a pack. The value should be a list of osquery
queries. If all of the queries return more than zero rows, then the queries are
added to the query schedule. This allows you to distribute configurations for
many services and programs, while ensuring that only relevant queries will be
executing on your host.

You don't need to define any discovery queries for a pack. If no discovery
queries are defined, then the pack will always execute.

Discovery queries look like:

```json
{
  "discovery": [
    "select pid from processes where name = 'foobar';",
    "select count(*) from users where username like 'www%';"
  ],
  "queries": {}
}
```

In the above example, the pack will only execute on hosts which are running
processes called "foobar" and have users whose names start with "www".

Discovery queries are refreshed for all packs every 60 minutes. You can
change this value via the `pack_refresh_interval` configuration option.
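
For example, assuming the option takes a value in seconds (its default corresponds to the 60 minutes mentioned above), a refresh every 30 minutes could be requested with:

```json
{
  "options": {
    "pack_refresh_interval": 1800
  }
}
```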

### Packs FAQs

**Where do packs go?**

The default way to define a query pack is in the main configuration file.
Consider the following example:

```json
{
  "options": {
    "enable_monitor": "true"
  },
  "packs": {
    "foo": {
      "queries": {}
    },
    "bar": {
      "queries": {}
    }
  }
}
```

Alternatively, however, you can also define the value of a pack as a raw
string. Consider the following example:

```json
{
  "options": {
    "enable_monitor": "true"
  },
  "packs": {
    "foo": "/tmp/foo.json",
    "bar": "/tmp/bar.json"
  }
}
```

In the above example, the packs are defined using a local filesystem path.
When osquery's config parser is provided a string instead of an inline dictionary, the active config plugin is called to resolve what should be done to go from `/tmp/foo.json` to the actual content of the pack. See [configuration plugin](../development/config-plugins.md) development for more information on packs.
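
To illustrate, the content of an external pack file such as `/tmp/foo.json` is simply a pack body; a minimal sketch (the query shown is illustrative):

```json
{
  "platform": "linux",
  "queries": {
    "listening_ports": {
      "query": "select * from listening_ports;",
      "interval": 600
    }
  }
}
```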

**Where can I get more packs?**

We release (and bundle alongside RPMs/DEBs/PKGs/etc.) query packs that emit high signal events as well as event data that is worth storing in the case of future incidents and security events. The queries within each pack will be performance tested and well-formed (JOIN, select-limited, etc.). But it is always an exercise for the user to make sure queries are useful and are not impacting performance critical hosts. You can find the query packs that are released by the osquery team documented at [https://osquery.io/docs/packs](https://osquery.io/docs/packs) and the content in [**/packs**](https://github.com/facebook/osquery/blob/master/packs) within the osquery repository.

**How do I modify the default options in the provided packs?**

We don't offer a built-in way to modify the default intervals / options in the
supplied query packs. Fortunately, however, packs are just JSON. Therefore, it
would be rather trivial to write a tool which reads in pack JSON, modifies it
in some way, then re-writes the JSON.

## Configuration specification

This section details all (read: most) of the default configuration keys, called the default specification. We mention 'default' as the configuration can be extended using `ConfigParser` plugins.

### Options

The `options` key defines a map of option name to option value pairs. The names must be a CLI flag in the "osquery configuration options" set; running `osqueryd --help` will enumerate the list.

Example:
```json
{
  "options": {
    "read_max": 100000,
    "events_max": 100000,
    "enable_monitor": true,
    "host_identifier": "uuid"
  }
}
```

If a flag value is specified on the CLI as a switch, or specified in the Gflags `--flagfile` file, it will be overridden if the equivalent "options" key exists in the config.

There are many CLI flags that CANNOT be set with the `options` key. These flags determine the start and initialization of osquery, and configuration loading usually depends on these CLI-only flags. Refer to the `--help` list to determine the appropriateness of options.

### Schedule

The `schedule` key defines a map of scheduled query names to the query details. You will see mention of the schedule throughout osquery's documentation. It is the focal point of osqueryd's capabilities.

Example:
```json
{
  "schedule": {
    "users_browser_plugins": {
      "query": "SELECT * FROM users JOIN browser_plugins USING (uid)",
      "interval": 60
    },
    "hashes_of_bin": {
      "query": "SELECT path, hash.sha256 FROM file JOIN hash USING (path) WHERE file.directory = '/bin/';",
      "interval": 3600,
      "removed": false,
      "platform": "darwin",
      "version": "1.4.5",
      "shard": 1
    }
  }
}
```

Each of `schedule`'s values is also a map; we call these scheduled queries, and their key is the `name` that shows up in your results log. In the example above the schedule includes two queries: **users_browser_plugins** and **hashes_of_bin**. While it is common to schedule a `SELECT * FROM your_favorite_table`, one of the powers of osquery is expressive SQL and the combination of several table concepts, so please use `JOIN`s liberally.

The basic scheduled query specification includes:
* `query`: the SQL query to run
* `interval`: an interval in seconds to run the query (subject to splay/smoothing)
* `removed`: a boolean to determine if removed actions should be logged
* `snapshot`: a boolean to set 'snapshot' mode
* `platform`: restrict this query to a given platform
* `version`: only run on osquery versions greater than or equal to this version
* `shard`: restrict this query to a percentage (1-100) of target hosts

The `platform` key can be:
* `darwin` for OS X hosts
* `freebsd` for FreeBSD hosts
* `linux` for any RedHat or Debian-based hosts
* `ubuntu` for Debian-based hosts (yes, we know)
* `centos` for RedHat-based hosts (also, see above, we get it)
* `any` or `all` for all platforms; alternatively, omitting the platform key selects all

The `shard` key works by hashing the hostname and then taking the quotient of the first byte and 255, which yields a deterministic percentage for each host (for example, a first byte of 25 maps to roughly 10%). This allows us to select a deterministic 'preview' set of hosts for the query, which helps when slow-rolling or testing new queries.

The schedule and associated queries generate a timeline of events through the defined intervals. There are several tables `*_events` which natively yield a time series; all other tables are subject to execution on an interval. When the results from a table differ from the results when the query was last executed, logs are emitted with `{"action": "removed"}` or `{"action": "added"}` for the appropriate action.
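
Roughly, an "added" differential log line for the **hashes_of_bin** query above has the following shape; the exact field set is described in the logging guide, and the values here are placeholders:

```json
{"name": "hashes_of_bin", "hostIdentifier": "host.example.com", "unixTime": "1462228110", "action": "added", "columns": {"path": "/bin/ls", "sha256": "..."}}
```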

Snapshot queries, those with `snapshot: true`, will not store differentials and will not emulate an event stream. Snapshots always return the entire results from the query on the given interval. See the next section on [logging](../deployment/logging.md) for examples of each log output.

### Packs

The above section on packs almost covers all you need to know about query packs. The specification contains a few caveats since packs are designed for distribution. Packs use the `packs` key, a map where the key is a pack name and the value can be either a string or a dictionary (object). When a string is used the value is passed back into the config plugin and acts as a "resource" request.

```json
{
  "packs": {
    "pack_name_1": "/path/to/pack.json",
    "pack_name_2": {
      "queries": {},
      "shard": 10,
      "version": "1.7.0",
      "platform": "linux",
      "discovery": [
        "SELECT * FROM processes WHERE name = 'osqueryi'"
      ]
    }
  }
}
```

As with scheduled queries, described above, each pack borrows the `platform`, `version`, and `shard` selectors and restrictions. These work the exact same way, but apply to the entire pack. This is a short-hand for applying selectors and restrictions to large sets of queries.

The `queries` key mimics the configuration's `schedule` key.

The `discovery` query set feature is described in detail in the above packs section. This array should include queries to be executed in an `OR` manner.

### File Paths

The `file_paths` key defines a map of file integrity monitoring (FIM) categories to sets of filesystem globbing lines. Please refer to the [FIM](../deployment/file-integrity-monitoring.md) guide for details on how to use osquery as a FIM tool.

Example:
```json
{
  "file_paths": {
    "custom_category": [
      "/etc/**",
      "/tmp/.*"
    ],
    "device_nodes": [
      "/dev/*"
    ]
  },
  "file_accesses": [
    "custom_category"
  ]
}
```

The file paths set has a sister key: `file_accesses`, which contains a set of category names that opt in to filesystem access monitoring.

### YARA

The `yara` key uses two subkeys: `signatures`, which configures sets of YARA signature files, and `file_paths`, which maps those signature sets to categories of `file_paths` defined in the "file paths" configuration. Please refer to the much more detailed [YARA](../deployment/yara.md) deployment guide.

Example:
```json
{
  "yara": {
    "signatures": {
      "signature_group_1": [
        "/path/to/signature.sig"
      ]
    },
    "file_paths": {
      "custom_category": [
        "signature_group_1"
      ]
    }
  }
}
```

There is a strict relationship between the top-level `file_paths` key and `yara`'s equivalent subkey: the categories referenced under `yara`'s `file_paths` must be categories defined in the top-level `file_paths`.

### Decorator queries

Decorator queries exist in osquery versions 1.7.3+ and are used to add additional "decorations" to results and snapshot logs. There are three types of decorator queries based on when and how you want the decoration data.

```json
{
  "decorators": {
    "load": [
      "SELECT version FROM osquery_info"
    ],
    "always": [
      "SELECT user AS username FROM logged_in_users WHERE user <> '' ORDER BY time LIMIT 1;"
    ],
    "interval": {
      "3600": [
        "SELECT total_seconds AS uptime FROM uptime;"
      ]
    }
  }
}
```

The types of decorators are:
* `load`: run these decorators when the configuration loads (or is reloaded)
* `always`: run these decorators before each query in the schedule
* `interval`: a special key that defines a map of interval times, see below

Each decorator query should return at most one row. If more than one row is returned, a warning is generated and the extra rows are ignored; relying on them is undefined behavior. Each decorator query should also avoid column name collisions with other decorators, as collisions are likewise undefined behavior.
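
For example, tables that always return exactly one row, with columns aliased to unique names, make safe decorators (a minimal sketch):

```json
{
  "decorators": {
    "always": [
      "SELECT uuid AS host_uuid FROM system_info;"
    ]
  }
}
```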

The columns, and their values, will be appended to each log line as follows. Assuming the above set of decorators is used, and the schedule has been executing for over an hour (3600 seconds):

```json
{"decorations": {"username": "you", "uptime": "10000", "version": "1.7.3"}}
```

Expect the normal set of log keys to be included, and note that `decorations` is a top-level key in the log line whose value is an embedded map.

The `interval` type uses a map of interval 'periods' as keys, and the set of decorator queries for each value. Each of these intervals MUST be minute intervals: anything not divisible by 60 will generate a warning and will not run (for example, "3600" runs hourly, while "90" is rejected).

## Chef Configuration

@@ -220,203 +566,6 @@ end

And the same configuration file from the OS X example is appropriate.

## Query Packs

Configuration supports sets, called packs, of queries that help define your
schedule. Packs are distributed with osquery and labeled based on broad
categories of information and visibility. For example, a "compliance" pack will
include queries that check for changes in locked down operating system features
and user settings. A "vulnerability management" pack may perform general asset
management queries that build event logs around package and software install
changes.

In an osquery configuration JSON, packs are defined as a top-level key and
consist of pack name to pack content JSON data structures.

```json
{
  "schedule": {...},
  "packs": {
    "internal_stuff": {
      "discovery": [
        "select pid from processes where name = 'ldap';"
      ],
      "platform": "linux",
      "version": "1.5.2",
      "queries": {
        "active_directory": {
          "query": "select * from ad_config;",
          "interval": "1200",
          "description": "Check each user's active directory cached settings."
        }
      }
    },
    "testing": {
      "shard": "10",
      "queries": {
        "suid_bins": {
          "query": "select * from suid_bins;",
          "interval": "3600"
        }
      }
    }
  }
}
```

The pack value may also be a string, such as:

```json
{
  "packs": {
    "external_pack": "/path/to/external_pack.conf",
    "internal_stuff": {
      [...]
    }
  }
}
```

If using a string instead of an inline JSON dictionary, the configuration plugin will be asked to "generate" that resource. In the case of the default **filesystem** plugin, these strings are considered paths.

Queries added to the schedule from packs inherit the pack name as part of the scheduled query name identifier. For example, consider the embedded `active_directory` query above: it is in the `internal_stuff` pack, so the scheduled query name becomes `pack_internal_stuff_active_directory`. The delimiter can be changed using `--pack_delimiter` (for example, `--pack_delimiter=_`); see the [CLI Options](../installation/cli-flags.md) for more details.

### Discovery queries

Discovery queries are a feature of query packs that makes it much easier to monitor services at scale. Consider that there are some groups of scheduled
queries which should only be run on a host when a condition is true. For
example, perhaps you want to write some queries to monitor MySQL. You've made a
pack called "mysql" and now you only want the queries in that pack to execute
if the `mysqld` program is running on the host.

Without discovery queries, you could have your configuration management write a
different configuration file for your MySQL tier. Unfortunately, however, this
requires you to know the complete set of hosts in your environment which are
running MySQL. This is problematic, especially if engineers in your environment
can install arbitrary software on arbitrary hosts. If MySQL is installed on a
non-standard host, you have no way to know. Therefore, you cannot schedule your MySQL pack on those hosts through configuration management logic.

One solution to this problem is discovery queries.

Query packs allow you to define a set of osquery queries which control whether
or not the pack will execute. Discovery queries are represented by the
top-level "discovery" key-word in a pack. The value should be a list of osquery
queries. If all of the queries return more than zero rows, then the queries are
added to the query schedule. This allows you to distribute configurations for
many services and programs, while ensuring that only relevant queries will be
executing on your host.

You don't need to define any discovery queries for a pack. If no discovery
queries are defined, then the pack will always execute.

Discovery queries look like:

```json
{
  "discovery": [
    "select pid from processes where name = 'foobar';",
    "select count(*) from users where username like 'www%';"
  ],
  "queries": {}
}
```

In the above example, the pack will only execute on hosts which are running
processes called "foobar" and have users whose names start with "www".

Discovery queries are refreshed for all packs every 60 minutes. You can
change this value via the `pack_refresh_interval` configuration option.

**Where do packs go?**

The default way to define a query pack is in the main configuration file.
Consider the following example:

```json
{
  "options": {
    "enable_monitor": "true"
  },
  "packs": {
    "foo": {
      "queries": {}
    },
    "bar": {
      "queries": {}
    }
  }
}
```

Alternatively, however, you can also define the value of a pack as a raw
string. Consider the following example:

```json
{
  "options": {
    "enable_monitor": "true"
  },
  "packs": {
    "foo": "/tmp/foo.json",
    "bar": "/tmp/bar.json"
  }
}
```

In the above example, the packs are defined using a local filesystem path.
When osquery's config parser is provided a string instead of an inline dictionary, the active config plugin is called to resolve what should be done to go from `/tmp/foo.json` to the actual content of the pack. See [configuration plugin](../development/config-plugins.md) development for more information on packs.

### Options

In addition to discovery and queries, a pack may contain a **platform**, **shard**, or **version** key. Specifying platform allows you to specify that the pack should only be executed on "linux", "darwin", etc. The shard key applies Chef-style percentage sharding. Appropriate values range from 1 - 100 and represent a percentage of hosts that should use this pack. Values over 100 are equivalent to 100%, and 0 disables the shard option. For a given shard value, the hosts that fall into the range are a deterministic subset. The version key can set a minimum supported osquery version.

In practice, this looks like:

```json
{
  "platform": "any",
  "version": "1.5.0",
  "shard": "10",
  "queries": {}
}
```

Additionally, you can specify platform and version on individual queries in
a pack. For example:

```json
{
  "platform": "any",
  "version": "1.5.0",
  "queries": {
    "info": {
      "query": "select * from osquery_info;",
      "interval": 60
    },
    "packs": {
      "query": "select * from osquery_packs;",
      "interval": 60,
      "version": "1.5.2"
    }
  }
}
```

In this example, the **info** query will run on osquery version 1.5.0 and above
since the minimum version defined for the global pack is 1.5.0. The **packs**
query, however, defines an additional version constraint; therefore the **packs** query will only run on osquery version 1.5.2 and above.

**Where can I get more existing packs?**

We release (and bundle alongside RPMs/DEBs/PKGs/etc.) query packs that emit high signal events as well as event data that is worth storing in the case of future incidents and security events. The queries within each pack will be performance tested and well-formed (JOIN, select-limited, etc.). But it is always an exercise for the user to make sure queries are useful and are not impacting performance critical hosts. You can find the query packs that are released by the osquery team documented at [https://osquery.io/docs/packs](https://osquery.io/docs/packs) and the content in [**/packs**](https://github.com/facebook/osquery/blob/master/packs) within the osquery repository.

**How do I modify the default options in the provided packs?**

We don't offer a built-in way to modify the default intervals / options in the
supplied query packs. Fortunately, however, packs are just JSON. Therefore, it
would be rather trivial to write a tool which reads in pack JSON, modifies it
in some way, then re-writes the JSON.

## osqueryctl helper

To test a deploy or configuration we include a short helper script called **osqueryctl**. There are several actions including "start", "stop", and "config-check" that apply to both OS X and Linux.

@@ -160,8 +160,9 @@ class Config : private boost::noncopyable {
   * }));
   * @endcode
   */
  void scheduledQueries(std::function<
      void(const std::string& name, const ScheduledQuery& query)> predicate);
  void scheduledQueries(
      std::function<void(const std::string& name, const ScheduledQuery& query)>
          predicate);

  /**
   * @brief Map a function across the set of configured files
@@ -289,7 +290,7 @@ class Config : private boost::noncopyable {
  std::map<std::string, QueryPerformance> performance_;

  /// A set of named categories filled with filesystem globbing paths.
  using FileCategories = std::map<std::string, std::vector<std::string> >;
  using FileCategories = std::map<std::string, std::vector<std::string>>;
  std::map<std::string, FileCategories> files_;

  /// A set of hashes for each source of the config.
@@ -312,6 +313,7 @@ class Config : private boost::noncopyable {
  friend class ConfigTests;
  friend class FilePathsConfigParserPluginTests;
  friend class FileEventsTableTests;
  friend class DecoratorsConfigParserPluginTests;
  FRIEND_TEST(OptionsConfigParserPluginTests, test_get_option);
  FRIEND_TEST(PacksTests, test_discovery_cache);
  FRIEND_TEST(SchedulerTests, test_monitor);

@@ -338,11 +338,14 @@ struct QueryLogItem {
  std::string identifier;

  /// The time that the query was executed, seconds as UNIX time.
  int time;
  size_t time{0};

  /// The time that the query was executed, an ASCII string.
  std::string calendar_time;

  /// A set of additional fields to emit with the log line.
  std::map<std::string, std::string> decorations;

  /// equals operator
  bool operator==(const QueryLogItem& comp) const {
    return (comp.results == results) && (comp.name == name);

246 osquery/config/parsers/decorators.cpp Normal file
@@ -0,0 +1,246 @@
/*
 *  Copyright (c) 2014-present, Facebook, Inc.
 *  All rights reserved.
 *
 *  This source code is licensed under the BSD-style license found in the
 *  LICENSE file in the root directory of this source tree. An additional grant
 *  of patent rights can be found in the PATENTS file in the same directory.
 *
 */

#include <osquery/config.h>
#include <osquery/flags.h>
#include <osquery/logger.h>
#include <osquery/sql.h>

#include "osquery/config/parsers/decorators.h"

namespace pt = boost::property_tree;

namespace osquery {

FLAG(bool, disable_decorators, false, "Disable log result decoration");

/// Statically define the parser name to avoid mistakes.
#define PARSER_NAME "decorators"

using KeyValueMap = std::map<std::string, std::string>;
using DecorationStore = std::map<std::string, KeyValueMap>;

/**
 * @brief A simple ConfigParserPlugin for a "decorators" dictionary key.
 *
 * Decorators append data to results, snapshots, and status log lines.
 * They can be used to add arbitrary additional datums within the 'decorators'
 * subkey.
 *
 * Decorators come in three basic flavors, defined by when they are run:
 *  load: run these decorators when the config is loaded.
 *  always: run these decorators for every query immediately before
 *  interval: run these decorators on an interval.
 *
 * When 'interval' is used, the value is a dictionary of intervals, each of the
 * subkeys are treated as the requested interval in seconds. The intervals
 * are emulated by the query schedule.
 *
 * Decorators are sets of queries, and each selected column within the set is
 * added to the 'decorators' dictionary. Including two queries with the same
 * column name is undefined behavior and will most likely lead to either
 * duplicate keys or overwriting. Issuing a query that emits more than one row
 * will also lead to undefined behavior. The decorator executor will ignore any
 * rows past the first.
 */
class DecoratorsConfigParserPlugin : public ConfigParserPlugin {
 public:
  std::vector<std::string> keys() const override { return {PARSER_NAME}; }

  Status setUp() override;

  Status update(const std::string& source, const ParserConfig& config) override;

  /// Update the set of decorators for a given source.
  void updateDecorations(const std::string& source,
                         const pt::ptree& decorators);

  /// Clear the decorations created from decorators for the given source.
  void clearSources(const std::string& source);

 public:
  /// Set of configuration sources to the set of decorator queries.
  std::map<std::string, std::vector<std::string>> always_;

  /// Set of configuration sources to the set of on-load decorator queries.
  std::map<std::string, std::vector<std::string>> load_;

  /// Set of configuration sources to valid intervals.
  std::map<std::string, std::map<size_t, std::vector<std::string>>> intervals_;

 public:
  /// The result set of decorations, column names and their values.
  static DecorationStore kDecorations;

  /// Protect additions to the decorator set.
  static Mutex kDecorationsMutex;
};

DecorationStore DecoratorsConfigParserPlugin::kDecorations;
Mutex DecoratorsConfigParserPlugin::kDecorationsMutex;

Status DecoratorsConfigParserPlugin::setUp() {
  // Decorators are kept within customized data structures.
  // No need to define a key for the ::getData API.
  return Status(0, "OK");
}

Status DecoratorsConfigParserPlugin::update(const std::string& source,
                                            const ParserConfig& config) {
  clearSources(source);
  clearDecorations(source);
  if (config.count(PARSER_NAME) > 0) {
    // Each of these methods acquires the decorator lock separately.
    // The run decorators method is designed to have call sites throughout
    // the code base.
    updateDecorations(source, config.at(PARSER_NAME));
    runDecorators(DECORATE_LOAD, 0, source);
  }

  return Status(0, "OK");
}

void DecoratorsConfigParserPlugin::clearSources(const std::string& source) {
  // Reset the internal data store.
  WriteLock lock(DecoratorsConfigParserPlugin::kDecorationsMutex);
  intervals_[source].clear();
  always_[source].clear();
  load_[source].clear();
}

void DecoratorsConfigParserPlugin::updateDecorations(
    const std::string& source, const pt::ptree& decorators) {
  WriteLock lock(DecoratorsConfigParserPlugin::kDecorationsMutex);
  // Assign load decorators.
  auto& load_key = kDecorationPointKeys.at(DECORATE_LOAD);
  if (decorators.count(load_key) > 0) {
    for (const auto& item : decorators.get_child(load_key)) {
      load_[source].push_back(item.second.data());
    }
  }

  // Assign always decorators.
  auto& always_key = kDecorationPointKeys.at(DECORATE_ALWAYS);
  if (decorators.count(always_key) > 0) {
    for (const auto& item : decorators.get_child(always_key)) {
      always_[source].push_back(item.second.data());
    }
  }

  // Check if intervals are defined.
  auto& interval_key = kDecorationPointKeys.at(DECORATE_INTERVAL);
  if (decorators.count(interval_key) > 0) {
    auto& interval = decorators.get_child(interval_key);
    for (const auto& item : interval) {
      size_t rate = std::stoll(item.first);
      if (rate % 60 != 0) {
        LOG(WARNING) << "Invalid decorator interval rate " << rate
                     << " in config source: " << source;
        continue;
      }

      // This is a valid interval, update the set of intervals to include
      // this value. When intervals are checked this set is scanned, if a
      // match is found, then the associated config data is executed.
      for (const auto& interval_query : item.second) {
        intervals_[source][rate].push_back(interval_query.second.data());
      }
    }
  }
}

inline void addDecoration(const std::string& source,
                          const std::string& name,
                          const std::string& value) {
  DecoratorsConfigParserPlugin::kDecorations[source][name] = value;
}

inline void runDecorators(const std::string& source,
                          const std::vector<std::string>& queries) {
  for (const auto& query : queries) {
    auto results = SQL(query);
    if (results.rows().size() > 0) {
      // Notice the warning above about undefined behavior when:
      // 1: You include decorators that emit the same column name
      // 2: You include a query that returns more than 1 row.
      for (const auto& column : results.rows()[0]) {
        addDecoration(source, column.first, column.second);
      }
    }

    if (results.rows().size() > 1) {
      // Multiple rows exhibit undefined behavior.
      LOG(WARNING) << "Multiple rows returned for decorator query: " << query;
    }
  }
}

void clearDecorations(const std::string& source) {
  WriteLock lock(DecoratorsConfigParserPlugin::kDecorationsMutex);
  DecoratorsConfigParserPlugin::kDecorations[source].clear();
}

void runDecorators(DecorationPoint point,
                   size_t time,
                   const std::string& source) {
  if (FLAGS_disable_decorators) {
    return;
  }

  auto parser = Config::getParser(PARSER_NAME);
  if (parser == nullptr) {
    // The decorators parser does not exist.
    return;
  }

  // Abstract the use of the decorator parser API.
  auto dp = std::dynamic_pointer_cast<DecoratorsConfigParserPlugin>(parser);
  WriteLock lock(DecoratorsConfigParserPlugin::kDecorationsMutex);
  if (point == DECORATE_LOAD) {
    for (const auto& target_source : dp->load_) {
      if (source.empty() || target_source.first == source) {
        runDecorators(target_source.first, target_source.second);
      }
    }
  } else if (point == DECORATE_ALWAYS) {
    for (const auto& target_source : dp->always_) {
      if (source.empty() || target_source.first == source) {
        runDecorators(target_source.first, target_source.second);
      }
    }
  } else if (point == DECORATE_INTERVAL) {
    for (const auto& target_source : dp->intervals_) {
      for (const auto& interval : target_source.second) {
        if (time % interval.first == 0) {
          if (source.empty() || target_source.first == source) {
            runDecorators(target_source.first, interval.second);
          }
        }
      }
    }
  }
}

void getDecorations(std::map<std::string, std::string>& results) {
  if (FLAGS_disable_decorators) {
    return;
  }

  WriteLock lock(DecoratorsConfigParserPlugin::kDecorationsMutex);
  // Copy the decorations into the log_item.
  for (const auto& source : DecoratorsConfigParserPlugin::kDecorations) {
    for (const auto& decoration : source.second) {
      results[decoration.first] = decoration.second;
    }
  }
}

REGISTER_INTERNAL(DecoratorsConfigParserPlugin, "config_parser", PARSER_NAME);
}

61 osquery/config/parsers/decorators.h Normal file
@@ -0,0 +1,61 @@
/*
 *  Copyright (c) 2014-present, Facebook, Inc.
 *  All rights reserved.
 *
 *  This source code is licensed under the BSD-style license found in the
 *  LICENSE file in the root directory of this source tree. An additional grant
 *  of patent rights can be found in the PATENTS file in the same directory.
 *
 */

#include <map>
#include <functional>
#include <osquery/config.h>
#include <osquery/database.h>

namespace osquery {

/// Enforce specific types of decoration.
enum DecorationPoint {
  DECORATE_LOAD,
  DECORATE_ALWAYS,
  DECORATE_INTERVAL,
};

/// Define a map of decoration points to their expected configuration key.
const std::map<DecorationPoint, std::string> kDecorationPointKeys = {
    {DECORATE_LOAD, "load"},
    {DECORATE_ALWAYS, "always"},
    {DECORATE_INTERVAL, "interval"},
};

/**
 * @brief Iterate the discovered decorators for a given point type.
 *
 * The configuration maintains various sources, each may contain a set of
 * decorators. The source tracking is abstracted for the decorator iterator.
 *
 * @param point request execution of decorators for this given point.
 * @param time an optional time for points using intervals.
 * @param source restrict run to a specific config source.
 */
void runDecorators(DecorationPoint point,
                   size_t time = 0,
                   const std::string& source = "");

/**
 * @brief Access the internal storage of the Decorator parser.
 *
 * The decoration set is a map of column name to value. It contains the opaque
 * set of decoration point results.
 *
 * Decorations are applied to log items before they are sent to the downstream
 * logging APIs: logString, logSnapshot, logHealthStatus, etc.
 *
 * @param results the output parameter to write decorations.
 */
void getDecorations(std::map<std::string, std::string>& results);

/// Clear decorations for a source when it updates.
void clearDecorations(const std::string& source);
}

102 osquery/config/parsers/tests/decorators_tests.cpp Normal file
@@ -0,0 +1,102 @@
/*
 *  Copyright (c) 2014-present, Facebook, Inc.
 *  All rights reserved.
 *
 *  This source code is licensed under the BSD-style license found in the
 *  LICENSE file in the root directory of this source tree. An additional grant
 *  of patent rights can be found in the PATENTS file in the same directory.
 *
 */

#include <gtest/gtest.h>

#include <osquery/config.h>
#include <osquery/flags.h>
#include <osquery/registry.h>

#include "osquery/core/test_util.h"
#include "osquery/config/parsers/decorators.h"

namespace osquery {

DECLARE_bool(disable_decorators);

class DecoratorsConfigParserPluginTests : public testing::Test {
 public:
  void SetUp() override {
    // Read config content manually.
    readFile(kTestDataPath + "test_parse_items.conf", content_);

    // Construct a config map, the typical output from `Config::genConfig`.
    config_data_["awesome"] = content_;
    Config::getInstance().reset();
    clearDecorations("awesome");

    // Backup the current decorator status.
    decorator_status_ = FLAGS_disable_decorators;
    FLAGS_disable_decorators = true;
  }

  void TearDown() override { FLAGS_disable_decorators = decorator_status_; }

 protected:
  std::string content_;
  std::map<std::string, std::string> config_data_;
  bool decorator_status_{false};
};

TEST_F(DecoratorsConfigParserPluginTests, test_decorators_list) {
  // Assume the decorators are disabled.
  Config::getInstance().update(config_data_);
  auto parser = Config::getParser("decorators");
  EXPECT_NE(parser, nullptr);

  // Expect the decorators to be disabled by default.
  QueryLogItem item;
  getDecorations(item.decorations);
  EXPECT_EQ(item.decorations.size(), 0U);
}

TEST_F(DecoratorsConfigParserPluginTests, test_decorators_run_load) {
  // Re-enable the decorators, then update the config.
  // The 'load' decorator set should run every time the config is updated.
  FLAGS_disable_decorators = false;
  Config::getInstance().update(config_data_);

  QueryLogItem item;
  getDecorations(item.decorations);
  ASSERT_EQ(item.decorations.size(), 3U);
  EXPECT_EQ(item.decorations["load_test"], "test");
}

TEST_F(DecoratorsConfigParserPluginTests, test_decorators_run_interval) {
  // Prevent loads from executing.
  FLAGS_disable_decorators = true;
  Config::getInstance().update(config_data_);

  // Mimic the schedule's execution.
  FLAGS_disable_decorators = false;
  runDecorators(DECORATE_INTERVAL, 60);

  QueryLogItem item;
  getDecorations(item.decorations);
  ASSERT_EQ(item.decorations.size(), 2U);
  EXPECT_EQ(item.decorations.at("internal_60_test"), "test");

  std::string log_line;
  serializeQueryLogItemJSON(item, log_line);
  std::string expected =
      "{\"snapshot\":\"\",\"decorations\":{\"internal_60_test\":\"test\","
      "\"one\":\"1\"},\"name\":\"\",\"hostIdentifier\":\"\",\"calendarTime\":"
      "\"\",\"unixTime\":\"0\"}\n";
  EXPECT_EQ(log_line, expected);

  // Now clear and run again.
  clearDecorations("awesome");
  runDecorators(DECORATE_INTERVAL, 60 * 60);

  QueryLogItem second_item;
  getDecorations(second_item.decorations);
  ASSERT_EQ(second_item.decorations.size(), 2U);
}
}

@@ -270,6 +270,15 @@ Status serializeQueryLogItem(const QueryLogItem& i, pt::ptree& tree) {
    tree.add_child("snapshot", results_tree);
  }

  // Check if the config has added decorations.
  if (i.decorations.size() > 0) {
    tree.add_child("decorations", pt::ptree());
    auto& decorations = tree.get_child("decorations");
    for (const auto& name : i.decorations) {
      decorations.put<std::string>(name.first, name.second);
    }
  }

  tree.put<std::string>("name", i.name);
  tree.put<std::string>("hostIdentifier", i.identifier);
  tree.put<std::string>("calendarTime", i.calendar_time);

@@ -310,6 +319,13 @@ Status deserializeQueryLogItem(const pt::ptree& tree, QueryLogItem& item) {
    }
  }

  if (tree.count("decorations") > 0) {
    auto& decorations = tree.get_child("decorations");
    for (const auto& i : decorations) {
      item.decorations[i.first] = i.second.data();
    }
  }

  item.name = tree.get<std::string>("name", "");
  item.identifier = tree.get<std::string>("hostIdentifier", "");
  item.calendar_time = tree.get<std::string>("calendarTime", "");

@@ -16,6 +16,7 @@
#include <osquery/flags.h>
#include <osquery/logger.h>

#include "osquery/config/parsers/decorators.h"
#include "osquery/database/query.h"
#include "osquery/dispatcher/scheduler.h"
#include "osquery/sql/sqlite_util.h"

@@ -56,6 +57,7 @@ inline SQL monitor(const std::string& name, const ScheduledQuery& query) {
inline void launchQuery(const std::string& name, const ScheduledQuery& query) {
  // Execute the scheduled query and create a named query object.
  VLOG(1) << "Executing query: " << query.query;
  runDecorators(DECORATE_ALWAYS);
  auto sql =
      (FLAGS_enable_monitor) ? monitor(name, query) : SQLInternal(query.query);

@@ -75,6 +77,7 @@ inline void launchQuery(const std::string& name, const ScheduledQuery& query) {
  item.identifier = ident;
  item.time = osquery::getUnixTime();
  item.calendar_time = osquery::getAsciiTime();
  getDecorations(item.decorations);

  if (query.options.count("snapshot") && query.options.at("snapshot")) {
    // This is a snapshot query, emit results with a differential or state.

@@ -128,6 +131,10 @@ void SchedulerRunner::start() {
          launchQuery(name, query);
        }
      }));
    // Configuration decorators run on 60 second intervals only.
    if (i % 60 == 0) {
      runDecorators(DECORATE_INTERVAL, i);
    }
    // Put the thread into an interruptible sleep without a config instance.
    pauseMilli(interval_ * 1000);
    if (interrupted()) {

@@ -25,6 +25,7 @@
#include "osquery/remote/transports/tls.h"
#include "osquery/remote/utility.h"

#include "osquery/config/parsers/decorators.h"
#include "osquery/logger/plugins/tls.h"

namespace pt = boost::property_tree;

@@ -84,6 +85,15 @@ Status TLSLoggerPlugin::logString(const std::string& s) {
}

Status TLSLoggerPlugin::logStatus(const std::vector<StatusLogLine>& log) {
  // Append decorations to status (unique to TLS logger).
  // Assemble a decorations tree to append to each status buffer line.
  pt::ptree dtree;
  std::map<std::string, std::string> decorations;
  getDecorations(decorations);
  for (const auto& decoration : decorations) {
    dtree.put(decoration.first, decoration.second);
  }

  for (const auto& item : log) {
    // Convert the StatusLogLine into ptree format, to convert to JSON.
    pt::ptree buffer;

@@ -91,6 +101,9 @@ Status TLSLoggerPlugin::logStatus(const std::vector<StatusLogLine>& log) {
    buffer.put("filename", item.filename);
    buffer.put("line", item.line);
    buffer.put("message", item.message);
    if (decorations.size() > 0) {
      buffer.put_child("decorations", dtree);
    }

    // Convert to JSON, for storing a string-representation in the database.
    std::string json;

@@ -145,7 +158,8 @@ Status TLSLogForwarderRunner::send(std::vector<std::string>& log_data,
  // Read each logged line into JSON and populate a list of lines.
  // The result list will use the 'data' key.
  pt::ptree children;
  iterate(log_data, ([&children](std::string& item) {
  iterate(log_data,
          ([&children](std::string& item) {
            pt::ptree child;
            try {
              std::stringstream input;

@@ -175,7 +189,8 @@ void TLSLogForwarderRunner::check() {

  // For each index, accumulate the log line into the result or status set.
  std::vector<std::string> results, statuses;
  iterate(indexes, ([&results, &statuses](std::string& index) {
  iterate(indexes,
          ([&results, &statuses](std::string& index) {
            std::string value;
            auto& target = ((index.at(0) == 'r') ? results : statuses);
            if (getDatabaseValue(kLogs, index, value)) {

@@ -196,7 +211,8 @@ void TLSLogForwarderRunner::check() {
               << status.getMessage() << ")";
  } else {
    // Clear the results logs once they were sent.
    iterate(indexes, ([&results](std::string& index) {
    iterate(indexes,
            ([&results](std::string& index) {
              if (index.at(0) != 'r') {
                return;
              }

@@ -212,7 +228,8 @@ void TLSLogForwarderRunner::check() {
               << status.getMessage() << ")";
  } else {
    // Clear the status logs once they were sent.
    iterate(indexes, ([&results](std::string& index) {
    iterate(indexes,
            ([&results](std::string& index) {
              if (index.at(0) != 's') {
                return;
              }

@@ -8,6 +8,7 @@
  "dictionary": {
    "foo": "bar"
  },

  "packs": {
    "foobar": {
      "version": "1.5.0",

@@ -31,12 +32,14 @@
      ]
    }
  },

  "schedule": {
    "launchd": {
      "query": "select * from launchd;",
      "interval": 3600
    }
  },

  "file_paths": {
    "logs": [
      "/dev/null"

@@ -54,5 +57,26 @@
      "foo",
      "bar"
    ]
  },

  "decorators": {
    "load": [
      "select version from osquery_info",
      "select uuid as hostuuid from system_info",
      "select 'test' as load_test"
    ],
    "always": [
      "select user as username from logged_in_users where user <> '' order by time limit 1;",
      "select 'test' as always_test"
    ],
    "interval": {
      "60": [
        "select 1 as one from time",
        "select 'test' as internal_60_test"
      ],
      "61": [
        "select 'invalid' as invalid_interval_test"
      ]
    }
  }
}