mirror of https://github.com/valitydev/osquery-1.git
synced 2024-11-07 09:58:54 +00:00

Various fixes to the documentation

This commit is contained in:
parent 370290d103
commit cb1856654d
@@ -1,6 +1,6 @@
 An osquery deployment can help you establish an infrastructural baseline, allowing you to detect malicious activity using scheduled queries.

-This approach will help you catch known malware ([WireLurker](http://bits.blogs.nytimes.com/2014/11/05/malicious-software-campaign-targets-apple-users-in-china/), IceFog, Imuler, etc), and more importantly, unknown malware. Let's look at Mac OS X startup items for a given laptop using [osqueryi](../introduction/using-osqueryi.md):
+This approach will help you catch known malware ([WireLurker](http://bits.blogs.nytimes.com/2014/11/05/malicious-software-campaign-targets-apple-users-in-china/), IceFog, Imuler, etc.), and more importantly, unknown malware. Let's look at Mac OS X startup items for a given laptop using [osqueryi](../introduction/using-osqueryi.md):

 ```sh
 $ osqueryi
@@ -23,7 +23,7 @@ We can use osquery's log aggregation capabilities to easily pinpoint when the at

 ## Looking at the logs

-Using the [log aggregation guide](log-aggregation.md), you will receive log lines like the following in your datastore (ElasticSearch, Splunk, etc):
+Using the [log aggregation guide](log-aggregation.md), you will receive log lines like the following in your datastore (ElasticSearch, Splunk, etc.):

 ```json
 {
@@ -13,9 +13,9 @@ a query schedule from a configuration.
 ## Configuration components

 The osquery "configuration" is read from a config plugin. This plugin is a data retrieval method and is set to **filesystem** by default.
-Other retrieval and run-time updating methods may include a HTTP/TLS request using the **tls** config plugin. In all cases the response data must be JSON-formatted.
+Other retrieval and run-time updating methods may include an HTTP/TLS request using the **tls** config plugin. In all cases the response data must be JSON-formatted.

-There are several components to a configuration:
+There are several components contributing to a configuration:

 * Daemon options and feature settings
 * Query Schedule: the set of SQL queries and intervals
@@ -32,7 +32,7 @@ The default config plugin, **filesystem**, reads from a file and optional direct
 * Linux: **/etc/osquery/osquery.conf** and **/etc/osquery/osquery.conf.d/**
 * Mac OS X: **/var/osquery/osquery.conf** and **/var/osquery/osquery.conf.d/**

-You may override the **filesystem** plugin's path using `--config_path=/path/to/osquery.conf`. And you may use the ".d/" directory search path based on that custom location.
+You may override the **filesystem** plugin's path using `--config_path=/path/to/osquery.conf`. You may also use the ".d/" directory search path based on that custom location.

 Here is an example config that includes options and the query schedule:
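The example config itself is truncated in this diff. As an illustrative sketch of its shape only (the option names and query bodies below are assumptions, not the actual example):

```python
import json

# Hypothetical osquery config: daemon options plus a query schedule.
config = json.loads("""
{
  "options": {
    "host_identifier": "hostname",
    "schedule_splay_percent": 10
  },
  "schedule": {
    "macosx_kextstat": {
      "query": "SELECT * FROM kernel_extensions;",
      "interval": 10
    },
    "foobar": {
      "query": "SELECT foo, bar, pid FROM foobar_table;",
      "interval": 600
    }
  }
}
""")
```

Each key under "schedule" names one scheduled query, pairing a SQL string with an interval in seconds.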
@@ -58,21 +58,21 @@ Here is an example config that includes options and the query schedule:
 This config tells osqueryd to schedule two queries, **macosx_kextstat** and **foobar**:

 * the schedule keys must be unique
-* the "interval" specifies query frequency, in seconds
+* the "interval" specifies query frequency (in seconds)

-The first query will document changes to an OS X host's kernel extensions, with a query interval of 10 seconds. Consider using osquery's [performance tooling](performance-safety.md) to understand the performance impact for each query.
+The first query will document changes to the OS X host's kernel extensions, with a query interval of 10 seconds. Consider using osquery's [performance tooling](performance-safety.md) to understand the performance impact for each query.

-The results of your query are cached on disk via [RocksDB](http://rocksdb.org/). On first query run, all of the results are stored in RocksDB. On subsequent runs, only result-set changes are logged to RocksDB.
+The results of your query are cached on disk using [RocksDB](http://rocksdb.org/). On the first query run, all of the results are stored in RocksDB. On subsequent runs, only result-set changes are logged to RocksDB.

-Scheduled queries can also set: `"removed":false` and `"snapshot":true`. See the next section on [logging](logging.md) for how query options affect output.
+Scheduled queries can also set: `"removed":false` and `"snapshot":true`. See the next section on [logging](logging.md) to learn how query options affect the output.

 ## Chef Configuration

 Here are example chef cookbook recipes and files for OS X and Linux deployments.
 Consider improving the recipes using node attributes to further control what
-nodes and clients enable osquery. It helps to create a canary or testing set
-that implement a separate "testing" configuration. These recipes assume you
-are deploying the OS X package or Linux package separately.
+nodes and clients enable osquery. It helps to create a canary or a testing set
+that implements a separate "testing" configuration. These recipes assume you
+are deploying the OS X package or the Linux package separately.

 ### Chef OS X
@@ -190,7 +190,7 @@ And the same configuration file from the OS X example is appropriate.

 Configuration supports sets, called packs, of queries that help define your schedule. Packs are distributed with osquery and labeled based on broad categories of information and visibility. For example, a "compliance" pack will include queries that check for changes in locked down operating system features and user settings. A "vulnerability management" pack may perform general asset management queries that build event logs around package and software install changes.

-In an osquery configuration JSON, packs are defined as a top-level-key and consist of (pack name to pack location JSON) pairs.
+In an osquery configuration JSON, packs are defined as a top-level-key and consist of _pack name to pack location JSON_ pairs.

 ```json
 {
@@ -203,7 +203,7 @@ In an osquery configuration JSON, packs are defined as a top-level-key and consi
 }
 ```

-Most packs are cross-platform concepts that may include platform-specific tables/queries. The pack content is slightly different and more descriptive that a normal osquery schedule.
+Most packs are cross-platform concepts that may include platform-specific tables/queries. The pack content is slightly different and more descriptive than a normal osquery schedule.

 Here is an example "compliance" pack:
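The compliance pack itself is truncated in this diff. A hedged sketch of the "more descriptive" pack shape the text describes (the platform, version, query name, and query body below are all illustrative assumptions):

```python
import json

# Hypothetical pack: descriptive metadata plus the pack's own query schedule.
pack = json.loads("""
{
  "platform": "darwin",
  "version": "1.4.5",
  "queries": {
    "active_directory": {
      "query": "SELECT * FROM ad_config;",
      "interval": "1200",
      "description": "Check for Active Directory bindings."
    }
  }
}
""")

# Unlike a plain schedule entry, a pack entry can carry platform/version
# restrictions and per-query descriptions.
queries = pack["queries"]
```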
@@ -237,7 +237,7 @@ A query pack may make wider limitations about how the queries apply too:

 Then every query within will only be added to a schedule if the osqueryd process is running on a Ubuntu distro with a minimum osquery version of 1.4.5.

-We plan to release (and bundle alongside RPMs/DEBs/PKGs/etc) query packs that emit high signal events as well as event data that is worth storing in the case of future incidents and security events. The queries within each pack will be performance tested and well-formed (JOIN, select-limited, etc). But it is always an exercise for the user to make sure queries are useful and are not impacting performance critical hosts.
+We plan to release (and bundle alongside RPMs/DEBs/PKGs/etc.) query packs that emit high signal events as well as event data that is worth storing in the case of future incidents and security events. The queries within each pack will be performance tested and well-formed (JOIN, select-limited, etc.). But it is always an exercise for the user to make sure queries are useful and are not impacting performance critical hosts.

 ## osqueryctl helper
@@ -37,7 +37,7 @@ The same dependency check is applied to the logger plugin setting after a valid

 ## More Options

-Extensions are most useful when used to expose config or logger plugins. Along with autoloading extensions you can start osqueryd services with non-default plugins using `--flagfile=PATH`. The osqueryd init service on Linux searches for a `/etc/osquery/osquery.flags` path containing flags. This is a great place to add non-default extensions options or for replacing plugins
+Extensions are most useful when used to expose config or logger plugins. Along with autoloading extensions you can start osqueryd services with non-default plugins using `--flagfile=PATH`. The osqueryd init service on Linux searches for a `/etc/osquery/osquery.flags` path containing flags. This is a great place to add non-default extensions options or for replacing plugins:

 ```sh
 $ cat /etc/osquery/osquery.flags
@@ -74,7 +74,7 @@ sourcetype = osquery_warning

 ### Rsyslog

-[rsyslog](http://www.rsyslog.com/) is a tried and testing unix log forwarding service. If you're deploying osqueryd in a production linux environment where you don't have to worry about lossy network connections, this may be your best option.
+[rsyslog](http://www.rsyslog.com/) is a tried and testing unix log forwarding service. If you're deploying osqueryd in a production Linux environment where you don't have to worry about lossy network connections, this may be your best option.

 ## Analyzing logs
@@ -100,8 +100,8 @@ Splunk will automatically extract the relevant fields for analytics, as shown be

 ![](https://i.imgur.com/tWCPx51.png)

-### Rsyslog, Fluentd, Scribe, etc
+### Rsyslog, Fluentd, Scribe, etc.

-If you're using a log forwarder which has less requirements on how data is stored (ie: Splunk Forwarders require the use of Splunk, etc), then you have many options on how you can interact with osqueryd data. It is recommended that you use whatever log analytics platform that you're comfortable with.
+If you're using a log forwarder which has less requirements on how data is stored (for example, Splunk Forwarders require the use of Splunk, etc.), then you have many options on how you can interact with osqueryd data. It is recommended that you use whatever log analytics platform that you're comfortable with.

 Many people are very comfortable with [Logstash](http://logstash.net/). If you already have an existing Logstash/Elasticsearch deployment, that is a great option to exercise. If your organization uses a different backend log management solution, osquery should tie into that with minimal effort.
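Whatever platform receives the forwarded logs, each result line is a self-contained JSON object. A minimal consumer sketch (the field names and values below are illustrative of a differential result entry, not an authoritative schema):

```python
import json

# A hypothetical differential result log line as a log pipeline might receive it.
line = '{"name": "new_etc_files", "action": "added", "columns": {"target_path": "/etc/foo"}}'

entry = json.loads(line)
if entry["action"] == "added":
    # Rows that appeared since the previous execution of this scheduled query.
    print(entry["name"], entry["columns"])
elif entry["action"] == "removed":
    # Rows that disappeared since the previous execution.
    print("gone:", entry["name"])
```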
@@ -25,7 +25,7 @@ The same is true for the WARNING, ERROR and FATAL logs. For more information on

 ### Results logs

-The results of your scheduled queries are logged to the "results log". These are differential changes between the last-most-recent query execution and the current execution. Each log line is a JSON string that indicates what data has been added/removed by which query. There are two format options, *single*, or event, and *batched*. Some queries do not make sense to log "removed" events like:
+The results of your scheduled queries are logged to the "results log". These are differential changes between the last (most recent) query execution and the current execution. Each log line is a JSON string that indicates what data has been added/removed by which query. There are two format options, *single*, or event, and *batched*. Some queries do not make sense to log "removed" events like:

 ```sql
 SELECT i.*, p.resident_size, p.user_time, p.system_time, t.minutes as c
@@ -51,9 +51,9 @@ By adding an outer join of `time` and using `time.minutes` as a counter this que

 Snapshot logs are an alternate form of query result logging. A snapshot is an 'exact point in time' set of results, no differentials. If you always want a list of mounts, not the added and removed mounts, use a snapshot. In the mounts case, where differential results are seldom emitted (assuming hosts do not often mount and unmount), a complete snapshot will log after every query execution. This *will* be a lot of data amortized across your fleet.

-To be extra-super-clear about the burden of data snapshots impose they are logged to a dedicated sink. The **filesystem** logger plugins writes snapshot results to **/var/log/osquery/osqueryd.snapshots.log**.
+Data snapshots may generate _a large amount_ of output. For log collection safety, output is written to a dedicated sink. The **filesystem** logger plugins writes snapshot results to **/var/log/osquery/osqueryd.snapshots.log**.

-To schedule a snapshot query use:
+To schedule a snapshot query, use:
 ```json
 {
 "schedule": {
@@ -4,7 +4,7 @@ This guide provides an overview and tutorial for assuring performance of the osq

 ## Testing query performance

-The osquery tooling provides a full-featured profiling script. The script can evaluate table, query, and scheduled query performance on a system. Before scheduling a set of queries on your enterprise hosts, it is best practice to measure the expected performance impact:
+The osquery tooling provides a full-featured profiling script. The script can evaluate table, query, and scheduled query performance on a system. Before scheduling a set of queries on your enterprise hosts, it is best practice to measure the expected performance impact.

 Consider the following `osquery.conf`:
@@ -41,7 +41,7 @@ Consider the following `osquery.conf`:

 Each query provides useful information and will run every minute. But what sort of impact will this have on the client machines?

-For this we can use `./tools/profile.py` to profile the queries by running them for a configured number of rounds and reporting the pre-defined performance category of each. A higher category result means higher impact. High impact queries should be avoided, but if the information is valuable consider running them less-often.
+For this we can use `./tools/profile.py` to profile the queries by running them for a configured number of rounds and reporting the pre-defined performance category of each. A higher category result means higher impact. High impact queries should be avoided, but if the information is valuable, consider running them less-often.

 ```
 $ sudo -E python ./tools/profile.py --config osquery.conf
@@ -72,14 +72,14 @@ The build will run each of the support operating system platform/versions and in
 * Build and run `make test`
 * Attempt to detect memory leaks using `./tools/profile.py --leaks`
 * Run a performance measurement using `./tools/profile.py`
-* Check performance against the latest release tag and commit to master.
-* Build docs and API spec on release tag or commit to master.
+* Check performance against the latest release tag and commit to master
+* Build docs and API spec on release tag or commit to master

 ## Virtual table blacklist

-Performance impacting virtual tables are most likely the result of missing features/tooling in osquery. Because of their dependencies on core optimizations there's no hard including the table generation code in master as long as the table is blacklisted when a non-developer builds the tool suite.
+Performance impacting virtual tables are most likely the result of missing features/tooling in osquery. Because of their dependencies on core optimizations, there is no harm including the table generation code in master as long as the table is blacklisted when a non-developer builds the tool suite.

-If you are developing latent tables that would be blacklisted please make sure you are relying on a feature with a clear issue and traction. Then add your table name (as it appears in the `.table` spec) to [`specs/blacklist`](https://github.com/facebook/osquery/blob/master/specs/blacklist) and adopt:
+If you are developing latent tables that would be blacklisted, please make sure you are relying on a feature with a clear issue and traction. Then add your table name (as it appears in the `.table` spec) to [`specs/blacklist`](https://github.com/facebook/osquery/blob/master/specs/blacklist) and adopt:

 ```
 $ DISABLE_BLACKLIST=1 make
@@ -90,7 +90,7 @@ We include a very basic example python TLS/HTTPS server: [./tools/tests/test_htt

 The TLS clients built into osquery use the system-provided OpenSSL libraries. The clients use boost's ASIO header-libraries through the [cpp-netlib](http://cpp-netlib.org/) HTTPS library. OpenSSL is very outdated on OS X (deprecated since OS X 10.7), but still salvageable.

-On Linux and FreeBSD the TLS client prefers the TLS 1.2 protocol, but includes TLS 1.1 and TLS 1.0-- and the following cipher suites:
+On Linux and FreeBSD the TLS client prefers the TLS 1.2 protocol, but includes TLS 1.1/1.0 as well as the following cipher suites:

 ```
 ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:\
@@ -41,7 +41,7 @@ For example, when a file in */usr/bin/* and */usr/sbin/* is changed it will be s

 # yara_events table

-Using the configuration above you can see it in action. While osquery is running I executed `touch /Users/wxs/tmp/foo` in another terminal. Here is the relevant queries to show what happened:
+Using the configuration above you can see it in action. While osquery was running I executed `touch /Users/wxs/tmp/foo` in another terminal. Here is the relevant queries to show what happened:

 ```bash
 osquery> select * from file_events;
@@ -74,15 +74,15 @@ osquery> select * from yara_events;
 osquery>
 ```

-As you can see, even though no matches were found an row is still created and stored.
+As you can see, even though no matches were found a row is still created and stored.

 ## On-demand YARA scanning

-The [**yara**](https://osquery.io/docs/tables/#yara) table is used for on-demand scanning. With this table you can arbitrarily YARA scan any available file on the filesystem with any available signature files or signature group from the configuration. In order to scan the table must be given a constraint which says where to scan and what to scan with.
+The [**yara**](https://osquery.io/docs/tables/#yara) table is used for on-demand scanning. With this table you can arbitrarily YARA scan any available file on the filesystem with any available signature files or signature group from the configuration. In order to scan, the table must be given a constraint which says where to scan and what to scan with.

-In order to determine where to scan the table accepts either a *path* or a *pattern* constraint. The *path* constraint must be a full path to a single file. There is no expansion or recursion with this constraint. The *pattern* constraint follows the same wildcard rules mentioned before.
+In order to determine where to scan, the table accepts either a *path* or a *pattern* constraint. The *path* constraint must be a full path to a single file. There is no expansion or recursion with this constraint. The *pattern* constraint follows the same wildcard rules mentioned before.

-Once the "where" is out of the way you must specify the "what" part. This is done through either the *sigfile* or *sig_group* constraints. The *sigfile* constraint can be either an absolute path to a signature file on disk or a path relative to */var/osquery/*. The signature file will be compiled only for the execution of this one query and removed afterwards. The *sig_group* constraint must consist of a named signature grouping from your configuration file.
+Once the "where" is out of the way, you must specify the "what" part. This is done through either the *sigfile* or *sig_group* constraints. The *sigfile* constraint can be either an absolute path to a signature file on disk or a path relative to */var/osquery/*. The signature file will be compiled only for the execution of this one query and removed afterwards. The *sig_group* constraint must consist of a named signature grouping from your configuration file.

 Here are some examples of the **yara** table in action:
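The examples themselves are truncated in this diff. The "where" plus "what" constraint pairing described above can be sketched as query strings (the file path and signature group name below are hypothetical):

```python
def yara_scan_query(where, what):
    """Build an on-demand yara table query from one 'where' constraint
    (path or pattern) and one 'what' constraint (sigfile or sig_group)."""
    (where_key, where_val), (what_key, what_val) = where, what
    assert where_key in ("path", "pattern"), "need a path or pattern constraint"
    assert what_key in ("sigfile", "sig_group"), "need a sigfile or sig_group constraint"
    return f"SELECT * FROM yara WHERE {where_key} = '{where_val}' AND {what_key} = '{what_val}';"

q = yara_scan_query(("path", "/bin/ls"), ("sig_group", "sig_group_1"))
```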
@@ -1,6 +1,6 @@
 ## Dependencies

-We include a `make deps` command to make it easier for developers to get started with the osquery project. `make deps` uses homebrew for OS X and traditional package managers for various distributions of Linux.
+We include a `make deps` command to make it easier for developers to get started with the osquery project. `make deps` uses Homebrew for OS X and traditional package managers for various distributions of Linux.

 WARNING: This will install or build various dependencies on the build host that are not required to "use" osquery, only build osquery binaries and packages.
@@ -11,7 +11,7 @@ WARNING: This will install or build various dependencies on the build host that
 To build osquery on OS X, you need `pip` and `brew` installed. `make deps` will take care of installing the appropriate library dependencies, but it's recommended to take take a look at the Makefile, just in case
 something conflicts with your environment.

-Anything that does not have a homebrew package is built from source from _https://github.com/osquery/third-party_, which is a git submodule of this repository and is set up by `make deps`.
+Anything that does not have a Homebrew package is built from source from _https://github.com/osquery/third-party_, which is a git submodule of this repository and is set up by `make deps`.

 The complete installation/build steps are as follows:
@@ -47,7 +47,7 @@ $ vagrant up ubuntu14
 $ vagrant ssh ubuntu14
 ```

-By default vagrant will allocate 2 virtual CPUs to the virtual machine instance. You can override this by setting `OSQUERY_BUILD_CPUS` environment variable before spinning up an instance. To allocate the maximum number of CPUs `OSQUERY_BUILD_CPUS` can be set as:
+By default vagrant will allocate 2 virtual CPUs to the virtual machine instance. You can override this by setting `OSQUERY_BUILD_CPUS` environment variable before spinning up an instance. To allocate the maximum number of CPUs, `OSQUERY_BUILD_CPUS` can be set as:

 ```sh
 OSQUERY_BUILD_CPUS=`nproc` # for Linux
@@ -172,7 +172,7 @@ You must run `make deps` to make sure you are pulling in the most-recent depende

 `make deps` will take care of installing everything you need to compile osquery. However, to properly develop and contribute code, you'll need to install some additional programs. If you write C++ often, you likely already have these programs installed. We don't bundle these tools with osquery because many programmers are quite fond of their personal installations of LLVM utilities, debuggers, etc.

-- clang-format: we use clang-format to format all code in osquery. After staging your commit changes, run `make format`. (requires clang-format)
+- clang-format: we use clang-format to format all code in osquery. After staging your commit changes, run `make format` (requires clang-format).
 - valgrind: performance is a top priority for osquery, so all code should be thoroughly tested with valgrind or instruments. After building your code use `./tools/profile.py --leaks` to run all queries and test for memory leaks.

 ## Build Performance
@@ -43,7 +43,7 @@ REGISTER(FilesystemConfigPlugin, "config", "filesystem");
 }
 ```

-There are 5 parts of a config plugin.
+There are 5 parts of a config plugin:

 - Include the plugin macros as well as command line argument macros.
 - If your config requires customization expose it as arguments.
|
@ -2,7 +2,7 @@ SQL tables are used to represent abstract operating system concepts, such as run
|
||||
|
||||
A table can be used in conjunction with other tables via operations like sub-queries and joins. This allows for a rich data exploration experience. While osquery ships with a default set of tables, osquery provides an API that allows you to create new tables.
|
||||
|
||||
You can explore current tables: [https://osquery.io/tables](https://osquery.io/tables). Tables that are up for grabs in terms of development can be found on Github issues using the "virtual tables" + "[up for grabs tag](https://github.com/facebook/osquery/issues?q=is%3Aopen+is%3Aissue+label%3A%22virtual+tables%22)".
|
||||
You can explore current tables here: [https://osquery.io/tables](https://osquery.io/tables). Tables that are up for grabs in terms of development can be found on Github issues using the "virtual tables" + "[up for grabs tag](https://github.com/facebook/osquery/issues?q=is%3Aopen+is%3Aissue+label%3A%22virtual+tables%22)".
|
||||
|
||||
## New Table Walkthrough
|
||||
|
||||
@@ -92,7 +92,7 @@ QueryData genTime(QueryContext &context) {
 Key points to remember:

 - Your implementation function should be in the `osquery::tables` namespace.
-- Your implementation function should accept on `QueryContext&` parameter and return an instance of `QueryData`
+- Your implementation function should accept on `QueryContext&` parameter and return an instance of `QueryData`.

 ## Using where clauses
@@ -126,11 +126,11 @@ Examples:

 Data types like `QueryData`, `Row`, `DiffResults`, etc. are osquery's built-in data result types. They're all defined in [include/osquery/database/results.h](https://github.com/facebook/osquery/blob/master/include/osquery/database/results.h).

-`Row` is just a `typedef` for a `std::map<std::string, std::string>`. That's it. A row of data is just a mapping of strings that represent column names to strings that represent column values. Note that, currently, even if your SQL table type is an `int` and not a `std::string`, we need to cast the ints as strings to comply with the type definition of the `Row` object. They'll be casted back to `int`'s later. This is all handled transparently by osquery's supporting infrastructure as long as you use the macros like `TEXT`, `INTEGER`, `BIGINT`, etc when inserting columns into your row.
+`Row` is just a `typedef` for a `std::map<std::string, std::string>`. That's it. A row of data is just a mapping of strings that represent column names to strings that represent column values. Note that, currently, even if your SQL table type is an `int` and not a `std::string`, we need to cast the ints as strings to comply with the type definition of the `Row` object. They'll be casted back to `int`s later. This is all handled transparently by osquery's supporting infrastructure as long as you use the macros like `TEXT`, `INTEGER`, `BIGINT`, etc. when inserting columns into your row.

 `QueryData` is just a `typedef` for a `std::vector<Row>`. Query data is just a list of rows. Simple enough.

-To populate the data that will be returned to the user at runtime, your implementation function must generate the data that you'd like to display and populate a `QueryData` map with the appropriate `Rows`. Then, just return the `QueryData`.
+To populate the data that will be returned to the user at runtime, your implementation function must generate the data that you'd like to display and populate a `QueryData` map with the appropriate `Row`s. Then, just return the `QueryData`.

 In our case, we used system APIs to create a struct of type `tm` which has fields such as `tm_hour`, `tm_min` and `tm_sec` which represent the current time. We can then create our three entries in our `Row` variable: hour, minutes and seconds. Then we push that single row onto the `QueryData` variable and return it. Note that if we wanted our table to have many rows (a more common use-case), we would just push back more `Row` maps onto `results`.
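The `Row`/`QueryData` shapes described above can be mirrored in a small sketch (Python is used here only for illustration; in osquery itself these are the C++ typedefs named in the text):

```python
# Row: column name -> column value, every value stored as a string
# (mirroring the std::map<std::string, std::string> typedef).
def make_time_row(hour, minutes, seconds):
    return {
        "hour": str(hour),        # ints are cast to strings before insertion
        "minutes": str(minutes),
        "seconds": str(seconds),
    }

# QueryData: a list of Rows; push back more rows for multi-row tables.
results = [make_time_row(10, 30, 5)]
```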
@@ -71,7 +71,7 @@ service Extension {
 ExtensionStatus ping(),
 /// Call an extension (or core) registry plugin.
 ExtensionResponse call(
-/// The registry name (e.g., config, logger, table, etc).
+/// The registry name (e.g., config, logger, table, etc.).
 1:string registry,
 /// The registry item name (plugin name).
 2:string item,
@@ -18,7 +18,7 @@ There's an array of yet-to-be-implemented uses of the inotify publisher, but a s

 ## Event Subscribers

-Let's continue to use the inotify event publisher as an example. And let's implement a table that reports new files created in "/etc/`" The first thing we need is a [table spec](creating-tables.md):
+Let's continue to use the inotify event publisher as an example. And let's implement a table that reports new files created in "/etc/". The first thing we need is a [table spec](creating-tables.md):

 ```python
 table_name("new_etc_files")
@@ -40,7 +40,7 @@ The above code is very simple. If you're unfamiliar with the syntax/concepts of

 ## Building a test

-Whatever component of osquery you're working on has it's own "CMakeLists.txt" file. For example, the _tables_ component (folder) has it's own "CMakeLists.txt"`" file at [osquery/tables/CMakeLists.txt](https://github.com/facebook/osquery/blob/master/osquery/tables/CMakeLists.txt). The file that we're going to be modifying today is [osquery/CMakeLists.txt](https://github.com/facebook/osquery/tree/master/osquery/CMakeLists.txt). Edit that file to include the following contents:
+Each component of osquery you're working on has its own "CMakeLists.txt" file. For example, the _tables_ component (folder) has its own "CMakeLists.txt" file at [osquery/tables/CMakeLists.txt](https://github.com/facebook/osquery/blob/master/osquery/tables/CMakeLists.txt). The file that we're going to be modifying today is [osquery/CMakeLists.txt](https://github.com/facebook/osquery/tree/master/osquery/CMakeLists.txt). Edit that file to include the following content:

 ```CMake
 ADD_OSQUERY_TEST(example_test example_test.cpp)
@@ -50,7 +50,7 @@ After you specify the test sources, add whatever libraries you have to link agai

 ## Running a test

-From the root of the repository run `make`. If you're code compiles properly, run `make test`. Ensure that your test has passed.
+From the root of the repository run `make`. If your code compiles properly, run `make test`. Ensure that your test has passed.

 **Extending the test**
@@ -58,11 +58,11 @@ the "worker" process will be restarted.

 `--watchdog_level=1`

-### Backing storage control flags
-
 Performance limit level (0=loose, 1=normal, 2=restrictive, 3=debug). The default watchdog process uses a "level" to configure performance limits.
 The higher the level the more strict the limits become.

+### Backing storage control flags
+
 `--database_in_memory=false`

 Keep osquery backing-store in memory.
@ -71,14 +71,14 @@ For the default backing-store, RocksDB, this option is not supported.

`--database_path=/var/osquery/osquery.db`

### Extensions control flags
If using a disk-based backing store, specify a path.
osquery will keep state using a "backing store" using RocksDB by default.
This state holds event information such that it may be queried later according
to a schedule. It holds the results of the most recent query for each query within
the schedule. This last-queried result allows query-differential logging.
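Taken together, a disk-backed store is configured with the two flags above; a minimal flags-file sketch (the path is the example default from this page, not a requirement):

```sh
# Illustrative flags-file fragment for a disk-backed RocksDB store
--database_in_memory=false
--database_path=/var/osquery/osquery.db
```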
### Extensions control flags

`--disable_extensions=false`

Disable extension API. See the [SDK development](../development/osquery-sdk.md) page for more information on osquery extensions, and the [deployment](../deployment/extensions.md) page for how to use extensions.
@ -248,6 +248,6 @@ Maximum returned row value size.

## Shell-only flags

Most of the shell flags are self-explainitory and are adapted from the SQLite shell. Refer the shell's ".help" command for details and explainations.
Most of the shell flags are self-explanatory and are adapted from the SQLite shell. Refer to the shell's ".help" command for details and explanations.

We have added a `--json` switch to output rows as a JSON list.
We have added the `--json` switch to output rows as a JSON list.
@ -2,7 +2,7 @@ We support building custom deployment packages (pkg/deb/rpm) for less common use

- Slipstreaming additional tools into osquery's existing packages
- Proprietary modifications to "core" features that aren't simple additional plugins
- Custom dependency modifications (patched versions of glog, thrift, etc)
- Custom dependency modifications (patched versions of glog, thrift, etc.)

The first step to creating custom packages is having [built](../development/building.md) and tested osquery. This means reading the development guides and in most cases having a dedicated "build host".
@ -51,7 +51,7 @@ $ ./tools/deployment/make_osx_package.sh -c ~/Desktop/osquery.conf

The distributable package can be found at `./build/darwin/osquery-VERSION.pkg`.

You can now use your existing package distribution system ([JAMF](http://www.jamfsoftware.com/), [Chef](https://www.getchef.com/chef/), etc) to push this package to your infrastructure.
You can now use your existing package distribution system ([JAMF](http://www.jamfsoftware.com/), [Chef](https://www.getchef.com/chef/), etc.) to push this package to your infrastructure.

### Custom LaunchDaemon
@ -66,10 +66,10 @@ $ ./tools/deployment/make_osx_package.sh -c /internal/osquery/osquery.conf \

### Removing the LaunchDaemon

Perhaps you just want to deploy the osquery binaries via a pkg and you'd like to manage the scheduling of osqueryd via some other mechanism. To do this, when you run `make_osx_package.sh`, include a `-n`/`--no-launchd` flag.

This will make the package just lay the binaries down. The LaunchDaemon won't be included and no LaunchDaemon will be unloaded or loaded by the post-install script of the package. For example:
Perhaps you just want to deploy the osquery binaries via a pkg and you'd like to manage the scheduling of osqueryd via some other mechanism. To do this, when you run `make_osx_package.sh`, include a `-n`/`--no-launchd` flag. For example:

```sh
$ ./tools/deployment/make_osx_package.sh -n
```

This will make the package just lay the binaries down. The LaunchDaemon won't be included and no LaunchDaemon will be unloaded or loaded by the post-install script of the package.
@ -1,8 +1,8 @@
## Downloads

Distro-specific packages are built for each supported operating system.
These packages contain the osquery daemon, shell, and example configuration and startup scripts.
This means a `/etc/init.d/osqueryd` script that does not automatically start until a configuration file is created*.
These packages contain the osquery daemon, shell, example configuration and startup scripts.
Note that the `/etc/init.d/osqueryd` script does not automatically start the daemon until a configuration file is created*.

Supported distributions are:
@ -45,7 +45,7 @@ $ sudo yum install osquery

## dpkg-based Distros

We publish that same two packages, osquery and osquery-unstable, in an apt repository for Ubuntu 12.04 (precise) and 14.04 (trusty):
We publish the same two packages, osquery and osquery-unstable, in an apt repository for Ubuntu 12.04 (precise) and 14.04 (trusty):

**Ubuntu Trusty 14.04 LTS**
@ -1,11 +1,11 @@ The high-performance and low-footprint **distributed host monitoring daemon**, osqueryd, allows you to schedule queries to be executed across your entire infrastructure. The daemon takes care of aggregating the query results over time and generates logs which indicate state changes in your infrastructure. You can use this to maintain insight into the security, performance, configuration, and state of your entire infrastructure. osqueryd's logging can integrate into your internal log aggregation pipeline, regardless of your technology stack, via a robust plugin architecture.

The **interactive query console**, osqueryi, gives you a SQL interface to try out new queries and explore your operating system. With the power of a complete SQL language and dozens of useful tables built-in, osqueryi is an invaluable tool when performing incident response, diagnosing an systems operations problem, troubleshooting a performance issue, etc.
The **interactive query console**, osqueryi, gives you a SQL interface to try out new queries and explore your operating system. With the power of a complete SQL language and dozens of useful tables built-in, osqueryi is an invaluable tool when performing incident response, diagnosing a systems operations problem, troubleshooting a performance issue, etc.

osquery is **cross platform**. Even though osquery takes advantage of very low-level operating system APIs, you can build and use osquery on Mac OS X, Ubuntu, Cent OS and other popular enterprise Linux distributions. This has the distinct advantage of allowing you to be able to use one platform for monitoring complex operating system state across your entire infrastructure. Monitor your corporate Mac OS X clients the same way you monitor your production Linux servers.

To make deploying osquery in your infrastructure as easy as possible, osquery comes with **native packages for all supported operating systems**. There is great tooling and documentation around creating packages so packaging and deploying your custom osquery tools can be just as easy too.
To make deploying osquery in your infrastructure as easy as possible, osquery comes with **native packages for all supported operating systems**. There is extensive tooling and documentation around creating packages so packaging and deploying your custom osquery tools can be just as easy too.

To assist with the rollout process, the osquery user guide has **detailed documentation on internal deployment**. osquery was built so that every environment specific aspect of the toolchain can be hot-swapped at run-time with custom plugins. Use these interfaces to deeply integrate osquery into your infrastructure if one of the **several existing plugins** don not suit your needs.
To assist with the rollout process, the osquery user guide has **detailed documentation on internal deployment**. osquery was built so that every environment specific aspect of the toolchain can be hot-swapped at run-time with custom plugins. Use these interfaces to deeply integrate osquery into your infrastructure if one of the **several existing plugins** do not suit your needs.

Additionally, osquery's codebase is made up of **high-performance, modular components with clearly documented public APIs**. These components can be easily strung together to create new, interesting applications and tools. Language bindings exist for many languages using a Thrift interface, so you can continue using comfortable and familiar technologies.
@ -33,7 +33,7 @@ Each query represents a monitored view of your operating system. The first time
]
```
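For reference, the kind of schedule entry that drives a query like this might look like the fragment below; this is a sketch of a scheduled-query configuration (the query name, table, and interval are illustrative assumptions), not the exact config used here:

```json
{
  "schedule": {
    "usb_devices": {
      "query": "SELECT * FROM usb_devices;",
      "interval": 60
    }
  }
}
```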
If there are no USB devices added or removed to the laptop this query would never log a result again. The query would still run every 60 seconds but the results would match the previous run and thus no state change would be detected. If a USB memory stick was inserted and left in the laptop for 60 seconds the daemon would log:
If there are no USB devices added or removed to the laptop, this query would never log a result again. The query would still run every 60 seconds but the results would match the previous run and thus no state change would be detected. If a USB memory stick was inserted and left in the laptop for 60 seconds the daemon would log:

```
[
@ -27,7 +27,7 @@ osquery> SELECT DISTINCT
osquery>
```

The shell accepts a single positional argument and several output modes. If you wanted to script the output and act on JSON or CSV values try:
The shell accepts a single positional argument and one of the several output modes. If you want to output JSON or CSV values, try:

```
$ osqueryi --json "select * from routes where destination = '::1'"
@ -103,5 +103,5 @@ osquery> .exit
$
```

The shell does not keep much state or connect to a osqueryd daemon.
If you would like to run queries and log changes to the output or log operating system events consider deploying a query **schedule** using [osqueryd](using-osqueryd.md).
The shell does not keep much state or connect to the osqueryd daemon.
If you would like to run queries and log changes to the output or log operating system events, consider deploying a query **schedule** using [osqueryd](using-osqueryd.md).