Website: Migrate 6 articles (#5801)

* Website: add 6 product articles & images, update styles

* Update articles/fleet-quick-tips-querying-procdump-eula-has-been-accepted.md

Co-authored-by: Mike Thomas <78363703+mike-j-thomas@users.noreply.github.com>
Eric 2022-05-18 19:07:16 -05:00 committed by GitHub
parent 11963568a0
commit 4dfe497ac4
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
14 changed files with 347 additions and 1 deletion


@@ -0,0 +1,22 @@
# eBPF & the future of osquery on Linux
![eBPF & the future of osquery on Linux](../website/assets/images/articles/ebpf-the-future-of-osquery-on-linux-cover-700x394@2x.png)
What is the state of event instrumentation with osquery on Linux today? How is the Audit framework meeting Linux visibility needs, and what are the shortcomings of the approach? What is eBPF and how will it open new opportunities for osquery instrumentation on Linux?
This talk discusses the Audit approach to Linux events with osquery, including configuration and the capabilities exposed. eBPF is introduced along with the new `bpf_process_events` and `bpf_socket_events` tables. We conclude with thoughts about the future of eBPF and osquery on Linux.
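To give a flavor of what these tables expose, here is a minimal example query. Treat it as a sketch: the column names assume the current `bpf_process_events` schema, and the table is only populated when osqueryd runs with the BPF event publisher enabled.
```
-- Recent process executions captured by the eBPF-based publisher
SELECT pid, parent, uid, path, cmdline, time
FROM bpf_process_events
WHERE path LIKE '%/bash';
```
A similar query against `bpf_socket_events` surfaces connection-level details such as the remote address and port, without relying on the Audit framework.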
### Presentation video
<iframe width="560" height="315" src="https://www.youtube.com/embed/p3rIRJM2vwo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### Slide Deck
<iframe class="speakerdeck-iframe" frameborder="0" src="https://speakerdeck.com/player/a0444dd4b2b24bad8db7908590506699" title="eBPF &amp; the future of osquery on Linux" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true" style="border: 0px; background: padding-box padding-box rgba(0, 0, 0, 0.1); padding: 0px; border-radius: 6px; box-shadow: rgba(0, 0, 0, 0.2) 0px 5px 40px; width: 560px; height: 314px;" data-ratio="1.78343949044586"></iframe>
<meta name="category" value="product">
<meta name="authorGitHubUsername" value="zwass">
<meta name="authorFullName" value="Zach Wasserman">
<meta name="publishedOn" value="2021-01-25">
<meta name="articleTitle" value="eBPF & the future of osquery on Linux">
<meta name="articleImageUrl" value="../website/assets/images/articles/ebpf-the-future-of-osquery-on-linux-cover-700x394@2x.png">


@@ -0,0 +1,29 @@
# Fleet quick tips — identify systems where the ProcDump EULA has been accepted.
By now, you've no doubt already heard of Microsoft's big email hack.
While attackers initially flew largely under the radar via an unknown vulnerability in the email software, the folks at [Volexity](https://www.volexity.com/blog/2021/03/02/active-exploitation-of-microsoft-exchange-zero-day-vulnerabilities/) observed a handful of post-exploitation activities and tools that operators used to gain a foothold — one such tool being ProcDump, which attackers were observed using to dump LSASS process memory.
![Identify systems where the ProcDump EULA has been accepted with Fleet](../website/assets/images/articles/fleet-quick-tips-querying-procdump-eula-has-been-accepted-cover-700x440@2x.png)
As a possible detection method using osquery and Fleet, check out this query from [Recon InfoSec](https://rhq.reconinfosec.com/tactics/credential_access/#procdump) that looks for systems that accepted the ProcDump EULA. This query searches for a registry artifact that indicates ProcDump may have been used in a post-exploitation technique described by [Microsoft's security blog](https://www.microsoft.com/security/blog/2021/03/02/hafnium-targeting-exchange-servers/).
```
SELECT datetime(mtime, 'unixepoch', 'localtime') AS EULA_accepted, path
FROM registry
WHERE path LIKE 'HKEY_USERS\%\Software\Sysinternals\ProcDump\EulaAccepted';
```
\*mtime = Time that EULA was accepted
For more information about the recent security breach, take a look at [Microsoft's original blog post](https://www.microsoft.com/security/blog/2021/03/02/hafnium-targeting-exchange-servers/).
### Could this post be more helpful?
Let us know if you can think of any other example scenarios you'd like us to cover.
<meta name="category" value="product">
<meta name="authorGitHubUsername" value="mike-j-thomas">
<meta name="authorFullName" value="Mike Thomas">
<meta name="publishedOn" value="2021-05-11">
<meta name="articleTitle" value="Fleet quick tips — identify systems where the ProcDump EULA has been accepted">
<meta name="articleImageUrl" value="../website/assets/images/articles/fleet-quick-tips-querying-procdump-eula-has-been-accepted-cover-700x440@2x.png">


@@ -0,0 +1,139 @@
# Generate process trees with osquery
## Rich process trees on macOS, Linux, and Windows
![Generate process trees with osquery](../website/assets/images/articles/generate-process-trees-with-osquery-cover-700x393@2x.jpeg)
Using advanced SQL syntax, it is possible to generate process trees in osquery similar to those generated by the `pstree` utility. With osquery, the generated trees can be extended to include additional information that can aid analysis.
Below is the basic structure of the query:
```
WITH target_procs AS (
SELECT * FROM processes WHERE name = 'osqueryd'
)
SELECT *
FROM (
WITH recursive parent_proc AS (
SELECT * FROM target_procs
UNION ALL
SELECT p.* FROM processes p JOIN parent_proc pp ON p.pid = pp.parent
WHERE pp.pid != pp.parent
ORDER BY pid
)
SELECT pid, parent, uid, name, path
FROM parent_proc
);
```
Running this query in `osqueryi` will generate results like:
```
+-------+--------+-----+-------------+---------------------------------------------------------------+
| pid | parent | uid | name | path |
+-------+--------+-----+-------------+---------------------------------------------------------------+
| 69230 | 1 | 0 | orbit | /private/var/lib/orbit/bin/orbit/macos/stable/orbit |
| 1 | 0 | 0 | launchd | /sbin/launchd |
| 0 | 0 | 0 | kernel_task | |
| 38179 | 4016 | 501 | osqueryd | /usr/local/bin/osqueryd |
| 4016 | 4015 | 501 | zsh | /usr/local/Cellar/zsh/5.7_1/bin/zsh |
| 4015 | 4014 | 501 | login | /usr/bin/login |
| 4014 | 4008 | 501 | iTerm2 | /Applications/iTerm.app/Contents/MacOS/iTerm2 |
| 4008 | 1 | 501 | iTerm2 | /Applications/iTerm.app/Contents/MacOS/iTerm2 |
| 1 | 0 | 0 | launchd | /sbin/launchd |
| 0 | 0 | 0 | kernel_task | |
+-------+--------+-----+-------------+---------------------------------------------------------------+
```
## How it Works
This query makes use of [SQLite Common Table Expressions (CTEs)](https://sqlite.org/lang_with.html) to recursively generate the requested data. Below we will examine the components of the query:
```
WITH target_procs AS (
SELECT * FROM processes WHERE name = 'osqueryd'
)
SELECT *
FROM (
WITH recursive parent_proc AS (
SELECT * FROM target_procs
UNION ALL
SELECT p.* FROM processes p JOIN parent_proc pp ON p.pid = pp.parent
WHERE pp.pid != pp.parent
ORDER BY pid
)
SELECT pid, parent, uid, name, path
FROM parent_proc
);
```
- Line 2 — Set the target process(es) that the query will generate trees for.
- Line 10 — Stop the recursion when the process parent is the process itself. This prevents infinite recursion on macOS and Linux, where process `0` is its own parent.
- Line 11 — Order the evaluation of recursive rows by `pid`. This generally results in the process trees being output in the correct order (though we cannot guarantee that the ordering will be correct, we will get results for all processes in the trees). On Windows it can be better to `ORDER BY start_time`, as PIDs are not assigned in increasing order (see the sketch after this list).
- Line 13 — Choose the columns to retrieve from the results. In this example we select a limited set of columns to ease interpretation of the results.
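As a concrete example of the Windows variant mentioned above, here is the same query with the ordering column swapped. This is a sketch only; it assumes the standard `processes` schema and that the process name includes the `.exe` extension on Windows.
```
WITH target_procs AS (
SELECT * FROM processes WHERE name = 'osqueryd.exe'
)
SELECT *
FROM (
WITH recursive parent_proc AS (
SELECT * FROM target_procs
UNION ALL
SELECT p.* FROM processes p JOIN parent_proc pp ON p.pid = pp.parent
WHERE pp.pid != pp.parent
ORDER BY start_time
)
SELECT pid, parent, uid, name, path
FROM parent_proc
);
```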
## Extend the Concept
There are a number of ways that this query can be extended to address different needs.
### Retrieve More Details
Change the `SELECT` statement in line 13 to retrieve a different set of results. A simple case could be changing to `SELECT *` to get all the columns from the `processes` table.
A more complex scenario could be to generate the hashes of the running binaries:
```
WITH target_procs AS (
SELECT * FROM processes WHERE name = 'osqueryd'
)
SELECT *
FROM (
WITH recursive parent_proc AS (
SELECT * FROM target_procs
UNION ALL
SELECT p.* FROM processes p JOIN parent_proc pp ON p.pid = pp.parent
WHERE pp.pid != pp.parent
ORDER BY pid
)
SELECT pid, parent, uid, name, path, md5
FROM parent_proc LEFT JOIN hash USING (path)
);
```
### Target Different Processes
Change the query on line 2 to generate trees for a different set of processes. One option would be `SELECT * FROM processes WHERE pid = 1234` if we know the pid of the process we are interested in. Targeting processes by attributes that remain consistent on different machines (like `name`, unlike `pid`) is a useful technique to ensure that live queries across hosts are successful.
This can also be extended using the full power of osquery. For example, we might want the process tree of every process bound to a port:
```
WITH target_procs AS (
SELECT DISTINCT processes.*
FROM processes JOIN listening_ports USING (pid)
WHERE port != 0
)
SELECT *
FROM (
WITH recursive parent_proc AS (
SELECT * FROM target_procs
UNION ALL
SELECT p.* FROM processes p JOIN parent_proc pp ON p.pid = pp.parent
WHERE pp.pid != pp.parent
ORDER BY pid
)
SELECT pid, parent, uid, name, path
FROM parent_proc
);
```
## Wrapping Up
With this query as a building block, osquery provides the capability to generate rich process trees. Consider using this with an osquery TLS server such as [Fleet](https://fleetdm.com/) to examine this information on multiple machines at once.
<meta name="category" value="product">
<meta name="authorGitHubUsername" value="zwass">
<meta name="authorFullName" value="Zach Wasserman">
<meta name="publishedOn" value="2020-03-17">
<meta name="articleTitle" value="Generate process trees with osquery">
<meta name="articleImageUrl" value="../website/assets/images/articles/generate-process-trees-with-osquery-cover-700x393@2x.jpeg">


@@ -0,0 +1,45 @@
# Import and export queries and packs in Fleet
![Import and export queries and packs in Fleet](../website/assets/images/articles/import-and-export-queries-and-packs-in-fleet-cover-700x343@2x.png)
When managing multiple Fleet environments, you may want to move queries and/or packs from one environment to the other. Or, when inspired by a set of packs shared by a member of the osquery community, you might want to import these packs into your Fleet instance. To do this, you need to have access to a Unix shell and a basic knowledge of the [fleetctl CLI tool](https://www.npmjs.com/package/fleetctl).
Below are two example scenarios. For leaner instructions on how to move queries and packs from one Fleet environment to another, [check out Fleet's documentation](https://github.com/fleetdm/fleet/blob/00ea74ed800568f063c6d5553f113dbd1e55a09c/docs/1-Using-Fleet/configuration-files/README.md#moving-queries-and-packs-from-one-fleet-environment-to-another).
### Example scenario 1: Moving packs, and their queries, from one Fleet environment to another
Let's say you manage your organization's staging and production servers. In order to keep your production servers speedy, you've set up two separate Fleet instances for the two environments: Staging and Production.
With this separation, you can diligently test your queries in Staging without negatively impacting the performance of servers in Production.
On Friday, after test results come in, you want to move all performant packs, and their queries, from Staging to Production. You know you can open up the Fleet UI for Production and create the packs manually, but each pack has at least 4 new queries. These packs already exist in Staging, so you don't need to spend time recreating each one in Production.
Here's how you can quickly export and import the packs in 3 quick steps with fleetctl:
1. Navigate to `~/.fleet/config` to find the context names for your “exporter” and “importer” environment. For the purpose of these instructions, we use the context names `staging` and `production` respectively.
2. Run the command `fleetctl get queries --yaml --context staging > queries.yml && fleetctl apply -f queries.yml --context production`. This will import all the queries from your Staging Fleet instance into your Production Fleet instance. *Note, this will also write a list of all queries in YAML syntax to a file named `queries.yml`.*
3. Run the command `fleetctl get packs --yaml --context staging > packs.yml && fleetctl apply -f packs.yml --context production`. This will import all the packs from your Staging Fleet instance into your Production Fleet instance. *Note, this will also write a list of all packs in YAML syntax to a file named `packs.yml`.*
*Note, when importing packs, you must always first import all the queries (step 2) that these packs contain.*
### Example scenario 2: Importing community packs into Fleet
You just found [a collection of awesome queries and packs for Fleet](https://github.com/palantir/osquery-configuration/tree/master/Fleet) and you want to import them into your *Staging Fleet* environment.
Here's how you can do this in two quick steps:
1. Create a new file, `awesome-packs.yml` and paste in the desired packs and queries in the [correct Fleet configuration format](https://github.com/fleetdm/fleet/tree/main/docs/1-Using-Fleet/configuration-files#using-yaml-files-in-fleet).
2. Run the command `fleetctl apply -f awesome-packs.yml`.
### Could this post be more helpful?
Let us know if you can think of any other example scenarios you'd like us to cover.
<meta name="category" value="product">
<meta name="authorGitHubUsername" value="noahtalerman">
<meta name="authorFullName" value="Noah Talerman">
<meta name="publishedOn" value="2021-02-16">
<meta name="articleTitle" value="Import and export queries and packs in Fleet">
<meta name="articleImageUrl" value="../website/assets/images/articles/import-and-export-queries-and-packs-in-fleet-cover-700x343@2x.png">


@@ -0,0 +1,40 @@
# Locate device assets in the event of an emergency.
## A simple query for IP-Geolocation
![Locate device assets in the event of an emergency](../website/assets/images/articles/locate-assets-with-osquery-cover-700x393@2x.jpeg)
In the event of an emergency or public safety concern, osquery can easily be used to identify employees in the direct vicinity, so that teams can push warnings or safety precautions to their staff.
This simple strategy for obtaining the location of an osquery device utilizes the [ipapi.co](https://ipapi.co/) API to retrieve the IP geolocation of the device. Note that the device must be able to connect to the internet over HTTP, and the calculated location may be skewed by VPN, proxies, etc.
**Query:**
```
SELECT JSON_EXTRACT(result, '$.ip') AS ip,
JSON_EXTRACT(result, '$.city') AS city,
JSON_EXTRACT(result, '$.region') AS region,
JSON_EXTRACT(result, '$.country') AS country
FROM curl
WHERE url = 'http://ipapi.co/json';
```
**Sample result:**
```
+--------------+------------+------------+---------+
| ip | city | region | country |
+--------------+------------+------------+---------+
| 71.92.162.65 | Sacramento | California | US |
+--------------+------------+------------+---------+
```
### Other techniques
A common technique for geolocation of macOS devices with osquery is to use the `wifi_survey` table in combination with the [Google Geolocation API](https://developers.google.com/maps/documentation/geolocation/intro#wifi_access_point_object). This strategy has become more difficult to use due to security controls introduced in macOS 10.15, and poses privacy concerns due to the precision of the location data returned by the API.
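For reference, the access point data that approach relies on can be gathered with a query like the one below. This is a sketch only: `wifi_survey` is macOS-specific, the column names assume the current osquery schema, and `en0` is assumed to be the Wi-Fi interface.
```
-- Nearby access points that could be submitted to a geolocation API
SELECT ssid, bssid, rssi, channel
FROM wifi_survey
WHERE interface = 'en0';
```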
<meta name="category" value="product">
<meta name="authorGitHubUsername" value="zwass">
<meta name="authorFullName" value="Zach Wasserman">
<meta name="publishedOn" value="2021-05-11">
<meta name="articleTitle" value="Locate device assets in the event of an emergency.">
<meta name="articleImageUrl" value="../website/assets/images/articles/locate-assets-with-osquery-cover-700x393@2x.jpeg">


@@ -0,0 +1,68 @@
# Osquery: Consider joining against the users table
## Proper use of JOIN to return osquery data for users
![Osquery: Consider joining against the users table](../website/assets/images/articles/osquery-consider-joining-against-the-users-table-cover-700x437@2x.jpeg)
Many an osquery user has encountered a situation like the following:
```
$ osqueryi
Using a virtual database. Need help, type '.help'
osquery> SELECT uid, name FROM chrome_extensions LIMIT 3;
+-----+--------------------------------------------+
| uid | name |
+-----+--------------------------------------------+
| 501 | Slides |
| 501 | Docs |
| 501 | 1Password extension (desktop app required) |
+-----+--------------------------------------------+
osquery>
$ sudo osqueryi
Using a virtual database. Need help, type '.help'
osquery> SELECT uid, name FROM chrome_extensions LIMIT 3;
W0519 09:35:27.624747 415233472 virtual_table.cpp:959] The chrome_extensions table returns data based on the current user by default, consider JOINing against the users table
W0519 09:35:27.625207 415233472 virtual_table.cpp:974] Please see the table documentation: https://osquery.io/schema/#chrome_extensions
```
Our query runs as expected when `osqueryi` is run as a normal user, but returns a warning message and no results when run as root via `sudo osqueryi`.
This same issue manifests on many tables that include a `uid` column:
- `atom_packages`
- `authorized_keys`
- `chrome_extension_content_scripts`
- `chrome_extensions`
- `crashes`
- `docker_container_processes`
- `firefox_addons`
- `known_hosts`
- `opera_extensions`
- `safari_extensions`
- `shell_history`
- `user_ssh_keys`
### What's going on here?
As stated in the warning message, these tables return “data based on the current user by default”. When run as a normal user, the implementations know to look in paths relative to that user's home directory. A query running as root does not know which directories to check.
### The solution
Show osquery which users to retrieve the data for. Typically this is achieved by a `JOIN` against the `users` table to retrieve data for every user on the system:
```
SELECT uid, name
FROM users CROSS JOIN chrome_extensions USING (uid)
```
Writing the query with this `JOIN` ensures that osquery first generates the list of users, and then provides the user `uid`s to the `chrome_extensions` table when generating that data.
Note: It is important to use `CROSS JOIN` as this tells the query optimizer not to reorder the evaluation of the tables. If we use a regular `JOIN` it is possible that reordering could result in the original error being encountered (because the `chrome_extensions` table generates with no `uid` in its context).
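The same pattern applies to the other tables listed above. For example, here is a sketch for `shell_history`, assuming the standard `users` and `shell_history` schemas:
```
-- Shell history entries for every user on the system
SELECT username, command
FROM users CROSS JOIN shell_history USING (uid);
```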
<meta name="category" value="product">
<meta name="authorGitHubUsername" value="zwass">
<meta name="authorFullName" value="Zach Wasserman">
<meta name="publishedOn" value="2021-05-06">
<meta name="articleTitle" value="Osquery: Consider joining against the users table">
<meta name="articleImageUrl" value="../website/assets/images/articles/osquery-consider-joining-against-the-users-table-cover-700x437@2x.jpeg">

(6 binary image files not shown; sizes: 285 KiB, 46 KiB, 136 KiB, 97 KiB, 186 KiB, 62 KiB.)


@@ -123,6 +123,9 @@
font-size: 16px;
}
}
iframe {
align-self: center;
}
}
[purpose='bottom-cta'] {
padding-bottom: 80px;


@@ -9,7 +9,7 @@
<img style="height: 28px; width: 28px; border-radius: 100%;" alt="The author's GitHub profile picture" :src="'https://github.com/'+thisPage.meta.authorGitHubUsername+'.png?size=200'">
<p class="pl-2 font-weight-bold">{{thisPage.meta.authorFullName}}</p>
</div>
<div purpose="article-content">
<div purpose="article-content" class="d-flex flex-column">
<%- partial(path.relative(path.dirname(__filename), path.resolve( sails.config.appPath, path.join(sails.config.builtStaticContent.compiledPagePartialsAppPath, thisPage.htmlId)))) %>
</div>
<hr>