🇷🇺 Русская версия | 🇵🇱 Polska wersja
Atomic Threat Coverage
Automatically generated knowledge base of analytics designed to combat threats based on MITRE's ATT&CK.
Atomic Threat Coverage is a tool which allows you to automatically generate a knowledge base of analytics designed to combat threats (based on the MITRE ATT&CK adversary model) from Detection, Response, Mitigation and Simulation perspectives:
- Detection Rules based on Sigma — Generic Signature Format for SIEM Systems
- Data Needed to be collected to produce detection of a specific Threat
- Logging Policies that need to be configured on a data source to be able to collect Data Needed
- Enrichments for specific Data Needed which are required for some Detection Rules
- Triggers based on Atomic Red Team — detection tests based on MITRE's ATT&CK
- Response Actions which are executed during Incident Response
- Response Playbooks for reacting to specific threats, constructed from atomic Response Actions
- Hardening Policies that need to be implemented to mitigate a specific Threat
- Mitigation Systems that need to be deployed and configured to mitigate a specific Threat
Atomic Threat Coverage is a highly automatable framework for accumulating, developing, explaining and sharing actionable analytics.
Description
Motivation
There are plenty of decent projects which provide analytics (or functionality) with a specific focus (Sigma, Atomic Red Team, MITRE CAR). All of them share one weakness: they exist in the vacuum of their own area. In reality everything is tightly connected: data for alerts doesn't come from nowhere, and generated alerts don't go nowhere. Each function, i.e. data collection, security systems administration, threat detection, incident response etc, is part of one big, comprehensive process, implemented by multiple departments, which demands their close collaboration.
Sometimes problems of one function can be solved by the methods of another function in a cheaper, simpler and more efficient way. Most tasks can't be solved by one function alone. Each function builds on the abilities and quality of the others. There is no efficient way to detect and respond to threats without proper data collection and enrichment. There is no efficient way to respond to threats without an understanding of which technologies/systems/measures could be used to block a specific threat. There is no reason to conduct a penetration test or Red Team exercise without understanding the ability of processes, systems and personnel to combat cyber threats. All of this requires tight collaboration and mutual understanding between multiple departments.
In practice there are difficulties in collaboration due to:
- Absence of common threat model/classification, common terminology and language to describe threats
- Absence of a common understanding of goals
- Absence of simple and straightforward way to explain specific requirements
- Difference in competence level (from both depth and areas perspectives)
That's why we decided to create Atomic Threat Coverage, a project which connects different functions through the same Threat Centric methodology (Lockheed Martin Intelligence Driven Defense® aka MITRE Threat-based Security) and threat model (MITRE ATT&CK), and provides security teams with an efficient tool for collaboration on the one main challenge: combating threats.
Why Atomic Threat Coverage
Working with existing [1][2][3][4] analytics/detections repositories looks like an endless copy/paste job: manual adaptation of the information into an internal analytics knowledge base format, detections data model, mappings to internally valuable metrics and entities, etc.
We decided to make it different.
Atomic Threat Coverage is a framework which allows you to create and maintain your own analytics repository, import analytics from other projects (like Sigma and Atomic Red Team, as well as private forks of these projects with your own analytics) and export it into human-readable wiki-style pages on two (for now) platforms:
- Atlassian Confluence pages (here is the demo of automatically generated knowledge base)
- This repo itself — automatically generated markdown formatted wiki-style pages
In other words, you don't have to work on the data representation layer manually. You work on meaningful atomic pieces of information (like Sigma rules), and Atomic Threat Coverage will automatically create an analytics database with all entities mapped to all meaningful, actionable metrics, ready to use, ready to share and to show to leadership, customers and colleagues.
How it works
Everything starts from a Sigma rule and ends up with human-readable wiki-style pages. Atomic Threat Coverage parses the rule and:
- Maps Detection Rule to ATT&CK Tactic and Technique using `tags` from the Sigma rule
- Maps Detection Rule to Data Needed using the `logsource` and `detection` sections of the Sigma rule
- Maps Detection Rule to Triggers (Atomic Red Team tests) using `tags` from the Sigma rule
- Maps Detection Rule to Enrichments using the existing mapping inside the Detection Rule
- Maps Response Playbooks to ATT&CK Tactic and Technique using the existing mapping inside the Response Playbooks
- Maps Response Playbooks to Response Actions using the existing mapping inside the Response Playbooks
- Maps Logging Policies to Data Needed using the existing mapping inside Data Needed
- Converts everything into Confluence and Markdown wiki-style pages using jinja templates (`scripts/templates`)
- Pushes all pages to the local repo and the Confluence server (according to the configuration provided in `scripts/config.py`)
- Creates `analytics.csv` and `pivoting.csv` files for simple analysis of existing data
- Creates `atc_export.json` — an ATT&CK Navigator profile for visualisation of current detection abilities
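The tag-based mapping step above can be sketched in a few lines of Python. This is an illustrative approximation, not the project's actual code (see `scripts/` for that); it only assumes the Sigma tag convention `attack.<tactic>` / `attack.tNNNN`:

```python
import re

def map_attack_tags(tags):
    """Split Sigma `attack.*` tags into ATT&CK tactics and technique IDs."""
    tactics, techniques = [], []
    for tag in tags:
        if not tag.startswith("attack."):
            continue  # ignore non-ATT&CK tags, e.g. car.* references
        value = tag[len("attack."):]
        if re.fullmatch(r"t\d{4}(\.\d{3})?", value):
            techniques.append(value.upper())          # e.g. "T1059"
        else:
            tactics.append(value.replace("_", " "))   # e.g. "defense evasion"
    return tactics, techniques

tactics, techniques = map_attack_tags(
    ["attack.execution", "attack.t1059", "car.2013-02-003"]
)
print(tactics)      # ['execution']
print(techniques)   # ['T1059']
```

The same tags are also what a tool like this could use to associate a rule with Atomic Red Team tests for the matching technique.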
Under the hood
Data in the repository:
```
├── analytics.csv
├── pivoting.csv
├── data_needed
│   ├── DN_0001_4688_windows_process_creation.yml
│   ├── DN_0002_4688_windows_process_creation_with_commandline.yml
│   └── dataneeded.yml.template
├── detection_rules
│   └── sigma/
├── enrichments
│   ├── EN_0001_cache_sysmon_event_id_1_info.yml
│   ├── EN_0002_enrich_sysmon_event_id_1_with_parent_info.yaml
│   └── enrichment.yml.template
├── logging_policies
│   ├── LP_0001_windows_audit_process_creation.yml
│   ├── LP_0002_windows_audit_process_creation_with_commandline.yml
│   └── loggingpolicy_template.yml
├── response_actions
│   ├── RA_0001_identification_get_original_email.yml
│   ├── RA_0002_identification_extract_observables_from_email.yml
│   └── respose_action.yml.template
├── response_playbooks
│   ├── RP_0001_phishing_email.yml
│   ├── RP_0002_generic_response_playbook_for_postexploitation_activities.yml
│   └── respose_playbook.yml.template
└── triggering
    └── atomic-red-team/
```
Detection Rules
Detection Rules are unmodified Sigma rules. By default Atomic Threat Coverage uses the rules from the official repository, but you can (and should) use rules from your own private fork with analytics relevant to you.
Links to Data Needed, Triggers, and articles in ATT&CK are generated automatically. The Sigma rule, Kibana query, X-Pack Watcher and GrayLog query are generated and added automatically (this list can be expanded and depends on Sigma Supported Targets).
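For illustration, here is a minimal Sigma-style rule sketch. The rule itself is made up for this example (it is not taken from the official repository); the `tags`, `logsource` and `detection` sections are the parts Atomic Threat Coverage relies on for mapping:

```yaml
title: Suspicious Use of whoami      # illustrative example, not a real ATC rule
status: experimental
logsource:                           # used to map the rule to Data Needed
    product: windows
    service: security
detection:                           # field names here also drive the mapping
    selection:
        EventID: 4688
        NewProcessName|endswith: '\whoami.exe'
    condition: selection
tags:                                # used to map to ATT&CK and Triggers
    - attack.discovery
    - attack.t1033
```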
Data Needed
This entity is expected to simplify communication with SIEM/LM/Data Engineering teams. It includes the following data:
- Sample of the raw log to describe what data they could expect to receive/collect
- Description of data to collect (Platform/Type/Channel/etc) — needed for calculation of mappings to Detection Rules and general description
- List of fields, also needed for calculation of mappings to Detection Rules and Response Playbooks, as well as for `pivoting.csv` generation
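A hypothetical Data Needed entry might look like the following. The exact schema comes from `dataneeded.yml.template`, so treat this as an approximation rather than the canonical layout:

```yaml
title: DN_0002_4688_windows_process_creation_with_commandline
description: Windows Security event 4688 with command line logging enabled
loggingpolicy:                       # mapped to Logging Policies
  - LP_0002_windows_audit_process_creation_with_commandline
platform: Windows                    # description of the data to collect
channel: Security
fields:                              # used for mapping and pivoting.csv
  - EventID
  - NewProcessName
  - CommandLine
sample: |                            # raw log sample (truncated here)
  A new process has been created.
```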
Logging Policies
This entity is expected to explain to SIEM/LM/Data Engineering teams and IT departments which logging policies have to be configured to have the proper Data Needed for Detection and Response to a specific Threat. It also explains how exactly the policy can be configured.
Enrichments
This entity is expected to simplify communication with SIEM/LM/Data Engineering teams. It includes the following data:
- List of Data Needed which could be enriched
- Description of the goal of the specific Enrichment (new fields, translation, renaming etc)
- Example of implementation (for example, Logstash config)
This way you will be able to simply explain why you need specific enrichments (mapping to Detection Rules) and specific systems for data enrichment (for example, Logstash).
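As an illustration, an enrichment's implementation example could be a Logstash filter like the one below. This is a hypothetical sketch, not one of the repository's actual enrichments; the field names are invented for the example:

```
filter {
  # Only touch Sysmon process-creation events (illustrative condition)
  if [event_id] == 1 {
    mutate {
      # Rename a raw field to a consistent name Detection Rules can rely on
      rename  => { "CommandLine" => "process_command_line" }
      add_tag => [ "enriched" ]
    }
  }
}
```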
Triggers
Triggers are unmodified Atomic Red Team tests. By default Atomic Threat Coverage uses the atomics from the official repository, but you can (and should) use atomics from your own private fork with analytics relevant to you.
This entity is needed to test specific technical controls and detections. A detailed description can be found on the official site.
Response Actions
This entity is used to build Response Playbooks.
Response Playbooks
This entity is used as an Incident Response plan for a specific threat.
analytics.csv
Atomic Threat Coverage generates analytics.csv with a list of all data mapped together for simple analysis. This file is supposed to answer these questions:
- What data do I need to collect to detect specific threats?
- Which Logging Policies do I need to implement to collect the data required for detection of specific threats?
- Which Logging Policies can I install everywhere (low/medium event volume) and which only on critical hosts (high/extremely high)?
- Which data provides most of the high fidelity alerts? (prioritisation of data collection implementation)
- etc
Ideally, this kind of mapping could provide organizations with the ability to connect Threat Coverage from a detection perspective to money. For example:
- if we collect all Data Needed from all hosts for all the Detection Rules we have, it would be X Events Per Second (EPS) (measure for a couple of weeks or so) with these resources for storage/processing (some more or less concrete number)
- if we collect Data Needed only for high fidelity alerts and only on critical hosts, it will be Y EPS with these resources for storage/processing (again, a more or less concrete number)
- etc
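The EPS-to-resources reasoning above can be sketched as a back-of-the-envelope calculation. All the numbers below are invented placeholders; substitute measurements from your own environment:

```python
def estimate(hosts, eps_per_host, avg_event_bytes, retention_days):
    """Rough EPS and storage estimate for a data collection scenario."""
    eps = hosts * eps_per_host
    daily_bytes = eps * avg_event_bytes * 86_400          # seconds per day
    total_gb = daily_bytes * retention_days / 1024 ** 3
    return eps, round(total_gb, 1)

# Scenario 1: all Data Needed from all hosts (hypothetical numbers)
print(estimate(hosts=5000, eps_per_host=2.0, avg_event_bytes=500, retention_days=90))

# Scenario 2: only high fidelity Data Needed on critical hosts
print(estimate(hosts=200, eps_per_host=1.0, avg_event_bytes=500, retention_days=90))
```

Comparing the two scenarios gives a concrete cost argument for prioritising data collection.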
pivoting.csv
Atomic Threat Coverage generates pivoting.csv with a list of all fields (from Data Needed) mapped to descriptions of Data Needed for one very specific purpose: it provides information about the data sources where a specific data type can be found, for example a domain name, username, hash etc.
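For example, pivoting.csv could be queried like this to find which Data Needed contains a given field. The column names used here are assumptions for illustration; check the generated file for its real layout:

```python
import csv
import io

# Inlined sample standing in for the generated pivoting.csv
SAMPLE = """field,data_needed
NewProcessName,DN_0001_4688_windows_process_creation
CommandLine,DN_0002_4688_windows_process_creation_with_commandline
"""

def sources_for(field_name, fh):
    """Return the Data Needed identifiers that contain a given field."""
    return [row["data_needed"]
            for row in csv.DictReader(fh)
            if row["field"] == field_name]

print(sources_for("CommandLine", io.StringIO(SAMPLE)))
```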
Goals
- Stimulate community to use Sigma rule format (so we will have more contributors, more and better converters)
- Stimulate community to use Atomic Red Team tests format (so we will have more contributors and execution frameworks)
- Evangelize threat information sharing
- Automate most of the manual work
- Provide the information security community with a framework which improves communication with other departments, as well as general analytics accumulation, development and sharing
Workflow
- Add your own custom Sigma rules/fork (if you have any) to the `detection_rules` directory
- Add your own custom Atomic Red Team tests/fork (if you have any) to the `triggering` directory
- Add Data Needed into the `data_needed` directory (you can create new ones using the template)
- Add Logging Policies into the `logging_policies` directory (you can create new ones using the template)
- Add Enrichments into the `enrichments` directory (you can create new ones using the template)
- Add Response Actions into the `response_actions` directory (you can create new ones using the template)
- Add Response Playbooks into the `response_playbooks` directory (you can create new ones using the template)
- Configure your export settings using `scripts/config.py`
- Execute `make` in the root directory of the repository
You don't have to add anything to make it work in your environment; you can just configure export settings using `scripts/config.py` and utilise the default dataset.
At the same time, you can access a demo of the automatically generated knowledge base in Confluence to familiarise yourself with the final result for the default dataset.
Current Status: Alpha
The project is currently in an alpha stage. It doesn't support all existing Sigma rules (current coverage is ~80%), and some entities are still to be developed (like Mitigation Systems). We warmly welcome any feedback and suggestions to improve the project.
Requirements
- Unix-like OS or Windows Subsystem for Linux (WSL) (required to execute `make`)
- Python 3.7.1
- jinja2 python library
- Render Markdown app for Confluence (free open source)
FAQ
Will my private analytics (Detection Rules, Logging Policies, etc) be transferred somewhere?
No. Only to your Confluence node, according to the configuration provided in `scripts/config.py`. Atomic Threat Coverage doesn't connect to any other remote hosts; you can easily verify this.
What do you mean saying "evangelize threat information sharing" then?
We mean that you will use community-compatible formats for (at least) Detection Rules (Sigma) and Triggers (Atomic Red Team), and at some maturity level you will (hopefully) be willing to share some interesting analytics with the community. It's totally up to you.
How can I add new Trigger, Detection Rule, or anything else to my private fork of Atomic Threat Coverage?
The simplest way is to follow the workflow chapter, just adding your rules into the pre-configured folders for the specific type of analytics.
More "production" way is to configure your private forks of Sigma and Atomic Red Team projects as submodules of your Atomic Threat Coverage private fork. After that you only will need to configure path to them in scripts/config.py
, this way Atomic Threat Coverage will start using it for knowledge base generation.
Sigma doesn't support some of my Detection Rules. Does it still make sense to use Atomic Threat Coverage?
Absolutely. We also have some Detection Rules which couldn't be automatically converted into SIEM/LM queries by Sigma. We still use the Sigma format for such rules, putting the unsupported detection logic into the "condition" section; SIEM/LM teams later create rules manually based on the description in this field. ATC is not only about automatic query generation/documentation; there are still a lot of advantages for analysis, and you wouldn't be able to utilise them without Detection Rules in Sigma format.
Authors
- Daniil Yugoslavskiy, @yugoslavskiy
- Jakob Weinzettl, @mrblacyk
- Mateusz Wydra, @sn0w0tter
- Mikhail Aksenov, @AverageS
Thanks to
- Igor Ivanov, @lctrcl for collaboration on initial data types and mapping rules development
- Andrey, Polar_Letters for the logo
- Sigma, Atomic Red Team, TheHive and Elastic Common Schema projects for inspiration
- MITRE ATT&CK for making all of this possible
TODO
- Develop TheHive Case Templates generation based on Response Playbooks
- Develop docker container for the tool
- Implement "Mitigation Systems" entity
- Implement "Hardening Policies" entity
- Implement consistent Data Model (fields naming)
- Implement new entity — "Visualisation" with Kibana visualisations/dashboards stored in yaml files and option to convert them into curl commands for uploading them into Elasticsearch
Links
[1] MITRE Cyber Analytics Repository
[2] Endgame EQL Analytics Library
[3] Palantir Alerting and Detection Strategy Framework
[4] The ThreatHunting Project
License
See the LICENSE file.