Open Distro for Elasticsearch Security initial release

Hardik Shah 2019-03-02 20:45:58 -08:00
commit 436fdd97dc
192 changed files with 28759 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,36 @@
netty*/
smoketests/
target/
test-output/
kibana*/
logstash*/
deploy_all.sh
/build.gradle
*.log
.externalToolBuilders
maven-eclipse.xml
## eclipse ignores (use 'mvn eclipse:eclipse' to build eclipse projects)
## The only configuration files which are not ignored are certain files in
## .settings (as listed below) since these files ensure common coding
## style across Eclipse and IDEA.
## Other files (.project, .classpath) should be generated through Maven which
## will correctly set the classpath based on the declared dependencies.
.project
.classpath
eclipse-build
*/.project
*/.classpath
*/eclipse-build
/.settings/
!/.settings/org.eclipse.core.resources.prefs
!/.settings/org.eclipse.jdt.core.prefs
!/.settings/org.eclipse.jdt.ui.prefs
!/.settings/org.eclipse.jdt.groovy.core.prefs
bin
elasticsearch-*/
.DS_Store
data/
puppet/.vagrant
test.sh
.vagrant/

CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,4 @@
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.

CONTRIBUTING.md Normal file

@@ -0,0 +1,61 @@
# Contributing Guidelines
Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
documentation, we greatly value feedback and contributions from our community.
Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
information to effectively respond to your bug report or contribution.
## Reporting Bugs/Feature Requests
We welcome you to use the GitHub issue tracker to report bugs or suggest features.
When filing an issue, please check [existing open](https://github.com/mauve-hedgehog/opendistro-elasticsearch-security/issues), or [recently closed](https://github.com/mauve-hedgehog/opendistro-elasticsearch-security/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
* A reproducible test case or series of steps
* The version of our code being used
* Any modifications you've made relevant to the bug
* Anything unusual about your environment or deployment
## Contributing via Pull Requests
Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
1. You are working against the latest source on the *master* branch.
2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
To send us a pull request, please:
1. Fork the repository.
2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
3. Ensure local tests pass.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull request interface.
6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
## Finding contributions to work on
Looking at the existing issues is a great way to find something to contribute to. Since our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), the ['help wanted'](https://github.com/mauve-hedgehog/OpenES-HealthService/labels/help%20wanted) issues are a great place to start.
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
opensource-codeofconduct@amazon.com with any additional questions or comments.
## Security issue notifications
If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
## Licensing
See the [LICENSE](https://github.com/mauve-hedgehog/OpenES-HealthService/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.

LICENSE Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

NOTICE.txt Normal file

@@ -0,0 +1,11 @@
Copyright 2015-2017 floragunn GmbH
Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
This product includes software developed by The Apache Software
Foundation (http://www.apache.org/).
This product includes software developed by The Legion of the Bouncy Castle Inc.
(http://www.bouncycastle.org)
See THIRD-PARTY.txt for additional third party licenses used by this product.

README.md Normal file

@@ -0,0 +1,77 @@
# Open Distro for Elasticsearch Security
Open Distro for Elasticsearch Security is an Elasticsearch plugin that offers encryption, authentication, and authorization. It supports authentication via Active Directory, LDAP, Kerberos, JSON web tokens, SAML, OpenID, and more. It includes fine-grained role-based access control to indices, documents, and fields. It also provides multi-tenancy support in Kibana.
## Basic features
* Full data in transit encryption
* Node-to-node encryption
* Certificate revocation lists
* Role-based cluster level access control
* Role-based index level access control
* User-, role- and permission management
* Internal user database
* HTTP basic authentication
* PKI authentication
* Proxy authentication
* User Impersonation
## Advanced features
opendistro-elasticsearch-security-advanced-modules adds:
* Active Directory / LDAP
* Kerberos / SPNEGO
* JSON web token (JWT)
* OpenID
* SAML
* Document-level security
* Field-level security
* Audit logging
* Compliance logging for GDPR, HIPAA, PCI, SOX and ISO compliance
* True Kibana multi-tenancy
* REST management API
## Documentation
Please refer to the [Official documentation] for detailed information on installing and configuring the opendistro-elasticsearch-security plugin.
## Quick Start
* Install Elasticsearch
* Install the opendistro-elasticsearch-security plugin for Elasticsearch 6.5.4, e.g.:
```
<ES directory>/bin/elasticsearch-plugin install \
-b com.amazon.opendistroforelasticsearch:elasticsearch-security:0.7.0.0
```
* ``cd`` into ``<ES directory>/plugins/opendistro_security/tools``
* Execute ``./install_demo_configuration.sh`` (``chmod +x`` the script first if necessary). This will generate all required TLS certificates and add the Security Plugin configuration to your ``elasticsearch.yml`` file.
* Start Elasticsearch
* Test the installation by visiting ``https://localhost:9200``. When prompted, use admin/admin as username and password. This user has full access to the cluster.
* Display information about the currently logged in user by visiting ``https://localhost:9200/_opendistro/_security/authinfo``.
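The last two steps above can be scripted; a minimal sketch (the helper function is ours, not part of the plugin; it assumes the demo admin/admin credentials and the self-signed demo certificate, hence `-k`):

```shell
# Build the authinfo endpoint URL for a given host and port
# (helper name is ours, not part of the plugin).
authinfo_url() {
  printf 'https://%s:%s/_opendistro/_security/authinfo' "$1" "$2"
}

# Query it against the demo cluster; -k accepts the self-signed demo
# certificate, admin/admin are the demo credentials:
#   curl -k -u admin:admin "$(authinfo_url localhost 9200)"
```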
## Config hot reloading
The Security Plugin Configuration is stored in a dedicated index in Elasticsearch itself. Changes to the configuration are pushed to this index via the command line tool. This will trigger a reload of the configuration on all nodes automatically. This has several advantages over configuration via elasticsearch.yml:
* Configuration is stored in a central place
* No configuration files on the nodes necessary
* Configuration changes do not require a restart
* Configuration changes take effect immediately
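A configuration push with the plugin's command line tool might look like the sketch below. The helper function and the certificate file names are our assumptions based on the demo setup; `-cd` points at the configuration directory, `-icl` ignores the cluster name, and `-nhnv` disables hostname verification (demo settings only):

```shell
# Assemble the securityadmin.sh invocation that pushes a configuration
# directory into the security index (helper name is ours).
securityadmin_cmd() {
  plugin_dir="$1"   # e.g. <ES directory>/plugins/opendistro_security
  printf '%s/tools/securityadmin.sh -cd %s/securityconfig -icl -nhnv -cacert root-ca.pem -cert kirk.pem -key kirk-key.pem' \
    "$plugin_dir" "$plugin_dir"
}

# Running the assembled command triggers the automatic reload on all nodes;
# no restart is required:
#   sh -c "$(securityadmin_cmd /usr/share/elasticsearch/plugins/opendistro_security)"
```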
## Support
## Legal
Open Distro For Elasticsearch Security
Copyright 2019- Amazon.com, Inc. or its affiliates. All Rights Reserved.

THIRD-PARTY.txt Normal file

@@ -0,0 +1,71 @@
List of 69 third-party dependencies.
(The Apache Software License, Version 2.0) HPPC Collections (com.carrotsearch:hppc:0.7.1 - http://labs.carrotsearch.com/hppc.html/hppc)
(The Apache Software License, Version 2.0) Jackson-core (com.fasterxml.jackson.core:jackson-core:2.8.10 - https://github.com/FasterXML/jackson-core)
(The Apache Software License, Version 2.0) Jackson dataformat: CBOR (com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:2.8.10 - http://github.com/FasterXML/jackson-dataformats-binary)
(The Apache Software License, Version 2.0) Jackson dataformat: Smile (com.fasterxml.jackson.dataformat:jackson-dataformat-smile:2.8.10 - http://github.com/FasterXML/jackson-dataformats-binary)
(The Apache Software License, Version 2.0) Jackson-dataformat-YAML (com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.8.10 - https://github.com/FasterXML/jackson)
(The Apache Software License, Version 2.0) OpenDistro Security SSL (com.amazon.opendistroforelasticsearch:opendistro-elasticsearch-security-ssl:0.0.7.0 - https://github.com/mauve-hedgehog/opendistro-elasticsearch-security-ssl)
(Apache License 2.0) compiler (com.github.spullara.mustache.java:compiler:0.9.3 - http://github.com/spullara/mustache.java)
(The Apache Software License, Version 2.0) FindBugs-jsr305 (com.google.code.findbugs:jsr305:1.3.9 - http://findbugs.sourceforge.net/)
(Apache 2.0) error-prone annotations (com.google.errorprone:error_prone_annotations:2.0.18 - http://nexus.sonatype.org/oss-repository-hosting.html/error_prone_parent/error_prone_annotations)
(The Apache Software License, Version 2.0) Guava: Google Core Libraries for Java (com.google.guava:guava:23.0 - https://github.com/google/guava/guava)
(The Apache Software License, Version 2.0) J2ObjC Annotations (com.google.j2objc:j2objc-annotations:1.1 - https://github.com/google/j2objc/)
(The Apache Software License, Version 2.0) T-Digest (com.tdunning:t-digest:3.0 - https://github.com/tdunning/t-digest)
(Lesser General Public License (LGPL)) JTS Topology Suite (com.vividsolutions:jts:1.13 - http://sourceforge.net/projects/jts-topo-suite)
(Apache License, Version 2.0) Apache Commons CLI (commons-cli:commons-cli:1.3.1 - http://commons.apache.org/proper/commons-cli/)
(Apache License, Version 2.0) Apache Commons Codec (commons-codec:commons-codec:1.10 - http://commons.apache.org/proper/commons-codec/)
(Apache License, Version 2.0) Apache Commons IO (commons-io:commons-io:2.6 - http://commons.apache.org/proper/commons-io/)
(The Apache Software License, Version 2.0) Commons Logging (commons-logging:commons-logging:1.1.3 - http://commons.apache.org/proper/commons-logging/)
(Apache License, Version 2.0) Netty/Buffer (io.netty:netty-buffer:4.1.16.Final - http://netty.io/netty-buffer/)
(Apache License, Version 2.0) Netty/Codec (io.netty:netty-codec:4.1.16.Final - http://netty.io/netty-codec/)
(Apache License, Version 2.0) Netty/Codec/HTTP (io.netty:netty-codec-http:4.1.16.Final - http://netty.io/netty-codec-http/)
(Apache License, Version 2.0) Netty/Common (io.netty:netty-common:4.1.16.Final - http://netty.io/netty-common/)
(Apache License, Version 2.0) Netty/Handler (io.netty:netty-handler:4.1.16.Final - http://netty.io/netty-handler/)
(Apache License, Version 2.0) Netty/Resolver (io.netty:netty-resolver:4.1.16.Final - http://netty.io/netty-resolver/)
(Apache License, Version 2.0) Netty/TomcatNative [OpenSSL - Dynamic] (io.netty:netty-tcnative:2.0.7.Final - https://github.com/netty/netty-tcnative/netty-tcnative/)
(Apache License, Version 2.0) Netty/Transport (io.netty:netty-transport:4.1.16.Final - http://netty.io/netty-transport/)
(Apache 2) Joda-Time (joda-time:joda-time:2.9.9 - http://www.joda.org/joda-time/)
(Eclipse Public License 1.0) JUnit (junit:junit:4.12 - http://junit.org)
(The MIT License) JOpt Simple (net.sf.jopt-simple:jopt-simple:5.0.2 - http://pholser.github.io/jopt-simple)
(Apache License, Version 2.0) Apache HttpAsyncClient (org.apache.httpcomponents:httpasyncclient:4.1.2 - http://hc.apache.org/httpcomponents-asyncclient)
(Apache License, Version 2.0) Apache HttpClient (org.apache.httpcomponents:httpclient:4.5.2 - http://hc.apache.org/httpcomponents-client)
(Apache License, Version 2.0) Apache HttpCore (org.apache.httpcomponents:httpcore:4.4.5 - http://hc.apache.org/httpcomponents-core-ga)
(Apache License, Version 2.0) Apache HttpCore NIO (org.apache.httpcomponents:httpcore-nio:4.4.5 - http://hc.apache.org/httpcomponents-core-ga)
(Apache License, Version 2.0) Apache Log4j API (org.apache.logging.log4j:log4j-api:2.9.1 - https://logging.apache.org/log4j/2.x/log4j-api/)
(Apache License, Version 2.0) Apache Log4j Core (org.apache.logging.log4j:log4j-core:2.9.1 - https://logging.apache.org/log4j/2.x/log4j-core/)
(Apache 2) Lucene Common Analyzers (org.apache.lucene:lucene-analyzers-common:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-analyzers-common)
(Apache 2) Lucene Memory (org.apache.lucene:lucene-backward-codecs:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-backward-codecs)
(Apache 2) Lucene Core (org.apache.lucene:lucene-core:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-core)
(Apache 2) Lucene Grouping (org.apache.lucene:lucene-grouping:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-grouping)
(Apache 2) Lucene Highlighter (org.apache.lucene:lucene-highlighter:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-highlighter)
(Apache 2) Lucene Join (org.apache.lucene:lucene-join:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-join)
(Apache 2) Lucene Memory (org.apache.lucene:lucene-memory:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-memory)
(Apache 2) Lucene Miscellaneous (org.apache.lucene:lucene-misc:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-misc)
(Apache 2) Lucene Queries (org.apache.lucene:lucene-queries:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-queries)
(Apache 2) Lucene QueryParsers (org.apache.lucene:lucene-queryparser:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-queryparser)
(Apache 2) Lucene Sandbox (org.apache.lucene:lucene-sandbox:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-sandbox)
(Apache 2) Lucene Spatial (org.apache.lucene:lucene-spatial:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-spatial)
(Apache 2) Lucene Spatial Extras (org.apache.lucene:lucene-spatial-extras:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-spatial-extras)
(Apache 2) Lucene Spatial 3D (org.apache.lucene:lucene-spatial3d:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-spatial3d)
(Apache 2) Lucene Suggest (org.apache.lucene:lucene-suggest:7.2.1 - http://lucene.apache.org/lucene-parent/lucene-suggest)
(Apache Software License, Version 1.1) (Bouncy Castle Licence) Bouncy Castle OpenPGP API (org.bouncycastle:bcpg-jdk15on:1.59 - http://www.bouncycastle.org/java.html)
(Bouncy Castle Licence) Bouncy Castle Provider (org.bouncycastle:bcprov-jdk15on:1.59 - http://www.bouncycastle.org/java.html)
(MIT license) Animal Sniffer Annotations (org.codehaus.mojo:animal-sniffer-annotations:1.14 - http://mojo.codehaus.org/animal-sniffer/animal-sniffer-annotations)
(The Apache Software License, Version 2.0) server (org.elasticsearch:elasticsearch:6.2.0 - https://github.com/elastic/elasticsearch)
(The Apache Software License, Version 2.0) cli (org.elasticsearch:elasticsearch-cli:6.2.0 - https://github.com/elastic/elasticsearch)
(The Apache Software License, Version 2.0) elasticsearch-core (org.elasticsearch:elasticsearch-core:6.2.0 - https://github.com/elastic/elasticsearch)
(The Apache Software License, Version 2.0) Elastic JNA Distribution (org.elasticsearch:jna:4.5.1 - https://github.com/java-native-access/jna)
(The Apache Software License, Version 2.0) Elasticsearch SecureSM (org.elasticsearch:securesm:1.2 - http://nexus.sonatype.org/oss-repository-hosting.html/securesm)
(The Apache Software License, Version 2.0) rest (org.elasticsearch.client:elasticsearch-rest-client:6.2.0 - https://github.com/elastic/elasticsearch)
(The Apache Software License, Version 2.0) aggs-matrix-stats (org.elasticsearch.plugin:aggs-matrix-stats-client:6.2.0 - https://github.com/elastic/elasticsearch)
(The Apache Software License, Version 2.0) lang-mustache (org.elasticsearch.plugin:lang-mustache-client:6.2.0 - https://github.com/elastic/elasticsearch)
(The Apache Software License, Version 2.0) parent-join (org.elasticsearch.plugin:parent-join-client:6.2.0 - https://github.com/elastic/elasticsearch)
(The Apache Software License, Version 2.0) percolator (org.elasticsearch.plugin:percolator-client:6.2.0 - https://github.com/elastic/elasticsearch)
(The Apache Software License, Version 2.0) reindex (org.elasticsearch.plugin:reindex-client:6.2.0 - https://github.com/elastic/elasticsearch)
(The Apache Software License, Version 2.0) transport-netty4 (org.elasticsearch.plugin:transport-netty4-client:6.2.0 - https://github.com/elastic/elasticsearch)
(New BSD License) Hamcrest All (org.hamcrest:hamcrest-all:1.3 - https://github.com/hamcrest/JavaHamcrest/hamcrest-all)
(New BSD License) Hamcrest Core (org.hamcrest:hamcrest-core:1.3 - https://github.com/hamcrest/JavaHamcrest/hamcrest-core)
(Public Domain, per Creative Commons CC0) HdrHistogram (org.hdrhistogram:HdrHistogram:2.1.9 - http://hdrhistogram.github.io/HdrHistogram/)
(The Apache Software License, Version 2.0) Spatial4J (org.locationtech.spatial4j:spatial4j:0.6 - http://www.locationtech.org/projects/locationtech.spatial4j)
(Apache License, Version 2.0) SnakeYAML (org.yaml:snakeyaml:1.17 - http://www.snakeyaml.org)

dev/scan_veracode.sh Executable file

@@ -0,0 +1,71 @@
#!/usr/bin/env bash
export APPID=421799
#export SANDBOXID=537580
set -e
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$DIR/.."
if [ -z "$VERA_USER" ]; then
    echo "No VERA_USER set"
    exit 1
fi
if [ -z "$VERA_PASSWORD" ]; then
    echo "No VERA_PASSWORD set"
    exit 1
fi
echo "App Id: $APPID"
echo "Sandbox Id: $SANDBOXID"
echo "Build Security ..."
mvn clean package -Pveracode -DskipTests > /dev/null 2>&1
PLUGIN_FILE=($DIR/../target/veracode/opendistro_security*.zip)
FILESIZE=$(wc -c <"$PLUGIN_FILE")
echo ""
echo "Upload $PLUGIN_FILE with a size of $((FILESIZE / 1048576)) MB"
#curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" https://analysiscenter.veracode.com/api/5.0/getapplist.do -F "include_user_info=true" | xmllint --format -
#curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" https://analysiscenter.veracode.com/api/5.0/getsandboxlist.do -F "app_id=$APPID" | xmllint --format -
curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" "https://analysiscenter.veracode.com/api/5.0/uploadfile.do" \
-F "app_id=$APPID" \
-F "file=@$PLUGIN_FILE" \
-F "sandbox_id=$SANDBOXID" \
| xmllint --format - 2>&1 | tee vera.log
echo ""
echo "Start pre scan"
#curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" https://analysiscenter.veracode.com/api/5.0/beginprescan.do -F "app_id=$APPID" -F "sandbox_id=$SANDBOXID" -F "auto_scan=false" | xmllint --format -
#curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" https://analysiscenter.veracode.com/api/5.0/getprescanresults.do -F "app_id=$APPID" -F "sandbox_id=$SANDBOXID" -F "build_id=2008250" | xmllint --format -
#curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" "https://analysiscenter.veracode.com/api/5.0/beginscan.do" -F "app_id=$APPID" -F "sandbox_id=$SANDBOXID" -F "modules=932413446,932413464,932413518,932413454,932413453"
curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" https://analysiscenter.veracode.com/api/5.0/beginprescan.do -F "app_id=$APPID" -F "sandbox_id=$SANDBOXID" -F "auto_scan=true" -F "scan_all_nonfatal_top_level_modules=true" | xmllint --format - 2>&1 | tee -a vera.log
echo ""
echo ""
echo "----- Veralog ------"
cat vera.log
echo "--------------------"
echo ""
echo ""
echo "Check for errors ..."
set +e
grep -i error vera.log && { echo "Error executing veracode"; exit 1; }
grep -i denied vera.log && { echo "Access denied for veracode"; exit 1; }
echo "No errors"
set -e
#curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" "https://analysiscenter.veracode.com/api/5.0/beginscan.do" -F "app_id=$APPID" -F "sandbox_id=$SANDBOXID" -F "scan_all_top_level_modules=true" | xmllint --format -
#echo "Polling results"
#curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" https://analysiscenter.veracode.com/api/5.0/beginprescan.do -F "app_id=$APPID" -F "sandbox_id=$SANDBOXID" -F "auto_scan=true" -F "scan_all_nonfatal_top_level_modules=true" | xmllint --format -
#curl -Ss --fail --compressed -u "$VERA_USER:$VERA_PASSWORD" https://analysiscenter.veracode.com/api/5.0/getbuildlist.do -F "app_id=$APPID" -F "sandbox_id=$SANDBOXID" | xmllint --format -
#curl --fail --compressed -k -v -u [api user] https://analysiscenter.veracode.com/api/5.0/detailedreport.do?build_id=49645c.


@@ -0,0 +1,25 @@
#
# 'description': simple summary of the plugin
description=Provide access control related features for Elasticsearch 6
#
# 'version': plugin's version
version=0.7.0.0
#
# 'name': the plugin name
name=opendistro_security
#
# 'classname': the name of the class to load, fully-qualified.
classname=com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin
#
# 'java.version' version of java the code is built against
# use the system property java.specification.version
# version string must be a sequence of nonnegative decimal integers
# separated by "."'s and may have leading zeros
java.version=1.8
#
# 'elasticsearch.version' version of elasticsearch compiled against
# You will have to release a new version of the plugin for each new
# elasticsearch release. This version is checked when the plugin
# is loaded so Elasticsearch will refuse to start in the presence of
# plugins with the incorrect elasticsearch.version.
elasticsearch.version=6.5.4
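Elasticsearch checks `elasticsearch.version` from this descriptor when the plugin is loaded and refuses to start if it does not match the node version exactly. A toy sketch of that exact-match gate (a hypothetical helper, not the actual plugin loader):

```shell
# Hypothetical re-implementation of the descriptor version gate described above.
# $1 = elasticsearch.version from plugin-descriptor.properties, $2 = running ES version
descriptor_allows() {
  [ "$1" = "$2" ]
}

if descriptor_allows "6.5.4" "6.5.4"; then
  echo "plugin accepted"
else
  echo "plugin rejected: version mismatch" >&2
fi
```

The match is exact rather than a range check, which is why a new plugin release is required for every Elasticsearch release.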

plugin-security.policy

@@ -0,0 +1,79 @@
/*
* Copyright 2015-2017 floragunn GmbH
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
grant {
  permission java.lang.RuntimePermission "shutdownHooks";
  permission java.lang.RuntimePermission "getClassLoader";
  permission java.lang.RuntimePermission "setContextClassLoader";
  permission javax.security.auth.AuthPermission "modifyPrivateCredentials";
  permission javax.security.auth.AuthPermission "doAs";
  permission javax.security.auth.kerberos.ServicePermission "*","accept";
  permission java.util.PropertyPermission "*","read,write";
  //Enable when we switch to UnboundID LDAP SDK
  //permission java.util.PropertyPermission "*", "read,write";
  //permission java.lang.RuntimePermission "setFactory";
  //permission javax.net.ssl.SSLPermission "setHostnameVerifier";
  //below permissions are needed for netty native open ssl support
  permission java.lang.RuntimePermission "accessClassInPackage.sun.misc";
  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
  permission java.security.SecurityPermission "getProperty.ssl.KeyManagerFactory.algorithm";
  permission java.lang.RuntimePermission "accessDeclaredMembers";
  permission java.lang.RuntimePermission "accessClassInPackage.sun.security.x509";
  permission java.lang.RuntimePermission "accessClassInPackage.sun.nio.ch";
  permission java.io.FilePermission "/proc/sys/net/core/somaxconn","read";
  permission java.security.SecurityPermission "setProperty.ocsp.enable";
  permission java.net.NetPermission "getNetworkInformation";
  permission java.net.NetPermission "getProxySelector";
  permission java.net.SocketPermission "*", "connect,accept,resolve";
  permission java.security.SecurityPermission "putProviderProperty.BC";
  permission java.security.SecurityPermission "insertProvider.BC";
  permission java.lang.RuntimePermission "accessUserInformation";
  permission java.security.SecurityPermission "org.apache.xml.security.register";
  permission java.util.PropertyPermission "org.apache.xml.security.ignoreLineBreaks", "write";
  permission java.lang.RuntimePermission "createClassLoader";
  //Java 9+
  permission java.lang.RuntimePermission "accessClassInPackage.com.sun.jndi.ldap";
};

grant codeBase "${codebase.netty-common}" {
  permission java.lang.RuntimePermission "loadLibrary.*";
};

pom.xml

@@ -0,0 +1,436 @@
<?xml version="1.0" encoding="UTF-8"?><!--
~ Copyright 2015-2018 _floragunn_ GmbH
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<!--
~ Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
~
~ Licensed under the Apache License, Version 2.0 (the "License").
~ You may not use this file except in compliance with the License.
~ A copy of the License is located at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ or in the "license" file accompanying this file. This file is distributed
~ on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
~ express or implied. See the License for the specific language governing
~ permissions and limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.amazon.opendistroforelasticsearch</groupId>
<artifactId>opendistro_security_parent</artifactId>
<version>0.7.0.0</version>
</parent>
<artifactId>opendistro_security</artifactId>
<packaging>jar</packaging>
<version>0.7.0.0</version>
<name>Open Distro Security for Elasticsearch</name>
<description>Provide access control related features for Elasticsearch 6</description>
<url>https://github.com/mauve-hedgehog/opendistro-elasticsearch-security</url>
<inceptionYear>2015</inceptionYear>
<licenses>
<license>
<name>The Apache Software License, Version 2.0</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses>
<properties>
<opendistro_security_ssl.version>0.7.0.0</opendistro_security_ssl.version>
<opendistro_security_advanced_modules.version>0.7.0.0</opendistro_security_advanced_modules.version>
<elasticsearch.version>6.5.4</elasticsearch.version>
<!-- deps -->
<netty-native.version>2.0.15.Final</netty-native.version>
<bc.version>1.60</bc.version>
<log4j.version>2.11.1</log4j.version>
<guava.version>25.1-jre</guava.version>
<commons.cli.version>1.3.1</commons.cli.version>
<!-- assembly descriptors -->
<elasticsearch.assembly.descriptor>${basedir}/src/main/assemblies/plugin.xml</elasticsearch.assembly.descriptor>
<securitystandalone.descriptor>${basedir}/src/main/assemblies/securityadmin-standalone.xml</securitystandalone.descriptor>
<veracode.descriptor>${basedir}/src/main/assemblies/veracode.xml</veracode.descriptor>
<!-- Test only -->
<mockito.version>1.10.19</mockito.version>
</properties>
<scm>
<url>https://github.com/mauve-hedgehog/opendistro-elasticsearch-security</url>
<connection>scm:git:git@github.com:mauve-hedgehog/opendistro-elasticsearch-security.git</connection>
<developerConnection>scm:git:git@github.com:mauve-hedgehog/opendistro-elasticsearch-security.git</developerConnection>
<tag>0.7.0.0</tag>
</scm>
<issueManagement>
<system>GitHub</system>
<url>https://github.com/mauve-hedgehog/opendistro-elasticsearch-security/issues</url>
</issueManagement>
<dependencies>
<!-- Elastic Search Security SSL -->
<dependency>
<groupId>com.amazon.opendistroforelasticsearch</groupId>
<artifactId>opendistro_security_ssl</artifactId>
<version>${opendistro_security_ssl.version}</version>
</dependency>
<!-- Netty 4 transport -->
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>transport-netty4-client</artifactId>
<version>${elasticsearch.version}</version>
<exclusions>
<exclusion>
<artifactId>jna</artifactId>
<groupId>org.elasticsearch</groupId>
</exclusion>
<exclusion>
<artifactId>jts</artifactId>
<groupId>com.vividsolutions</groupId>
</exclusion>
<exclusion>
<artifactId>log4j-api</artifactId>
<groupId>org.apache.logging.log4j</groupId>
</exclusion>
<exclusion>
<artifactId>spatial4j</artifactId>
<groupId>org.locationtech.spatial4j</groupId>
</exclusion>
</exclusions>
</dependency>
<!-- Guava -->
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>${guava.version}</version>
</dependency>
<!-- Apache commons cli -->
<dependency>
<groupId>commons-cli</groupId>
<artifactId>commons-cli</artifactId>
<version>${commons.cli.version}</version>
</dependency>
<!-- Bouncycastle -->
<dependency>
<groupId>org.bouncycastle</groupId>
<artifactId>bcpg-jdk15on</artifactId>
<version>${bc.version}</version>
</dependency>
<!-- provided scoped deps -->
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>${elasticsearch.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>${log4j.version}</version>
<scope>provided</scope>
</dependency>
<!-- Only test scoped dependencies hereafter -->
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-all</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-tcnative</artifactId>
<version>${netty-native.version}</version>
<classifier>${os.detected.classifier}</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>reindex-client</artifactId>
<version>${elasticsearch.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>percolator-client</artifactId>
<version>${elasticsearch.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>lang-mustache-client</artifactId>
<version>${elasticsearch.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>parent-join-client</artifactId>
<version>${elasticsearch.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>aggs-matrix-stats-client</artifactId>
<version>${elasticsearch.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-all</artifactId>
<version>${mockito.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<forkCount>3</forkCount>
<reuseForks>true</reuseForks>
<!--
<parallel>methods</parallel>
<threadCount>1</threadCount>
-->
<systemPropertyVariables>
<forkno>fork_${surefire.forkNumber}</forkno>
</systemPropertyVariables>
<includes>
<include>**/*.java</include>
</includes>
</configuration>
</plugin>
</plugins>
<extensions>
<extension>
<groupId>com.gkatzioura.maven.cloud</groupId>
<artifactId>s3-storage-wagon</artifactId>
<version>1.8</version>
</extension>
</extensions>
</build>
<profiles>
<profile>
<id>advanced</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<executions>
<execution>
<id>plugin</id>
<phase>package</phase>
<configuration>
<appendAssemblyId>false</appendAssemblyId>
<tarLongFileMode>posix</tarLongFileMode>
<outputDirectory>${project.build.directory}/releases/</outputDirectory>
<descriptors>
<descriptor>${elasticsearch.assembly.descriptor}</descriptor>
</descriptors>
</configuration>
<goals>
<goal>single</goal>
</goals>
</execution>
<execution>
<id>securityadmin</id>
<phase>package</phase>
<configuration>
<appendAssemblyId>true</appendAssemblyId>
<tarLongFileMode>posix</tarLongFileMode>
<outputDirectory>${project.build.directory}/releases/</outputDirectory>
<descriptors>
<descriptor>${securitystandalone.descriptor}</descriptor>
</descriptors>
</configuration>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>com.floragunn</groupId>
<artifactId>checksum-maven-plugin</artifactId>
<version>1.7.1</version>
<executions>
<execution>
<goals>
<goal>files</goal>
</goals>
<phase>package</phase>
</execution>
</executions>
<configuration>
<fileSets>
<fileSet>
<directory>${project.build.directory}/releases/</directory>
<includes>
<include>*.zip</include>
</includes>
<excludes>
<exclude>*securityadmin*</exclude>
</excludes>
</fileSet>
</fileSets>
<algorithms>
<algorithm>SHA-512</algorithm>
</algorithms>
<individualFiles>true</individualFiles>
<appendFilename>true</appendFilename>
<attachChecksums>true</attachChecksums>
<csvSummary>false</csvSummary>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>com.amazon.opendistroforelasticsearch</groupId>
<artifactId>opendistro_security_advanced_modules</artifactId>
<version>${opendistro_security_advanced_modules.version}</version>
<exclusions>
<exclusion>
<artifactId>jna</artifactId>
<groupId>org.elasticsearch</groupId>
</exclusion>
<exclusion>
<artifactId>jts</artifactId>
<groupId>com.vividsolutions</groupId>
</exclusion>
<exclusion>
<artifactId>log4j-api</artifactId>
<groupId>org.apache.logging.log4j</groupId>
</exclusion>
<exclusion>
<artifactId>spatial4j</artifactId>
<groupId>org.locationtech.spatial4j</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>lang-mustache-client</artifactId>
<version>${elasticsearch.version}</version>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>parent-join-client</artifactId>
<version>${elasticsearch.version}</version>
</dependency>
<dependency>
<groupId>org.elasticsearch.plugin</groupId>
<artifactId>aggs-matrix-stats-client</artifactId>
<version>${elasticsearch.version}</version>
</dependency>
</dependencies>
</profile>
<profile>
<id>veracode</id>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<executions>
<execution>
<id>veracode</id>
<phase>package</phase>
<configuration>
<appendAssemblyId>true</appendAssemblyId>
<tarLongFileMode>posix</tarLongFileMode>
<outputDirectory>${project.build.directory}/veracode/</outputDirectory>
<descriptors>
<descriptor>${veracode.descriptor}</descriptor>
</descriptors>
</configuration>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>com.amazon.opendistroforelasticsearch</groupId>
<artifactId>opendistro_security_advanced_modules</artifactId>
<version>${opendistro_security_advanced_modules.version}</version>
</dependency>
<!-- omit netty-tcnative and conscrypt-openjdk-uber if scan
should be done without natives -->
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-tcnative</artifactId>
<version>${netty-native.version}</version>
<classifier>linux-x86_64</classifier>
</dependency>
<dependency>
<groupId>org.conscrypt</groupId>
<artifactId>conscrypt-openjdk-uber</artifactId>
<version>1.0.0.RC9</version>
</dependency>
</dependencies>
</profile>
</profiles>
</project>


@@ -0,0 +1,137 @@
UNLIMITED:
  readonly: true
  permissions:
    - "*"
###### INDEX LEVEL ######
INDICES_ALL:
  readonly: true
  permissions:
    - "indices:*"
# for backward compatibility
ALL:
  readonly: true
  permissions:
    - INDICES_ALL
MANAGE:
  readonly: true
  permissions:
    - "indices:monitor/*"
    - "indices:admin/*"
CREATE_INDEX:
  readonly: true
  permissions:
    - "indices:admin/create"
    - "indices:admin/mapping/put"
MANAGE_ALIASES:
  readonly: true
  permissions:
    - "indices:admin/aliases*"
# for backward compatibility
MONITOR:
  readonly: true
  permissions:
    - INDICES_MONITOR
INDICES_MONITOR:
  readonly: true
  permissions:
    - "indices:monitor/*"
DATA_ACCESS:
  readonly: true
  permissions:
    - "indices:data/*"
    - CRUD
WRITE:
  readonly: true
  permissions:
    - "indices:data/write*"
    - "indices:admin/mapping/put"
READ:
  readonly: true
  permissions:
    - "indices:data/read*"
    - "indices:admin/mappings/fields/get*"
DELETE:
  readonly: true
  permissions:
    - "indices:data/write/delete*"
CRUD:
  readonly: true
  permissions:
    - READ
    - WRITE
SEARCH:
  readonly: true
  permissions:
    - "indices:data/read/search*"
    - "indices:data/read/msearch*"
    - SUGGEST
SUGGEST:
  readonly: true
  permissions:
    - "indices:data/read/suggest*"
INDEX:
  readonly: true
  permissions:
    - "indices:data/write/index*"
    - "indices:data/write/update*"
    - "indices:admin/mapping/put"
    - "indices:data/write/bulk*"
GET:
  readonly: true
  permissions:
    - "indices:data/read/get*"
    - "indices:data/read/mget*"
###### CLUSTER LEVEL ######
CLUSTER_ALL:
  readonly: true
  permissions:
    - "cluster:*"
CLUSTER_MONITOR:
  readonly: true
  permissions:
    - "cluster:monitor/*"
CLUSTER_COMPOSITE_OPS_RO:
  readonly: true
  permissions:
    - "indices:data/read/mget"
    - "indices:data/read/msearch"
    - "indices:data/read/mtv"
    - "indices:data/read/coordinate-msearch*"
    - "indices:admin/aliases/exists*"
    - "indices:admin/aliases/get*"
    - "indices:data/read/scroll"
CLUSTER_COMPOSITE_OPS:
  readonly: true
  permissions:
    - "indices:data/write/bulk"
    - "indices:admin/aliases*"
    - "indices:data/write/reindex"
    - CLUSTER_COMPOSITE_OPS_RO
MANAGE_SNAPSHOTS:
  readonly: true
  permissions:
    - "cluster:admin/snapshot/*"
    - "cluster:admin/repository/*"
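Action groups can be nested: `CRUD` above expands to `READ` and `WRITE`, which in turn contain the raw permission patterns. A minimal sketch of that transitive resolution over a tiny, hypothetical subset of the groups above (not the plugin's own resolver):

```shell
set -f  # disable globbing; permission patterns contain '*'

# Hypothetical lookup table: group name -> space-separated members.
lookup() {
  case "$1" in
    CRUD)  echo "READ WRITE" ;;
    READ)  echo "indices:data/read* indices:admin/mappings/fields/get*" ;;
    WRITE) echo "indices:data/write* indices:admin/mapping/put" ;;
    *)     echo "$1" ;;   # unknown name: treat as a raw permission
  esac
}

# Recursively expand a group into raw permission patterns.
expand() {
  local item
  for item in $(lookup "$1"); do
    if [ "$item" = "${item#indices:}" ] && [ "$item" != "$1" ]; then
      expand "$item"    # still a group name: recurse
    else
      echo "$item"      # raw permission pattern: emit
    fi
  done
}

expand CRUD   # emits the four raw patterns behind READ and WRITE
```

The real implementation also resolves `cluster:` patterns and guards against cycles; this sketch only shows the recursive expansion idea.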

securityconfig/config.yml

@@ -0,0 +1,221 @@
# This is the main Open Distro Security configuration file where authentication
# and authorization is defined.
#
# You need to configure at least one authentication domain in the authc section of this file.
# An authentication domain is responsible for extracting the user credentials from
# the request and for validating them against an authentication backend like Active Directory for example.
#
# If more than one authentication domain is configured the first one which succeeds wins.
# If all authentication domains fail then the request is unauthenticated.
# In this case an exception is thrown and/or the HTTP status is set to 401.
#
# After authentication, authorization (authz) will be applied. There can be zero or more authorizers which collect
# the roles from a given backend for the authenticated user.
#
# Both authc and authz can be enabled/disabled separately for the REST and TRANSPORT layers. Default is true for both.
# http_enabled: true
# transport_enabled: true
#
# 5.x Migration: "enabled: true/false" will also be respected currently but only to provide backward compatibility.
#
# For HTTP it is possible to allow anonymous authentication. If that is the case then the HTTP authenticators try to
# find user credentials in the HTTP request. If credentials are found then the user gets regularly authenticated.
# If none can be found the user will be authenticated as an "anonymous" user. This user always has the username "anonymous"
# and one role named "anonymous_backendrole".
# If you enable anonymous authentication, no HTTP authenticator will challenge.
#
#
# Note: If you define more than one HTTP authenticator, make sure to put non-challenging authenticators like "proxy" or "clientcert"
# first and the challenging one last.
# Because it's not possible to challenge a client with two different authentication methods (for example
# Kerberos and Basic) only one can have the challenge flag set to true. You can cope with this situation
# by using pre-authentication, e.g. sending an HTTP Basic authentication header in the request.
#
# Default value of the challenge flag is true.
#
#
# HTTP
# basic (challenging)
# proxy (not challenging, needs xff)
# kerberos (challenging)
# clientcert (not challenging, needs https)
# jwt (not challenging)
# host (not challenging) #DEPRECATED, will be removed in a future version.
# host based authentication is configurable in roles_mapping
# Authc
# internal
# noop
# ldap
# Authz
# ldap
# noop
opendistro_security:
  dynamic:
    # Set filtered_alias_mode to 'disallow' to forbid more than 2 filtered aliases per index
    # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warn about it (default)
    # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
    #filtered_alias_mode: warn
    #kibana:
      # Kibana multitenancy -
      # see <TBD>
      # To make this work you need to install <TBD>
      #multitenancy_enabled: true
      #server_username: kibanaserver
      #index: '.kibana'
      #do_not_fail_on_forbidden: false
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
        #internalProxies: '.*' # trust all internal proxies, regex pattern
        remoteIpHeader: 'x-forwarded-for'
        proxiesHeader: 'x-forwarded-by'
        #trustedProxies: '.*' # trust all external proxies, regex pattern
        ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
        ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
        ###### and here https://tools.ietf.org/html/rfc7239
        ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
    authc:
      kerberos_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 6
        http_authenticator:
          type: kerberos
          challenge: true
          config:
            # If true, a lot of Kerberos/security-related debugging output will be logged to standard out
            krb_debug: false
            # If true, the realm will be stripped from the user name
            strip_realm_from_principal: true
        authentication_backend:
          type: noop
      basic_internal_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 4
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: intern
      proxy_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 3
        http_authenticator:
          type: proxy
          challenge: false
          config:
            user_header: "x-proxy-user"
            roles_header: "x-proxy-roles"
        authentication_backend:
          type: noop
      jwt_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 0
        http_authenticator:
          type: jwt
          challenge: false
          config:
            signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
            jwt_header: "Authorization"
            jwt_url_parameter: null
            roles_key: null
            subject_key: null
        authentication_backend:
          type: noop
      clientcert_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 2
        http_authenticator:
          type: clientcert
          config:
            username_attribute: cn #optional, if omitted DN becomes username
          challenge: false
        authentication_backend:
          type: noop
      ldap:
        http_enabled: false
        transport_enabled: false
        order: 5
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          # LDAP authentication backend (authenticate users against an LDAP server or Active Directory)
          type: ldap
          config:
            # enable ldaps
            enable_ssl: false
            # enable start tls, enable_ssl should be false
            enable_start_tls: false
            # send client certificate
            enable_ssl_client_auth: false
            # verify ldap hostname
            verify_hostnames: true
            hosts:
              - localhost:8389
            bind_dn: null
            password: null
            userbase: 'ou=people,dc=example,dc=com'
            # Filter to search for users (currently in the whole subtree beneath userbase)
            # {0} is substituted with the username
            usersearch: '(sAMAccountName={0})'
            # Use this attribute from the user as username (if not set then DN is used)
            username_attribute: null
    authz:
      roles_from_myldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          # LDAP authorization backend (gather roles from an LDAP server or Active Directory; you also have to configure the LDAP authentication backend settings above)
          type: ldap
          config:
            # enable ldaps
            enable_ssl: false
            # enable start tls, enable_ssl should be false
            enable_start_tls: false
            # send client certificate
            enable_ssl_client_auth: false
            # verify ldap hostname
            verify_hostnames: true
            hosts:
              - localhost:8389
            bind_dn: null
            password: null
            rolebase: 'ou=groups,dc=example,dc=com'
            # Filter to search for roles (currently in the whole subtree beneath rolebase)
            # {0} is substituted with the DN of the user
            # {1} is substituted with the username
            # {2} is substituted with an attribute value from the authenticated user's directory entry. Use userroleattribute to specify the name of the attribute
            rolesearch: '(member={0})'
            # Specify the name of the attribute whose value should be substituted with {2} above
            userroleattribute: null
            # Roles as an attribute of the user entry
            userrolename: disabled
            #userrolename: memberOf
            # The attribute in a role entry containing the name of that role. Default is "name".
            # Can also be "dn" to use the full DN as rolename.
            rolename: cn
            # Resolve nested roles transitively (roles which are members of other roles and so on ...)
            resolve_nested_roles: true
            userbase: 'ou=people,dc=example,dc=com'
            # Filter to search for users (currently in the whole subtree beneath userbase)
            # {0} is substituted with the username
            usersearch: '(uid={0})'
            # Skip users matching a user name, a wildcard or a regex pattern
            #skip_users:
            #  - 'cn=Michael Jackson,ou*people,o=TEST'
            #  - '/\S*/'
      roles_from_another_ldap:
        enabled: false
        authorization_backend:
          type: ldap
        #config goes here ...
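The comments above describe the authc evaluation order: domains are tried by ascending `order`, and the first one that succeeds wins; if all fail, the request is unauthenticated. A rough sketch of those semantics (hypothetical function names, not the plugin's code):

```shell
# Hypothetical authenticators, listed in ascending 'order'.
# Each prints a user name on success and returns non-zero on failure.
try_jwt()   { return 1; }        # order 0: no JWT header present -> fail
try_basic() { echo "admin"; }    # order 4: basic credentials validate -> succeed

authenticate() {
  local user domain
  for domain in try_jwt try_basic; do
    if user="$($domain)"; then
      echo "$user"   # first succeeding domain wins
      return 0
    fi
  done
  # every domain failed: request is unauthenticated (HTTP 401)
  return 1
}

authenticate   # prints "admin"
```

The loop order stands in for the `order` keys above; in the real plugin, challenging authenticators may also send a WWW-Authenticate response before the next domain is tried.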


@@ -0,0 +1,199 @@
############## Open Distro Security configuration ###############
###########################################################
# Add the following settings to your standard elasticsearch.yml
# alongside with the Open Distro Security TLS settings.
# Settings must always be the same on all nodes in the cluster.
############## Common configuration settings ##############
# Enable or disable the Open Distro Security enterprise modules
# By default enterprise modules are enabled. If you use any of the modules in production you need
# to obtain a license. If you want to use the free Community Edition, you can switch
# all enterprise features off by setting the following key to false
opendistro_security.enterprise_modules_enabled: true
# Specify a list of DNs which denote the other nodes in the cluster.
# This setting supports wildcards and regular expressions.
# This setting only has an effect if 'opendistro_security.cert.intercluster_request_evaluator_class' is not set.
opendistro_security.nodes_dn:
  - "CN=*.example.com, OU=SSL, O=Test, L=Test, C=DE"
  - "CN=node.other.com, OU=SSL, O=Test, L=Test, C=DE"
# Defines the DNs (distinguished names) of certificates
# to which admin privileges should be assigned (mandatory)
opendistro_security.authcz.admin_dn:
  - "CN=kirk,OU=client,O=client,l=tEst, C=De"
# Define how backend roles should be mapped to Open Distro Security roles
# MAPPING_ONLY - mappings must be configured explicitly in roles_mapping.yml (default)
# BACKENDROLES_ONLY - backend roles are mapped to Open Distro Security roles directly. Settings in roles_mapping.yml have no effect.
# BOTH - backend roles are mapped to Open Distro Security roles both directly and via roles_mapping.yml
opendistro_security.roles_mapping_resolution: MAPPING_ONLY
############## REST Management API configuration settings ##############
# Enable or disable role based access to the REST management API
# Default is that no role is allowed to access the REST management API.
#opendistro_security.restapi.roles_enabled: ["all_access","xyz_role"]
# Disable particular endpoints and their HTTP methods for roles.
# By default all endpoints/methods are allowed.
#opendistro_security.restapi.endpoints_disabled.<role>.<endpoint>: <array of http methods>
# Example:
#opendistro_security.restapi.endpoints_disabled.all_access.ACTIONGROUPS: ["PUT","POST","DELETE"]
#opendistro_security.restapi.endpoints_disabled.xyz_role.LICENSE: ["DELETE"]
# The following endpoints exist:
# ACTIONGROUPS
# CACHE
# CONFIG
# ROLES
# ROLESMAPPING
# INTERNALUSERS
# SYSTEMINFO
# PERMISSIONSINFO
############## Auditlog configuration settings ##############
# General settings
# Enable/disable rest request logging (default: true)
#opendistro_security.audit.enable_rest: true
# Enable/disable transport request logging (default: false)
#opendistro_security.audit.enable_transport: false
# Enable/disable bulk request logging (default: false)
# If enabled all subrequests in bulk requests will be logged too
#opendistro_security.audit.resolve_bulk_requests: false
# Disable some categories
#opendistro_security.audit.config.disabled_categories: ["AUTHENTICATED","GRANTED_PRIVILEGES"]
# Disable some requests (wildcard or regex of actions or rest request paths)
#opendistro_security.audit.ignore_requests: ["indices:data/read/*","*_bulk"]
# Tune threadpool size, default is 10 and 0 means disabled
#opendistro_security.audit.threadpool.size: 0
# Tune threadpool max size queue length, default is 100000
#opendistro_security.audit.threadpool.max_queue_len: 100000
# If enable_request_details is true then the audit log event will also contain
# details like the search query. Default is false.
#opendistro_security.audit.enable_request_details: true
# Ignore users, e.g. do not log audit requests from that users (default: no ignored users)
#opendistro_security.audit.ignore_users: ['kibanaserver','some*user','/also.*regex possible/']
# Destination of the auditlog events
opendistro_security.audit.type: internal_elasticsearch
#opendistro_security.audit.type: external_elasticsearch
#opendistro_security.audit.type: debug
#opendistro_security.audit.type: webhook
# external_elasticsearch settings
#opendistro_security.audit.config.http_endpoints: ['localhost:9200','localhost:9201','localhost:9202']
# Auditlog index can be a static one or one with a date pattern (default is 'auditlog6')
#opendistro_security.audit.config.index: auditlog6 # make sure you secure this index properly
#opendistro_security.audit.config.index: "'auditlog6-'YYYY.MM.dd" #rotates index daily - make sure you secure this index properly
#opendistro_security.audit.config.type: auditlog
#opendistro_security.audit.config.username: auditloguser
#opendistro_security.audit.config.password: auditlogpassword
#opendistro_security.audit.config.enable_ssl: false
#opendistro_security.audit.config.verify_hostnames: false
#opendistro_security.audit.config.enable_ssl_client_auth: false
#opendistro_security.audit.config.cert_alias: mycert
#opendistro_security.audit.config.pemkey_filepath: key.pem
#opendistro_security.audit.config.pemkey_content: <...pem base 64 content>
#opendistro_security.audit.config.pemkey_password: secret
#opendistro_security.audit.config.pemcert_filepath: cert.pem
#opendistro_security.audit.config.pemcert_content: <...pem base 64 content>
#opendistro_security.audit.config.pemtrustedcas_filepath: ca.pem
#opendistro_security.audit.config.pemtrustedcas_content: <...pem base 64 content>
# webhook settings
#opendistro_security.audit.config.webhook.url: "http://mywebhook/endpoint"
# One of URL_PARAMETER_GET,URL_PARAMETER_POST,TEXT,JSON,SLACK
#opendistro_security.audit.config.webhook.format: JSON
#opendistro_security.audit.config.webhook.ssl.verify: false
#opendistro_security.audit.config.webhook.ssl.pemtrustedcas_filepath: ca.pem
#opendistro_security.audit.config.webhook.ssl.pemtrustedcas_content: <...pem base 64 content>
# log4j settings
#opendistro_security.audit.config.log4j.logger_name: auditlogger
#opendistro_security.audit.config.log4j.level: INFO
############## Kerberos configuration settings ##############
# If Kerberos authentication should be used you have to configure:
# The Path to the krb5.conf file
# Can be absolute or relative to the Elasticsearch config directory
#opendistro_security.kerberos.krb5_filepath: '/etc/krb5.conf'
# The Path to the keytab where the acceptor_principal credentials are stored.
# Must be relative to the Elasticsearch config directory
#opendistro_security.kerberos.acceptor_keytab_filepath: 'eskeytab.tab'
# Acceptor (Server) Principal name, must be present in acceptor_keytab_path file
#opendistro_security.kerberos.acceptor_principal: 'HTTP/localhost'
############## Advanced configuration settings ##############
# Enable transport layer impersonation
# Allow DNs (distinguished names) to impersonate as other users
#opendistro_security.authcz.impersonation_dn:
# "CN=spock,OU=client,O=client,L=Test,C=DE":
# - worf
# "cn=webuser,ou=IT,ou=IT,dc=company,dc=com":
# - user2
# - user1
# Enable rest layer impersonation
# Allow users to impersonate as other users
#opendistro_security.authcz.rest_impersonation_user:
# "picard":
# - worf
# "john":
# - steve
# - martin
# If this is set to true Open Distro Security will automatically initialize the configuration index
# with the files in the config directory if the index does not exist.
# WARNING: This will use well-known default passwords.
# Use only in a private network/environment.
#opendistro_security.allow_default_init_opendistrosecurityindex: false
# If this is set to true then startup with demo certificates is allowed.
# These are certificates issued by floragunn GmbH for demo purposes.
# WARNING: These certificates are well known and therefore unsafe
# Use only in a private network/environment.
#opendistro_security.allow_unsafe_democertificates: false
############## Expert settings ##############
# WARNING: Expert settings, only use them if you know what you are doing
# If you set wrong values here this could be a security risk
# or make Open Distro Security stop working
# Name of the index where .opendistro_security stores its configuration.
#opendistro_security.config_index_name: .opendistro_security
# This defines the OID of server node certificates
#opendistro_security.cert.oid: '1.2.3.4.5.5'
# This specifies the implementation of com.amazon.opendistroforelasticsearch.security.transport.InterClusterRequestEvaluator
# that is used to determine inter-cluster requests.
# Instances of com.amazon.opendistroforelasticsearch.security.transport.InterClusterRequestEvaluator must implement a single argument
# constructor that takes an org.elasticsearch.common.settings.Settings
#opendistro_security.cert.intercluster_request_evaluator_class: com.amazon.opendistroforelasticsearch.security.transport.DefaultInterClusterRequestEvaluator
# Allow snapshot restore for normal users
# By default only requests signed by an admin TLS certificate can do this
# To enable snapshot restore for normal users set 'opendistro_security.enable_snapshot_restore_privilege: true'
# The user who wants to restore a snapshot must have the 'cluster:admin/snapshot/restore' privilege and must also have
# "indices:admin/create" and "indices:data/write/index" for the indices to be restored.
# A snapshot can only be restored when it does not contain global state and does not restore the '.opendistro_security' index
# If 'opendistro_security.check_snapshot_restore_write_privileges: false' is set then the additional indices checks are omitted.
# This makes it less secure.
#opendistro_security.enable_snapshot_restore_privilege: true
#opendistro_security.check_snapshot_restore_write_privileges: false
# Authentication cache timeout in minutes (A value of 0 disables caching, default is 60)
#opendistro_security.cache.ttl_minutes: 60
# Disable Open Distro Security
# WARNING: This can expose your configuration (including passwords) to the public.
#opendistro_security.disabled: false

@ -0,0 +1,45 @@
# This is the internal user database
# The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh
#password is: admin
admin:
readonly: true
hash: $2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG
roles:
- admin
attributes:
#no dots allowed in attribute names
attribute1: value1
attribute2: value2
attribute3: value3
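The hashes above use bcrypt's modular-crypt format: a version tag, a cost factor, then 22 characters of salt followed by a 31-character checksum. A small stdlib-only Python sketch (an illustration, not part of this plugin) that splits one of the demo hashes into those fields:

```python
def parse_bcrypt(hash_string):
    """Split a bcrypt hash into version, cost factor, salt, and checksum."""
    # format: $<version>$<cost>$<22-char salt><31-char checksum>
    _, version, cost, rest = hash_string.split("$")
    return {
        "version": version,     # e.g. '2a' or '2y'
        "cost": int(cost),      # work factor: 2**cost key-expansion rounds
        "salt": rest[:22],      # 22 chars of bcrypt-base64 (16 bytes)
        "checksum": rest[22:],  # 31 chars of bcrypt-base64 (23 bytes)
    }

# the demo 'admin' hash from this file
fields = parse_bcrypt("$2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG")
```

The cost of 12 used by these demo hashes means 2^12 key-expansion rounds per verification attempt.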
#password is: logstash
logstash:
hash: $2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2
roles:
- logstash
#password is: kibanaserver
kibanaserver:
readonly: true
hash: $2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H.
#password is: kibanaro
kibanaro:
hash: $2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC
roles:
- kibanauser
- readall
#password is: readall
readall:
hash: $2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2
#password is: readall
roles:
- readall
#password is: snapshotrestore
snapshotrestore:
hash: $2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W
roles:
- snapshotrestore

183
securityconfig/roles.yml Normal file
@ -0,0 +1,183 @@
#<role_name>:
# cluster:
# - '<permission>'
# indices:
# '<indexname or alias>':
# '<type>':
# - '<permission>'
# _dls_: '<dls query>'
# _fls_:
# - '<field>'
# - '<field>'
# When a user makes a request to Elasticsearch the following roles will be evaluated to see if the user has
# permissions for the request. A request is always associated with an action and is executed against an index (or alias)
# and a type. If a request is executed against all indices (or all types) then the asterisk ('*') is needed.
# Every role a user has will be examined to see whether it allows the action against an index (or type). At least one role must match
# for the request to be successful. If no role matches then the request will be denied. Currently a match must happen within
# one single role - that means that permissions cannot span multiple roles.
# For <permission>, <indexname or alias> and <type> simple wildcards and regular expressions are possible.
# An asterisk (*) will match any character sequence (or an empty sequence)
# A question mark (?) will match any single character (but NOT an empty sequence)
# Example: '*my*index' will match 'my_first_index' as well as 'myindex' but not 'myindex1'
# Example: '?kibana' will match '.kibana' but not 'kibana'
# To use a full-blown regex you have to prepend and append a '/' to use regex instead of simple wildcards:
# '/<java regex>/'
# Example: '/\S*/' will match any non-whitespace characters
# Important:
# Index, alias or type names cannot contain dots (.) in the <indexname or alias> or <type> expression.
# The reason is that we currently parse the config file into an Elasticsearch settings object which cannot cope with dots in keys.
# Workaround: Just configure something like '?kibana' instead of '.kibana' or 'my?index' instead of 'my.index'
# This limitation will likely be removed with the next version of Open Distro Security
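The matching rules in the header comment can be sketched in a few lines of Python (an illustration only; the plugin's actual matcher is implemented in Java):

```python
import re

def pattern_match(pattern, name):
    """Match an index/type/permission expression per the roles.yml rules:
    '*' matches any sequence (possibly empty), '?' matches exactly one
    character, and '/<regex>/' is treated as a full regular expression."""
    if len(pattern) > 1 and pattern.startswith("/") and pattern.endswith("/"):
        # full-blown regex between the slashes
        return re.fullmatch(pattern[1:-1], name) is not None
    # translate simple wildcards to a regex, escaping everything else
    regex = "".join(
        ".*" if ch == "*" else "." if ch == "?" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, name) is not None
```

The examples from the comment behave as documented: `pattern_match('*my*index', 'myindex1')` is false because the pattern must cover the whole name, and `pattern_match('?kibana', 'kibana')` is false because `?` requires exactly one character.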
# Allows everything, but no changes to .opendistro_security configuration index
all_access:
readonly: true
cluster:
- UNLIMITED
indices:
'*':
'*':
- UNLIMITED
tenants:
admin_tenant: RW
# Read all, but no write permissions
readall:
readonly: true
cluster:
- CLUSTER_COMPOSITE_OPS_RO
indices:
'*':
'*':
- READ
# Read all and monitor, but no write permissions
readall_and_monitor:
cluster:
- CLUSTER_MONITOR
- CLUSTER_COMPOSITE_OPS_RO
indices:
'*':
'*':
- READ
# For users who use Kibana, access to indices must be granted separately
kibana_user:
readonly: true
cluster:
- INDICES_MONITOR
- CLUSTER_COMPOSITE_OPS
indices:
'?kibana':
'*':
- MANAGE
- INDEX
- READ
- DELETE
'?kibana-6':
'*':
- MANAGE
- INDEX
- READ
- DELETE
'?kibana_*':
'*':
- MANAGE
- INDEX
- READ
- DELETE
'?tasks':
'*':
- INDICES_ALL
'?management-beats':
'*':
- INDICES_ALL
'*':
'*':
- indices:data/read/field_caps*
- indices:data/read/xpack/rollup*
- indices:admin/mappings/get*
- indices:admin/get
# For the Kibana server
kibana_server:
readonly: true
cluster:
- CLUSTER_MONITOR
- CLUSTER_COMPOSITE_OPS
- cluster:admin/xpack/monitoring*
- indices:admin/template*
- indices:data/read/scroll*
indices:
'?kibana':
'*':
- INDICES_ALL
'?kibana-6':
'*':
- INDICES_ALL
'?kibana_*':
'*':
- INDICES_ALL
'?reporting*':
'*':
- INDICES_ALL
'?monitoring*':
'*':
- INDICES_ALL
'?tasks':
'*':
- INDICES_ALL
'?management-beats*':
'*':
- INDICES_ALL
'*':
'*':
- "indices:admin/aliases*"
# For Logstash and Beats
logstash:
cluster:
- CLUSTER_MONITOR
- CLUSTER_COMPOSITE_OPS
- indices:admin/template/get
- indices:admin/template/put
indices:
'logstash-*':
'*':
- CRUD
- CREATE_INDEX
'*beat*':
'*':
- CRUD
- CREATE_INDEX
# Allows adding and modifying repositories and creating and restoring snapshots
manage_snapshots:
cluster:
- MANAGE_SNAPSHOTS
indices:
'*':
'*':
- "indices:data/write/index"
- "indices:admin/create"
# Allows users to access the security REST APIs programmatically
security_rest_api_access:
readonly: true
# Restricts users so they can only view visualizations and dashboards in Kibana
kibana_read_only:
readonly: true
# Allows each user to access an index named after their username
own_index:
cluster:
- CLUSTER_COMPOSITE_OPS
indices:
'${user_name}':
'*':
- INDICES_ALL
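The evaluation rules described in the file's header comment, that every role is checked in isolation and at least one single role must grant the action on both the index and the type, can be sketched as follows (a simplified, hypothetical data model; action-group expansion of names like READ or UNLIMITED is omitted):

```python
from fnmatch import fnmatchcase  # '*' and '?' wildcard semantics

def is_permitted(user_roles, index, doc_type, action):
    """Return True if at least one single role allows the action on the
    index and type; permissions cannot be combined across roles."""
    for role in user_roles:
        for index_pat, types in role.get("indices", {}).items():
            if not fnmatchcase(index, index_pat):
                continue  # this role does not cover the index at all
            for type_pat, actions in types.items():
                if fnmatchcase(doc_type, type_pat) and any(
                    fnmatchcase(action, a) for a in actions
                ):
                    return True  # matched within one role
    return False  # no single role matched: request is denied

# hypothetical roles with action groups already expanded by hand
readall = {"indices": {"*": {"*": ["indices:data/read*"]}}}
writer = {"indices": {"logstash-*": {"*": ["indices:data/write*"]}}}
```

Here a user holding both roles may read anywhere and write only to `logstash-*` indices; neither permission leaks into the other role's index pattern.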

@ -0,0 +1,34 @@
# In this file users, backendroles and hosts can be mapped to Open Distro Security roles.
# Permissions for Open Distro Security roles are configured in roles.yml
all_access:
readonly: true
backendroles:
- admin
logstash:
backendroles:
- logstash
kibana_server:
readonly: true
users:
- kibanaserver
kibana_user:
backendroles:
- kibanauser
readall:
readonly: true
backendroles:
- readall
manage_snapshots:
readonly: true
backendroles:
- snapshotrestore
own_index:
users:
- '*'
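How such a mapping resolves to Open Distro Security roles can be sketched in Python (an illustration of the semantics above, not the plugin's Java implementation; host matching and wildcard handling are simplified):

```python
from fnmatch import fnmatchcase

def resolve_roles(mapping, username, backend_roles, host=None):
    """Collect every security role whose users, backendroles, or hosts
    entry matches the authenticated user."""
    resolved = set()
    for role, spec in mapping.items():
        if any(fnmatchcase(username, p) for p in spec.get("users", [])):
            resolved.add(role)
        if any(b in spec.get("backendroles", []) for b in backend_roles):
            resolved.add(role)
        if host and any(fnmatchcase(host, p) for p in spec.get("hosts", [])):
            resolved.add(role)
    return resolved

# a subset of the mapping defined in this file
mapping = {
    "all_access": {"backendroles": ["admin"]},
    "kibana_server": {"users": ["kibanaserver"]},
    "own_index": {"users": ["*"]},
}
```

Because `own_index` maps the wildcard user `'*'`, every authenticated user picks it up in addition to whatever their username or backend roles grant.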

10
settings.xml Normal file
@ -0,0 +1,10 @@
<?xml version="1.0"?>
<settings>
<servers>
<server>
<id>ossrh-fg</id>
<username>${env.SONATYPE_USER}</username>
<password>${env.SONATYPE_PASSWORD}</password>
</server>
</servers>
</settings>

@ -0,0 +1,43 @@
<?xml version="1.0"?>
<assembly>
<id>plugin</id>
<formats>
<format>zip</format>
</formats>
<includeBaseDirectory>false</includeBaseDirectory>
<dependencySets>
<dependencySet>
<useStrictFiltering>true</useStrictFiltering>
<outputDirectory>${file.separator}</outputDirectory>
<useProjectArtifact>true</useProjectArtifact>
<useTransitiveFiltering>true</useTransitiveFiltering>
</dependencySet>
</dependencySets>
<fileSets>
<fileSet>
<directory>${basedir}/src/main/assemblies/</directory>
<outputDirectory>${file.separator}</outputDirectory>
<filtered>false</filtered>
<includes>
<include>LICENSE</include>
</includes>
</fileSet>
<fileSet>
<directory>${project.basedir}</directory>
<outputDirectory>${file.separator}</outputDirectory>
<includes>
<include>tools/**</include>
<include>securityconfig/**</include>
</includes>
<fileMode>0755</fileMode>
</fileSet>
<fileSet>
<outputDirectory>${file.separator}</outputDirectory>
<includes>
<include>plugin-descriptor.properties</include>
<include>plugin-security.policy</include>
</includes>
</fileSet>
</fileSets>
</assembly>

@ -0,0 +1,43 @@
<?xml version="1.0"?>
<assembly>
<id>securityadmin-standalone</id>
<formats>
<format>zip</format>
<format>tar.gz</format>
</formats>
<includeBaseDirectory>false</includeBaseDirectory>
<fileSets>
<fileSet>
<directory>${project.basedir}</directory>
<outputDirectory>${file.separator}</outputDirectory>
<includes>
<include>tools/**</include>
</includes>
<fileMode>0755</fileMode>
</fileSet>
<fileSet>
<directory>${project.basedir}</directory>
<outputDirectory>${file.separator}deps</outputDirectory>
<includes>
<include>securityconfig/**</include>
</includes>
<fileMode>0755</fileMode>
</fileSet>
</fileSets>
<dependencySets>
<dependencySet>
<outputDirectory>${file.separator}deps</outputDirectory>
<useProjectArtifact>true</useProjectArtifact>
<unpack>false</unpack>
<scope>provided</scope>
</dependencySet>
<dependencySet>
<outputDirectory>${file.separator}deps</outputDirectory>
<useProjectArtifact>true</useProjectArtifact>
<unpack>false</unpack>
<excludes>
<exclude>com.amazon.opendistroforelasticsearch:dlic*</exclude>
</excludes>
</dependencySet>
</dependencySets>
</assembly>

@ -0,0 +1,24 @@
<?xml version="1.0"?>
<assembly>
<id>veracode</id>
<formats>
<format>zip</format>
</formats>
<includeBaseDirectory>false</includeBaseDirectory>
<dependencySets>
<dependencySet>
<useStrictFiltering>true</useStrictFiltering>
<outputDirectory>${file.separator}elasticsearch${file.separator}</outputDirectory>
<useProjectAttachments>true</useProjectAttachments>
<useTransitiveFiltering>true</useTransitiveFiltering>
<scope>compile</scope>
<excludes>
<exclude>org.elasticsearch:jna</exclude>
<exclude>org.apache.lucene:lucene-backward-codecs</exclude>
<exclude>org.apache.logging.log4j:log4j-core</exclude>
<exclude>*:*:*:test*:*</exclude>
</excludes>
</dependencySet>
</dependencySets>
</assembly>

@ -0,0 +1,55 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.configupdate;
import org.elasticsearch.action.Action;
import org.elasticsearch.client.ElasticsearchClient;
public class ConfigUpdateAction extends Action<ConfigUpdateRequest, ConfigUpdateResponse, ConfigUpdateRequestBuilder> {
public static final ConfigUpdateAction INSTANCE = new ConfigUpdateAction();
public static final String NAME = "cluster:admin/opendistro_security/config/update";
protected ConfigUpdateAction() {
super(NAME);
}
@Override
public ConfigUpdateRequestBuilder newRequestBuilder(final ElasticsearchClient client) {
return new ConfigUpdateRequestBuilder(client, this);
}
@Override
public ConfigUpdateResponse newResponse() {
return new ConfigUpdateResponse();
}
}

@ -0,0 +1,87 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.configupdate;
import java.io.IOException;
import java.util.Arrays;
import org.elasticsearch.action.support.nodes.BaseNodeResponse;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
public class ConfigUpdateNodeResponse extends BaseNodeResponse {
private String[] updatedConfigTypes;
private String message;
ConfigUpdateNodeResponse() {
}
public ConfigUpdateNodeResponse(final DiscoveryNode node, String[] updatedConfigTypes, String message) {
super(node);
this.updatedConfigTypes = updatedConfigTypes;
this.message = message;
}
public static ConfigUpdateNodeResponse readNodeResponse(StreamInput in) throws IOException {
ConfigUpdateNodeResponse nodeResponse = new ConfigUpdateNodeResponse();
nodeResponse.readFrom(in);
return nodeResponse;
}
public String[] getUpdatedConfigTypes() {
return updatedConfigTypes==null?null:Arrays.copyOf(updatedConfigTypes, updatedConfigTypes.length);
}
public String getMessage() {
return message;
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeStringArray(updatedConfigTypes);
out.writeOptionalString(message);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
updatedConfigTypes = in.readStringArray();
message = in.readOptionalString();
}
@Override
public String toString() {
return "ConfigUpdateNodeResponse [updatedConfigTypes=" + Arrays.toString(updatedConfigTypes) + ", message=" + message + "]";
}
}

@ -0,0 +1,80 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.configupdate;
import java.io.IOException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.nodes.BaseNodesRequest;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
public class ConfigUpdateRequest extends BaseNodesRequest<ConfigUpdateRequest> {
private String[] configTypes;
public ConfigUpdateRequest() {
super();
}
public ConfigUpdateRequest(final String[] configTypes) {
super();
this.configTypes = configTypes;
}
@Override
public void readFrom(final StreamInput in) throws IOException {
super.readFrom(in);
this.configTypes = in.readStringArray();
}
@Override
public void writeTo(final StreamOutput out) throws IOException {
super.writeTo(out);
out.writeStringArray(configTypes);
}
public String[] getConfigTypes() {
return configTypes;
}
public void setConfigTypes(final String[] configTypes) {
this.configTypes = configTypes;
}
@Override
public ActionRequestValidationException validate() {
if (configTypes == null || configTypes.length == 0) {
return new ActionRequestValidationException();
}
return null;
}
}

@ -0,0 +1,51 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.configupdate;
import org.elasticsearch.action.support.nodes.NodesOperationRequestBuilder;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.client.ElasticsearchClient;
public class ConfigUpdateRequestBuilder extends
NodesOperationRequestBuilder<ConfigUpdateRequest, ConfigUpdateResponse, ConfigUpdateRequestBuilder> {
public ConfigUpdateRequestBuilder(final ClusterAdminClient client) {
this(client, ConfigUpdateAction.INSTANCE);
}
public ConfigUpdateRequestBuilder(final ElasticsearchClient client, final ConfigUpdateAction action) {
super(client, action, new ConfigUpdateRequest());
}
public ConfigUpdateRequestBuilder setShardId(final String[] configTypes) {
request().setConfigTypes(configTypes);
return this;
}
}

@ -0,0 +1,60 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.configupdate;
import java.io.IOException;
import java.util.List;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.nodes.BaseNodesResponse;
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
public class ConfigUpdateResponse extends BaseNodesResponse<ConfigUpdateNodeResponse> {
public ConfigUpdateResponse() {
}
public ConfigUpdateResponse(final ClusterName clusterName, List<ConfigUpdateNodeResponse> nodes, List<FailedNodeException> failures) {
super(clusterName, nodes, failures);
}
@Override
public List<ConfigUpdateNodeResponse> readNodesFrom(final StreamInput in) throws IOException {
return in.readList(ConfigUpdateNodeResponse::readNodeResponse);
}
@Override
public void writeNodesTo(final StreamOutput out, List<ConfigUpdateNodeResponse> nodes) throws IOException {
out.writeStreamableList(nodes);
}
}

@ -0,0 +1,128 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.configupdate;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import org.elasticsearch.action.FailedNodeException;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.nodes.BaseNodeRequest;
import org.elasticsearch.action.support.nodes.TransportNodesAction;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.inject.Provider;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import com.amazon.opendistroforelasticsearch.security.auth.BackendRegistry;
import com.amazon.opendistroforelasticsearch.security.configuration.ConfigurationRepository;
import com.amazon.opendistroforelasticsearch.security.configuration.IndexBaseConfigurationRepository;
public class TransportConfigUpdateAction
extends
TransportNodesAction<ConfigUpdateRequest, ConfigUpdateResponse, TransportConfigUpdateAction.NodeConfigUpdateRequest, ConfigUpdateNodeResponse> {
private final Provider<BackendRegistry> backendRegistry;
private final ConfigurationRepository configurationRepository;
@Inject
public TransportConfigUpdateAction(final Settings settings,
final ThreadPool threadPool, final ClusterService clusterService, final TransportService transportService,
final IndexBaseConfigurationRepository configurationRepository, final ActionFilters actionFilters, final IndexNameExpressionResolver indexNameExpressionResolver,
Provider<BackendRegistry> backendRegistry) {
super(settings, ConfigUpdateAction.NAME, threadPool, clusterService, transportService, actionFilters,
indexNameExpressionResolver, ConfigUpdateRequest::new, TransportConfigUpdateAction.NodeConfigUpdateRequest::new,
ThreadPool.Names.MANAGEMENT, ConfigUpdateNodeResponse.class);
this.configurationRepository = configurationRepository;
this.backendRegistry = backendRegistry;
}
public static class NodeConfigUpdateRequest extends BaseNodeRequest {
ConfigUpdateRequest request;
public NodeConfigUpdateRequest() {
}
public NodeConfigUpdateRequest(final String nodeId, final ConfigUpdateRequest request) {
super(nodeId);
this.request = request;
}
@Override
public void readFrom(final StreamInput in) throws IOException {
super.readFrom(in);
request = new ConfigUpdateRequest();
request.readFrom(in);
}
@Override
public void writeTo(final StreamOutput out) throws IOException {
super.writeTo(out);
request.writeTo(out);
}
}
protected NodeConfigUpdateRequest newNodeRequest(final String nodeId, final ConfigUpdateRequest request) {
return new NodeConfigUpdateRequest(nodeId, request);
}
@Override
protected ConfigUpdateNodeResponse newNodeResponse() {
return new ConfigUpdateNodeResponse(clusterService.localNode(), new String[0], null);
}
@Override
protected ConfigUpdateResponse newResponse(ConfigUpdateRequest request, List<ConfigUpdateNodeResponse> responses,
List<FailedNodeException> failures) {
return new ConfigUpdateResponse(this.clusterService.getClusterName(), responses, failures);
}
@Override
protected ConfigUpdateNodeResponse nodeOperation(final NodeConfigUpdateRequest request) {
final Map<String, Settings> setn = configurationRepository.reloadConfiguration(Arrays.asList(request.request.getConfigTypes()));
backendRegistry.get().invalidateCache();
return new ConfigUpdateNodeResponse(clusterService.localNode(), setn.keySet().toArray(new String[0]), null);
}
}

@ -0,0 +1,77 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.whoami;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.HandledTransportAction;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import com.amazon.opendistroforelasticsearch.security.configuration.AdminDNs;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.HeaderHelper;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class TransportWhoAmIAction
extends
HandledTransportAction<WhoAmIRequest, WhoAmIResponse> {
private final AdminDNs adminDNs;
@Inject
public TransportWhoAmIAction(final Settings settings,
final ThreadPool threadPool, final ClusterService clusterService, final TransportService transportService,
final AdminDNs adminDNs, final ActionFilters actionFilters, final IndexNameExpressionResolver indexNameExpressionResolver) {
super(settings, WhoAmIAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, WhoAmIRequest::new);
this.adminDNs = adminDNs;
}
@Override
protected void doExecute(WhoAmIRequest request, ActionListener<WhoAmIResponse> listener) {
final User user = threadPool.getThreadContext().getTransient(ConfigConstants.OPENDISTRO_SECURITY_USER);
final String dn = user==null?threadPool.getThreadContext().getTransient(ConfigConstants.OPENDISTRO_SECURITY_SSL_TRANSPORT_PRINCIPAL):user.getName();
final boolean isAdmin = adminDNs.isAdminDN(dn);
final boolean isAuthenticated = isAdmin || user != null;
final boolean isNodeCertificateRequest = HeaderHelper.isInterClusterRequest(threadPool.getThreadContext()) ||
HeaderHelper.isTrustedClusterRequest(threadPool.getThreadContext());
listener.onResponse(new WhoAmIResponse(dn, isAdmin, isAuthenticated, isNodeCertificateRequest));
}
}

@@ -0,0 +1,55 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.whoami;
import org.elasticsearch.action.Action;
import org.elasticsearch.client.ElasticsearchClient;
public class WhoAmIAction extends Action<WhoAmIRequest, WhoAmIResponse, WhoAmIRequestBuilder> {
public static final WhoAmIAction INSTANCE = new WhoAmIAction();
public static final String NAME = "cluster:admin/opendistro_security/whoami";
protected WhoAmIAction() {
super(NAME);
}
@Override
public WhoAmIRequestBuilder newRequestBuilder(final ElasticsearchClient client) {
return new WhoAmIRequestBuilder(client, this);
}
@Override
public WhoAmIResponse newResponse() {
return new WhoAmIResponse(null, false, false, false);
}
}

@@ -0,0 +1,54 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.whoami;
import java.io.IOException;
import org.elasticsearch.action.support.nodes.BaseNodesRequest;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
public class WhoAmIRequest extends BaseNodesRequest<WhoAmIRequest> {
public WhoAmIRequest() {
super();
}
@Override
public void readFrom(final StreamInput in) throws IOException {
super.readFrom(in);
}
@Override
public void writeTo(final StreamOutput out) throws IOException {
super.writeTo(out);
}
}

@@ -0,0 +1,46 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.whoami;
import org.elasticsearch.action.ActionRequestBuilder;
import org.elasticsearch.client.ClusterAdminClient;
import org.elasticsearch.client.ElasticsearchClient;
public class WhoAmIRequestBuilder extends
ActionRequestBuilder<WhoAmIRequest, WhoAmIResponse, WhoAmIRequestBuilder> {
public WhoAmIRequestBuilder(final ClusterAdminClient client) {
this(client, WhoAmIAction.INSTANCE);
}
public WhoAmIRequestBuilder(final ElasticsearchClient client, final WhoAmIAction action) {
super(client, action, new WhoAmIRequest());
}
}

@@ -0,0 +1,107 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.action.whoami;
import java.io.IOException;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
public class WhoAmIResponse extends ActionResponse implements ToXContent {
private String dn;
private boolean isAdmin;
private boolean isAuthenticated;
private boolean isNodeCertificateRequest;
public WhoAmIResponse(String dn, boolean isAdmin, boolean isAuthenticated, boolean isNodeCertificateRequest) {
this.dn = dn;
this.isAdmin = isAdmin;
this.isAuthenticated = isAuthenticated;
this.isNodeCertificateRequest = isNodeCertificateRequest;
}
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeString(dn);
out.writeBoolean(isAdmin);
out.writeBoolean(isAuthenticated);
out.writeBoolean(isNodeCertificateRequest);
}
@Override
public void readFrom(StreamInput in) throws IOException {
super.readFrom(in);
dn = in.readString();
isAdmin = in.readBoolean();
isAuthenticated = in.readBoolean();
isNodeCertificateRequest = in.readBoolean();
}
public String getDn() {
return dn;
}
public boolean isAdmin() {
return isAdmin;
}
public boolean isAuthenticated() {
return isAuthenticated;
}
public boolean isNodeCertificateRequest() {
return isNodeCertificateRequest;
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
builder.startObject("whoami");
builder.field("dn", dn);
builder.field("is_admin", isAdmin);
builder.field("is_authenticated", isAuthenticated);
builder.field("is_node_certificate_request", isNodeCertificateRequest);
builder.endObject();
return builder;
}
@Override
public String toString() {
return Strings.toString(this, true, true);
}
}

@@ -0,0 +1,87 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auditlog;
import java.io.Closeable;
import java.util.Map;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.engine.Engine.Delete;
import org.elasticsearch.index.engine.Engine.DeleteResult;
import org.elasticsearch.index.engine.Engine.Index;
import org.elasticsearch.index.engine.Engine.IndexResult;
import org.elasticsearch.index.get.GetResult;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.transport.TransportRequest;
import com.amazon.opendistroforelasticsearch.security.compliance.ComplianceConfig;
public interface AuditLog extends Closeable {
//login
void logFailedLogin(String effectiveUser, boolean securityadmin, String initiatingUser, TransportRequest request, Task task);
void logFailedLogin(String effectiveUser, boolean securityadmin, String initiatingUser, RestRequest request);
void logSucceededLogin(String effectiveUser, boolean securityadmin, String initiatingUser, TransportRequest request, String action, Task task);
void logSucceededLogin(String effectiveUser, boolean securityadmin, String initiatingUser, RestRequest request);
//privs
void logMissingPrivileges(String privilege, String effectiveUser, RestRequest request);
void logMissingPrivileges(String privilege, TransportRequest request, Task task);
void logGrantedPrivileges(String privilege, TransportRequest request, Task task);
//spoof
void logBadHeaders(TransportRequest request, String action, Task task);
void logBadHeaders(RestRequest request);
void logSecurityIndexAttempt(TransportRequest request, String action, Task task);
void logSSLException(TransportRequest request, Throwable t, String action, Task task);
void logSSLException(RestRequest request, Throwable t);
void logDocumentRead(String index, String id, ShardId shardId, Map<String, String> fieldNameValues, ComplianceConfig complianceConfig);
void logDocumentWritten(ShardId shardId, GetResult originalIndex, Index currentIndex, IndexResult result, ComplianceConfig complianceConfig);
void logDocumentDeleted(ShardId shardId, Delete delete, DeleteResult result);
void logExternalConfig(Settings settings, Environment environment);
// compliance config
void setComplianceConfig(ComplianceConfig complianceConfig);
public enum Origin {
REST, TRANSPORT, LOCAL
}
public enum Operation {
CREATE, UPDATE, DELETE
}
}

@@ -0,0 +1,90 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auditlog;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.transport.TransportRequest;
import com.amazon.opendistroforelasticsearch.security.ssl.SslExceptionHandler;
public class AuditLogSslExceptionHandler implements SslExceptionHandler{
private final AuditLog auditLog;
public AuditLogSslExceptionHandler(final AuditLog auditLog) {
super();
this.auditLog = auditLog;
}
@Override
public void logError(Throwable t, RestRequest request, int type) {
switch (type) {
case 0:
auditLog.logSSLException(request, t);
break;
case 1:
auditLog.logBadHeaders(request);
break;
default:
break;
}
}
@Override
public void logError(Throwable t, boolean isRest) {
if (isRest) {
auditLog.logSSLException(null, t);
} else {
auditLog.logSSLException(null, t, null, null);
}
}
@Override
public void logError(Throwable t, TransportRequest request, String action, Task task, int type) {
switch (type) {
case 0:
if(t instanceof ElasticsearchException) {
auditLog.logMissingPrivileges(action, request, task);
} else {
auditLog.logSSLException(request, t, action, task);
}
break;
case 1:
auditLog.logBadHeaders(request, action, task);
break;
default:
break;
}
}
}

@@ -0,0 +1,142 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auditlog;
import java.io.IOException;
import java.util.Map;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.engine.Engine.Delete;
import org.elasticsearch.index.engine.Engine.DeleteResult;
import org.elasticsearch.index.engine.Engine.Index;
import org.elasticsearch.index.engine.Engine.IndexResult;
import org.elasticsearch.index.get.GetResult;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.transport.TransportRequest;
import com.amazon.opendistroforelasticsearch.security.compliance.ComplianceConfig;
public class NullAuditLog implements AuditLog {
@Override
public void close() throws IOException {
//noop, intentionally left empty
}
@Override
public void logFailedLogin(String effectiveUser, boolean securityadmin, String initiatingUser, TransportRequest request, Task task) {
//noop, intentionally left empty
}
@Override
public void logFailedLogin(String effectiveUser, boolean securityadmin, String initiatingUser, RestRequest request) {
//noop, intentionally left empty
}
@Override
public void logSucceededLogin(String effectiveUser, boolean securityadmin, String initiatingUser, TransportRequest request, String action, Task task) {
//noop, intentionally left empty
}
@Override
public void logSucceededLogin(String effectiveUser, boolean securityadmin, String initiatingUser, RestRequest request) {
//noop, intentionally left empty
}
@Override
public void logMissingPrivileges(String privilege, TransportRequest request, Task task) {
//noop, intentionally left empty
}
@Override
public void logGrantedPrivileges(String privilege, TransportRequest request, Task task) {
//noop, intentionally left empty
}
@Override
public void logBadHeaders(TransportRequest request, String action, Task task) {
//noop, intentionally left empty
}
@Override
public void logBadHeaders(RestRequest request) {
//noop, intentionally left empty
}
@Override
public void logSecurityIndexAttempt(TransportRequest request, String action, Task task) {
//noop, intentionally left empty
}
@Override
public void logSSLException(TransportRequest request, Throwable t, String action, Task task) {
//noop, intentionally left empty
}
@Override
public void logSSLException(RestRequest request, Throwable t) {
//noop, intentionally left empty
}
@Override
public void logMissingPrivileges(String privilege, String effectiveUser, RestRequest request) {
//noop, intentionally left empty
}
@Override
public void logDocumentRead(String index, String id, ShardId shardId, Map<String, String> fieldNameValues, ComplianceConfig complianceConfig) {
//noop, intentionally left empty
}
@Override
public void logDocumentWritten(ShardId shardId, GetResult originalIndex, Index currentIndex, IndexResult result, ComplianceConfig complianceConfig) {
//noop, intentionally left empty
}
@Override
public void logDocumentDeleted(ShardId shardId, Delete delete, DeleteResult result) {
//noop, intentionally left empty
}
@Override
public void logExternalConfig(Settings settings, Environment environment) {
//noop, intentionally left empty
}
@Override
public void setComplianceConfig(ComplianceConfig complianceConfig) {
//noop, intentionally left empty
}
}

@@ -0,0 +1,76 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth;
import java.util.Objects;
public class AuthDomain implements Comparable<AuthDomain> {
private final AuthenticationBackend backend;
private final HTTPAuthenticator httpAuthenticator;
private final int order;
private final boolean challenge;
public AuthDomain(final AuthenticationBackend backend, final HTTPAuthenticator httpAuthenticator, boolean challenge, final int order) {
super();
this.backend = Objects.requireNonNull(backend);
this.httpAuthenticator = httpAuthenticator;
this.order = order;
this.challenge = challenge;
}
public boolean isChallenge() {
return challenge;
}
public AuthenticationBackend getBackend() {
return backend;
}
public HTTPAuthenticator getHttpAuthenticator() {
return httpAuthenticator;
}
public int getOrder() {
return order;
}
@Override
public String toString() {
return "AuthDomain [backend=" + backend + ", httpAuthenticator=" + httpAuthenticator + ", order=" + order + ", challenge="
+ challenge + "]";
}
@Override
public int compareTo(final AuthDomain o) {
return Integer.compare(this.order, o.order);
}
}
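`AuthDomain.compareTo` orders domains by their `order` value, which is what lets the registry keep them in a `TreeSet` and try them lowest-order first. A standalone sketch of that ordering contract (`SimpleDomain` and `DomainOrderDemo` are simplified stand-ins for illustration, not plugin classes):

```java
import java.util.TreeSet;

// Stand-in for AuthDomain: only the Comparable ordering contract is modeled.
class SimpleDomain implements Comparable<SimpleDomain> {
    final String name;
    final int order;

    SimpleDomain(String name, int order) {
        this.name = name;
        this.order = order;
    }

    @Override
    public int compareTo(SimpleDomain o) {
        // Same rule as AuthDomain.compareTo: lower order sorts first.
        return Integer.compare(this.order, o.order);
    }
}

class DomainOrderDemo {
    // Returns the name of the domain that would be tried first.
    static String firstDomain() {
        TreeSet<SimpleDomain> domains = new TreeSet<>();
        domains.add(new SimpleDomain("ldap", 5));
        domains.add(new SimpleDomain("basic", 0));
        domains.add(new SimpleDomain("clientcert", 2));
        return domains.first().name;
    }
}
```

One side effect of sorting with a `TreeSet`: `compareTo` also defines set membership, so two domains configured with the same `order` would collide rather than coexist.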

@@ -0,0 +1,83 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth;
import org.elasticsearch.ElasticsearchSecurityException;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
import com.amazon.opendistroforelasticsearch.security.user.User;
/**
* Open Distro Security custom authentication backends need to implement this interface.
* <p/>
* Authentication backends verify {@link AuthCredentials} and, if successfully verified, return a {@link User}.
* <p/>
* Implementation classes must provide a public constructor
* <p/>
* {@code public MyAuthenticationBackend(org.elasticsearch.common.settings.Settings settings, java.nio.file.Path configPath)}
* <p/>
* The constructor should not throw any exception in case of an initialization problem.
* Instead catch all exceptions and log an appropriate error message. A logger can be instantiated like:
* <p/>
* {@code private final Logger log = LogManager.getLogger(this.getClass());}
*
* <p/>
*/
public interface AuthenticationBackend {
/**
* The type (name) of the authenticator. Only for logging.
* @return the type
*/
String getType();
/**
* Validate credentials and return an authenticated user (or throw an ElasticsearchSecurityException)
* <p/>
* Results of this method are normally cached so that we do not need to query the backend for every authentication attempt.
* <p/>
* @param credentials The credentials to be validated, never null
* @return the authenticated User, never null
* @throws ElasticsearchSecurityException in case of an authentication failure
* (when credentials are incorrect, the user does not exist or the backend is not reachable)
*/
User authenticate(AuthCredentials credentials) throws ElasticsearchSecurityException;
/**
*
* Look up a specific user in the authentication backend
*
* @param user The user for which the authentication backend should be queried
* @return true if the user exists in the authentication backend, false otherwise
*/
boolean exists(User user);
}
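The contract above can be illustrated with a minimal in-memory backend: return a non-null user on success, throw on failure, and report existence via `exists`. `Credentials`, `SimpleUser`, and `InMemoryBackend` below are simplified hypothetical stand-ins for the plugin's `AuthCredentials`/`User` types, not a shipped implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the plugin's AuthCredentials type.
class Credentials {
    final String username;
    final String password;

    Credentials(String username, String password) {
        this.username = username;
        this.password = password;
    }
}

// Simplified stand-in for the plugin's User type.
class SimpleUser {
    final String name;

    SimpleUser(String name) {
        this.name = name;
    }
}

// Hypothetical in-memory backend following the AuthenticationBackend contract:
// authenticate() returns a user on success and throws on failure, never null.
class InMemoryBackend {
    private final Map<String, String> store = new HashMap<>();

    void addUser(String name, String password) {
        store.put(name, password);
    }

    String getType() {
        return "in-memory"; // only used for logging
    }

    SimpleUser authenticate(Credentials creds) {
        String expected = store.get(creds.username);
        if (expected == null || !expected.equals(creds.password)) {
            // Stand-in for ElasticsearchSecurityException in the real interface.
            throw new SecurityException("authentication failed for " + creds.username);
        }
        return new SimpleUser(creds.username);
    }

    boolean exists(SimpleUser user) {
        return store.containsKey(user.name);
    }
}
```

A real backend would also honor the constructor convention documented above and catch initialization errors instead of throwing from the constructor.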

@@ -0,0 +1,75 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth;
import org.elasticsearch.ElasticsearchSecurityException;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
import com.amazon.opendistroforelasticsearch.security.user.User;
/**
* Open Distro Security custom authorization backends need to implement this interface.
* <p/>
* Authorization backends populate a previously authenticated {@link User} with the roles the user is a member of.
* <p/>
* Implementation classes must provide a public constructor
* <p/>
* {@code public MyAuthorizationBackend(org.elasticsearch.common.settings.Settings settings, java.nio.file.Path configPath)}
* <p/>
* The constructor should not throw any exception in case of an initialization problem.
* Instead catch all exceptions and log an appropriate error message. A logger can be instantiated like:
* <p/>
* {@code private final Logger log = LogManager.getLogger(this.getClass());}
*
* <p/>
*/
public interface AuthorizationBackend {
/**
* The type (name) of the authorizer. Only for logging.
* @return the type
*/
String getType();
/**
* Populate a {@link User} with roles. This method will not be called for cached users.
* <p/>
* Add them by calling either {@code user.addRole()} or {@code user.addRoles()}
* <p/>
* @param user The authenticated user to populate with roles, never null
* @param credentials Credentials to authenticate to the authorization backend, may be null.
* <em>This parameter is for future usage; currently always empty credentials are passed!</em>
* @throws ElasticsearchSecurityException if the authorization backend cannot be reached
* or the {@code credentials} are insufficient to authenticate to the authorization backend.
*/
void fillRoles(User user, AuthCredentials credentials) throws ElasticsearchSecurityException;
}
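A minimal sketch of the `fillRoles` contract: mutate the already-authenticated user by adding every role the backend knows for it. `RoledUser` and `StaticRoleBackend` are simplified hypothetical stand-ins (a real backend would query LDAP or another directory, and would use the plugin's `User` type):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified stand-in for the plugin's User type: a name plus a mutable role set.
class RoledUser {
    final String name;
    final Set<String> roles = new HashSet<>();

    RoledUser(String name) {
        this.name = name;
    }

    void addRole(String role) {
        roles.add(role);
    }
}

// Hypothetical authorization backend backed by a static in-memory map.
class StaticRoleBackend {
    private final Map<String, Set<String>> roleMap = new HashMap<>();

    void mapRoles(String user, String... roles) {
        roleMap.put(user, new HashSet<>(Arrays.asList(roles)));
    }

    String getType() {
        return "static"; // only used for logging
    }

    // The fillRoles contract: populate the user with roles via addRole();
    // this would not be called for cached users.
    void fillRoles(RoledUser user) {
        for (String role : roleMap.getOrDefault(user.name, Collections.emptySet())) {
            user.addRole(role);
        }
    }
}
```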

@@ -0,0 +1,787 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.rest.BytesRestResponse;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportRequest;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.auth.internal.InternalAuthenticationBackend;
import com.amazon.opendistroforelasticsearch.security.auth.internal.NoOpAuthenticationBackend;
import com.amazon.opendistroforelasticsearch.security.auth.internal.NoOpAuthorizationBackend;
import com.amazon.opendistroforelasticsearch.security.configuration.AdminDNs;
import com.amazon.opendistroforelasticsearch.security.configuration.ConfigurationChangeListener;
import com.amazon.opendistroforelasticsearch.security.http.HTTPBasicAuthenticator;
import com.amazon.opendistroforelasticsearch.security.http.HTTPClientCertAuthenticator;
import com.amazon.opendistroforelasticsearch.security.http.HTTPProxyAuthenticator;
import com.amazon.opendistroforelasticsearch.security.http.XFFResolver;
import com.amazon.opendistroforelasticsearch.security.ssl.util.Utils;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.HTTPHelper;
import com.amazon.opendistroforelasticsearch.security.support.ReflectionHelper;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
import com.amazon.opendistroforelasticsearch.security.user.User;
import com.google.common.base.Strings;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;
public class BackendRegistry implements ConfigurationChangeListener {
protected final Logger log = LogManager.getLogger(this.getClass());
private final Map<String, String> authImplMap = new HashMap<>();
private final SortedSet<AuthDomain> restAuthDomains = new TreeSet<>();
private final Set<AuthorizationBackend> restAuthorizers = new HashSet<>();
private final SortedSet<AuthDomain> transportAuthDomains = new TreeSet<>();
private final Set<AuthorizationBackend> transportAuthorizers = new HashSet<>();
private final List<Destroyable> destroyableComponents = new LinkedList<>();
private volatile boolean initialized;
private final AdminDNs adminDns;
private final XFFResolver xffResolver;
private volatile boolean anonymousAuthEnabled = false;
private final Settings esSettings;
private final Path configPath;
private final InternalAuthenticationBackend iab;
private final AuditLog auditLog;
private final ThreadPool threadPool;
private final UserInjector userInjector;
private final int ttlInMin;
private Cache<AuthCredentials, User> userCache;
private Cache<String, User> userCacheTransport;
private Cache<AuthCredentials, User> authenticatedUserCacheTransport;
private Cache<String, User> restImpersonationCache;
private volatile String transportUsernameAttribute = null;
private void createCaches() {
userCache = CacheBuilder.newBuilder()
.expireAfterWrite(ttlInMin, TimeUnit.MINUTES)
.removalListener(new RemovalListener<AuthCredentials, User>() {
@Override
public void onRemoval(RemovalNotification<AuthCredentials, User> notification) {
log.debug("Clear user cache for {} due to {}", notification.getKey().getUsername(), notification.getCause());
}
}).build();
userCacheTransport = CacheBuilder.newBuilder()
.expireAfterWrite(ttlInMin, TimeUnit.MINUTES)
.removalListener(new RemovalListener<String, User>() {
@Override
public void onRemoval(RemovalNotification<String, User> notification) {
log.debug("Clear user cache for {} due to {}", notification.getKey(), notification.getCause());
}
}).build();
authenticatedUserCacheTransport = CacheBuilder.newBuilder()
.expireAfterWrite(ttlInMin, TimeUnit.MINUTES)
.removalListener(new RemovalListener<AuthCredentials, User>() {
@Override
public void onRemoval(RemovalNotification<AuthCredentials, User> notification) {
log.debug("Clear user cache for {} due to {}", notification.getKey().getUsername(), notification.getCause());
}
}).build();
restImpersonationCache = CacheBuilder.newBuilder()
.expireAfterWrite(ttlInMin, TimeUnit.MINUTES)
.removalListener(new RemovalListener<String, User>() {
@Override
public void onRemoval(RemovalNotification<String, User> notification) {
log.debug("Clear user cache for {} due to {}", notification.getKey(), notification.getCause());
}
}).build();
}
public BackendRegistry(final Settings settings, final Path configPath, final AdminDNs adminDns,
final XFFResolver xffResolver, final InternalAuthenticationBackend iab, final AuditLog auditLog, final ThreadPool threadPool) {
this.adminDns = adminDns;
this.esSettings = settings;
this.configPath = configPath;
this.xffResolver = xffResolver;
this.iab = iab;
this.auditLog = auditLog;
this.threadPool = threadPool;
this.userInjector = new UserInjector(settings, threadPool, auditLog, xffResolver);
authImplMap.put("intern_c", InternalAuthenticationBackend.class.getName());
authImplMap.put("intern_z", NoOpAuthorizationBackend.class.getName());
authImplMap.put("internal_c", InternalAuthenticationBackend.class.getName());
authImplMap.put("internal_z", NoOpAuthorizationBackend.class.getName());
authImplMap.put("noop_c", NoOpAuthenticationBackend.class.getName());
authImplMap.put("noop_z", NoOpAuthorizationBackend.class.getName());
authImplMap.put("ldap_c", "com.amazon.dlic.auth.ldap.backend.LDAPAuthenticationBackend");
authImplMap.put("ldap_z", "com.amazon.dlic.auth.ldap.backend.LDAPAuthorizationBackend");
authImplMap.put("basic_h", HTTPBasicAuthenticator.class.getName());
authImplMap.put("proxy_h", HTTPProxyAuthenticator.class.getName());
authImplMap.put("clientcert_h", HTTPClientCertAuthenticator.class.getName());
authImplMap.put("kerberos_h", "com.amazon.dlic.auth.http.kerberos.HTTPSpnegoAuthenticator");
authImplMap.put("jwt_h", "com.amazon.dlic.auth.http.jwt.HTTPJwtAuthenticator");
authImplMap.put("openid_h", "com.amazon.dlic.auth.http.jwt.keybyoidc.HTTPJwtKeyByOpenIdConnectAuthenticator");
authImplMap.put("saml_h", "com.amazon.dlic.auth.http.saml.HTTPSamlAuthenticator");
this.ttlInMin = settings.getAsInt(ConfigConstants.OPENDISTRO_SECURITY_CACHE_TTL_MINUTES, 60);
createCaches();
}
public boolean isInitialized() {
return initialized;
}
public void invalidateCache() {
userCache.invalidateAll();
userCacheTransport.invalidateAll();
authenticatedUserCacheTransport.invalidateAll();
restImpersonationCache.invalidateAll();
}
@Override
public void onChange(final Settings settings) {
//TODO synchronize via semaphore/atomicref
restAuthDomains.clear();
transportAuthDomains.clear();
restAuthorizers.clear();
transportAuthorizers.clear();
invalidateCache();
destroyDestroyables();
transportUsernameAttribute = settings.get("opendistro_security.dynamic.transport_userrname_attribute", null);
anonymousAuthEnabled = settings.getAsBoolean("opendistro_security.dynamic.http.anonymous_auth_enabled", false)
&& !esSettings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_DISABLE_ANONYMOUS_AUTHENTICATION, false);
final Map<String, Settings> authzDyn = settings.getGroups("opendistro_security.dynamic.authz");
for (final String ad : authzDyn.keySet()) {
final Settings ads = authzDyn.get(ad);
final boolean enabled = ads.getAsBoolean("enabled", true);
final boolean httpEnabled = enabled && ads.getAsBoolean("http_enabled", true);
final boolean transportEnabled = enabled && ads.getAsBoolean("transport_enabled", true);
if (httpEnabled || transportEnabled) {
try {
final String authzBackendClazz = ads.get("authorization_backend.type", "noop");
final AuthorizationBackend authorizationBackend;
if(authzBackendClazz.equals(InternalAuthenticationBackend.class.getName()) //NOSONAR
|| authzBackendClazz.equals("internal")
|| authzBackendClazz.equals("intern")) {
authorizationBackend = iab;
ReflectionHelper.addLoadedModule(InternalAuthenticationBackend.class);
} else {
authorizationBackend = newInstance(
authzBackendClazz,"z",
Settings.builder().put(esSettings).put(ads.getAsSettings("authorization_backend.config")).build(), configPath);
}
if (httpEnabled) {
restAuthorizers.add(authorizationBackend);
}
if (transportEnabled) {
transportAuthorizers.add(authorizationBackend);
}
if (authorizationBackend instanceof Destroyable) {
this.destroyableComponents.add((Destroyable) authorizationBackend);
}
} catch (final Exception e) {
log.error("Unable to initialize AuthorizationBackend {} due to {}", ad, e.toString(),e);
}
}
}
final Map<String, Settings> dyn = settings.getGroups("opendistro_security.dynamic.authc");
for (final String ad : dyn.keySet()) {
final Settings ads = dyn.get(ad);
final boolean enabled = ads.getAsBoolean("enabled", true);
final boolean httpEnabled = enabled && ads.getAsBoolean("http_enabled", true);
final boolean transportEnabled = enabled && ads.getAsBoolean("transport_enabled", true);
if (httpEnabled || transportEnabled) {
try {
AuthenticationBackend authenticationBackend;
final String authBackendClazz = ads.get("authentication_backend.type", InternalAuthenticationBackend.class.getName());
if(authBackendClazz.equals(InternalAuthenticationBackend.class.getName()) //NOSONAR
|| authBackendClazz.equals("internal")
|| authBackendClazz.equals("intern")) {
authenticationBackend = iab;
ReflectionHelper.addLoadedModule(InternalAuthenticationBackend.class);
} else {
authenticationBackend = newInstance(
authBackendClazz,"c",
Settings.builder().put(esSettings).put(ads.getAsSettings("authentication_backend.config")).build(), configPath);
}
String httpAuthenticatorType = ads.get("http_authenticator.type"); //no default
HTTPAuthenticator httpAuthenticator = httpAuthenticatorType==null?null: (HTTPAuthenticator) newInstance(httpAuthenticatorType,"h",
Settings.builder().put(esSettings).put(ads.getAsSettings("http_authenticator.config")).build(), configPath);
final AuthDomain _ad = new AuthDomain(authenticationBackend, httpAuthenticator,
ads.getAsBoolean("http_authenticator.challenge", true), ads.getAsInt("order", 0));
if (httpEnabled && _ad.getHttpAuthenticator() != null) {
restAuthDomains.add(_ad);
}
if (transportEnabled) {
transportAuthDomains.add(_ad);
}
if (httpAuthenticator instanceof Destroyable) {
this.destroyableComponents.add((Destroyable) httpAuthenticator);
}
if (authenticationBackend instanceof Destroyable) {
this.destroyableComponents.add((Destroyable) authenticationBackend);
}
} catch (final Exception e) {
log.error("Unable to initialize auth domain {} due to {}", ad, e.toString(), e);
}
}
}
//Open Distro Security no default authc
initialized = !restAuthDomains.isEmpty() || anonymousAuthEnabled;
}
public User authenticate(final TransportRequest request, final String sslPrincipal, final Task task, final String action) {
if(log.isDebugEnabled() && request.remoteAddress() != null) {
log.debug("Transport authentication request from {}", request.remoteAddress());
}
User origPKIUser = new User(sslPrincipal);
if(adminDns.isAdmin(origPKIUser)) {
auditLog.logSucceededLogin(origPKIUser.getName(), true, null, request, action, task);
return origPKIUser;
}
final String authorizationHeader = threadPool.getThreadContext().getHeader("Authorization");
//Use either impersonation OR credentials authentication
//if both are supplied, credentials authentication wins
final AuthCredentials creds = HTTPHelper.extractCredentials(authorizationHeader, log);
User impersonatedTransportUser = null;
if(creds != null) {
if(log.isDebugEnabled()) {
log.debug("User {} also submitted basic credentials: {}", origPKIUser.getName(), creds);
}
}
//loop over all transport auth domains
for (final AuthDomain authDomain: transportAuthDomains) {
User authenticatedUser = null;
if(creds == null) {
//no credentials submitted
//impersonation possible
impersonatedTransportUser = impersonate(request, origPKIUser);
origPKIUser = resolveTransportUsernameAttribute(origPKIUser);
authenticatedUser = checkExistsAndAuthz(userCacheTransport, impersonatedTransportUser==null?origPKIUser:impersonatedTransportUser, authDomain.getBackend(), transportAuthorizers);
} else {
//auth credentials submitted
//impersonation not possible, if requested it will be ignored
authenticatedUser = authcz(authenticatedUserCacheTransport, creds, authDomain.getBackend(), transportAuthorizers);
}
if(authenticatedUser == null) {
if(log.isDebugEnabled()) {
log.debug("Cannot authenticate user {} (or add roles) with authdomain {}/{}, try next", creds==null?(impersonatedTransportUser==null?origPKIUser.getName():impersonatedTransportUser.getName()):creds.getUsername(), authDomain.getBackend().getType(), authDomain.getOrder());
}
continue;
}
if(adminDns.isAdmin(authenticatedUser)) {
log.error("Cannot authenticate user because admin user is not permitted to login");
auditLog.logFailedLogin(authenticatedUser.getName(), true, null, request, task);
return null;
}
if(log.isDebugEnabled()) {
log.debug("User '{}' is authenticated", authenticatedUser);
}
auditLog.logSucceededLogin(authenticatedUser.getName(), false, impersonatedTransportUser==null?null:origPKIUser.getName(), request, action, task);
return authenticatedUser;
}//end looping auth domains
//auditlog
if(creds == null) {
auditLog.logFailedLogin(impersonatedTransportUser==null?origPKIUser.getName():impersonatedTransportUser.getName(), false, impersonatedTransportUser==null?null:origPKIUser.getName(), request, task);
} else {
auditLog.logFailedLogin(creds.getUsername(), false, null, request, task);
}
log.warn("Transport authentication finally failed for {} from {}", creds == null ? impersonatedTransportUser==null?origPKIUser.getName():impersonatedTransportUser.getName():creds.getUsername(), request.remoteAddress());
return null;
}
/**
* Authenticate a REST request against the configured HTTP/REST auth domains.
*
* @param request the incoming rest request
* @param channel the rest channel used to send back challenges or error responses
* @param threadContext the current thread context
* @return true if the request was authenticated; false means a response (challenge or error) has already been sent
*/
public boolean authenticate(final RestRequest request, final RestChannel channel, final ThreadContext threadContext) {
final String sslPrincipal = (String) threadPool.getThreadContext().getTransient(ConfigConstants.OPENDISTRO_SECURITY_SSL_PRINCIPAL);
if(adminDns.isAdminDN(sslPrincipal)) {
//PKI authenticated REST call
threadPool.getThreadContext().putTransient(ConfigConstants.OPENDISTRO_SECURITY_USER, new User(sslPrincipal));
auditLog.logSucceededLogin(sslPrincipal, true, null, request);
return true;
}
if (userInjector.injectUser(request)) {
// ThreadContext injected user
return true;
}
if (!isInitialized()) {
log.error("Not yet initialized (you may need to run securityadmin)");
channel.sendResponse(new BytesRestResponse(RestStatus.SERVICE_UNAVAILABLE, "Open Distro Security not initialized (SG11)."));
return false;
}
final TransportAddress remoteAddress = xffResolver.resolve(request);
if(log.isDebugEnabled()) {
log.debug("Rest authentication request from {} [original: {}]", remoteAddress, request.getRemoteAddress());
}
threadContext.putTransient(ConfigConstants.OPENDISTRO_SECURITY_REMOTE_ADDRESS, remoteAddress);
boolean authenticated = false;
User authenticatedUser = null;
AuthCredentials authCredenetials = null;
HTTPAuthenticator firstChallengingHttpAuthenticator = null;
//loop over all http/rest auth domains
for (final AuthDomain authDomain: restAuthDomains) {
final HTTPAuthenticator httpAuthenticator = authDomain.getHttpAuthenticator();
if(authDomain.isChallenge() && firstChallengingHttpAuthenticator == null) {
firstChallengingHttpAuthenticator = httpAuthenticator;
}
if(log.isTraceEnabled()) {
log.trace("Try to extract auth creds from {} http authenticator", httpAuthenticator.getType());
}
final AuthCredentials ac;
try {
ac = httpAuthenticator.extractCredentials(request, threadContext);
} catch (Exception e1) {
if(log.isDebugEnabled()) {
log.debug("'{}' extracting credentials from {} http authenticator", e1.toString(), httpAuthenticator.getType(), e1);
}
continue;
}
authCredenetials = ac;
if (ac == null) {
//no credentials found in request
if(anonymousAuthEnabled) {
continue;
}
if(authDomain.isChallenge() && httpAuthenticator.reRequestAuthentication(channel, null)) {
auditLog.logFailedLogin("<NONE>", false, null, request);
log.trace("No 'Authorization' header, send 401 and 'WWW-Authenticate Basic'");
return false;
} else {
//no reRequest possible
log.trace("No 'Authorization' header, send 403");
continue;
}
} else {
org.apache.logging.log4j.ThreadContext.put("user", ac.getUsername());
if (!ac.isComplete()) {
//credentials found in request but we need another client challenge
if(httpAuthenticator.reRequestAuthentication(channel, ac)) {
//auditLog.logFailedLogin(ac.getUsername()+" <incomplete>", request); --noauditlog
return false;
} else {
//no reRequest possible
continue;
}
}
}
//http completed
authenticatedUser = authcz(userCache, ac, authDomain.getBackend(), restAuthorizers);
if(authenticatedUser == null) {
if(log.isDebugEnabled()) {
log.debug("Cannot authenticate user {} (or add roles) with authdomain {}/{}, try next", ac.getUsername(), authDomain.getBackend().getType(), authDomain.getOrder());
}
continue;
}
if(adminDns.isAdmin(authenticatedUser)) {
log.error("Cannot authenticate user because admin user is not permitted to login via HTTP");
auditLog.logFailedLogin(authenticatedUser.getName(), true, null, request);
channel.sendResponse(new BytesRestResponse(RestStatus.FORBIDDEN, "Cannot authenticate user because admin user is not permitted to login via HTTP"));
return false;
}
final String tenant = Utils.coalesce(request.header("securitytenant"), request.header("security_tenant"));
if(log.isDebugEnabled()) {
log.debug("User '{}' is authenticated", authenticatedUser);
log.debug("securitytenant '{}'", tenant);
}
authenticatedUser.setRequestedTenant(tenant);
authenticated = true;
break;
}//end looping auth domains
if(authenticated) {
final User impersonatedUser = impersonate(request, authenticatedUser);
threadContext.putTransient(ConfigConstants.OPENDISTRO_SECURITY_USER, impersonatedUser==null?authenticatedUser:impersonatedUser);
auditLog.logSucceededLogin((impersonatedUser==null?authenticatedUser:impersonatedUser).getName(), false, authenticatedUser.getName(), request);
} else {
if(log.isDebugEnabled()) {
log.debug("User still not authenticated after checking {} auth domains", restAuthDomains.size());
}
if(authCredenetials == null && anonymousAuthEnabled) {
threadContext.putTransient(ConfigConstants.OPENDISTRO_SECURITY_USER, User.ANONYMOUS);
auditLog.logSucceededLogin(User.ANONYMOUS.getName(), false, null, request);
if(log.isDebugEnabled()) {
log.debug("Anonymous User is authenticated");
}
return true;
}
if(firstChallengingHttpAuthenticator != null) {
if(log.isDebugEnabled()) {
log.debug("Rerequest with {}", firstChallengingHttpAuthenticator.getClass());
}
if(firstChallengingHttpAuthenticator.reRequestAuthentication(channel, null)) {
if(log.isDebugEnabled()) {
log.debug("Rerequest {} failed", firstChallengingHttpAuthenticator.getClass());
}
log.warn("Authentication finally failed for {} from {}", authCredenetials == null ? null:authCredenetials.getUsername(), remoteAddress);
auditLog.logFailedLogin(authCredenetials == null ? null:authCredenetials.getUsername(), false, null, request);
return false;
}
}
log.warn("Authentication finally failed for {} from {}", authCredenetials == null ? null:authCredenetials.getUsername(), remoteAddress);
auditLog.logFailedLogin(authCredenetials == null ? null:authCredenetials.getUsername(), false, null, request);
channel.sendResponse(new BytesRestResponse(RestStatus.UNAUTHORIZED, "Authentication finally failed"));
return false;
}
return authenticated;
}
/**
* Does not audit-log and does not throw; also performs authorization via all given authorizers.
*
* @param cache the user cache to consult before hitting the backend
* @param user the user whose existence should be checked
* @param authenticationBackend the backend used for the existence check
* @param authorizers the authorization backends used to fill the user's roles
* @return null if the user cannot be authenticated
*/
private User checkExistsAndAuthz(final Cache<String, User> cache, final User user, final AuthenticationBackend authenticationBackend, final Set<AuthorizationBackend> authorizers) {
if(user == null) {
return null;
}
try {
return cache.get(user.getName(), new Callable<User>() {
@Override
public User call() throws Exception {
if(log.isDebugEnabled()) {
log.debug(user.getName()+" not cached, return from "+authenticationBackend.getType()+" backend directly");
}
if(authenticationBackend.exists(user)) {
for (final AuthorizationBackend ab : authorizers) {
try {
ab.fillRoles(user, new AuthCredentials(user.getName()));
} catch (Exception e) {
log.error("Cannot retrieve roles for {} from {} due to {}", user.getName(), ab.getType(), e.toString(), e);
}
}
return user;
}
if(log.isDebugEnabled()) {
log.debug("User "+user.getName()+" does not exist in "+authenticationBackend.getType());
}
return null;
}
});
} catch (Exception e) {
if(log.isDebugEnabled()) {
log.debug("Cannot check and authorize "+user.getName()+" due to "+e.toString(), e);
}
return null;
}
}
/**
* Does not audit-log and does not throw; also performs authorization via all given authorizers.
*
* @param cache the user cache to consult before hitting the backend
* @param ac the extracted authentication credentials
* @param authBackend the authentication backend
* @param authorizers the authorization backends used to fill the user's roles
* @return null if the user cannot be authenticated
*/
private User authcz(final Cache<AuthCredentials, User> cache, final AuthCredentials ac, final AuthenticationBackend authBackend, final Set<AuthorizationBackend> authorizers) {
if(ac == null) {
return null;
}
try {
//noop backend configured and no authorizers
//that mean authc and authz was completely done via HTTP (like JWT or PKI)
if(authBackend.getClass() == NoOpAuthenticationBackend.class && authorizers.isEmpty()) {
//no cache
return authBackend.authenticate(ac);
}
return cache.get(ac, new Callable<User>() {
@Override
public User call() throws Exception {
if(log.isDebugEnabled()) {
log.debug(ac.getUsername()+" not cached, return from "+authBackend.getType()+" backend directly");
}
final User authenticatedUser = authBackend.authenticate(ac);
for (final AuthorizationBackend ab : authorizers) {
try {
ab.fillRoles(authenticatedUser, new AuthCredentials(authenticatedUser.getName()));
} catch (Exception e) {
log.error("Cannot retrieve roles for {} from {} due to {}", authenticatedUser, ab.getType(), e.toString(), e);
}
}
return authenticatedUser;
}
});
} catch (Exception e) {
if(log.isDebugEnabled()) {
log.debug("Cannot authenticate "+ac.getUsername()+" due to "+e.toString(), e);
}
return null;
} finally {
ac.clearSecrets();
}
}
private User impersonate(final TransportRequest tr, final User origPKIuser) throws ElasticsearchSecurityException {
final String impersonatedUser = threadPool.getThreadContext().getHeader("opendistro_security_impersonate_as");
if(Strings.isNullOrEmpty(impersonatedUser)) {
return null; //nothing to do
}
if (!isInitialized()) {
throw new ElasticsearchSecurityException("Could not check for impersonation because Open Distro Security is not yet initialized");
}
if (origPKIuser == null) {
throw new ElasticsearchSecurityException("no original PKI user found");
}
User aU = origPKIuser;
if (adminDns.isAdminDN(impersonatedUser)) {
throw new ElasticsearchSecurityException("'"+origPKIuser.getName() + "' is not allowed to impersonate as an adminuser '" + impersonatedUser+"'");
}
try {
if (impersonatedUser != null && !adminDns.isTransportImpersonationAllowed(new LdapName(origPKIuser.getName()), impersonatedUser)) {
throw new ElasticsearchSecurityException("'"+origPKIuser.getName() + "' is not allowed to impersonate as '" + impersonatedUser+"'");
} else if (impersonatedUser != null) {
aU = new User(impersonatedUser);
if(log.isDebugEnabled()) {
log.debug("Impersonate from '{}' to '{}'",origPKIuser.getName(), impersonatedUser);
}
}
} catch (final InvalidNameException e1) {
throw new ElasticsearchSecurityException("PKI does not have a valid name ('" + origPKIuser.getName() + "'), should never happen",
e1);
}
return aU;
}
private User impersonate(final RestRequest request, final User originalUser) throws ElasticsearchSecurityException {
final String impersonatedUserHeader = request.header("opendistro_security_impersonate_as");
if (Strings.isNullOrEmpty(impersonatedUserHeader) || originalUser == null) {
return null; // nothing to do
}
if (!isInitialized()) {
throw new ElasticsearchSecurityException("Could not check for impersonation because Open Distro Security is not yet initialized");
}
if (adminDns.isAdminDN(impersonatedUserHeader)) {
throw new ElasticsearchSecurityException("It is not allowed to impersonate as an adminuser '" + impersonatedUserHeader + "'",
RestStatus.FORBIDDEN);
}
if (!adminDns.isRestImpersonationAllowed(originalUser.getName(), impersonatedUserHeader)) {
throw new ElasticsearchSecurityException("'" + originalUser.getName() + "' is not allowed to impersonate as '" + impersonatedUserHeader
+ "'", RestStatus.FORBIDDEN);
} else {
//loop over all http/rest auth domains
for (final AuthDomain authDomain: restAuthDomains) {
final AuthenticationBackend authenticationBackend = authDomain.getBackend();
final User impersonatedUser = checkExistsAndAuthz(restImpersonationCache, new User(impersonatedUserHeader), authenticationBackend, restAuthorizers);
if(impersonatedUser == null) {
log.debug("Unable to impersonate rest user from '{}' to '{}' because the impersonated user does not exist in {}, try next ...", originalUser.getName(), impersonatedUserHeader, authenticationBackend.getType());
continue;
}
if (log.isDebugEnabled()) {
log.debug("Impersonate rest user from '{}' to '{}'", originalUser.getName(), impersonatedUserHeader);
}
return impersonatedUser;
}
log.debug("Unable to impersonate rest user from '{}' to '{}' because the impersonated user does not exist", originalUser.getName(), impersonatedUserHeader);
throw new ElasticsearchSecurityException("No such user: " + impersonatedUserHeader, RestStatus.FORBIDDEN);
}
}
private <T> T newInstance(final String clazzOrShortcut, String type, final Settings settings, final Path configPath) {
String clazz = clazzOrShortcut;
boolean isEnterprise = false;
if(authImplMap.containsKey(clazz+"_"+type)) {
clazz = authImplMap.get(clazz+"_"+type);
} else {
isEnterprise = true;
}
if(ReflectionHelper.isEnterpriseAAAModule(clazz)) {
isEnterprise = true;
}
return ReflectionHelper.instantiateAAA(clazz, settings, configPath, isEnterprise);
}
private void destroyDestroyables() {
for (Destroyable destroyable : this.destroyableComponents) {
try {
destroyable.destroy();
} catch (Exception e) {
log.error("Error while destroying " + destroyable, e);
}
}
this.destroyableComponents.clear();
}
private User resolveTransportUsernameAttribute(User pkiUser) {
//#547
if(transportUsernameAttribute != null && !transportUsernameAttribute.isEmpty()) {
try {
final LdapName sslPrincipalAsLdapName = new LdapName(pkiUser.getName());
for(final Rdn rdn: sslPrincipalAsLdapName.getRdns()) {
if(rdn.getType().equals(transportUsernameAttribute)) {
return new User((String) rdn.getValue());
}
}
} catch (InvalidNameException e) {
//cannot happen
}
}
return pkiUser;
}
}
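The `authImplMap` table built in the constructor above maps a friendly name plus a role suffix (`_c` for an authentication backend, `_z` for an authorization backend, `_h` for an HTTP authenticator) to an implementation class name; any key that is not in the table is treated by `newInstance` as a fully qualified class name of an enterprise module. A minimal, self-contained sketch of that resolution rule (the class-name values here are illustrative placeholders, not the real plugin classes):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of BackendRegistry's shortcut resolution in newInstance():
// known "<name>_<type>" keys resolve to a class name, anything else is taken
// as a fully qualified class name of an (enterprise) module.
public class AuthShortcutResolverSketch {

    private static final Map<String, String> AUTH_IMPL_MAP = new HashMap<>();
    static {
        // subset of the real table; the values are placeholders for this sketch
        AUTH_IMPL_MAP.put("basic_h", "example.HTTPBasicAuthenticator");
        AUTH_IMPL_MAP.put("internal_c", "example.InternalAuthenticationBackend");
        AUTH_IMPL_MAP.put("noop_z", "example.NoOpAuthorizationBackend");
    }

    /** Resolve a shortcut to a class name, falling back to the literal value. */
    public static String resolve(String clazzOrShortcut, String type) {
        String key = clazzOrShortcut + "_" + type;
        return AUTH_IMPL_MAP.getOrDefault(key, clazzOrShortcut);
    }

    /** Anything not found in the shortcut table is treated as an enterprise module. */
    public static boolean isEnterprise(String clazzOrShortcut, String type) {
        return !AUTH_IMPL_MAP.containsKey(clazzOrShortcut + "_" + type);
    }

    public static void main(String[] args) {
        System.out.println(resolve("basic", "h"));           // resolved via shortcut
        System.out.println(resolve("com.acme.MyAuth", "h")); // treated as a FQCN
    }
}
```

The real `newInstance` additionally consults `ReflectionHelper.isEnterpriseAAAModule` before instantiating, which this sketch omits.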

/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth;
public interface Destroyable {
void destroy();
}
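`Destroyable` lets `BackendRegistry.destroyDestroyables()` release resources held by backends and authenticators when the security configuration is reloaded. A hypothetical example of a component that would implement it (the thread pool here is just a stand-in for any pooled or native resource):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical backend component holding a pooled resource; a real
// implementation would declare "implements Destroyable" so the registry
// can call destroy() on a configuration reload.
public class PooledBackendSketch {

    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    /** Release resources; must be safe to call during a config reload. */
    public void destroy() {
        pool.shutdownNow();
    }

    public boolean isDestroyed() {
        return pool.isShutdown();
    }

    public static void main(String[] args) {
        PooledBackendSketch b = new PooledBackendSketch();
        b.destroy();
        System.out.println(b.isDestroyed()); // true
    }
}
```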

/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestRequest;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
/**
* Open Distro Security custom HTTP authenticators need to implement this interface.
* <p/>
* An HTTP authenticator extracts {@link AuthCredentials} from a {@link RestRequest}.
* <p/>
*
* Implementation classes must provide a public constructor
* <p/>
* {@code public MyHTTPAuthenticator(org.elasticsearch.common.settings.Settings settings, java.nio.file.Path configPath)}
* <p/>
* The constructor should not throw any exception in case of an initialization problem.
* Instead, catch all exceptions and log an appropriate error message. A logger can be instantiated like:
* <p/>
* {@code private final Logger log = LogManager.getLogger(this.getClass());}
* <p/>
*/
public interface HTTPAuthenticator {
/**
* The type (name) of the authenticator. Only for logging.
* @return the type
*/
String getType();
/**
* Extract {@link AuthCredentials} from {@link RestRequest}
*
* @param request The rest request
* @param context The current thread context
* @return The authentication credentials (complete or incomplete) or null when no credentials are found in the request
* <p>
* When the credentials could be fully extracted from the request {@code .markComplete()} must be called on the {@link AuthCredentials} which are returned.
* If the authentication flow needs another roundtrip with the request originator do not mark it as complete.
* @throws ElasticsearchSecurityException
*/
AuthCredentials extractCredentials(RestRequest request, ThreadContext context) throws ElasticsearchSecurityException;
/**
* If the {@code extractCredentials()} call was not successful or the authentication flow needs another roundtrip, this method
* will be called. If the custom HTTP authenticator does not support re-requesting, this method is a no-op and false should be returned.
*
* If the custom HTTP authenticator does support re-request authentication or supports authentication flows with multiple roundtrips
* then the response should be sent (through the channel) and true must be returned.
*
* @param channel The rest channel used to send back the response via {@code channel.sendResponse()}
* @param credentials The credentials from the prior authentication attempt
* @return false if re-request is not supported/necessary, true otherwise.
* If true is returned {@code channel.sendResponse()} must be called so that the request completes.
*/
boolean reRequestAuthentication(final RestChannel channel, AuthCredentials credentials);
}
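To make the `extractCredentials()` contract concrete, here is a hedged, self-contained sketch of what a Basic-auth extractor does: decode the `Authorization: Basic <base64>` header into a username/password pair. The `Credentials` holder below is a stand-in for `AuthCredentials`; the real `HTTPBasicAuthenticator` also handles charset quirks, challenges via `reRequestAuthentication`, and marks the returned credentials complete.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Simplified, illustrative Basic-auth credential extraction; not the real
// HTTPBasicAuthenticator, which works on a RestRequest and AuthCredentials.
public class BasicAuthExtractorSketch {

    public static final class Credentials {
        public final String username;
        public final char[] password;
        Credentials(String username, char[] password) {
            this.username = username;
            this.password = password;
        }
    }

    /** Returns null when the header is absent or not Basic auth (no credentials found). */
    public static Credentials extract(String authorizationHeader) {
        if (authorizationHeader == null
                || !authorizationHeader.regionMatches(true, 0, "Basic ", 0, 6)) {
            return null;
        }
        String decoded = new String(
                Base64.getDecoder().decode(authorizationHeader.substring(6).trim()),
                StandardCharsets.UTF_8);
        int sep = decoded.indexOf(':'); // the password itself may contain ':'
        if (sep < 0) {
            return null; // malformed: no username/password separator
        }
        return new Credentials(decoded.substring(0, sep),
                decoded.substring(sep + 1).toCharArray());
    }

    public static void main(String[] args) {
        Credentials c = extract("Basic YWRtaW46c2VjcmV0"); // "admin:secret"
        System.out.println(c.username); // admin
    }
}
```

Returning null here corresponds to "no credentials found in request" in `BackendRegistry.authenticate()`, which then moves on to the next auth domain or an anonymous/challenge path.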

/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;
import java.util.Map;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.http.XFFResolver;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.OpenDistroSecurityUtils;
import com.amazon.opendistroforelasticsearch.security.user.User;
import com.google.common.base.Strings;
public class UserInjector {
protected final Logger log = LogManager.getLogger(UserInjector.class);
private final ThreadPool threadPool;
private final AuditLog auditLog;
private final XFFResolver xffResolver;
private final Boolean injectUserEnabled;
UserInjector(Settings settings, ThreadPool threadPool, AuditLog auditLog, XFFResolver xffResolver) {
this.threadPool = threadPool;
this.auditLog = auditLog;
this.xffResolver = xffResolver;
this.injectUserEnabled = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_UNSUPPORTED_INJECT_USER_ENABLED, false);
}
boolean injectUser(RestRequest request) {
if (!injectUserEnabled) {
return false;
}
String injectedUserString = threadPool.getThreadContext().getTransient(ConfigConstants.OPENDISTRO_SECURITY_INJECTED_USER);
if (log.isDebugEnabled()) {
log.debug("Injected user string: {}", injectedUserString);
}
if (Strings.isNullOrEmpty(injectedUserString)) {
return false;
}
// username|role1,role2|remoteIP:port|attributeKey,attributeValue,attributeKey,attributeValue, ...|requestedTenant
String[] parts = injectedUserString.split("\\|");
if (parts.length == 0) {
log.error("User string malformed, could not extract parts from '{}'. User injection failed.", injectedUserString);
return false;
}
// username
if (Strings.isNullOrEmpty(parts[0])) {
log.error("Username must not be null, user string was '{}'. User injection failed.", injectedUserString);
return false;
}
final User user = new User(parts[0]);
// backend roles
if (parts.length > 1 && !Strings.isNullOrEmpty(parts[1])) {
user.addRoles(Arrays.asList(parts[1].split(",")));
}
// custom attributes
if (parts.length > 3 && !Strings.isNullOrEmpty(parts[3])) {
Map<String, String> attributes = OpenDistroSecurityUtils.mapFromArray((parts[3].split(",")));
if (attributes == null) {
log.error("Could not parse custom attributes {}, user injection failed.", parts[3]);
return false;
} else {
user.addAttributes(attributes);
}
}
// requested tenant
if (parts.length > 4 && !Strings.isNullOrEmpty(parts[4])) {
user.setRequestedTenant(parts[4]);
}
// remote IP - we can set it only once, so we do it last. If none is given,
// BackendRegistry/XFFResolver will do the job
if (parts.length > 2 && !Strings.isNullOrEmpty(parts[2])) {
// format is ip:port
String[] ipAndPort = parts[2].split(":");
if (ipAndPort.length != 2) {
log.error("Remote address must have format ip:port, was: {}. User injection failed.", parts[2]);
return false;
} else {
try {
InetAddress inetAddress = InetAddress.getByName(ipAndPort[0]);
int port = Integer.parseInt(ipAndPort[1]);
threadPool.getThreadContext().putTransient(ConfigConstants.OPENDISTRO_SECURITY_REMOTE_ADDRESS, new TransportAddress(inetAddress, port));
} catch (UnknownHostException | NumberFormatException e) {
log.error("Cannot parse remote IP or port: {}, user injection failed.", parts[2], e);
return false;
}
}
} else {
threadPool.getThreadContext().putTransient(ConfigConstants.OPENDISTRO_SECURITY_REMOTE_ADDRESS, xffResolver.resolve(request));
}
// mark user injected for proper admin handling
user.setInjected(true);
threadPool.getThreadContext().putTransient(ConfigConstants.OPENDISTRO_SECURITY_USER, user);
auditLog.logSucceededLogin(parts[0], true, null, request);
if (log.isTraceEnabled()) {
log.trace("Injected user object: {}", user);
}
return true;
}
}
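The pipe-delimited user string handled by `UserInjector` can be exercised in isolation. The sketch below is illustrative only: the class name and the example value are made up, and it reimplements just the username/roles extraction rather than the full injection flow.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Minimal sketch of the injected-user string format:
// username|role1,role2|remoteIP:port|attrKey,attrValue,...|requestedTenant
public class InjectedUserStringSketch {

    static String username(String injected) {
        // first pipe-delimited part is the mandatory username
        return injected.split("\\|")[0];
    }

    static List<String> roles(String injected) {
        String[] parts = injected.split("\\|");
        if (parts.length > 1 && !parts[1].isEmpty()) {
            // second part is an optional comma-separated backend role list
            return Arrays.asList(parts[1].split(","));
        }
        return Collections.emptyList();
    }

    public static void main(String[] args) {
        // Hypothetical value; in the plugin it arrives via the thread context
        // under OPENDISTRO_SECURITY_INJECTED_USER, set by trusted code.
        String injected = "alice|admin,dev|10.0.0.1:9200|dept,eng|tenant1";
        System.out.println(username(injected)); // alice
        System.out.println(roles(injected));   // [admin, dev]
    }
}
```

Note that, as in the real parser, everything after the username is optional; a bare `alice` yields an empty role list.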

@@ -0,0 +1,176 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth.internal;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.bouncycastle.crypto.generators.OpenBSDBCrypt;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.common.settings.Settings;
import com.amazon.opendistroforelasticsearch.security.auth.AuthenticationBackend;
import com.amazon.opendistroforelasticsearch.security.auth.AuthorizationBackend;
import com.amazon.opendistroforelasticsearch.security.configuration.ConfigurationRepository;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class InternalAuthenticationBackend implements AuthenticationBackend, AuthorizationBackend {
private final ConfigurationRepository configurationRepository;
public InternalAuthenticationBackend(final ConfigurationRepository configurationRepository) {
super();
this.configurationRepository = configurationRepository;
}
@Override
public boolean exists(User user) {
final Settings cfg = getConfigSettings();
if (cfg == null) {
return false;
}
String hashed = cfg.get(user.getName() + ".hash");
if (hashed == null) {
for(String username:cfg.names()) {
String u = cfg.get(username + ".username");
if(user.getName().equals(u)) {
hashed = cfg.get(username+ ".hash");
break;
}
}
if(hashed == null) {
return false;
}
}
final List<String> roles = cfg.getAsList(user.getName() + ".roles", Collections.emptyList());
if(roles != null) {
user.addRoles(roles);
}
return true;
}
@Override
public User authenticate(final AuthCredentials credentials) {
final Settings cfg = getConfigSettings();
if (cfg == null) {
throw new ElasticsearchSecurityException("Internal authentication backend not configured. Maybe Open Distro Security is not initialized.");
}
String hashed = cfg.get(credentials.getUsername() + ".hash");
if (hashed == null) {
for(String username:cfg.names()) {
String u = cfg.get(username + ".username");
if(credentials.getUsername().equals(u)) {
hashed = cfg.get(username+ ".hash");
break;
}
}
if(hashed == null) {
throw new ElasticsearchSecurityException(credentials.getUsername() + " not found");
}
}
final byte[] password = credentials.getPassword();
if(password == null || password.length == 0) {
throw new ElasticsearchSecurityException("empty passwords not supported");
}
ByteBuffer wrap = ByteBuffer.wrap(password);
CharBuffer buf = StandardCharsets.UTF_8.decode(wrap);
char[] array = new char[buf.limit()];
buf.get(array);
Arrays.fill(password, (byte)0);
try {
if (OpenBSDBCrypt.checkPassword(hashed, array)) {
final List<String> roles = cfg.getAsList(credentials.getUsername() + ".roles", Collections.emptyList());
final Settings customAttributes = cfg.getAsSettings(credentials.getUsername() + ".attributes");
if(customAttributes != null) {
for(String attributeName: customAttributes.names()) {
credentials.addAttribute("attr.internal."+attributeName, customAttributes.get(attributeName));
}
}
return new User(credentials.getUsername(), roles, credentials);
} else {
throw new ElasticsearchSecurityException("password does not match");
}
} finally {
Arrays.fill(wrap.array(), (byte)0);
Arrays.fill(buf.array(), '\0');
Arrays.fill(array, '\0');
}
}
@Override
public String getType() {
return "internal";
}
private Settings getConfigSettings() {
return configurationRepository.getConfiguration(ConfigConstants.CONFIGNAME_INTERNAL_USERS, false);
}
@Override
public void fillRoles(User user, AuthCredentials credentials) throws ElasticsearchSecurityException {
final Settings cfg = getConfigSettings();
if (cfg == null) {
throw new ElasticsearchSecurityException("Internal authentication backend not configured. Maybe Open Distro Security is not initialized.");
}
final List<String> roles = cfg.getAsList(credentials.getUsername() + ".roles", Collections.emptyList());
if(roles != null && !roles.isEmpty() && user != null) {
user.addRoles(roles);
}
}
}
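The byte-to-char conversion and buffer zeroing done in `authenticate()` above is worth seeing on its own: the cleartext password is decoded from UTF-8 bytes into a `char[]` for the bcrypt check, and every intermediate copy is wiped. The sketch below uses only the JDK; the class and method names are illustrative.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch of the decode-and-wipe pattern: turn a UTF-8 password byte[] into a
// char[], then zero the original bytes and the decoder's backing array so
// the cleartext does not linger on the heap any longer than necessary.
public class PasswordBufferSketch {

    static char[] toCharsAndWipe(byte[] password) {
        ByteBuffer wrap = ByteBuffer.wrap(password);
        CharBuffer buf = StandardCharsets.UTF_8.decode(wrap);
        char[] chars = new char[buf.limit()];
        buf.get(chars);
        Arrays.fill(password, (byte) 0); // wipe the original bytes
        Arrays.fill(buf.array(), '\0');  // wipe the decoder's backing array
        return chars;                    // caller must wipe this copy after use
    }

    public static void main(String[] args) {
        byte[] secret = "s3cret".getBytes(StandardCharsets.UTF_8);
        char[] chars = toCharsAndWipe(secret);
        System.out.println(new String(chars)); // s3cret
        System.out.println(secret[0]);         // 0 (wiped)
        Arrays.fill(chars, '\0');              // done with the chars too
    }
}
```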

@@ -0,0 +1,62 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth.internal;
import java.nio.file.Path;
import org.elasticsearch.common.settings.Settings;
import com.amazon.opendistroforelasticsearch.security.auth.AuthenticationBackend;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class NoOpAuthenticationBackend implements AuthenticationBackend {
public NoOpAuthenticationBackend(final Settings settings, final Path configPath) {
super();
}
@Override
public String getType() {
return "noop";
}
@Override
public User authenticate(final AuthCredentials credentials) {
return new User(credentials.getUsername(), credentials.getBackendRoles(), credentials);
}
@Override
public boolean exists(User user) {
return true;
}
}

@@ -0,0 +1,57 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.auth.internal;
import java.nio.file.Path;
import org.elasticsearch.common.settings.Settings;
import com.amazon.opendistroforelasticsearch.security.auth.AuthorizationBackend;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class NoOpAuthorizationBackend implements AuthorizationBackend {
public NoOpAuthorizationBackend(final Settings settings, final Path configPath) {
super();
}
@Override
public String getType() {
return "noop";
}
@Override
public void fillRoles(final User user, final AuthCredentials authCreds) {
// no-op
}
}

@@ -0,0 +1,326 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.compliance;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.resolver.IndexResolverReplacer;
import com.amazon.opendistroforelasticsearch.security.resolver.IndexResolverReplacer.Resolved;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.WildcardMatcher;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
public class ComplianceConfig {
private final Logger log = LogManager.getLogger(getClass());
private final Settings settings;
private final Map<String, Set<String>> readEnabledFields = new HashMap<>(100);
private final List<String> watchedWriteIndices;
private DateTimeFormatter auditLogPattern = null;
private String auditLogIndex = null;
private final boolean logDiffsForWrite;
private final boolean logWriteMetadataOnly;
private final boolean logReadMetadataOnly;
private final boolean logExternalConfig;
private final boolean logInternalConfig;
private final LoadingCache<String, Set<String>> cache;
private final Set<String> immutableIndicesPatterns;
private final byte[] salt16;
private final String opendistrosecurityIndex;
private final IndexResolverReplacer irr;
private final Environment environment;
private final AuditLog auditLog;
private volatile boolean enabled = true;
private volatile boolean externalConfigLogged = false;
public ComplianceConfig(final Environment environment, final IndexResolverReplacer irr, final AuditLog auditLog) {
super();
this.settings = environment.settings();
this.environment = environment;
this.irr = irr;
this.auditLog = auditLog;
final List<String> watchedReadFields = this.settings.getAsList(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_READ_WATCHED_FIELDS,
Collections.emptyList(), false);
watchedWriteIndices = settings.getAsList(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_WRITE_WATCHED_INDICES, Collections.emptyList());
logDiffsForWrite = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_WRITE_LOG_DIFFS, false);
logWriteMetadataOnly = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_WRITE_METADATA_ONLY, false);
logReadMetadataOnly = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_READ_METADATA_ONLY, false);
logExternalConfig = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_EXTERNAL_CONFIG_ENABLED, false);
logInternalConfig = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_INTERNAL_CONFIG_ENABLED, false);
immutableIndicesPatterns = new HashSet<String>(settings.getAsList(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_IMMUTABLE_INDICES, Collections.emptyList()));
final String saltAsString = settings.get(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_SALT, ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_SALT_DEFAULT);
final byte[] saltAsBytes = saltAsString.getBytes(StandardCharsets.UTF_8);
if(saltAsString.equals(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_SALT_DEFAULT)) {
log.warn("If you plan to use field masking, please configure "+ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_SALT+" to be a random string 16 characters long that is identical on all nodes");
}
if(saltAsBytes.length < 16) {
throw new ElasticsearchException(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_SALT+" must contain at least 16 bytes");
}
if(saltAsBytes.length > 16) {
log.warn(ConfigConstants.OPENDISTRO_SECURITY_COMPLIANCE_SALT+" is greater than 16 bytes. Only the first 16 bytes are used for salting");
}
salt16 = Arrays.copyOf(saltAsBytes, 16);
this.opendistrosecurityIndex = settings.get(ConfigConstants.OPENDISTRO_SECURITY_CONFIG_INDEX_NAME, ConfigConstants.OPENDISTRO_SECURITY_DEFAULT_CONFIG_INDEX);
//opendistro_security.compliance.pii_fields:
// - indexpattern,fieldpattern,fieldpattern,....
for(String watchedReadField: watchedReadFields) {
final List<String> split = new ArrayList<>(Arrays.asList(watchedReadField.split(",")));
if(split.isEmpty()) {
continue;
} else if(split.size() == 1) {
readEnabledFields.put(split.get(0), Collections.singleton("*"));
} else {
Set<String> _fields = new HashSet<String>(split.subList(1, split.size()));
readEnabledFields.put(split.get(0), _fields);
}
}
final String type = settings.get(ConfigConstants.OPENDISTRO_SECURITY_AUDIT_TYPE_DEFAULT, null);
if("internal_elasticsearch".equalsIgnoreCase(type)) {
final String index = settings.get(ConfigConstants.OPENDISTRO_SECURITY_AUDIT_CONFIG_DEFAULT_PREFIX + ConfigConstants.OPENDISTRO_SECURITY_AUDIT_ES_INDEX,"'security-auditlog-'YYYY.MM.dd");
try {
auditLogPattern = DateTimeFormat.forPattern(index); //throws IllegalArgumentException if no pattern
} catch (IllegalArgumentException e) {
//no pattern
auditLogIndex = index;
} catch (Exception e) {
log.error("Unable to check if auditlog index {} is part of compliance setup", index, e);
}
}
log.info("PII configuration [auditLogPattern={}, auditLogIndex={}]: {}", auditLogPattern, auditLogIndex, readEnabledFields);
cache = CacheBuilder.newBuilder()
.maximumSize(1000)
.build(new CacheLoader<String, Set<String>>() {
@Override
public Set<String> load(String index) throws Exception {
return getFieldsForIndex0(index);
}
});
}
public boolean isLogExternalConfig() {
return logExternalConfig;
}
public boolean isExternalConfigLogged() {
return externalConfigLogged;
}
public void setExternalConfigLogged(boolean externalConfigLogged) {
this.externalConfigLogged = externalConfigLogged;
}
public boolean isEnabled() {
return this.enabled;
}
//cached
private Set<String> getFieldsForIndex0(String index) {
if(index == null) {
return Collections.emptySet();
}
if(auditLogIndex != null && auditLogIndex.equalsIgnoreCase(index)) {
return Collections.emptySet();
}
if(auditLogPattern != null) {
if(index.equalsIgnoreCase(getExpandedIndexName(auditLogPattern, null))) {
return Collections.emptySet();
}
}
}
final Set<String> tmp = new HashSet<String>(100);
for(Map.Entry<String, Set<String>> entry: readEnabledFields.entrySet()) {
final String indexPattern = entry.getKey();
if(indexPattern != null && !indexPattern.isEmpty() && WildcardMatcher.match(indexPattern, index)) {
tmp.addAll(entry.getValue());
}
}
return tmp;
}
private String getExpandedIndexName(DateTimeFormatter indexPattern, String index) {
if(indexPattern == null) {
return index;
}
return indexPattern.print(DateTime.now(DateTimeZone.UTC));
}
//do not check for isEnabled
public boolean writeHistoryEnabledForIndex(String index) {
if(index == null) {
return false;
}
if(opendistrosecurityIndex.equals(index)) {
return logInternalConfig;
}
if(auditLogIndex != null && auditLogIndex.equalsIgnoreCase(index)) {
return false;
}
if(auditLogPattern != null) {
if(index.equalsIgnoreCase(getExpandedIndexName(auditLogPattern, null))) {
return false;
}
}
return WildcardMatcher.matchAny(watchedWriteIndices, index);
}
//no patterns here as parameters
//check for isEnabled
public boolean readHistoryEnabledForIndex(String index) {
if(!this.enabled) {
return false;
}
if(opendistrosecurityIndex.equals(index)) {
return logInternalConfig;
}
try {
return !cache.get(index).isEmpty();
} catch (ExecutionException e) {
log.error("Cannot load compliance fields for index {}", index, e);
return true;
}
}
//no patterns here as parameters
//check for isEnabled
public boolean readHistoryEnabledForField(String index, String field) {
if(!this.enabled) {
return false;
}
if(opendistrosecurityIndex.equals(index)) {
return logInternalConfig;
}
try {
final Set<String> fields = cache.get(index);
if(fields.isEmpty()) {
return false;
}
return WildcardMatcher.matchAny(fields, field);
} catch (ExecutionException e) {
log.error("Cannot load compliance fields for index {}", index, e);
return true;
}
}
public boolean logDiffsForWrite() {
return !logWriteMetadataOnly() && logDiffsForWrite;
}
public boolean logWriteMetadataOnly() {
return logWriteMetadataOnly;
}
public boolean logReadMetadataOnly() {
return logReadMetadataOnly;
}
public Settings getSettings() {
return settings;
}
public Environment getEnvironment() {
return environment;
}
//check for isEnabled
public boolean isIndexImmutable(Object request) {
if(!this.enabled) {
return false;
}
if(immutableIndicesPatterns.isEmpty()) {
return false;
}
final Resolved resolved = irr.resolveRequest(request);
final Set<String> allIndices = resolved.getAllIndices();
//assert allIndices.size() == 1:"only one index here, not "+allIndices;
//assert allIndices.contains("_all"):"no _all in "+allIndices;
//assert allIndices.contains("*"):"no * in "+allIndices;
//assert allIndices.contains(""):"no EMPTY in "+allIndices;
return WildcardMatcher.matchAny(immutableIndicesPatterns, allIndices);
}
public byte[] getSalt16() {
return salt16.clone();
}
}
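The `pii_fields` parsing in the `ComplianceConfig` constructor above (the `indexpattern,fieldpattern,fieldpattern,...` entries) can be sketched standalone. The class name and the example config values below are made up for illustration; the splitting logic mirrors the constructor.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of how each opendistro_security.compliance.pii_fields entry is
// interpreted: the first comma-separated token is an index pattern, the rest
// are field patterns; an entry with only an index pattern means "all fields".
public class PiiFieldsParseSketch {

    static Map<String, Set<String>> parse(List<String> entries) {
        Map<String, Set<String>> readEnabledFields = new HashMap<>();
        for (String entry : entries) {
            List<String> split = new ArrayList<>(Arrays.asList(entry.split(",")));
            if (split.isEmpty()) {
                continue;
            } else if (split.size() == 1) {
                // index pattern only: watch every field
                readEnabledFields.put(split.get(0), Collections.singleton("*"));
            } else {
                // index pattern plus explicit field patterns
                readEnabledFields.put(split.get(0), new HashSet<>(split.subList(1, split.size())));
            }
        }
        return readEnabledFields;
    }

    public static void main(String[] args) {
        // Hypothetical config values.
        Map<String, Set<String>> m = parse(Arrays.asList("humanresources,Designation,Salary", "logs-*"));
        System.out.println(m.get("logs-*"));                // [*]
        System.out.println(m.get("humanresources").size()); // 2
    }
}
```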

@@ -0,0 +1,46 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.compliance;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.shard.IndexingOperationListener;
/**
 * No-op implementation of {@link IndexingOperationListener}.
 */
public class ComplianceIndexingOperationListener implements IndexingOperationListener {
public void setIs(IndexService is) {
//noop
}
}

@@ -0,0 +1,97 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.elasticsearch.common.settings.Settings;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
public class ActionGroupHolder {
final ConfigurationRepository configurationRepository;
public ActionGroupHolder(final ConfigurationRepository configurationRepository) {
this.configurationRepository = configurationRepository;
}
public Set<String> getGroupMembers(final String groupname) {
final Settings actionGroups = getSettings();
if (actionGroups == null) {
return Collections.emptySet();
}
return resolve(actionGroups, groupname);
}
private Set<String> resolve(final Settings actionGroups, final String entry) {
final Set<String> ret = new HashSet<String>();
List<String> en = actionGroups.getAsList(entry);
if (en.isEmpty()) {
// try Open Distro Security format including readonly and permissions key
en = actionGroups.getAsList(entry +"." + ConfigConstants.CONFIGKEY_ACTION_GROUPS_PERMISSIONS);
}
for (String string: en) {
if (actionGroups.names().contains(string)) {
ret.addAll(resolve(actionGroups,string));
} else {
ret.add(string);
}
}
return ret;
}
public Set<String> resolvedActions(final List<String> actions) {
final Set<String> resolvedActions = new HashSet<String>();
for (String string: actions) {
final Set<String> groups = getGroupMembers(string);
if (groups.isEmpty()) {
resolvedActions.add(string);
} else {
resolvedActions.addAll(groups);
}
}
return resolvedActions;
}
private Settings getSettings() {
return configurationRepository.getConfiguration(ConfigConstants.CONFIGNAME_ACTION_GROUPS, false);
}
}
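The recursive expansion in `ActionGroupHolder.resolve` above can be shown with plain collections: a member that is itself a group name is expanded, anything else is kept as a concrete permission. The sketch replaces the `Settings`-backed action groups with a `Map`; the class name and group names are made up.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Standalone sketch of recursive action-group resolution.
public class ActionGroupResolveSketch {

    static Set<String> resolve(Map<String, List<String>> groups, String entry) {
        Set<String> ret = new HashSet<>();
        for (String member : groups.getOrDefault(entry, Collections.emptyList())) {
            if (groups.containsKey(member)) {
                ret.addAll(resolve(groups, member)); // nested group: expand recursively
            } else {
                ret.add(member);                     // leaf permission: keep as-is
            }
        }
        return ret;
    }

    public static void main(String[] args) {
        Map<String, List<String>> groups = Map.of(
                "CRUD", Arrays.asList("READ", "indices:data/write*"),
                "READ", Arrays.asList("indices:data/read*"));
        System.out.println(resolve(groups, "CRUD"));
        // contains both indices:data/read* and indices:data/write*
    }
}
```

Like the original, this sketch has no cycle detection, so mutually referencing group definitions would recurse without bound.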

@@ -0,0 +1,159 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.settings.Settings;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.WildcardMatcher;
import com.amazon.opendistroforelasticsearch.security.user.User;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ListMultimap;
public class AdminDNs {
protected final Logger log = LogManager.getLogger(AdminDNs.class);
private final Set<LdapName> adminDn = new HashSet<LdapName>();
private final Set<String> adminUsernames = new HashSet<String>();
private final ListMultimap<LdapName, String> allowedImpersonations = ArrayListMultimap.<LdapName, String> create();
private final ListMultimap<String, String> allowedRestImpersonations = ArrayListMultimap.<String, String> create();
private boolean injectUserEnabled;
private boolean injectAdminUserEnabled;
public AdminDNs(final Settings settings) {
this.injectUserEnabled = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_UNSUPPORTED_INJECT_USER_ENABLED, false);
this.injectAdminUserEnabled = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_UNSUPPORTED_INJECT_ADMIN_USER_ENABLED, false);
final List<String> adminDnsA = settings.getAsList(ConfigConstants.OPENDISTRO_SECURITY_AUTHCZ_ADMIN_DN, Collections.emptyList());
for (String dn:adminDnsA) {
try {
log.debug("{} is registered as an admin dn", dn);
adminDn.add(new LdapName(dn));
} catch (final InvalidNameException e) {
// make sure to log correctly depending on user injection settings
if (injectUserEnabled && injectAdminUserEnabled) {
if (log.isDebugEnabled()) {
log.debug("Admin DN not an LDAP name, but admin user injection enabled. Will add {} to admin usernames", dn);
}
adminUsernames.add(dn);
} else {
log.error("Unable to parse admin dn {}",dn, e);
}
}
}
log.debug("Loaded {} admin DNs: {}", adminDn.size(), adminDn);
final Settings impersonationDns = settings.getByPrefix(ConfigConstants.OPENDISTRO_SECURITY_AUTHCZ_IMPERSONATION_DN+".");
for (String dnString:impersonationDns.keySet()) {
try {
allowedImpersonations.putAll(new LdapName(dnString), settings.getAsList(ConfigConstants.OPENDISTRO_SECURITY_AUTHCZ_IMPERSONATION_DN+"."+dnString));
} catch (final InvalidNameException e) {
log.error("Unable to parse allowedImpersonations dn {}",dnString, e);
}
}
log.debug("Loaded {} impersonation DNs: {}", allowedImpersonations.size(), allowedImpersonations);
final Settings impersonationUsersRest = settings.getByPrefix(ConfigConstants.OPENDISTRO_SECURITY_AUTHCZ_REST_IMPERSONATION_USERS+".");
for (String user:impersonationUsersRest.keySet()) {
allowedRestImpersonations.putAll(user, settings.getAsList(ConfigConstants.OPENDISTRO_SECURITY_AUTHCZ_REST_IMPERSONATION_USERS+"."+user));
}
log.debug("Loaded {} impersonation users for REST {}",allowedRestImpersonations.size(), allowedRestImpersonations);
}
public boolean isAdmin(User user) {
if (isAdminDN(user.getName())) {
return true;
}
// ThreadContext injected user, may be admin user, only if both flags are enabled and user is injected
if (injectUserEnabled && injectAdminUserEnabled && user.isInjected() && adminUsernames.contains(user.getName())) {
return true;
}
return false;
}
public boolean isAdminDN(String dn) {
if(dn == null) return false;
try {
return isAdminDN(new LdapName(dn));
} catch (InvalidNameException e) {
return false;
}
}
private boolean isAdminDN(LdapName dn) {
if(dn == null) return false;
boolean isAdmin = adminDn.contains(dn);
if (log.isTraceEnabled()) {
log.trace("Is principal {} an admin cert? {}", dn.toString(), isAdmin);
}
return isAdmin;
}
public boolean isTransportImpersonationAllowed(LdapName dn, String impersonated) {
if(dn == null) return false;
if(isAdminDN(dn)) {
return true;
}
return WildcardMatcher.matchAny(this.allowedImpersonations.get(dn), impersonated);
}
public boolean isRestImpersonationAllowed(final String originalUser, final String impersonated) {
if(originalUser == null) {
return false;
}
return WildcardMatcher.matchAny(this.allowedRestImpersonations.get(originalUser), impersonated);
}
}
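The REST impersonation check above resolves `opendistro_security.authcz.rest_impersonation_user.<name>` entries with `WildcardMatcher.matchAny`. `WildcardMatcher` is project-internal, so the following stand-alone sketch approximates that matching step with `java.util.regex`; the class and method names here are illustrative, not part of the plugin.

```java
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical stand-in for WildcardMatcher.matchAny(): each allowed pattern
// is treated as a literal except for '*', which matches any run of characters.
class ImpersonationSketch {
    static boolean matchAny(List<String> patterns, String candidate) {
        for (String p : patterns) {
            // Quote the whole pattern, then re-open the quote around each '*'
            // so it becomes a regex ".*" wildcard.
            String regex = ("\\Q" + p + "\\E").replace("*", "\\E.*\\Q");
            if (Pattern.matches(regex, candidate)) {
                return true;
            }
        }
        return false;
    }
}
```

With a configuration such as `rest_impersonation_user.admin: ["user-*", "kibanaserver"]`, `matchAny(List.of("user-*", "kibanaserver"), "user-alice")` is true while `matchAny(..., "logstash")` is false, which mirrors how `isRestImpersonationAllowed` decides.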

@@ -0,0 +1,120 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.util.Iterator;
import java.util.List;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateListener;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.index.Index;
public class ClusterInfoHolder implements ClusterStateListener {
protected final Logger log = LogManager.getLogger(this.getClass());
private volatile Boolean has5xNodes = null;
private volatile Boolean has5xIndices = null;
private volatile DiscoveryNodes nodes = null;
private volatile Boolean isLocalNodeElectedMaster = null;
@Override
public void clusterChanged(ClusterChangedEvent event) {
if(has5xNodes == null || event.nodesChanged()) {
has5xNodes = Boolean.valueOf(clusterHas5xNodes(event.state()));
if(log.isTraceEnabled()) {
log.trace("has5xNodes: {}", has5xNodes);
}
}
final List<String> indicesCreated = event.indicesCreated();
final List<Index> indicesDeleted = event.indicesDeleted();
if(has5xIndices == null || !indicesCreated.isEmpty() || !indicesDeleted.isEmpty()) {
has5xIndices = Boolean.valueOf(clusterHas5xIndices(event.state()));
if(log.isTraceEnabled()) {
log.trace("has5xIndices: {}", has5xIndices);
}
}
if(nodes == null || event.nodesChanged()) {
nodes = event.state().nodes();
if(log.isDebugEnabled()) {
log.debug("Cluster Info Holder now initialized for 'nodes'");
}
}
isLocalNodeElectedMaster = event.localNodeMaster()?Boolean.TRUE:Boolean.FALSE;
}
public Boolean getHas5xNodes() {
return has5xNodes;
}
public Boolean getHas5xIndices() {
return has5xIndices;
}
public Boolean isLocalNodeElectedMaster() {
return isLocalNodeElectedMaster;
}
public Boolean hasNode(DiscoveryNode node) {
if(nodes == null) {
if(log.isDebugEnabled()) {
log.debug("Cluster Info Holder not initialized yet for 'nodes'");
}
return null;
}
return nodes.nodeExists(node)?Boolean.TRUE:Boolean.FALSE;
}
private static boolean clusterHas5xNodes(ClusterState state) {
return state.nodes().getMinNodeVersion().before(Version.V_6_0_0_alpha1);
}
private static boolean clusterHas5xIndices(ClusterState state) {
final Iterator<IndexMetaData> indices = state.metaData().indices().valuesIt();
while(indices.hasNext()) {
final IndexMetaData indexMetaData = indices.next();
if(indexMetaData.getCreationVersion().before(Version.V_6_0_0_alpha1)) {
return true;
}
}
return false;
}
}
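`ClusterInfoHolder` recomputes its `has5xNodes`/`has5xIndices` flags only when the relevant membership actually changes, and returns the cached `Boolean` otherwise. A minimal stand-alone sketch of that caching pattern, using plain integer versions as an assumed stand-in for `org.elasticsearch.Version`:

```java
import java.util.List;

// Derived flag recomputed only on the first call or when membership changes,
// mirroring the has5xNodes refresh on event.nodesChanged() above.
class VersionCacheSketch {
    static final int LEGACY_CUTOFF = 60000; // illustrative stand-in for V_6_0_0_alpha1
    private Boolean hasLegacyMembers = null; // null = not computed yet

    boolean update(List<Integer> memberVersions, boolean membershipChanged) {
        if (hasLegacyMembers == null || membershipChanged) {
            hasLegacyMembers = memberVersions.stream().anyMatch(v -> v < LEGACY_CUTOFF);
        }
        return hasLegacyMembers; // cached between membership changes
    }
}
```

Note that, as in the class above, the cached value can be stale between change events; that is the intended trade-off for avoiding a full scan on every cluster-state update.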

@@ -0,0 +1,102 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
public class CompatConfig implements ConfigurationChangeListener {
private final Logger log = LogManager.getLogger(getClass());
private final Settings staticSettings;
private Settings dynamicSecurityConfig;
public CompatConfig(final Environment environment) {
super();
this.staticSettings = environment.settings();
}
@Override
public void onChange(final Settings dynamicSecurityConfig) {
this.dynamicSecurityConfig = dynamicSecurityConfig;
log.debug("dynamicSecurityConfig updated?: {}", (dynamicSecurityConfig != null));
}
//true is default
public boolean restAuthEnabled() {
final boolean restInitiallyDisabled = staticSettings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_UNSUPPORTED_DISABLE_REST_AUTH_INITIALLY, false);
if(restInitiallyDisabled) {
if(dynamicSecurityConfig == null) {
if(log.isTraceEnabled()) {
log.trace("dynamicSecurityConfig is null, initially static restDisabled");
}
return false;
} else {
final boolean restDynamicallyDisabled = dynamicSecurityConfig.getAsBoolean("opendistro_security.dynamic.disable_rest_auth", false);
if(log.isTraceEnabled()) {
log.trace("opendistro_security.dynamic.disable_rest_auth {}", restDynamicallyDisabled);
}
return !restDynamicallyDisabled;
}
} else {
return true;
}
}
//true is default
public boolean transportInterClusterAuthEnabled() {
final boolean interClusterAuthInitiallyDisabled = staticSettings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_UNSUPPORTED_DISABLE_INTERTRANSPORT_AUTH_INITIALLY, false);
if(interClusterAuthInitiallyDisabled) {
if(dynamicSecurityConfig == null) {
if(log.isTraceEnabled()) {
log.trace("dynamicSecurityConfig is null, initially static interClusterAuthDisabled");
}
return false;
} else {
final boolean interClusterAuthDynamicallyDisabled = dynamicSecurityConfig.getAsBoolean("opendistro_security.dynamic.disable_intertransport_auth", false);
if(log.isTraceEnabled()) {
log.trace("opendistro_security.dynamic.disable_intertransport_auth {}", interClusterAuthDynamicallyDisabled);
}
return !interClusterAuthDynamicallyDisabled;
}
} else {
return true;
}
}
}
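Both methods in `CompatConfig` follow the same resolution order: a static bootstrap flag gates whether auth starts disabled, and once the dynamic security config has been loaded its flag takes over. A condensed sketch of that logic (method name and boxed-`Boolean`-as-"not loaded" convention are illustrative):

```java
// Resolution order of restAuthEnabled()/transportInterClusterAuthEnabled():
// static setting first, then the dynamic flag once config is loaded.
class CompatSketch {
    static boolean authEnabled(boolean initiallyDisabled, Boolean dynamicDisable) {
        if (!initiallyDisabled) {
            return true;            // default: auth enabled, dynamic config ignored
        }
        if (dynamicDisable == null) {
            return false;           // dynamic config not yet loaded: stay disabled
        }
        return !dynamicDisable;     // dynamic setting wins once present
    }
}
```

This matches the behavior above: with `...disable_rest_auth_initially: true`, REST auth stays off until the dynamic config arrives and does not re-disable it.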

@@ -0,0 +1,43 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import org.elasticsearch.action.get.MultiGetResponse.Failure;
import org.elasticsearch.common.settings.Settings;
public interface ConfigCallback {
void success(String id, Settings settings);
void noData(String id);
void singleFailure(Failure failure);
void failure(Throwable t);
}

@@ -0,0 +1,45 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import org.elasticsearch.common.settings.Settings;
/**
* Callback invoked when a particular configuration changes
*/
public interface ConfigurationChangeListener {
/**
* @param configuration the updated configuration (never null) to which this listener subscribed
*/
void onChange(Settings configuration);
}

@@ -0,0 +1,208 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.get.MultiGetResponse.Failure;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.OpenDistroSecurityDeprecationHandler;
class ConfigurationLoader {
protected final Logger log = LogManager.getLogger(this.getClass());
private final Client client;
//private final ThreadContext threadContext;
private final String opendistrosecurityIndex;
ConfigurationLoader(final Client client, ThreadPool threadPool, final Settings settings) {
super();
this.client = client;
//this.threadContext = threadPool.getThreadContext();
this.opendistrosecurityIndex = settings.get(ConfigConstants.OPENDISTRO_SECURITY_CONFIG_INDEX_NAME, ConfigConstants.OPENDISTRO_SECURITY_DEFAULT_CONFIG_INDEX);
log.debug("Index is: {}", opendistrosecurityIndex);
}
Map<String, Settings> load(final String[] events, long timeout, TimeUnit timeUnit) throws InterruptedException, TimeoutException {
final CountDownLatch latch = new CountDownLatch(events.length);
final Map<String, Settings> rs = new HashMap<String, Settings>(events.length);
loadAsync(events, new ConfigCallback() {
@Override
public void success(String id, Settings settings) {
if(latch.getCount() <= 0) {
log.error("Latch already counted down (for {} of {}) (index={})", id, Arrays.toString(events), opendistrosecurityIndex);
}
rs.put(id, settings);
latch.countDown();
if(log.isDebugEnabled()) {
log.debug("Received config for {} (of {}) with current latch value={}", id, Arrays.toString(events), latch.getCount());
}
}
@Override
public void singleFailure(Failure failure) {
log.error("Failure {} retrieving configuration for {} (index={})", failure==null?null:failure.getMessage(), Arrays.toString(events), opendistrosecurityIndex);
}
@Override
public void noData(String id) {
log.warn("No data for {} while retrieving configuration for {} (index={})", id, Arrays.toString(events), opendistrosecurityIndex);
}
@Override
public void failure(Throwable t) {
log.error("Exception while retrieving configuration for {} (index={})", Arrays.toString(events), opendistrosecurityIndex, t);
}
});
if(!latch.await(timeout, timeUnit)) {
//timeout
throw new TimeoutException("Timeout after "+timeout+" "+timeUnit+" while retrieving configuration for "+Arrays.toString(events)+" (index="+opendistrosecurityIndex+")");
}
return rs;
}
void loadAsync(final String[] events, final ConfigCallback callback) {
if(events == null || events.length == 0) {
log.warn("No config events requested to load");
return;
}
final MultiGetRequest mget = new MultiGetRequest();
for (int i = 0; i < events.length; i++) {
final String event = events[i];
mget.add(opendistrosecurityIndex, "security", event);
}
mget.refresh(true);
mget.realtime(true);
//try(StoredContext ctx = threadContext.stashContext()) {
// threadContext.putHeader(ConfigConstants.OPENDISTRO_SECURITY_CONF_REQUEST_HEADER, "true");
{
client.multiGet(mget, new ActionListener<MultiGetResponse>() {
@Override
public void onResponse(MultiGetResponse response) {
MultiGetItemResponse[] responses = response.getResponses();
for (int i = 0; i < responses.length; i++) {
MultiGetItemResponse singleResponse = responses[i];
if(singleResponse != null && !singleResponse.isFailed()) {
GetResponse singleGetResponse = singleResponse.getResponse();
if(singleGetResponse.isExists() && !singleGetResponse.isSourceEmpty()) {
//success
final Settings _settings = toSettings(singleGetResponse.getSourceAsBytesRef(), singleGetResponse.getId());
if(_settings != null) {
callback.success(singleGetResponse.getId(), _settings);
} else {
log.error("Cannot parse settings for {}", singleGetResponse.getId());
}
} else {
//does not exist or empty source
callback.noData(singleGetResponse.getId());
}
} else {
//failure
callback.singleFailure(singleResponse==null?null:singleResponse.getFailure());
}
}
}
@Override
public void onFailure(Exception e) {
callback.failure(e);
}
});
}
}
private Settings toSettings(final BytesReference ref, final String id) {
if (ref == null || ref.length() == 0) {
log.error("Empty or null byte reference for {}", id);
return null;
}
XContentParser parser = null;
try {
parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, OpenDistroSecurityDeprecationHandler.INSTANCE, ref, XContentType.JSON);
parser.nextToken();
parser.nextToken();
if(!id.equals((parser.currentName()))) {
log.error("Cannot parse config for type {} because {}!={}", id, id, parser.currentName());
return null;
}
parser.nextToken();
return Settings.builder().loadFromStream("dummy.json", new ByteArrayInputStream(parser.binaryValue()), true).build();
} catch (final IOException e) {
throw ExceptionsHelper.convertToElastic(e);
} finally {
if(parser != null) {
try {
parser.close();
} catch (IOException e) {
//ignore
}
}
}
}
}
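The blocking `load()` above is a standard latch-aggregation pattern: one count per requested id, counted down only in `success()`, so a failed or missing entry surfaces as a `TimeoutException` rather than a partial result. A self-contained sketch of that pattern, with a small worker pool standing in for the multi-get round trip (names and the simulated fetch are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// One latch count per id; only the success path counts down, mirroring the
// ConfigCallback wiring above.
class LatchLoadSketch {
    static Map<String, String> load(String[] ids, long timeout, TimeUnit unit)
            throws InterruptedException, TimeoutException {
        final CountDownLatch latch = new CountDownLatch(ids.length);
        final Map<String, String> result = new ConcurrentHashMap<>();
        final ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            for (final String id : ids) {
                pool.submit(() -> {
                    result.put(id, "config-for-" + id); // simulated async fetch
                    latch.countDown();                  // success path only
                });
            }
            if (!latch.await(timeout, unit)) {
                throw new TimeoutException("not all configs arrived within " + timeout + " " + unit);
            }
            return result;
        } finally {
            pool.shutdown();
        }
    }
}
```

If a worker never counts down (the analogue of `singleFailure`/`noData` above, which only log), `await` runs out and the caller gets the timeout instead of silently missing an entry.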

@@ -0,0 +1,97 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.util.Collection;
import java.util.Map;
import org.elasticsearch.common.settings.Settings;
/**
* Abstraction layer over Open Distro Security configuration repository
*/
public interface ConfigurationRepository {
/**
* Load configuration from the persistence layer
*
* @param configurationType non-null configuration identifier
* @return the configuration stored under the specified type, or {@code null} if the persistence layer
* has no configuration for that type or is not ready yet
* @throws NullPointerException if the specified configuration type is null or empty
*/
Settings getConfiguration(String configurationType, boolean triggerComplianceWhenCached);
/**
* Bulk load configuration from the persistence layer
*
* @param configTypes non-null collection of non-null configuration identifiers to load
* @return non-null map keyed by configuration type; each value is the non-null {@link Settings}
* for that type. Types absent in the persistence layer are absent from the result map
* @throws NullPointerException if the collection is null or contains null or empty types
*/
//Map<String, Settings> getConfiguration(Collection<String> configTypes);
/**
* Bulk reload configuration from the persistence layer. If the configuration was modified manually,
* bypassing the logic defined in {@link ConfigurationRepository}, this method picks up those changes.
* It can be very slow because it skips all caching and should be used only as a last resort.
*
* @param configTypes non-null collection of non-null configuration identifiers to load
* @return non-null map keyed by configuration type; each value is the non-null {@link Settings}
* for that type. Types absent in the persistence layer are absent from the result map
* @throws NullPointerException if the collection is null or contains null or empty types
*/
Map<String, Settings> reloadConfiguration(Collection<String> configTypes);
/**
* Save a changed configuration to the persistence layer. After saving, the changes are readable
* via {@link ConfigurationRepository#getConfiguration(String, boolean)}
*
* @param configurationType non-null configuration identifier
* @param settings non-null configuration to persist
* @throws NullPointerException if the configuration is null, or the configuration type is null or empty
*/
void persistConfiguration(String configurationType, Settings settings);
/**
* Subscribe to configuration changes
*
* @param configurationType non-null, non-empty configuration type whose changes should notify the listener
* @param listener non-null callback invoked when the specified type is modified
* @throws NullPointerException if the configuration type is null or empty, or the listener is null
*/
void subscribeOnChange(String configurationType, ConfigurationChangeListener listener);
}
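The contract above (load, persist, notify subscribers) can be illustrated with a hypothetical in-memory implementation. Here a plain `Map<String, String>` stands in for `Settings` and `Consumer` stands in for `ConfigurationChangeListener`; none of these names come from the plugin itself.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Illustrative in-memory repository: persist stores the config under its type
// and fires every listener subscribed to that type.
class InMemoryConfigRepo {
    private final Map<String, Map<String, String>> store = new ConcurrentHashMap<>();
    private final Map<String, List<Consumer<Map<String, String>>>> listeners = new ConcurrentHashMap<>();

    Map<String, String> getConfiguration(String type) {
        Objects.requireNonNull(type);
        return store.get(type); // null if absent, as the javadoc above allows
    }

    void persistConfiguration(String type, Map<String, String> settings) {
        Objects.requireNonNull(type);
        Objects.requireNonNull(settings);
        store.put(type, settings);
        for (Consumer<Map<String, String>> l : listeners.getOrDefault(type, List.of())) {
            l.accept(settings); // analogue of ConfigurationChangeListener.onChange
        }
    }

    void subscribeOnChange(String type, Consumer<Map<String, String>> listener) {
        Objects.requireNonNull(type);
        Objects.requireNonNull(listener);
        listeners.computeIfAbsent(type, k -> new ArrayList<>()).add(listener);
    }
}
```

A consumer such as `CompatConfig` above subscribes once and then receives every subsequent `persistConfiguration` for its type, which is the flow the real repository implements against the security index.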

@@ -0,0 +1,58 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.util.Map;
import java.util.Set;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequest;
public interface DlsFlsRequestValve {
/**
* @param request the action request to inspect
* @param listener the listener associated with the request
* @return {@code false} to stop further processing of the request
*/
boolean invoke(ActionRequest request, ActionListener<?> listener, Map<String,Set<String>> allowedFlsFields, final Map<String,Set<String>> maskedFields, Map<String,Set<String>> queries);
public static class NoopDlsFlsRequestValve implements DlsFlsRequestValve {
@Override
public boolean invoke(ActionRequest request, ActionListener<?> listener, Map<String,Set<String>> allowedFlsFields, final Map<String,Set<String>> maskedFields, Map<String,Set<String>> queries) {
return true;
}
}
}

@@ -0,0 +1,186 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import org.apache.lucene.index.BinaryDocValues;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.FieldInfos;
import org.apache.lucene.index.FilterDirectoryReader;
import org.apache.lucene.index.FilterLeafReader;
import org.apache.lucene.index.LeafMetaData;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.index.PointValues;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.index.SortedNumericDocValues;
import org.apache.lucene.index.SortedSetDocValues;
import org.apache.lucene.index.Terms;
import org.apache.lucene.util.Bits;
import org.elasticsearch.index.mapper.MapperService;
import com.google.common.collect.Sets;
class EmptyFilterLeafReader extends FilterLeafReader {
private static final Set<String> metaFields = Sets.union(Sets.newHashSet("_version"),
Sets.newHashSet(MapperService.getAllMetaFields()));
private final FieldInfo[] fi;
EmptyFilterLeafReader(final LeafReader delegate) {
super(delegate);
final FieldInfos infos = delegate.getFieldInfos();
final List<FieldInfo> lfi = new ArrayList<FieldInfo>(metaFields.size());
for(String metaField: metaFields) {
final FieldInfo _fi = infos.fieldInfo(metaField);
if(_fi != null) {
lfi.add(_fi);
}
}
fi = lfi.toArray(new FieldInfo[0]);
}
private static class EmptySubReaderWrapper extends FilterDirectoryReader.SubReaderWrapper {
@Override
public LeafReader wrap(final LeafReader reader) {
return new EmptyFilterLeafReader(reader);
}
}
static class EmptyDirectoryReader extends FilterDirectoryReader {
public EmptyDirectoryReader(final DirectoryReader in) throws IOException {
super(in, new EmptySubReaderWrapper());
}
@Override
protected DirectoryReader doWrapDirectoryReader(final DirectoryReader in) throws IOException {
return new EmptyDirectoryReader(in);
}
@Override
public CacheHelper getReaderCacheHelper() {
return in.getReaderCacheHelper();
}
}
private boolean isMeta(String field) {
return metaFields.contains(field);
}
@Override
public FieldInfos getFieldInfos() {
return new FieldInfos(fi);
}
@Override
public NumericDocValues getNumericDocValues(final String field) throws IOException {
return isMeta(field) ? in.getNumericDocValues(field) : null;
}
@Override
public BinaryDocValues getBinaryDocValues(final String field) throws IOException {
return isMeta(field) ? in.getBinaryDocValues(field) : null;
}
@Override
public SortedDocValues getSortedDocValues(final String field) throws IOException {
return isMeta(field) ? in.getSortedDocValues(field) : null;
}
@Override
public SortedNumericDocValues getSortedNumericDocValues(final String field) throws IOException {
return isMeta(field) ? in.getSortedNumericDocValues(field) : null;
}
@Override
public SortedSetDocValues getSortedSetDocValues(final String field) throws IOException {
return isMeta(field) ? in.getSortedSetDocValues(field) : null;
}
@Override
public NumericDocValues getNormValues(final String field) throws IOException {
return isMeta(field) ? in.getNormValues(field) : null;
}
@Override
public PointValues getPointValues(String field) throws IOException {
return isMeta(field) ? in.getPointValues(field) : null;
}
@Override
public Terms terms(String field) throws IOException {
return isMeta(field) ? in.terms(field) : null;
}
@Override
public LeafMetaData getMetaData() {
return in.getMetaData();
}
@Override
public Bits getLiveDocs() {
return new Bits.MatchNoBits(0);
}
@Override
public int numDocs() {
return 0;
}
@Override
public LeafReader getDelegate() {
return in;
}
@Override
public int maxDoc() {
return in.maxDoc();
}
@Override
public CacheHelper getCoreCacheHelper() {
return in.getCoreCacheHelper();
}
@Override
public CacheHelper getReaderCacheHelper() {
return in.getReaderCacheHelper();
}
}
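`EmptyFilterLeafReader` combines two ideas: field lookups are delegated only for whitelisted meta fields (everything else returns `null`), and the live-doc bits and `numDocs()` report an empty index, so queries match nothing while index metadata stays readable. A plain-Java sketch of that filtering idea, without the Lucene types (the field names and class here are illustrative, not the exact `MapperService` meta set):

```java
import java.util.Map;
import java.util.Set;

// Delegate lookups only for whitelisted meta fields; report zero documents.
class MetaOnlyViewSketch {
    private static final Set<String> META_FIELDS = Set.of("_version", "_id"); // illustrative
    private final Map<String, String> delegate;

    MetaOnlyViewSketch(Map<String, String> delegate) {
        this.delegate = delegate;
    }

    String field(String name) {
        // Non-meta fields are hidden, like the getXxxDocValues overrides above.
        return META_FIELDS.contains(name) ? delegate.get(name) : null;
    }

    int numDocs() {
        return 0; // like getLiveDocs() returning Bits.MatchNoBits: nothing visible
    }
}
```

The same shape appears in each `get*DocValues`/`terms` override above: one `isMeta(field)` gate in front of the delegating reader.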

@@ -0,0 +1,433 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.io.File;
import java.nio.file.Path;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.regex.Pattern;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthRequest;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequest;
import org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.component.LifecycleListener;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.util.concurrent.ThreadContext.StoredContext;
import org.elasticsearch.env.Environment;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.compliance.ComplianceConfig;
import com.amazon.opendistroforelasticsearch.security.ssl.util.ExceptionUtils;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.ConfigHelper;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Maps;
import com.google.common.collect.Multimap;
public class IndexBaseConfigurationRepository implements ConfigurationRepository {
private static final Logger LOGGER = LogManager.getLogger(IndexBaseConfigurationRepository.class);
private static final Pattern DLS_PATTERN = Pattern.compile(".+\\.indices\\..+\\._dls_=.+", Pattern.DOTALL);
private static final Pattern FLS_PATTERN = Pattern.compile(".+\\.indices\\..+\\._fls_\\.[0-9]+=.+", Pattern.DOTALL);
private final String opendistrosecurityIndex;
private final Client client;
private final ConcurrentMap<String, Settings> typeToConfig;
private final Multimap<String, ConfigurationChangeListener> configTypeToChancheListener;
private final ConfigurationLoader cl;
private final LegacyConfigurationLoader legacycl;
private final Settings settings;
private final ClusterService clusterService;
private final AuditLog auditLog;
private final ComplianceConfig complianceConfig;
private ThreadPool threadPool;
private IndexBaseConfigurationRepository(Settings settings, final Path configPath, ThreadPool threadPool,
Client client, ClusterService clusterService, AuditLog auditLog, ComplianceConfig complianceConfig) {
this.opendistrosecurityIndex = settings.get(ConfigConstants.OPENDISTRO_SECURITY_CONFIG_INDEX_NAME, ConfigConstants.OPENDISTRO_SECURITY_DEFAULT_CONFIG_INDEX);
this.settings = settings;
this.client = client;
this.threadPool = threadPool;
this.clusterService = clusterService;
this.auditLog = auditLog;
this.complianceConfig = complianceConfig;
this.typeToConfig = Maps.newConcurrentMap();
this.configTypeToChancheListener = ArrayListMultimap.create();
cl = new ConfigurationLoader(client, threadPool, settings);
legacycl = new LegacyConfigurationLoader(client, threadPool, settings);
final AtomicBoolean installDefaultConfig = new AtomicBoolean();
clusterService.addLifecycleListener(new LifecycleListener() {
@Override
public void afterStart() {
final Thread bgThread = new Thread(new Runnable() {
@Override
public void run() {
try {
if(installDefaultConfig.get()) {
try {
String lookupDir = System.getProperty("security.default_init.dir");
final String cd = lookupDir != null? (lookupDir+"/") : new Environment(settings, configPath).pluginsFile().toAbsolutePath().toString()+"/opendistro_security/securityconfig/";
File confFile = new File(cd+"config.yml");
if(confFile.exists()) {
final ThreadContext threadContext = threadPool.getThreadContext();
try(StoredContext ctx = threadContext.stashContext()) {
threadContext.putHeader(ConfigConstants.OPENDISTRO_SECURITY_CONF_REQUEST_HEADER, "true");
LOGGER.info("Will create {} index so we can apply default config", opendistrosecurityIndex);
Map<String, Object> indexSettings = new HashMap<>();
indexSettings.put("index.number_of_shards", 1);
indexSettings.put("index.auto_expand_replicas", "0-all");
boolean ok = client.admin().indices().create(new CreateIndexRequest(opendistrosecurityIndex)
.settings(indexSettings))
.actionGet().isAcknowledged();
if(ok) {
ConfigHelper.uploadFile(client, cd+"config.yml", opendistrosecurityIndex, "config");
ConfigHelper.uploadFile(client, cd+"roles.yml", opendistrosecurityIndex, "roles");
ConfigHelper.uploadFile(client, cd+"roles_mapping.yml", opendistrosecurityIndex, "rolesmapping");
ConfigHelper.uploadFile(client, cd+"internal_users.yml", opendistrosecurityIndex, "internalusers");
ConfigHelper.uploadFile(client, cd+"action_groups.yml", opendistrosecurityIndex, "actiongroups");
LOGGER.info("Default config applied");
}
}
} else {
LOGGER.error("{} does not exist", confFile.getAbsolutePath());
}
} catch (Exception e) {
LOGGER.debug("Cannot apply default config (this is not an error!) due to {}", e.getMessage());
}
}
LOGGER.debug("Node started, trying to initialize it. Waiting for at least yellow cluster state ...");
ClusterHealthResponse response = null;
try {
response = client.admin().cluster().health(new ClusterHealthRequest(opendistrosecurityIndex).waitForYellowStatus()).actionGet();
} catch (Exception e1) {
LOGGER.debug("Caught a {} but will just try again ...", e1.toString());
}
while(response == null || response.isTimedOut() || response.getStatus() == ClusterHealthStatus.RED) {
LOGGER.debug("Index '{}' is not healthy yet, trying again ... (Reason: {})", opendistrosecurityIndex, response==null?"no response":(response.isTimedOut()?"timeout":"other, maybe red cluster"));
try {
Thread.sleep(500);
} catch (InterruptedException e1) {
//ignore
Thread.currentThread().interrupt();
}
try {
response = client.admin().cluster().health(new ClusterHealthRequest(opendistrosecurityIndex).waitForYellowStatus()).actionGet();
} catch (Exception e1) {
LOGGER.debug("Caught a {} again, will just try again ...", e1.toString());
}
}
while(true) {
try {
LOGGER.debug("Try to load config ...");
reloadConfiguration(Arrays.asList("config", "roles", "rolesmapping", "internalusers", "actiongroups"));
break;
} catch (Exception e) {
LOGGER.debug("Unable to load configuration due to {}", String.valueOf(ExceptionUtils.getRootCause(e)));
try {
Thread.sleep(3000);
} catch (InterruptedException e1) {
Thread.currentThread().interrupt();
LOGGER.debug("Thread was interrupted so we cancel initialization");
break;
}
}
}
LOGGER.info("Node '{}' initialized", clusterService.localNode().getName());
} catch (Exception e) {
LOGGER.error("Unexpected exception while initializing node", e);
}
}
});
LOGGER.info("Checking if {} index exists ...", opendistrosecurityIndex);
try {
IndicesExistsRequest ier = new IndicesExistsRequest(opendistrosecurityIndex)
.masterNodeTimeout(TimeValue.timeValueMinutes(1));
final ThreadContext threadContext = threadPool.getThreadContext();
try(StoredContext ctx = threadContext.stashContext()) {
threadContext.putHeader(ConfigConstants.OPENDISTRO_SECURITY_CONF_REQUEST_HEADER, "true");
client.admin().indices().exists(ier, new ActionListener<IndicesExistsResponse>() {
@Override
public void onResponse(IndicesExistsResponse response) {
if(response != null && response.isExists()) {
bgThread.start();
} else {
if(settings.get("tribe.name", null) == null && settings.getByPrefix("tribe").size() > 0) {
LOGGER.info("{} index does not exist yet, but we are a tribe node, so we will keep trying to load the config until we get it ...", opendistrosecurityIndex);
bgThread.start();
} else {
if(settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_ALLOW_DEFAULT_INIT_SECURITYINDEX, false)){
LOGGER.info("{} index does not exist yet, so we create a default config", opendistrosecurityIndex);
installDefaultConfig.set(true);
bgThread.start();
} else {
LOGGER.info("{} index does not exist yet, so no need to load config on node startup. Use securityadmin to initialize cluster", opendistrosecurityIndex);
}
}
}
}
@Override
public void onFailure(Exception e) {
LOGGER.error("Failure while checking {} index", opendistrosecurityIndex, e);
bgThread.start();
}
});
}
} catch (Throwable e2) {
LOGGER.error("Failure while executing IndicesExistsRequest", e2);
bgThread.start();
}
}
});
}
public static ConfigurationRepository create(Settings settings, final Path configPath, final ThreadPool threadPool, Client client, ClusterService clusterService, AuditLog auditLog, ComplianceConfig complianceConfig) {
final IndexBaseConfigurationRepository repository = new IndexBaseConfigurationRepository(settings, configPath, threadPool, client, clusterService, auditLog, complianceConfig);
return repository;
}
@Override
public Settings getConfiguration(String configurationType, boolean triggerComplianceWhenCached) {
Settings result = typeToConfig.get(configurationType);
if (result != null) {
if(triggerComplianceWhenCached && complianceConfig.isEnabled()) {
Map<String, String> fields = new HashMap<String, String>();
fields.put(configurationType, Strings.toString(result));
auditLog.logDocumentRead(this.opendistrosecurityIndex, configurationType, null, fields, complianceConfig);
}
return result;
}
Map<String, Settings> loaded = loadConfigurations(Collections.singleton(configurationType));
result = loaded.get(configurationType);
return putSettingsToCache(configurationType, result);
}
private Settings putSettingsToCache(String configurationType, Settings result) {
if (result != null) {
typeToConfig.putIfAbsent(configurationType, result);
}
return typeToConfig.get(configurationType);
}
/*@Override
public Map<String, Settings> getConfiguration(Collection<String> configTypes) {
List<String> typesToLoad = Lists.newArrayList();
Map<String, Settings> result = Maps.newHashMap();
for (String type : configTypes) {
Settings conf = typeToConfig.get(type);
if (conf != null) {
result.put(type, conf);
} else {
typesToLoad.add(type);
}
}
if (typesToLoad.isEmpty()) {
return result;
}
Map<String, Settings> loaded = loadConfigurations(typesToLoad);
for (Map.Entry<String, Settings> entry : loaded.entrySet()) {
Settings conf = putSettingsToCache(entry.getKey(), entry.getValue());
if (conf != null) {
result.put(entry.getKey(), conf);
}
}
return result;
}*/
@Override
public Map<String, Settings> reloadConfiguration(Collection<String> configTypes) {
Map<String, Settings> loaded = loadConfigurations(configTypes);
typeToConfig.clear();
typeToConfig.putAll(loaded);
notifyAboutChanges(loaded);
return loaded;
}
@Override
public void persistConfiguration(String configurationType, Settings settings) {
//TODO should be used by com.amazon.opendistroforelasticsearch.security.tools.OpenDistroSecurityAdmin
throw new UnsupportedOperationException("Not implemented yet");
}
@Override
public synchronized void subscribeOnChange(String configurationType, ConfigurationChangeListener listener) {
LOGGER.debug("Subscribing to configuration changes for type {} with listener {}", configurationType, listener);
configTypeToChancheListener.put(configurationType, listener);
}
private synchronized void notifyAboutChanges(Map<String, Settings> typeToConfig) {
for (Map.Entry<String, ConfigurationChangeListener> entry : configTypeToChancheListener.entries()) {
String type = entry.getKey();
ConfigurationChangeListener listener = entry.getValue();
Settings settings = typeToConfig.get(type);
if (settings == null) {
continue;
}
LOGGER.debug("Notifying listener {} about configuration change for type {}", listener, type);
listener.onChange(settings);
}
}
private Map<String, Settings> loadConfigurations(Collection<String> configTypes) {
final ThreadContext threadContext = threadPool.getThreadContext();
final Map<String, Settings> retVal = new HashMap<String, Settings>();
//final List<Exception> exception = new ArrayList<Exception>(1);
// final CountDownLatch latch = new CountDownLatch(1);
try(StoredContext ctx = threadContext.stashContext()) {
threadContext.putHeader(ConfigConstants.OPENDISTRO_SECURITY_CONF_REQUEST_HEADER, "true");
boolean securityIndexExists = clusterService.state().metaData().hasConcreteIndex(this.opendistrosecurityIndex);
if(securityIndexExists) {
if(clusterService.state().metaData().index(this.opendistrosecurityIndex).mapping("config") != null) {
//legacy layout
LOGGER.debug("security index exists and was created before ES 6 (legacy layout)");
retVal.putAll(validate(legacycl.loadLegacy(configTypes.toArray(new String[0]), 5, TimeUnit.SECONDS), configTypes.size()));
} else {
LOGGER.debug("security index exists and was created with ES 6 (new layout)");
retVal.putAll(validate(cl.load(configTypes.toArray(new String[0]), 5, TimeUnit.SECONDS), configTypes.size()));
}
} else {
//wait (and use new layout)
LOGGER.debug("security index does not exist (yet)");
retVal.putAll(validate(cl.load(configTypes.toArray(new String[0]), 30, TimeUnit.SECONDS), configTypes.size()));
}
} catch (Exception e) {
throw new ElasticsearchException(e);
}
return retVal;
}
private Map<String, Settings> validate(Map<String, Settings> conf, int expectedSize) throws InvalidConfigException {
if(conf == null || conf.size() != expectedSize) {
throw new InvalidConfigException("Retrieved only partial configuration");
}
final Settings roles = conf.get("roles");
final String rolesDelimited;
if (roles != null && (rolesDelimited = roles.toDelimitedString('#')) != null) {
//<role>.indices.<indice>._dls_= OK
//<role>.indices.<indice>._fls_.<num>= OK
final String[] rolesString = rolesDelimited.split("#");
for (String role : rolesString) {
if (role.contains("_fls_.") && !FLS_PATTERN.matcher(role).matches()) {
LOGGER.error("Invalid FLS configuration detected, FLS/DLS will not work correctly: {}", role);
}
if (role.contains("_dls_=") && !DLS_PATTERN.matcher(role).matches()) {
LOGGER.error("Invalid DLS configuration detected, FLS/DLS will not work correctly: {}", role);
}
}
}
return conf;
}
private static String formatDate(long date) {
return new SimpleDateFormat("yyyy-MM-dd").format(new Date(date));
}
}
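The `validate()` method above checks each delimited role line against the two DLS/FLS regexes declared at the top of the class. A standalone sketch of how those patterns match (the class name and sample role strings here are illustrative, not part of the plugin):

```java
import java.util.regex.Pattern;

// Illustration of the validate() regexes; class name and samples are made up.
public class PatternCheck {
    static final Pattern DLS_PATTERN = Pattern.compile(".+\\.indices\\..+\\._dls_=.+", Pattern.DOTALL);
    static final Pattern FLS_PATTERN = Pattern.compile(".+\\.indices\\..+\\._fls_\\.[0-9]+=.+", Pattern.DOTALL);

    public static void main(String[] args) {
        // Well-formed: <role>.indices.<index>._dls_=<query>
        System.out.println(DLS_PATTERN.matcher("myrole.indices.logs._dls_={\"term\":{\"user\":\"x\"}}").matches()); // true
        // Well-formed: <role>.indices.<index>._fls_.<num>=<field>
        System.out.println(FLS_PATTERN.matcher("myrole.indices.logs._fls_.0=message").matches()); // true
        // Missing the numeric suffix, so validate() would log an FLS error for this line
        System.out.println(FLS_PATTERN.matcher("myrole.indices.logs._fls_=message").matches()); // false
    }
}
```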


@@ -0,0 +1,60 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
public class InvalidConfigException extends Exception {
private static final long serialVersionUID = 1L;
public InvalidConfigException() {
super();
}
public InvalidConfigException(String message, Throwable cause, boolean enableSuppression, boolean writableStackTrace) {
super(message, cause, enableSuppression, writableStackTrace);
}
public InvalidConfigException(String message, Throwable cause) {
super(message, cause);
}
public InvalidConfigException(String message) {
super(message);
}
public InvalidConfigException(Throwable cause) {
super(cause);
}
}


@@ -0,0 +1,208 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.get.MultiGetResponse.Failure;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.OpenDistroSecurityDeprecationHandler;
class LegacyConfigurationLoader {
protected final Logger log = LogManager.getLogger(this.getClass());
private final Client client;
//private final ThreadContext threadContext;
private final String opendistrosecurityIndex;
LegacyConfigurationLoader(final Client client, ThreadPool threadPool, final Settings settings) {
super();
this.client = client;
//this.threadContext = threadPool.getThreadContext();
this.opendistrosecurityIndex = settings.get(ConfigConstants.OPENDISTRO_SECURITY_CONFIG_INDEX_NAME, ConfigConstants.OPENDISTRO_SECURITY_DEFAULT_CONFIG_INDEX);
log.debug("Index is: {}", opendistrosecurityIndex);
}
Map<String, Settings> loadLegacy(final String[] events, long timeout, TimeUnit timeUnit) throws InterruptedException, TimeoutException {
final CountDownLatch latch = new CountDownLatch(events.length);
final Map<String, Settings> rs = new HashMap<String, Settings>(events.length);
loadAsyncLegacy(events, new ConfigCallback() {
@Override
public void success(String type, Settings settings) {
if(latch.getCount() <= 0) {
log.error("Latch already counted down (for {} of {}) (index={})", type, Arrays.toString(events), opendistrosecurityIndex);
}
rs.put(type, settings);
latch.countDown();
if(log.isDebugEnabled()) {
log.debug("Received config for {} (of {}) with current latch value={}", type, Arrays.toString(events), latch.getCount());
}
}
@Override
public void singleFailure(Failure failure) {
log.error("Failure {} retrieving configuration for {} (index={})", failure==null?null:failure.getMessage(), Arrays.toString(events), opendistrosecurityIndex);
}
@Override
public void noData(String type) {
log.warn("No data for {} while retrieving configuration for {} (index={})", type, Arrays.toString(events), opendistrosecurityIndex);
}
@Override
public void failure(Throwable t) {
log.error("Exception while retrieving configuration for {} (index={})", Arrays.toString(events), opendistrosecurityIndex, t);
}
});
if(!latch.await(timeout, timeUnit)) {
//timeout
throw new TimeoutException("Timeout after "+timeout+" "+timeUnit+" while retrieving configuration for "+Arrays.toString(events)+" (index="+opendistrosecurityIndex+")");
}
return rs;
}
void loadAsyncLegacy(final String[] events, final ConfigCallback callback) {
if(events == null || events.length == 0) {
log.warn("No config events requested to load");
return;
}
final MultiGetRequest mget = new MultiGetRequest();
for (int i = 0; i < events.length; i++) {
final String event = events[i];
mget.add(opendistrosecurityIndex, event, "0");
}
mget.refresh(true);
mget.realtime(true);
//try(StoredContext ctx = threadContext.stashContext()) {
// threadContext.putHeader(ConfigConstants.OPENDISTRO_SECURITY_CONF_REQUEST_HEADER, "true");
{
client.multiGet(mget, new ActionListener<MultiGetResponse>() {
@Override
public void onResponse(MultiGetResponse response) {
MultiGetItemResponse[] responses = response.getResponses();
for (int i = 0; i < responses.length; i++) {
MultiGetItemResponse singleResponse = responses[i];
if(singleResponse != null && !singleResponse.isFailed()) {
GetResponse singleGetResponse = singleResponse.getResponse();
if(singleGetResponse.isExists() && !singleGetResponse.isSourceEmpty()) {
//success
final Settings _settings = toSettings(singleGetResponse.getSourceAsBytesRef(), singleGetResponse.getType());
if(_settings != null) {
callback.success(singleGetResponse.getType(), _settings);
} else {
log.error("Cannot parse settings for {}", singleGetResponse.getType());
}
} else {
//does not exist or empty source
callback.noData(singleGetResponse.getType());
}
} else {
//failure
callback.singleFailure(singleResponse==null?null:singleResponse.getFailure());
}
}
}
@Override
public void onFailure(Exception e) {
callback.failure(e);
}
});
}
}
private Settings toSettings(final BytesReference ref, final String type) {
if (ref == null || ref.length() == 0) {
log.error("Empty or null byte reference for {}", type);
return null;
}
XContentParser parser = null;
try {
parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, OpenDistroSecurityDeprecationHandler.INSTANCE, ref, XContentType.JSON);
parser.nextToken();
parser.nextToken();
if(!type.equals(parser.currentName())) {
log.error("Cannot parse config for type {} because {}!={}", type, type, parser.currentName());
return null;
}
parser.nextToken();
return Settings.builder().loadFromStream("dummy.json", new ByteArrayInputStream(parser.binaryValue()), true).build();
} catch (final IOException e) {
throw ExceptionsHelper.convertToElastic(e);
} finally {
if(parser != null) {
try {
parser.close();
} catch (IOException e) {
//ignore
}
}
}
}
}
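The load-and-wait pattern in `loadLegacy()` above (fire asynchronous callbacks, count a latch down once per expected config type, then await with a timeout) can be sketched in isolation. All names here (`LatchSketch`, `Callback`) are illustrative and not part of the plugin API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the latch pattern used by loadLegacy().
public class LatchSketch {
    interface Callback { void success(String type, String settings); }

    static Map<String, String> load(String[] types, long timeoutMillis) {
        final CountDownLatch latch = new CountDownLatch(types.length);
        final Map<String, String> result = new HashMap<>();
        final Callback cb = (type, settings) -> {
            result.put(type, settings);
            latch.countDown(); // one count per expected config type
        };
        // The real loader receives these callbacks from an async multi-get;
        // here we simulate responses arriving on another thread.
        new Thread(() -> { for (String t : types) cb.success(t, "{}"); }).start();
        try {
            if (!latch.await(timeoutMillis, TimeUnit.MILLISECONDS)) {
                throw new RuntimeException("Timeout while retrieving configuration");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
        // await() establishes a happens-before edge, so reading result here is safe
        return result;
    }

    public static void main(String[] args) {
        System.out.println(load(new String[]{"config", "roles"}, 5000).size()); // 2
    }
}
```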


@@ -0,0 +1,108 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.configuration;
import java.io.IOException;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.engine.EngineException;
import org.elasticsearch.index.shard.IndexSearcherWrapper;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.HeaderHelper;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class OpenDistroSecurityIndexSearcherWrapper extends IndexSearcherWrapper {
protected final Logger log = LogManager.getLogger(this.getClass());
protected final ThreadContext threadContext;
protected final Index index;
protected final String opendistrosecurityIndex;
private final AdminDNs adminDns;
//constructor is called per index, so avoid costly operations here
public OpenDistroSecurityIndexSearcherWrapper(final IndexService indexService, final Settings settings, final AdminDNs adminDNs) {
index = indexService.index();
threadContext = indexService.getThreadPool().getThreadContext();
this.opendistrosecurityIndex = settings.get(ConfigConstants.OPENDISTRO_SECURITY_CONFIG_INDEX_NAME, ConfigConstants.OPENDISTRO_SECURITY_DEFAULT_CONFIG_INDEX);
this.adminDns = adminDNs;
}
@Override
public final DirectoryReader wrap(final DirectoryReader reader) throws IOException {
if (isSecurityIndexRequest() && !isAdminAuthenticatedOrInternalRequest()) {
return new EmptyFilterLeafReader.EmptyDirectoryReader(reader);
}
return dlsFlsWrap(reader, isAdminAuthenticatedOrInternalRequest());
}
@Override
public final IndexSearcher wrap(final IndexSearcher searcher) throws EngineException {
return dlsFlsWrap(searcher, isAdminAuthenticatedOrInternalRequest());
}
protected IndexSearcher dlsFlsWrap(final IndexSearcher searcher, boolean isAdmin) throws EngineException {
return searcher;
}
protected DirectoryReader dlsFlsWrap(final DirectoryReader reader, boolean isAdmin) throws IOException {
return reader;
}
protected final boolean isAdminAuthenticatedOrInternalRequest() {
final User user = (User) threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_USER);
if (user != null && adminDns.isAdmin(user)) {
return true;
}
if ("true".equals(HeaderHelper.getSafeFromHeader(threadContext, ConfigConstants.OPENDISTRO_SECURITY_CONF_REQUEST_HEADER))) {
return true;
}
return false;
}
protected final boolean isSecurityIndexRequest() {
return index.getName().equals(opendistrosecurityIndex);
}
}
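The decision `wrap()` makes above reduces to a single boolean: return an empty reader only for requests against the security index that are neither admin-authenticated nor internal (conf-header) requests. A sketch of that gate, with illustrative class and method names:

```java
// Sketch of the wrap() decision: hide the security index from every request
// that is neither admin-authenticated nor internal. Names are illustrative.
public class WrapGate {
    static boolean returnsEmptyReader(boolean isSecurityIndex, boolean isAdmin, boolean isInternal) {
        return isSecurityIndex && !(isAdmin || isInternal);
    }

    public static void main(String[] args) {
        System.out.println(returnsEmptyReader(true, false, false));  // true: hidden
        System.out.println(returnsEmptyReader(true, true, false));   // false: admins may read it
        System.out.println(returnsEmptyReader(false, false, false)); // false: ordinary index
    }
}
```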


@@ -0,0 +1,342 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.filter;
import java.util.UUID;
import java.util.stream.Collectors;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.ActionResponse;
import org.elasticsearch.action.DocWriteRequest.OpType;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
import org.elasticsearch.action.admin.indices.close.CloseIndexRequest;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.bulk.BulkItemRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkShardRequest;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.search.MultiSearchRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.support.ActionFilter;
import org.elasticsearch.action.support.ActionFilterChain;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.util.concurrent.ThreadContext.StoredContext;
import org.elasticsearch.index.reindex.DeleteByQueryRequest;
import org.elasticsearch.index.reindex.UpdateByQueryRequest;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.action.whoami.WhoAmIAction;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog.Origin;
import com.amazon.opendistroforelasticsearch.security.compliance.ComplianceConfig;
import com.amazon.opendistroforelasticsearch.security.configuration.AdminDNs;
import com.amazon.opendistroforelasticsearch.security.configuration.CompatConfig;
import com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsRequestValve;
import com.amazon.opendistroforelasticsearch.security.privileges.PrivilegesEvaluator;
import com.amazon.opendistroforelasticsearch.security.privileges.PrivilegesEvaluatorResponse;
import com.amazon.opendistroforelasticsearch.security.support.Base64Helper;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.HeaderHelper;
import com.amazon.opendistroforelasticsearch.security.support.SourceFieldsContext;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class OpenDistroSecurityFilter implements ActionFilter {
protected final Logger log = LogManager.getLogger(this.getClass());
protected final Logger actionTrace = LogManager.getLogger("opendistro_security_action_trace");
private final PrivilegesEvaluator evalp;
private final AdminDNs adminDns;
private DlsFlsRequestValve dlsFlsValve;
private final AuditLog auditLog;
private final ThreadContext threadContext;
private final ClusterService cs;
private final ComplianceConfig complianceConfig;
private final CompatConfig compatConfig;
public OpenDistroSecurityFilter(final PrivilegesEvaluator evalp, final AdminDNs adminDns,
DlsFlsRequestValve dlsFlsValve, AuditLog auditLog, ThreadPool threadPool, ClusterService cs,
ComplianceConfig complianceConfig, final CompatConfig compatConfig) {
this.evalp = evalp;
this.adminDns = adminDns;
this.dlsFlsValve = dlsFlsValve;
this.auditLog = auditLog;
this.threadContext = threadPool.getThreadContext();
this.cs = cs;
this.complianceConfig = complianceConfig;
this.compatConfig = compatConfig;
}
@Override
public int order() {
return Integer.MIN_VALUE;
}
@Override
public <Request extends ActionRequest, Response extends ActionResponse> void apply(Task task, final String action, Request request,
ActionListener<Response> listener, ActionFilterChain<Request, Response> chain) {
try (StoredContext ctx = threadContext.newStoredContext(true)){
org.apache.logging.log4j.ThreadContext.clearAll();
apply0(task, action, request, listener, chain);
}
}
private <Request extends ActionRequest, Response extends ActionResponse> void apply0(Task task, final String action, Request request,
ActionListener<Response> listener, ActionFilterChain<Request, Response> chain) {
try {
if(threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_ORIGIN) == null) {
threadContext.putTransient(ConfigConstants.OPENDISTRO_SECURITY_ORIGIN, Origin.LOCAL.toString());
}
if(complianceConfig != null && complianceConfig.isEnabled()) {
attachSourceFieldContext(request);
}
final User user = threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_USER);
final boolean userIsAdmin = isUserAdmin(user, adminDns);
final boolean interClusterRequest = HeaderHelper.isInterClusterRequest(threadContext);
final boolean trustedClusterRequest = HeaderHelper.isTrustedClusterRequest(threadContext);
final boolean confRequest = "true".equals(HeaderHelper.getSafeFromHeader(threadContext, ConfigConstants.OPENDISTRO_SECURITY_CONF_REQUEST_HEADER));
final boolean passThroughRequest = action.startsWith("indices:admin/seq_no")
|| action.equals(WhoAmIAction.NAME);
final boolean internalRequest =
(interClusterRequest || HeaderHelper.isDirectRequest(threadContext))
&& action.startsWith("internal:")
&& !action.startsWith("internal:transport/proxy");
if (user != null) {
org.apache.logging.log4j.ThreadContext.put("user", user.getName());
}
if(actionTrace.isTraceEnabled()) {
String count = "";
if(request instanceof BulkRequest) {
count = ""+((BulkRequest) request).requests().size();
}
if(request instanceof MultiGetRequest) {
count = ""+((MultiGetRequest) request).getItems().size();
}
if(request instanceof MultiSearchRequest) {
count = ""+((MultiSearchRequest) request).requests().size();
}
actionTrace.trace("Node "+cs.localNode().getName()+" -> "+action+" ("+count+"): userIsAdmin="+userIsAdmin+"/confRequest="+confRequest+"/internalRequest="+internalRequest
+"/origin="+threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_ORIGIN)+"/directRequest="+HeaderHelper.isDirectRequest(threadContext)+"/remoteAddress="+request.remoteAddress());
threadContext.putHeader("_opendistro_security_trace"+System.currentTimeMillis()+"#"+UUID.randomUUID().toString(), Thread.currentThread().getName()+" FILTER -> "+"Node "+cs.localNode().getName()+" -> "+action+" userIsAdmin="+userIsAdmin+"/confRequest="+confRequest+"/internalRequest="+internalRequest
+"/origin="+threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_ORIGIN)+"/directRequest="+HeaderHelper.isDirectRequest(threadContext)+"/remoteAddress="+request.remoteAddress()+" "+threadContext.getHeaders().entrySet().stream().filter(p->!p.getKey().startsWith("_opendistro_security_trace")).collect(Collectors.toMap(p -> p.getKey(), p -> p.getValue())));
}
if(userIsAdmin
|| confRequest
|| internalRequest
|| passThroughRequest){
if(userIsAdmin && !confRequest && !internalRequest && !passThroughRequest) {
auditLog.logGrantedPrivileges(action, request, task);
}
chain.proceed(task, action, request, listener);
return;
}
if(complianceConfig != null && complianceConfig.isEnabled()) {
boolean isImmutable = false;
if(request instanceof BulkShardRequest) {
for(BulkItemRequest bsr: ((BulkShardRequest) request).items()) {
isImmutable = checkImmutableIndices(bsr.request(), listener);
if(isImmutable) {
break;
}
}
} else {
isImmutable = checkImmutableIndices(request, listener);
}
if(isImmutable) {
return;
}
}
if(Origin.LOCAL.toString().equals(threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_ORIGIN))
&& (interClusterRequest || HeaderHelper.isDirectRequest(threadContext))
) {
chain.proceed(task, action, request, listener);
return;
}
if(user == null) {
if(action.startsWith("cluster:monitor/state")) {
chain.proceed(task, action, request, listener);
return;
}
if((interClusterRequest || trustedClusterRequest || request.remoteAddress() == null) && !compatConfig.transportInterClusterAuthEnabled()) {
chain.proceed(task, action, request, listener);
return;
}
log.error("No user found for "+ action+" from "+request.remoteAddress()+" "+threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_ORIGIN)+" via "+threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_CHANNEL_TYPE)+" "+threadContext.getHeaders());
listener.onFailure(new ElasticsearchSecurityException("No user found for "+action, RestStatus.INTERNAL_SERVER_ERROR));
return;
}
final PrivilegesEvaluator eval = evalp;
if (!eval.isInitialized()) {
log.error("Open Distro Security not initialized for {}", action);
listener.onFailure(new ElasticsearchSecurityException("Open Distro Security not initialized for "
+ action, RestStatus.SERVICE_UNAVAILABLE));
return;
}
if (log.isTraceEnabled()) {
log.trace("Evaluate permissions for user: {}", user.getName());
}
final PrivilegesEvaluatorResponse pres = eval.evaluate(user, action, request, task);
if (log.isDebugEnabled()) {
log.debug(pres);
}
if (pres.isAllowed()) {
auditLog.logGrantedPrivileges(action, request, task);
if(!dlsFlsValve.invoke(request, listener, pres.getAllowedFlsFields(), pres.getMaskedFields(), pres.getQueries())) {
return;
}
chain.proceed(task, action, request, listener);
return;
} else {
auditLog.logMissingPrivileges(action, request, task);
log.debug("no permissions for {}", pres.getMissingPrivileges());
listener.onFailure(new ElasticsearchSecurityException("no permissions for " + pres.getMissingPrivileges()+" and "+user, RestStatus.FORBIDDEN));
return;
}
} catch (Throwable e) {
log.error("Unexpected exception "+e, e);
listener.onFailure(new ElasticsearchSecurityException("Unexpected exception " + action, RestStatus.INTERNAL_SERVER_ERROR));
return;
}
}
private static boolean isUserAdmin(final User user, final AdminDNs adminDns) {
return user != null && adminDns.isAdmin(user);
}
private void attachSourceFieldContext(ActionRequest request) {
if(request instanceof SearchRequest && SourceFieldsContext.isNeeded((SearchRequest) request)) {
if(threadContext.getHeader("_opendistro_security_source_field_context") == null) {
final String serializedSourceFieldContext = Base64Helper.serializeObject(new SourceFieldsContext((SearchRequest) request));
threadContext.putHeader("_opendistro_security_source_field_context", serializedSourceFieldContext);
}
} else if (request instanceof GetRequest && SourceFieldsContext.isNeeded((GetRequest) request)) {
if(threadContext.getHeader("_opendistro_security_source_field_context") == null) {
final String serializedSourceFieldContext = Base64Helper.serializeObject(new SourceFieldsContext((GetRequest) request));
threadContext.putHeader("_opendistro_security_source_field_context", serializedSourceFieldContext);
}
}
}
@SuppressWarnings("rawtypes")
private boolean checkImmutableIndices(Object request, ActionListener listener) {
if( request instanceof DeleteRequest
|| request instanceof UpdateRequest
|| request instanceof UpdateByQueryRequest
|| request instanceof DeleteByQueryRequest
|| request instanceof DeleteIndexRequest
|| request instanceof RestoreSnapshotRequest
|| request instanceof CloseIndexRequest
|| request instanceof IndicesAliasesRequest //TODO only remove index
) {
if(complianceConfig != null && complianceConfig.isIndexImmutable(request)) {
//auditLog.log
//check index for type = remove index
//IndicesAliasesRequest iar = (IndicesAliasesRequest) request;
//for(AliasActions aa: iar.getAliasActions()) {
// if(aa.actionType() == Type.REMOVE_INDEX) {
// }
//}
listener.onFailure(new ElasticsearchSecurityException("Index is immutable", RestStatus.FORBIDDEN));
return true;
}
}
if(request instanceof IndexRequest) {
if(complianceConfig != null && complianceConfig.isIndexImmutable(request)) {
((IndexRequest) request).opType(OpType.CREATE);
}
}
return false;
}
}
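The filter above registers with `order()` returning `Integer.MIN_VALUE`; Elasticsearch sorts action filters ascending by their order value, so the security filter runs before any other filter in the chain. A minimal, self-contained sketch of that ordering contract (the `Filter` record and the filter names here are hypothetical, not part of the plugin):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: action filters run in ascending order() value, so a filter
// returning Integer.MIN_VALUE is always placed first in the chain.
public class FilterOrderSketch {

    record Filter(String name, int order) {}

    public static List<String> chainOrder(List<Filter> filters) {
        return filters.stream()
                .sorted(Comparator.comparingInt(Filter::order))
                .map(Filter::name)
                .toList();
    }

    public static void main(String[] args) {
        List<Filter> filters = new ArrayList<>();
        filters.add(new Filter("monitoring", 100));
        filters.add(new Filter("security", Integer.MIN_VALUE));
        filters.add(new Filter("audit", 0));
        // security runs first regardless of registration order
        System.out.println(chainOrder(filters));
    }
}
```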


@ -0,0 +1,158 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.filter;
import java.nio.file.Path;
import javax.net.ssl.SSLPeerUnverifiedException;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.rest.BytesRestResponse;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestHandler;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestRequest.Method;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog.Origin;
import com.amazon.opendistroforelasticsearch.security.auth.BackendRegistry;
import com.amazon.opendistroforelasticsearch.security.configuration.CompatConfig;
import com.amazon.opendistroforelasticsearch.security.ssl.transport.PrincipalExtractor;
import com.amazon.opendistroforelasticsearch.security.ssl.util.ExceptionUtils;
import com.amazon.opendistroforelasticsearch.security.ssl.util.SSLRequestHelper;
import com.amazon.opendistroforelasticsearch.security.ssl.util.SSLRequestHelper.SSLInfo;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.HTTPHelper;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class OpenDistroSecurityRestFilter {
protected final Logger log = LogManager.getLogger(this.getClass());
private final BackendRegistry registry;
private final AuditLog auditLog;
private final ThreadContext threadContext;
private final PrincipalExtractor principalExtractor;
private final Settings settings;
private final Path configPath;
private final CompatConfig compatConfig;
public OpenDistroSecurityRestFilter(final BackendRegistry registry, final AuditLog auditLog,
final ThreadPool threadPool, final PrincipalExtractor principalExtractor,
final Settings settings, final Path configPath, final CompatConfig compatConfig) {
super();
this.registry = registry;
this.auditLog = auditLog;
this.threadContext = threadPool.getThreadContext();
this.principalExtractor = principalExtractor;
this.settings = settings;
this.configPath = configPath;
this.compatConfig = compatConfig;
}
public RestHandler wrap(RestHandler original) {
return new RestHandler() {
@Override
public void handleRequest(RestRequest request, RestChannel channel, NodeClient client) throws Exception {
org.apache.logging.log4j.ThreadContext.clearAll();
if(!checkAndAuthenticateRequest(request, channel, client)) {
original.handleRequest(request, channel, client);
}
}
};
}
private boolean checkAndAuthenticateRequest(RestRequest request, RestChannel channel, NodeClient client) throws Exception {
threadContext.putTransient(ConfigConstants.OPENDISTRO_SECURITY_ORIGIN, Origin.REST.toString());
if(HTTPHelper.containsBadHeader(request)) {
final ElasticsearchException exception = ExceptionUtils.createBadHeaderException();
log.error(exception);
auditLog.logBadHeaders(request);
channel.sendResponse(new BytesRestResponse(channel, RestStatus.FORBIDDEN, exception));
return true;
}
if(SSLRequestHelper.containsBadHeader(threadContext, ConfigConstants.OPENDISTRO_SECURITY_CONFIG_PREFIX)) {
final ElasticsearchException exception = ExceptionUtils.createBadHeaderException();
log.error(exception);
auditLog.logBadHeaders(request);
channel.sendResponse(new BytesRestResponse(channel, RestStatus.FORBIDDEN, exception));
return true;
}
final SSLInfo sslInfo;
try {
if((sslInfo = SSLRequestHelper.getSSLInfo(settings, configPath, request, principalExtractor)) != null) {
if(sslInfo.getPrincipal() != null) {
threadContext.putTransient("_opendistro_security_ssl_principal", sslInfo.getPrincipal());
}
if(sslInfo.getX509Certs() != null) {
threadContext.putTransient("_opendistro_security_ssl_peer_certificates", sslInfo.getX509Certs());
}
threadContext.putTransient("_opendistro_security_ssl_protocol", sslInfo.getProtocol());
threadContext.putTransient("_opendistro_security_ssl_cipher", sslInfo.getCipher());
}
} catch (SSLPeerUnverifiedException e) {
log.error("No ssl info", e);
auditLog.logSSLException(request, e);
channel.sendResponse(new BytesRestResponse(channel, RestStatus.FORBIDDEN, e));
return true;
}
if(!compatConfig.restAuthEnabled()) {
return false;
}
if(request.method() != Method.OPTIONS
&& !"/_opendistro/_security/health".equals(request.path())) {
if (!registry.authenticate(request, channel, threadContext)) {
// another roundtrip
org.apache.logging.log4j.ThreadContext.remove("user");
return true;
} else {
// make it possible to filter logs by username
org.apache.logging.log4j.ThreadContext.put("user", ((User)threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_USER)).getName());
}
}
return false;
}
}
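Note the boolean convention in `wrap()`: `checkAndAuthenticateRequest` returns `true` when it has already sent a response on the channel (bad header, SSL failure, failed authentication), and `false` when the wrapped handler should proceed. A hypothetical standalone model of that contract (the `Handler` interface and the "bad-header" check below are illustrative stand-ins, not the plugin's types):

```java
import java.util.ArrayList;
import java.util.List;

// Model of the wrap() contract: the security check returns true when it
// has already answered the request, and false to let the original
// handler run.
public class RestWrapSketch {

    interface Handler {
        void handle(String request, List<String> responses);
    }

    static Handler wrap(Handler original) {
        return (request, responses) -> {
            // stand-in for the bad-header / SSL / authentication checks
            boolean alreadyAnswered = request.contains("bad-header");
            if (alreadyAnswered) {
                responses.add("403 Forbidden"); // security filter answered
            } else {
                original.handle(request, responses); // pass through
            }
        };
    }

    public static void main(String[] args) {
        List<String> responses = new ArrayList<>();
        Handler wrapped = wrap((req, out) -> out.add("200 OK"));
        wrapped.handle("GET /_cat/indices", responses);
        wrapped.handle("GET / bad-header", responses);
        System.out.println(responses); // [200 OK, 403 Forbidden]
    }
}
```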


@ -0,0 +1,83 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.http;
import java.nio.file.Path;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.rest.BytesRestResponse;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestStatus;
import com.amazon.opendistroforelasticsearch.security.auth.HTTPAuthenticator;
import com.amazon.opendistroforelasticsearch.security.support.HTTPHelper;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
//TODO FUTURE allow only if protocol==https
public class HTTPBasicAuthenticator implements HTTPAuthenticator {
protected final Logger log = LogManager.getLogger(this.getClass());
public HTTPBasicAuthenticator(final Settings settings, final Path configPath) {
}
@Override
public AuthCredentials extractCredentials(final RestRequest request, ThreadContext threadContext) {
final boolean forceLogin = request.paramAsBoolean("force_login", false);
if(forceLogin) {
return null;
}
final String authorizationHeader = request.header("Authorization");
return HTTPHelper.extractCredentials(authorizationHeader, log);
}
@Override
public boolean reRequestAuthentication(final RestChannel channel, AuthCredentials creds) {
final BytesRestResponse wwwAuthenticateResponse = new BytesRestResponse(RestStatus.UNAUTHORIZED, "Unauthorized");
wwwAuthenticateResponse.addHeader("WWW-Authenticate", "Basic realm=\"Open Distro Security\"");
channel.sendResponse(wwwAuthenticateResponse);
return true;
}
@Override
public String getType() {
return "basic";
}
}
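`HTTPHelper.extractCredentials` (not shown in this diff) decodes the `Authorization` header per the Basic scheme. A rough standalone sketch of that decoding, assuming RFC 7617 semantics; the `decode` helper below is illustrative, not the plugin's actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative Basic-auth decoding: base64-decode the token after
// "Basic " and split on the FIRST colon so passwords may contain ':'.
public class BasicAuthSketch {

    public static String[] decode(String authorizationHeader) {
        if (authorizationHeader == null
                || !authorizationHeader.regionMatches(true, 0, "Basic ", 0, 6)) {
            return null; // not Basic auth -> no credentials extracted
        }
        String decoded = new String(
                Base64.getDecoder().decode(authorizationHeader.substring(6).trim()),
                StandardCharsets.UTF_8);
        int colon = decoded.indexOf(':');
        if (colon < 0) {
            return null; // malformed: "user:password" expected
        }
        return new String[] { decoded.substring(0, colon), decoded.substring(colon + 1) };
    }

    public static void main(String[] args) {
        String header = "Basic " + Base64.getEncoder()
                .encodeToString("admin:s3cr:et".getBytes(StandardCharsets.UTF_8));
        String[] creds = decode(header);
        System.out.println(creds[0] + " / " + creds[1]); // admin / s3cr:et
    }
}
```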


@ -0,0 +1,127 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.http;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestRequest;
import com.amazon.opendistroforelasticsearch.security.auth.HTTPAuthenticator;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
public class HTTPClientCertAuthenticator implements HTTPAuthenticator {
protected final Logger log = LogManager.getLogger(this.getClass());
protected final Settings settings;
public HTTPClientCertAuthenticator(final Settings settings, final Path configPath) {
this.settings = settings;
}
@Override
public AuthCredentials extractCredentials(final RestRequest request, final ThreadContext threadContext) {
final String principal = threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_SSL_PRINCIPAL);
if (!Strings.isNullOrEmpty(principal)) {
final String usernameAttribute = settings.get("username_attribute");
final String rolesAttribute = settings.get("roles_attribute");
try {
final LdapName rfc2253dn = new LdapName(principal);
String username = principal.trim();
String[] backendRoles = null;
if (!Strings.isNullOrEmpty(usernameAttribute)) {
final List<String> usernames = getDnAttribute(rfc2253dn, usernameAttribute);
if (!usernames.isEmpty()) {
username = usernames.get(0);
}
}
if (!Strings.isNullOrEmpty(rolesAttribute)) {
final List<String> roles = getDnAttribute(rfc2253dn, rolesAttribute);
if (!roles.isEmpty()) {
backendRoles = roles.toArray(new String[0]);
}
}
return new AuthCredentials(username, backendRoles).markComplete();
} catch (InvalidNameException e) {
log.error("Client cert had no properly formed DN (was: {})", principal);
return null;
}
} else {
log.trace("No CLIENT CERT, send 401");
return null;
}
}
@Override
public boolean reRequestAuthentication(final RestChannel channel, AuthCredentials creds) {
return false;
}
@Override
public String getType() {
return "clientcert";
}
private List<String> getDnAttribute(LdapName rfc2253dn, String attribute) {
final List<String> attrValues = new ArrayList<>(rfc2253dn.size());
final List<Rdn> reverseRdn = new ArrayList<>(rfc2253dn.getRdns());
Collections.reverse(reverseRdn);
for (Rdn rdn : reverseRdn) {
if (rdn.getType().equalsIgnoreCase(attribute)) {
attrValues.add(rdn.getValue().toString());
}
}
return Collections.unmodifiableList(attrValues);
}
}
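`getDnAttribute` walks the RDNs in reverse because `javax.naming.ldap.LdapName` indexes RDNs right to left: index 0 is the rightmost component of the string form (e.g. `dc=com`). Reversing therefore yields values in most-significant-first order. A standalone copy of the helper demonstrates this; the `attributeValues` wrapper is added here only for convenience and is not part of the plugin:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

// Pull all values of one RDN type from an RFC 2253 DN, returning them
// in most-significant-first (left-to-right) order.
public class DnAttributeSketch {

    static List<String> getDnAttribute(LdapName dn, String attribute) {
        List<String> values = new ArrayList<>(dn.size());
        List<Rdn> reversed = new ArrayList<>(dn.getRdns());
        Collections.reverse(reversed); // LdapName stores rightmost RDN at index 0
        for (Rdn rdn : reversed) {
            if (rdn.getType().equalsIgnoreCase(attribute)) {
                values.add(rdn.getValue().toString());
            }
        }
        return Collections.unmodifiableList(values);
    }

    // convenience wrapper so callers need not handle the checked exception
    public static List<String> attributeValues(String dn, String attribute) {
        try {
            return getDnAttribute(new LdapName(dn), attribute);
        } catch (InvalidNameException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        String dn = "cn=alice,ou=devs,ou=eng,dc=example,dc=com";
        System.out.println(attributeValues(dn, "ou")); // [devs, eng]
        System.out.println(attributeValues(dn, "cn")); // [alice]
    }
}
```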


@ -0,0 +1,100 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.http;
import java.nio.file.Path;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestRequest;
import com.amazon.opendistroforelasticsearch.security.auth.HTTPAuthenticator;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
public class HTTPProxyAuthenticator implements HTTPAuthenticator {
protected final Logger log = LogManager.getLogger(this.getClass());
private volatile Settings settings;
public HTTPProxyAuthenticator(Settings settings, final Path configPath) {
super();
this.settings = settings;
}
@Override
public AuthCredentials extractCredentials(final RestRequest request, ThreadContext context) {
if(context.getTransient(ConfigConstants.OPENDISTRO_SECURITY_XFF_DONE) != Boolean.TRUE) {
throw new ElasticsearchSecurityException("xff not done");
}
final String userHeader = settings.get("user_header");
final String rolesHeader = settings.get("roles_header");
final String rolesSeparator = settings.get("roles_separator", ",");
if(log.isDebugEnabled()) {
log.debug("headers {}", request.getHeaders());
log.debug("userHeader {}, value {}", userHeader, userHeader == null?null:request.header(userHeader));
log.debug("rolesHeader {}, value {}", rolesHeader, rolesHeader == null?null:request.header(rolesHeader));
}
if (!Strings.isNullOrEmpty(userHeader) && !Strings.isNullOrEmpty((String) request.header(userHeader))) {
String[] backendRoles = null;
if (!Strings.isNullOrEmpty(rolesHeader) && !Strings.isNullOrEmpty((String) request.header(rolesHeader))) {
backendRoles = ((String) request.header(rolesHeader)).split(rolesSeparator);
}
return new AuthCredentials((String) request.header(userHeader), backendRoles).markComplete();
} else {
if(log.isTraceEnabled()) {
log.trace("No '{}' header, send 401", userHeader);
}
return null;
}
}
@Override
public boolean reRequestAuthentication(final RestChannel channel, AuthCredentials creds) {
return false;
}
@Override
public String getType() {
return "proxy";
}
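The proxy authenticator trusts `user_header`/`roles_header` values injected by an upstream proxy (hence the XFF check first) and splits the roles header with `String.split`, so `roles_separator` is interpreted as a regular expression, not a literal string. A minimal sketch of that parsing; the `parseRoles` method name is illustrative:

```java
import java.util.Arrays;

// Sketch of the proxy authenticator's roles parsing: the configured
// roles_separator (default ",") is passed to String.split, so it is
// treated as a regex. Regex metacharacters like "|" must be escaped.
public class ProxyRolesSketch {

    static String[] parseRoles(String rolesHeaderValue, String separator) {
        if (rolesHeaderValue == null || rolesHeaderValue.isEmpty()) {
            return new String[0]; // no roles header -> no backend roles
        }
        return rolesHeaderValue.split(separator);
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(parseRoles("admin,dev,ops", ",")));
        System.out.println(Arrays.toString(parseRoles("admin;dev", ";")));
    }
}
```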
}


@ -0,0 +1,51 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.http;
import org.elasticsearch.common.network.NetworkService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.ssl.OpenDistroSecurityKeyStore;
import com.amazon.opendistroforelasticsearch.security.ssl.SslExceptionHandler;
import com.amazon.opendistroforelasticsearch.security.ssl.http.netty.OpenDistroSecuritySSLNettyHttpServerTransport;
import com.amazon.opendistroforelasticsearch.security.ssl.http.netty.ValidatingDispatcher;
public class OpenDistroSecurityHttpServerTransport extends OpenDistroSecuritySSLNettyHttpServerTransport {
public OpenDistroSecurityHttpServerTransport(final Settings settings, final NetworkService networkService,
final BigArrays bigArrays, final ThreadPool threadPool, final OpenDistroSecurityKeyStore odsks,
final SslExceptionHandler sslExceptionHandler, final NamedXContentRegistry namedXContentRegistry, final ValidatingDispatcher dispatcher) {
super(settings, networkService, bigArrays, threadPool, odsks, namedXContentRegistry, dispatcher, sslExceptionHandler);
}
}


@ -0,0 +1,70 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.http;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandler;
import org.elasticsearch.common.network.NetworkService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.http.netty4.Netty4HttpServerTransport;
import org.elasticsearch.threadpool.ThreadPool;
public class OpenDistroSecurityNonSslHttpServerTransport extends Netty4HttpServerTransport {
private final ThreadContext threadContext;
public OpenDistroSecurityNonSslHttpServerTransport(final Settings settings, final NetworkService networkService, final BigArrays bigArrays,
final ThreadPool threadPool, final NamedXContentRegistry namedXContentRegistry, final Dispatcher dispatcher) {
super(settings, networkService, bigArrays, threadPool, namedXContentRegistry, dispatcher);
this.threadContext = threadPool.getThreadContext();
}
@Override
public ChannelHandler configureServerChannelHandler() {
return new NonSslHttpChannelHandler(this);
}
protected class NonSslHttpChannelHandler extends Netty4HttpServerTransport.HttpChannelHandler {
protected NonSslHttpChannelHandler(Netty4HttpServerTransport transport) {
super(transport, OpenDistroSecurityNonSslHttpServerTransport.this.detailedErrorsEnabled, OpenDistroSecurityNonSslHttpServerTransport.this.threadContext);
}
@Override
protected void initChannel(Channel ch) throws Exception {
super.initChannel(ch);
}
}
}


@ -0,0 +1,343 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.http;
import java.net.InetSocketAddress;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.regex.Pattern;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.http.netty4.Netty4HttpRequest;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
class RemoteIpDetector {
/**
 * {@link Pattern} for a comma delimited string that supports whitespace characters
*/
private static final Pattern commaSeparatedValuesPattern = Pattern.compile("\\s*,\\s*");
/**
* Logger
*/
protected final Logger log = LogManager.getLogger(this.getClass());
/**
* Convert a given comma delimited String into an array of String
*
* @return array of String (non <code>null</code>)
*/
protected static String[] commaDelimitedListToStringArray(String commaDelimitedStrings) {
return (commaDelimitedStrings == null || commaDelimitedStrings.length() == 0) ? new String[0] : commaSeparatedValuesPattern
.split(commaDelimitedStrings);
}
/**
 * Convert a list of strings into a comma delimited string
*/
protected static String listToCommaDelimitedString(List<String> stringList) {
if (stringList == null) {
return "";
}
StringBuilder result = new StringBuilder();
for (Iterator<String> it = stringList.iterator(); it.hasNext();) {
String element = it.next();
if (element != null) {
result.append(element);
if (it.hasNext()) {
result.append(", ");
}
}
}
return result.toString();
}
/**
* @see #setInternalProxies(String)
*/
private Pattern internalProxies = Pattern.compile(
"10\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|" +
"192\\.168\\.\\d{1,3}\\.\\d{1,3}|" +
"169\\.254\\.\\d{1,3}\\.\\d{1,3}|" +
"127\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|" +
"172\\.1[6-9]{1}\\.\\d{1,3}\\.\\d{1,3}|" +
"172\\.2[0-9]{1}\\.\\d{1,3}\\.\\d{1,3}|" +
"172\\.3[0-1]{1}\\.\\d{1,3}\\.\\d{1,3}");
/**
* @see #setProxiesHeader(String)
*/
private String proxiesHeader = "X-Forwarded-By";
/**
* @see #setRemoteIpHeader(String)
*/
private String remoteIpHeader = "X-Forwarded-For";
/**
 * @see #setTrustedProxies(String)
*/
private Pattern trustedProxies = null;
/**
* @see #setInternalProxies(String)
* @return Regular expression that defines the internal proxies
*/
public String getInternalProxies() {
if (internalProxies == null) {
return null;
}
return internalProxies.toString();
}
/**
* @see #setProxiesHeader(String)
* @return the proxies header name (e.g. "X-Forwarded-By")
*/
public String getProxiesHeader() {
return proxiesHeader;
}
/**
* @see #setRemoteIpHeader(String)
* @return the remote IP header name (e.g. "X-Forwarded-For")
*/
public String getRemoteIpHeader() {
return remoteIpHeader;
}
/**
* @see #setTrustedProxies(String)
* @return Regular expression that defines the trusted proxies
*/
public String getTrustedProxies() {
if (trustedProxies == null) {
return null;
}
return trustedProxies.toString();
}
String detect(final Netty4HttpRequest request, ThreadContext threadContext){
final String originalRemoteAddr = ((InetSocketAddress)request.getRemoteAddress()).getAddress().getHostAddress();
@SuppressWarnings("unused")
final String originalProxiesHeader = request.header(proxiesHeader);
//final String originalRemoteIpHeader = request.getHeader(remoteIpHeader);
if(log.isTraceEnabled()) {
log.trace("originalRemoteAddr {}", originalRemoteAddr);
}
//X-Forwarded-For: client1, proxy1, proxy2
// ^^^^^^ originalRemoteAddr
//originalRemoteAddr need to be in the list of internalProxies
if (internalProxies !=null &&
internalProxies.matcher(originalRemoteAddr).matches()) {
String remoteIp = null;
// proxiesHeaderValue is built from the right via addFirst; LinkedList implements java.util.Deque
final LinkedList<String> proxiesHeaderValue = new LinkedList<>();
final StringBuilder concatRemoteIpHeaderValue = new StringBuilder();
//client1, proxy1, proxy2
final List<String> remoteIpHeaders = request.request().headers().getAll(remoteIpHeader); //X-Forwarded-For
if(remoteIpHeaders == null || remoteIpHeaders.isEmpty()) {
return originalRemoteAddr;
}
for (String rh:remoteIpHeaders) {
if (concatRemoteIpHeaderValue.length() > 0) {
concatRemoteIpHeaderValue.append(", ");
}
concatRemoteIpHeaderValue.append(rh);
}
if(log.isTraceEnabled()) {
log.trace("concatRemoteIpHeaderValue {}", concatRemoteIpHeaderValue.toString());
}
final String[] remoteIpHeaderValue = commaDelimitedListToStringArray(concatRemoteIpHeaderValue.toString());
int idx;
// loop on remoteIpHeaderValue to find the first trusted remote ip and to build the proxies chain
for (idx = remoteIpHeaderValue.length - 1; idx >= 0; idx--) {
String currentRemoteIp = remoteIpHeaderValue[idx];
remoteIp = currentRemoteIp;
if (internalProxies.matcher(currentRemoteIp).matches()) {
// do nothing, internalProxies IPs are not appended to the proxies chain
} else if (trustedProxies != null &&
trustedProxies.matcher(currentRemoteIp).matches()) {
proxiesHeaderValue.addFirst(currentRemoteIp);
} else {
idx--; // decrement idx because break statement doesn't do it
break;
}
}
// continue to loop on remoteIpHeaderValue to build the new value of the remoteIpHeader
final LinkedList<String> newRemoteIpHeaderValue = new LinkedList<>();
for (; idx >= 0; idx--) {
String currentRemoteIp = remoteIpHeaderValue[idx];
newRemoteIpHeaderValue.addFirst(currentRemoteIp);
}
if (remoteIp != null) {
if (proxiesHeaderValue.size() == 0) {
request.request().headers().remove(proxiesHeader);
} else {
String commaDelimitedListOfProxies = listToCommaDelimitedString(proxiesHeaderValue);
request.request().headers().set(proxiesHeader,commaDelimitedListOfProxies);
}
if (newRemoteIpHeaderValue.size() == 0) {
request.request().headers().remove(remoteIpHeader);
} else {
String commaDelimitedRemoteIpHeaderValue = listToCommaDelimitedString(newRemoteIpHeaderValue);
request.request().headers().set(remoteIpHeader,commaDelimitedRemoteIpHeaderValue);
}
if (log.isTraceEnabled()) {
final String originalRemoteHost = ((InetSocketAddress)request.getRemoteAddress()).getAddress().getHostName();
log.trace("Incoming request " + request.request().uri() + " with originalRemoteAddr '" + originalRemoteAddr
+ "', originalRemoteHost='" + originalRemoteHost + "', will be seen as newRemoteAddr='" + remoteIp + "'");
}
//TODO check put in thread context
threadContext.putTransient(ConfigConstants.OPENDISTRO_SECURITY_XFF_DONE, Boolean.TRUE);
//request.putInContext(ConfigConstants.OPENDISTRO_SECURITY_XFF_DONE, Boolean.TRUE);
return remoteIp;
} else {
log.warn("Remote ip could not be detected, this should normally not happen");
}
} else {
if (log.isTraceEnabled()) {
log.trace("Skip RemoteIpDetector for request " + request.request().uri() + " with originalRemoteAddr '"
+ request.getRemoteAddress() + "' because no internal proxy matches");
}
}
return originalRemoteAddr;
}
/**
* <p>
* Regular expression that defines the internal proxies.
* </p>
* <p>
 * Default value : 10\.\d{1,3}\.\d{1,3}\.\d{1,3}|192\.168\.\d{1,3}\.\d{1,3}|169\.254\.\d{1,3}\.\d{1,3}|127\.\d{1,3}\.\d{1,3}\.\d{1,3}|172\.1[6-9]\.\d{1,3}\.\d{1,3}|172\.2[0-9]\.\d{1,3}\.\d{1,3}|172\.3[0-1]\.\d{1,3}\.\d{1,3}
* </p>
*/
public void setInternalProxies(String internalProxies) {
if (internalProxies == null || internalProxies.length() == 0) {
this.internalProxies = null;
} else {
this.internalProxies = Pattern.compile(internalProxies);
}
}
/**
* <p>
* The proxiesHeader directive specifies a header into which mod_remoteip will collect a list of all of the intermediate client IP
* addresses trusted to resolve the actual remote IP. Note that intermediate RemoteIPTrustedProxy addresses are recorded in this header,
* while any intermediate RemoteIPInternalProxy addresses are discarded.
* </p>
* <p>
* Name of the http header that holds the list of trusted proxies that has been traversed by the http request.
* </p>
* <p>
* The value of this header can be comma delimited.
* </p>
* <p>
* Default value : <code>X-Forwarded-By</code>
* </p>
*/
public void setProxiesHeader(String proxiesHeader) {
this.proxiesHeader = proxiesHeader;
}
/**
* <p>
* Name of the http header from which the remote ip is extracted.
* </p>
* <p>
* The value of this header can be comma delimited.
* </p>
* <p>
* Default value : <code>X-Forwarded-For</code>
* </p>
*
* @param remoteIpHeader
*/
public void setRemoteIpHeader(String remoteIpHeader) {
this.remoteIpHeader = remoteIpHeader;
}
/**
* <p>
* Regular expression defining proxies that are trusted when they appear in
* the {@link #remoteIpHeader} header.
* </p>
* <p>
* Default value : empty list, no external proxy is trusted.
* </p>
*/
public void setTrustedProxies(String trustedProxies) {
if (trustedProxies == null || trustedProxies.length() == 0) {
this.trustedProxies = null;
} else {
this.trustedProxies = Pattern.compile(trustedProxies);
}
}
}
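Outside Netty, the right-to-left walk that `detect` performs over the X-Forwarded-For chain can be sketched as a self-contained program. This is a minimal sketch only: the class, method, and sample addresses are illustrative, while the internal-proxy ranges are the detector's defaults.

```java
import java.util.regex.Pattern;

public class XffWalkDemo {
    // Default internal-proxy ranges, as in RemoteIpDetector (RFC 1918, loopback, link-local).
    static final Pattern INTERNAL = Pattern.compile(
        "10\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|" +
        "192\\.168\\.\\d{1,3}\\.\\d{1,3}|" +
        "169\\.254\\.\\d{1,3}\\.\\d{1,3}|" +
        "127\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|" +
        "172\\.1[6-9]\\.\\d{1,3}\\.\\d{1,3}|" +
        "172\\.2[0-9]\\.\\d{1,3}\\.\\d{1,3}|" +
        "172\\.3[0-1]\\.\\d{1,3}\\.\\d{1,3}");

    /**
     * Walk the comma separated X-Forwarded-For value right to left; the first
     * hop that is neither an internal nor a trusted proxy is the client.
     */
    static String resolveClient(String remoteAddr, String xff, Pattern trustedProxies) {
        if (!INTERNAL.matcher(remoteAddr).matches()) {
            return remoteAddr; // direct external connection: the header is not trustworthy
        }
        String[] hops = xff.split("\\s*,\\s*");
        String client = remoteAddr;
        for (int i = hops.length - 1; i >= 0; i--) {
            client = hops[i];
            boolean internal = INTERNAL.matcher(client).matches();
            boolean trusted = trustedProxies != null && trustedProxies.matcher(client).matches();
            if (!internal && !trusted) {
                break; // first untrusted hop is the real client address
            }
        }
        return client;
    }

    public static void main(String[] args) {
        // client -> internal proxy -> this node
        System.out.println(resolveClient("10.0.0.5", "203.0.113.7, 10.0.0.9", null)); // 203.0.113.7
        // direct external connection: the header is ignored
        System.out.println(resolveClient("198.51.100.1", "203.0.113.7", null));       // 198.51.100.1
    }
}
```

The walk stops at the first hop it does not trust, so a client cannot spoof an address simply by sending its own X-Forwarded-For header through an untrusted path.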


@ -0,0 +1,108 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.http;
import java.net.InetSocketAddress;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.http.netty4.Netty4HttpRequest;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.configuration.ConfigurationChangeListener;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
public class XFFResolver implements ConfigurationChangeListener {
protected final Logger log = LogManager.getLogger(this.getClass());
private volatile boolean enabled;
private volatile RemoteIpDetector detector;
private final ThreadContext threadContext;
public XFFResolver(final ThreadPool threadPool) {
super();
this.threadContext = threadPool.getThreadContext();
}
public TransportAddress resolve(final RestRequest request) throws ElasticsearchSecurityException {
if(log.isTraceEnabled()) {
log.trace("resolve {}", request.getRemoteAddress());
}
if(enabled && request.getRemoteAddress() instanceof InetSocketAddress && request instanceof Netty4HttpRequest) {
final InetSocketAddress isa = new InetSocketAddress(detector.detect((Netty4HttpRequest) request, threadContext), ((InetSocketAddress)request.getRemoteAddress()).getPort());
if(isa.isUnresolved()) {
throw new ElasticsearchSecurityException("Cannot resolve address "+isa.getHostString());
}
if(log.isTraceEnabled()) {
if(threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_XFF_DONE) == Boolean.TRUE) {
log.trace("xff resolved {} to {}", request.getRemoteAddress(), isa);
} else {
log.trace("no xff done for {}",request.getClass());
}
}
return new TransportAddress(isa);
} else if(request.getRemoteAddress() instanceof InetSocketAddress){
if(log.isTraceEnabled()) {
log.trace("no xff done (xff disabled or not a netty request) {}, {}", enabled, request.getClass());
}
return new TransportAddress((InetSocketAddress)request.getRemoteAddress());
} else {
throw new ElasticsearchSecurityException("Cannot handle this request. Remote address is "+request.getRemoteAddress()+" with request class "+request.getClass());
}
}
@Override
public void onChange(final Settings settings) {
enabled = settings.getAsBoolean("opendistro_security.dynamic.http.xff.enabled", true);
if(enabled) {
detector = new RemoteIpDetector();
detector.setInternalProxies(settings.get("opendistro_security.dynamic.http.xff.internalProxies", detector.getInternalProxies()));
detector.setProxiesHeader(settings.get("opendistro_security.dynamic.http.xff.proxiesHeader", detector.getProxiesHeader()));
detector.setRemoteIpHeader(settings.get("opendistro_security.dynamic.http.xff.remoteIpHeader", detector.getRemoteIpHeader()));
detector.setTrustedProxies(settings.get("opendistro_security.dynamic.http.xff.trustedProxies", detector.getTrustedProxies()));
} else {
detector = null;
}
}
}
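The four detector knobs read in `onChange` correspond to the dynamic security configuration; a hypothetical excerpt follows, with key names taken from the settings strings above and values that are purely illustrative.

```yaml
# Illustrative only: shape of the xff section of the dynamic config.
opendistro_security:
  dynamic:
    http:
      xff:
        enabled: true
        internalProxies: '10\.\d{1,3}\.\d{1,3}\.\d{1,3}'   # overrides the built-in default
        remoteIpHeader: 'X-Forwarded-For'
        proxiesHeader: 'X-Forwarded-By'
        trustedProxies: '203\.0\.113\.42'                  # external proxies to trust
```

When `enabled` is false, `onChange` nulls the detector and `resolve` falls back to the socket-level remote address.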


@ -0,0 +1,173 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.privileges;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import java.util.Map.Entry;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.resolver.IndexResolverReplacer.Resolved;
import com.amazon.opendistroforelasticsearch.security.securityconf.ConfigModel.SecurityRoles;
import com.amazon.opendistroforelasticsearch.security.support.Base64Helper;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.WildcardMatcher;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class DlsFlsEvaluator {
protected final Logger log = LogManager.getLogger(this.getClass());
private final ThreadPool threadPool;
public DlsFlsEvaluator(Settings settings, ThreadPool threadPool) {
this.threadPool = threadPool;
}
public PrivilegesEvaluatorResponse evaluate(final ClusterService clusterService, final IndexNameExpressionResolver resolver, final Resolved requestedResolved, final User user,
final SecurityRoles securityRoles, final PrivilegesEvaluatorResponse presponse) {
ThreadContext threadContext = threadPool.getThreadContext();
// maskedFields
final Map<String, Set<String>> maskedFieldsMap = securityRoles.getMaskedFields(user, resolver, clusterService);
if (maskedFieldsMap != null && !maskedFieldsMap.isEmpty()) {
if (threadContext.getHeader(ConfigConstants.OPENDISTRO_SECURITY_MASKED_FIELD_HEADER) != null) {
if (!maskedFieldsMap.equals(Base64Helper.deserializeObject(threadContext.getHeader(ConfigConstants.OPENDISTRO_SECURITY_MASKED_FIELD_HEADER)))) {
throw new ElasticsearchSecurityException(ConfigConstants.OPENDISTRO_SECURITY_MASKED_FIELD_HEADER + " does not match (Security 901D)");
} else {
if (log.isDebugEnabled()) {
log.debug(ConfigConstants.OPENDISTRO_SECURITY_MASKED_FIELD_HEADER + " already set");
}
}
} else {
threadContext.putHeader(ConfigConstants.OPENDISTRO_SECURITY_MASKED_FIELD_HEADER, Base64Helper.serializeObject((Serializable) maskedFieldsMap));
if (log.isDebugEnabled()) {
log.debug("attach masked fields info: {}", maskedFieldsMap);
}
}
presponse.maskedFields = new HashMap<>(maskedFieldsMap);
if (!requestedResolved.getAllIndices().isEmpty()) {
for (Iterator<Entry<String, Set<String>>> it = presponse.maskedFields.entrySet().iterator(); it.hasNext();) {
Entry<String, Set<String>> entry = it.next();
if (!WildcardMatcher.matchAny(entry.getKey(), requestedResolved.getAllIndices(), false)) {
it.remove();
}
}
}
}
// attach dls/fls map if not already done
// TODO do this only if enterprise module are loaded
final Tuple<Map<String, Set<String>>, Map<String, Set<String>>> dlsFls = securityRoles.getDlsFls(user, resolver, clusterService);
final Map<String, Set<String>> dlsQueries = dlsFls.v1();
final Map<String, Set<String>> flsFields = dlsFls.v2();
if (!dlsQueries.isEmpty()) {
if (threadContext.getHeader(ConfigConstants.OPENDISTRO_SECURITY_DLS_QUERY_HEADER) != null) {
if (!dlsQueries.equals(Base64Helper.deserializeObject(threadContext.getHeader(ConfigConstants.OPENDISTRO_SECURITY_DLS_QUERY_HEADER)))) {
throw new ElasticsearchSecurityException(ConfigConstants.OPENDISTRO_SECURITY_DLS_QUERY_HEADER + " does not match (Security 900D)");
}
} else {
threadContext.putHeader(ConfigConstants.OPENDISTRO_SECURITY_DLS_QUERY_HEADER, Base64Helper.serializeObject((Serializable) dlsQueries));
if (log.isDebugEnabled()) {
log.debug("attach DLS info: {}", dlsQueries);
}
}
presponse.queries = new HashMap<>(dlsQueries);
if (!requestedResolved.getAllIndices().isEmpty()) {
for (Iterator<Entry<String, Set<String>>> it = presponse.queries.entrySet().iterator(); it.hasNext();) {
Entry<String, Set<String>> entry = it.next();
if (!WildcardMatcher.matchAny(entry.getKey(), requestedResolved.getAllIndices(), false)) {
it.remove();
}
}
}
}
if (!flsFields.isEmpty()) {
if (threadContext.getHeader(ConfigConstants.OPENDISTRO_SECURITY_FLS_FIELDS_HEADER) != null) {
if (!flsFields.equals(Base64Helper.deserializeObject(threadContext.getHeader(ConfigConstants.OPENDISTRO_SECURITY_FLS_FIELDS_HEADER)))) {
throw new ElasticsearchSecurityException(ConfigConstants.OPENDISTRO_SECURITY_FLS_FIELDS_HEADER + " does not match (Security 901D)");
} else {
if (log.isDebugEnabled()) {
log.debug(ConfigConstants.OPENDISTRO_SECURITY_FLS_FIELDS_HEADER + " already set");
}
}
} else {
threadContext.putHeader(ConfigConstants.OPENDISTRO_SECURITY_FLS_FIELDS_HEADER, Base64Helper.serializeObject((Serializable) flsFields));
if (log.isDebugEnabled()) {
log.debug("attach FLS info: {}", flsFields);
}
}
presponse.allowedFlsFields = new HashMap<>(flsFields);
if (!requestedResolved.getAllIndices().isEmpty()) {
for (Iterator<Entry<String, Set<String>>> it = presponse.allowedFlsFields.entrySet().iterator(); it.hasNext();) {
Entry<String, Set<String>> entry = it.next();
if (!WildcardMatcher.matchAny(entry.getKey(), requestedResolved.getAllIndices(), false)) {
it.remove();
}
}
}
}
if (requestedResolved == Resolved._EMPTY) {
presponse.allowed = true;
return presponse.markComplete();
}
return presponse;
}
}
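The pruning loop that restricts the masked-field, DLS, and FLS maps to the requested indices can be illustrated stand-alone. In this minimal sketch the simple wildcard matcher stands in for the plugin's `WildcardMatcher`, and all class and index names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class FlsPruneDemo {
    // Minimal "*" / "?" wildcard match, standing in for WildcardMatcher.matchAny.
    static boolean wildcardMatch(String pattern, String value) {
        String regex = pattern.replace(".", "\\.").replace("?", ".").replace("*", ".*");
        return value.matches(regex);
    }

    /**
     * Keep only rules whose index pattern covers at least one requested index,
     * mirroring the iterator-and-remove loop in DlsFlsEvaluator.
     */
    static Map<String, Set<String>> pruneToRequested(Map<String, Set<String>> rules, Set<String> requested) {
        Map<String, Set<String>> pruned = new HashMap<>(rules);
        pruned.entrySet().removeIf(e ->
            requested.stream().noneMatch(idx -> wildcardMatch(e.getKey(), idx)));
        return pruned;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> fls = new HashMap<>();
        fls.put("logs-*", Set.of("message", "timestamp"));
        fls.put("hr-data", Set.of("name"));
        // Only the logs-* rule applies to the requested index.
        System.out.println(pruneToRequested(fls, Set.of("logs-2019.03")).keySet()); // [logs-*]
    }
}
```

Pruning per request keeps the serialized header small while the full rule map stays unchanged in the role configuration.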


@ -0,0 +1,122 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.privileges;
import java.util.ArrayList;
import java.util.List;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.RealtimeRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.tasks.Task;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.resolver.IndexResolverReplacer.Resolved;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.WildcardMatcher;
public class OpenDistroSecurityIndexAccessEvaluator {
protected final Logger log = LogManager.getLogger(this.getClass());
private final String opendistrosecurityIndex;
private final AuditLog auditLog;
private final String[] securityDeniedActionPatternsAll;
private final String[] securityDeniedActionPatternsSnapshotRestoreAllowed;
private final boolean restoreSecurityIndexEnabled;
public OpenDistroSecurityIndexAccessEvaluator(final Settings settings, AuditLog auditLog) {
this.opendistrosecurityIndex = settings.get(ConfigConstants.OPENDISTRO_SECURITY_CONFIG_INDEX_NAME, ConfigConstants.OPENDISTRO_SECURITY_DEFAULT_CONFIG_INDEX);
this.auditLog = auditLog;
final List<String> securityIndexdeniedActionPatternsListAll = new ArrayList<String>();
securityIndexdeniedActionPatternsListAll.add("indices:data/write*");
securityIndexdeniedActionPatternsListAll.add("indices:admin/close");
securityIndexdeniedActionPatternsListAll.add("indices:admin/delete");
securityIndexdeniedActionPatternsListAll.add("cluster:admin/snapshot/restore");
securityDeniedActionPatternsAll = securityIndexdeniedActionPatternsListAll.toArray(new String[0]);
final List<String> securityIndexdeniedActionPatternsListSnapshotRestoreAllowed = new ArrayList<String>();
securityIndexdeniedActionPatternsListSnapshotRestoreAllowed.add("indices:data/write*");
securityIndexdeniedActionPatternsListSnapshotRestoreAllowed.add("indices:admin/delete");
securityDeniedActionPatternsSnapshotRestoreAllowed = securityIndexdeniedActionPatternsListSnapshotRestoreAllowed.toArray(new String[0]);
this.restoreSecurityIndexEnabled = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_UNSUPPORTED_RESTORE_SECURITYINDEX_ENABLED, false);
}
public PrivilegesEvaluatorResponse evaluate(final ActionRequest request, final Task task, final String action, final Resolved requestedResolved,
final PrivilegesEvaluatorResponse presponse) {
final String[] securityDeniedActionPatterns = this.restoreSecurityIndexEnabled? securityDeniedActionPatternsSnapshotRestoreAllowed : securityDeniedActionPatternsAll;
if (requestedResolved.getAllIndices().contains(opendistrosecurityIndex)
&& WildcardMatcher.matchAny(securityDeniedActionPatterns, action)) {
auditLog.logSecurityIndexAttempt(request, action, task);
log.warn("{} for '{}' index is not allowed for a regular user", action, opendistrosecurityIndex);
presponse.allowed = false;
return presponse.markComplete();
}
//TODO: newpeval: check if isAll() is all (contains("_all" or "*"))
if (requestedResolved.isAll()
&& WildcardMatcher.matchAny(securityDeniedActionPatterns, action)) {
auditLog.logSecurityIndexAttempt(request, action, task);
log.warn("{} for '_all' indices is not allowed for a regular user", action);
presponse.allowed = false;
return presponse.markComplete();
}
//TODO: newpeval: check if isAll() is all (contains("_all" or "*"))
if(requestedResolved.getAllIndices().contains(opendistrosecurityIndex) || requestedResolved.isAll()) {
if(request instanceof SearchRequest) {
((SearchRequest)request).requestCache(Boolean.FALSE);
if(log.isDebugEnabled()) {
log.debug("Disable search request cache for this request");
}
}
if(request instanceof RealtimeRequest) {
((RealtimeRequest) request).realtime(Boolean.FALSE);
if(log.isDebugEnabled()) {
log.debug("Disable realtime for this request");
}
}
}
return presponse;
}
}
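The deny decision for the security index itself reduces to a wildcard check over the action name. As a self-contained sketch, the action patterns below are copied from the constructor above, while the class and index names are illustrative:

```java
import java.util.Arrays;
import java.util.Set;

public class SecurityIndexGuardDemo {
    // Action patterns denied on the security index for regular users (snapshot restore disabled).
    static final String[] DENIED = {
        "indices:data/write*",
        "indices:admin/close",
        "indices:admin/delete",
        "cluster:admin/snapshot/restore"
    };

    // Minimal "*" wildcard match on action names (":" and "/" are literal in a Java regex).
    static boolean matches(String pattern, String action) {
        return action.matches(pattern.replace("*", ".*"));
    }

    /** Deny when the request touches the security index with a blocked action. */
    static boolean isDenied(Set<String> requestedIndices, String securityIndex, String action) {
        if (!requestedIndices.contains(securityIndex)) {
            return false; // request does not touch the security index
        }
        return Arrays.stream(DENIED).anyMatch(p -> matches(p, action));
    }

    public static void main(String[] args) {
        // Write to the security index: denied for regular users.
        System.out.println(isDenied(Set.of(".opendistro_security"), ".opendistro_security", "indices:data/write/index")); // true
        // Write to an ordinary index: permitted at this stage.
        System.out.println(isDenied(Set.of("logs"), ".opendistro_security", "indices:data/write/index"));                 // false
    }
}
```

Reads against the security index pass this check but, as in the evaluator above, have their request cache and realtime flag disabled so stale or uncommitted configuration is never served.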


@ -0,0 +1,707 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.privileges;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.TreeSet;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchSecurityException;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesAction;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexAction;
import org.elasticsearch.action.bulk.BulkAction;
import org.elasticsearch.action.bulk.BulkItemRequest;
import org.elasticsearch.action.bulk.BulkShardRequest;
import org.elasticsearch.action.delete.DeleteAction;
import org.elasticsearch.action.get.MultiGetAction;
import org.elasticsearch.action.index.IndexAction;
import org.elasticsearch.action.search.MultiSearchAction;
import org.elasticsearch.action.search.SearchScrollAction;
import org.elasticsearch.action.termvectors.MultiTermVectorsAction;
import org.elasticsearch.action.update.UpdateAction;
import org.elasticsearch.cluster.metadata.AliasMetaData;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.index.reindex.ReindexAction;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.configuration.ActionGroupHolder;
import com.amazon.opendistroforelasticsearch.security.configuration.ClusterInfoHolder;
import com.amazon.opendistroforelasticsearch.security.configuration.ConfigurationRepository;
import com.amazon.opendistroforelasticsearch.security.resolver.IndexResolverReplacer;
import com.amazon.opendistroforelasticsearch.security.resolver.IndexResolverReplacer.Resolved;
import com.amazon.opendistroforelasticsearch.security.securityconf.ConfigModel;
import com.amazon.opendistroforelasticsearch.security.securityconf.ConfigModel.SecurityRoles;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.WildcardMatcher;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class PrivilegesEvaluator {
protected final Logger log = LogManager.getLogger(this.getClass());
protected final Logger actionTrace = LogManager.getLogger("opendistro_security_action_trace");
private final ClusterService clusterService;
private final IndexNameExpressionResolver resolver;
private final AuditLog auditLog;
private ThreadContext threadContext;
//private final static IndicesOptions DEFAULT_INDICES_OPTIONS = IndicesOptions.lenientExpandOpen();
private final ConfigurationRepository configurationRepository;
private PrivilegesInterceptor privilegesInterceptor;
private final boolean checkSnapshotRestoreWritePrivileges;
private ConfigConstants.RolesMappingResolution rolesMappingResolution;
private final ClusterInfoHolder clusterInfoHolder;
//private final boolean typeSecurityDisabled = false;
private final ConfigModel configModel;
private final IndexResolverReplacer irr;
private final SnapshotRestoreEvaluator snapshotRestoreEvaluator;
private final OpenDistroSecurityIndexAccessEvaluator securityIndexAccessEvaluator;
private final TermsAggregationEvaluator termsAggregationEvaluator;
private final DlsFlsEvaluator dlsFlsEvaluator;
public PrivilegesEvaluator(final ClusterService clusterService, final ThreadPool threadPool, final ConfigurationRepository configurationRepository, final ActionGroupHolder ah,
final IndexNameExpressionResolver resolver, AuditLog auditLog, final Settings settings, final PrivilegesInterceptor privilegesInterceptor,
final ClusterInfoHolder clusterInfoHolder) {
super();
this.configurationRepository = configurationRepository;
this.clusterService = clusterService;
this.resolver = resolver;
this.auditLog = auditLog;
this.threadContext = threadPool.getThreadContext();
this.privilegesInterceptor = privilegesInterceptor;
try {
rolesMappingResolution = ConfigConstants.RolesMappingResolution.valueOf(settings.get(ConfigConstants.OPENDISTRO_SECURITY_ROLES_MAPPING_RESOLUTION, ConfigConstants.RolesMappingResolution.MAPPING_ONLY.toString()).toUpperCase());
} catch (Exception e) {
log.error("Cannot apply roles mapping resolution",e);
rolesMappingResolution = ConfigConstants.RolesMappingResolution.MAPPING_ONLY;
}
this.checkSnapshotRestoreWritePrivileges = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_CHECK_SNAPSHOT_RESTORE_WRITE_PRIVILEGES,
ConfigConstants.OPENDISTRO_SECURITY_DEFAULT_CHECK_SNAPSHOT_RESTORE_WRITE_PRIVILEGES);
this.clusterInfoHolder = clusterInfoHolder;
//this.typeSecurityDisabled = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_DISABLE_TYPE_SECURITY, false);
configModel = new ConfigModel(ah, configurationRepository);
irr = new IndexResolverReplacer(resolver, clusterService, clusterInfoHolder);
snapshotRestoreEvaluator = new SnapshotRestoreEvaluator(settings, auditLog);
securityIndexAccessEvaluator = new OpenDistroSecurityIndexAccessEvaluator(settings, auditLog);
dlsFlsEvaluator = new DlsFlsEvaluator(settings, threadPool);
termsAggregationEvaluator = new TermsAggregationEvaluator();
}
private Settings getRolesSettings() {
return configurationRepository.getConfiguration(ConfigConstants.CONFIGNAME_ROLES, false);
}
private Settings getRolesMappingSettings() {
return configurationRepository.getConfiguration(ConfigConstants.CONFIGNAME_ROLES_MAPPING, false);
}
private Settings getConfigSettings() {
return configurationRepository.getConfiguration(ConfigConstants.CONFIGNAME_CONFIG, false);
}
//TODO: optimize, recreate only if changed
private SecurityRoles getSecurityRoles(final User user, final TransportAddress caller) {
Set<String> roles = mapSecurityRoles(user, caller);
return configModel.load().filter(roles);
}
public boolean isInitialized() {
return getRolesSettings() != null && getRolesMappingSettings() != null && getConfigSettings() != null;
}
public PrivilegesEvaluatorResponse evaluate(final User user, String action0, final ActionRequest request, Task task) {
if (!isInitialized()) {
throw new ElasticsearchSecurityException("Open Distro Security is not initialized.");
}
if(action0.startsWith("internal:indices/admin/upgrade")) {
action0 = "indices:admin/upgrade";
}
final TransportAddress caller = Objects.requireNonNull((TransportAddress) this.threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_REMOTE_ADDRESS));
final SecurityRoles securityRoles = getSecurityRoles(user, caller);
final PrivilegesEvaluatorResponse presponse = new PrivilegesEvaluatorResponse();
if (log.isDebugEnabled()) {
log.debug("### evaluate permissions for {} on {}", user, clusterService.localNode().getName());
log.debug("action: {} ({})", action0, request.getClass().getSimpleName());
}
final Resolved requestedResolved = irr.resolveRequest(request);
if (log.isDebugEnabled()) {
log.debug("requestedResolved : {}", requestedResolved );
}
// check dls/fls
if (dlsFlsEvaluator.evaluate(clusterService, resolver, requestedResolved, user, securityRoles, presponse).isComplete()) {
return presponse;
}
// check snapshot/restore requests
if (snapshotRestoreEvaluator.evaluate(request, task, action0, clusterInfoHolder, presponse).isComplete()) {
return presponse;
}
// Security index access
if (securityIndexAccessEvaluator.evaluate(request, task, action0, requestedResolved, presponse).isComplete()) {
return presponse;
}
final boolean dnfofEnabled =
getConfigSettings().getAsBoolean("opendistro_security.dynamic.kibana.do_not_fail_on_forbidden", false)
|| getConfigSettings().getAsBoolean("opendistro_security.dynamic.do_not_fail_on_forbidden", false);
if(log.isTraceEnabled()) {
log.trace("dnfof enabled? {}", dnfofEnabled);
}
final Settings config = getConfigSettings();
if (isClusterPerm(action0)) {
if(!securityRoles.impliesClusterPermissionPermission(action0)) {
presponse.missingPrivileges.add(action0);
presponse.allowed = false;
log.info("No {}-level perm match for {} {} [Action [{}]] [RolesChecked {}]", "cluster" , user, requestedResolved, action0, securityRoles.getRoles().stream().map(r->r.getName()).toArray());
log.info("No permissions for {}", presponse.missingPrivileges);
return presponse;
} else {
if(request instanceof RestoreSnapshotRequest && checkSnapshotRestoreWritePrivileges) {
if(log.isDebugEnabled()) {
log.debug("Normally allowed but we need to apply some extra checks for a restore request.");
}
} else {
if(privilegesInterceptor.getClass() != PrivilegesInterceptor.class) {
final Boolean replaceResult = privilegesInterceptor.replaceKibanaIndex(request, action0, user, config, requestedResolved, mapTenants(user, caller));
if(log.isDebugEnabled()) {
log.debug("Result from privileges interceptor for cluster perm: {}", replaceResult);
}
if (replaceResult == Boolean.TRUE) {
auditLog.logMissingPrivileges(action0, request, task);
return presponse;
}
if (replaceResult == Boolean.FALSE) {
presponse.allowed = true;
return presponse;
}
}
if (dnfofEnabled
&& action0.startsWith("indices:data/read/")
&& !requestedResolved.getAllIndices().isEmpty()) {
Set<String> reduced = securityRoles.reduce(requestedResolved, user, new String[]{action0}, resolver, clusterService);
if(reduced.isEmpty()) {
presponse.allowed = false;
return presponse;
}
if(irr.replace(request, true, reduced.toArray(new String[0]))) {
presponse.missingPrivileges.clear();
presponse.allowed = true;
return presponse;
}
}
if(log.isDebugEnabled()) {
log.debug("Allowed because we have cluster permissions for "+action0);
}
presponse.allowed = true;
return presponse;
}
}
}
// term aggregations
if (termsAggregationEvaluator.evaluate(request, clusterService, user, securityRoles, resolver, presponse).isComplete()) {
return presponse;
}
final Set<String> allIndexPermsRequired = evaluateAdditionalIndexPermissions(request, action0);
final String[] allIndexPermsRequiredA = allIndexPermsRequired.toArray(new String[0]);
if(log.isDebugEnabled()) {
log.debug("requested {} from {}", allIndexPermsRequired, caller);
}
presponse.missingPrivileges.clear();
presponse.missingPrivileges.addAll(allIndexPermsRequired);
if (log.isDebugEnabled()) {
log.debug("requested resolved indextypes: {}", requestedResolved);
}
if (log.isDebugEnabled()) {
log.debug("sgr: {}", securityRoles.getRoles().stream().map(d->d.getName()).toArray());
}
//TODO exclude Security index
if(privilegesInterceptor.getClass() != PrivilegesInterceptor.class) {
final Boolean replaceResult = privilegesInterceptor.replaceKibanaIndex(request, action0, user, config, requestedResolved, mapTenants(user, caller));
if(log.isDebugEnabled()) {
log.debug("Result from privileges interceptor: {}", replaceResult);
}
if (replaceResult == Boolean.TRUE) {
auditLog.logMissingPrivileges(action0, request, task);
return presponse;
}
if (replaceResult == Boolean.FALSE) {
presponse.allowed = true;
return presponse;
}
}
if (dnfofEnabled
&& (action0.startsWith("indices:data/read/")
|| action0.startsWith("indices:admin/mappings/fields/get"))) {
if(requestedResolved.getAllIndices().isEmpty()) {
presponse.missingPrivileges.clear();
presponse.allowed = true;
return presponse;
}
Set<String> reduced = securityRoles.reduce(requestedResolved, user, allIndexPermsRequiredA, resolver, clusterService);
if(reduced.isEmpty()) {
presponse.allowed = false;
return presponse;
}
if(irr.replace(request, true, reduced.toArray(new String[0]))) {
presponse.missingPrivileges.clear();
presponse.allowed = true;
return presponse;
}
}
//not bulk, mget, etc request here
boolean permGiven = false;
if (config.getAsBoolean("opendistro_security.dynamic.multi_rolespan_enabled", false)) {
permGiven = securityRoles.impliesTypePermGlobal(requestedResolved, user, allIndexPermsRequiredA, resolver, clusterService);
} else {
permGiven = securityRoles.get(requestedResolved, user, allIndexPermsRequiredA, resolver, clusterService);
}
if (!permGiven) {
log.info("No {}-level perm match for {} {} [Action [{}]] [RolesChecked {}]", "index" , user, requestedResolved, action0, securityRoles.getRoles().stream().map(r->r.getName()).toArray());
log.info("No permissions for {}", presponse.missingPrivileges);
} else {
if(checkFilteredAliases(requestedResolved.getAllIndices(), action0)) {
presponse.allowed=false;
return presponse;
}
if(log.isDebugEnabled()) {
log.debug("Allowed because we have all indices permissions for "+action0);
}
}
presponse.allowed=permGiven;
return presponse;
}
public Set<String> mapSecurityRoles(final User user, final TransportAddress caller) {
final Settings rolesMapping = getRolesMappingSettings();
final Set<String> securityRoles = new TreeSet<String>();
if(user == null) {
return Collections.emptySet();
}
if(rolesMappingResolution == ConfigConstants.RolesMappingResolution.BOTH
|| rolesMappingResolution == ConfigConstants.RolesMappingResolution.BACKENDROLES_ONLY) {
if(log.isDebugEnabled()) {
log.debug("Pass backendroles from {}", user);
}
securityRoles.addAll(user.getRoles());
}
if(rolesMapping != null && (rolesMappingResolution == ConfigConstants.RolesMappingResolution.BOTH
|| rolesMappingResolution == ConfigConstants.RolesMappingResolution.MAPPING_ONLY)) {
for (final String roleMap : rolesMapping.names()) {
final Settings roleMapSettings = rolesMapping.getByPrefix(roleMap);
if (WildcardMatcher.allPatternsMatched(roleMapSettings.getAsList(".and_backendroles", Collections.emptyList()).toArray(new String[0]), user.getRoles().toArray(new String[0]))) {
securityRoles.add(roleMap);
continue;
}
if (WildcardMatcher.matchAny(roleMapSettings.getAsList(".backendroles", Collections.emptyList()).toArray(new String[0]), user.getRoles().toArray(new String[0]))) {
securityRoles.add(roleMap);
continue;
}
if (WildcardMatcher.matchAny(roleMapSettings.getAsList(".users"), user.getName())) {
securityRoles.add(roleMap);
continue;
}
if(caller != null && log.isTraceEnabled()) {
log.trace("caller (getAddress()) is {}", caller.getAddress());
log.trace("caller unresolved? {}", caller.address().isUnresolved());
log.trace("caller inner? {}", caller.address().getAddress()==null?"<unresolved>":caller.address().getAddress().toString());
log.trace("caller (getHostString()) is {}", caller.address().getHostString());
log.trace("caller (getHostName(), dns) is {}", caller.address().getHostName()); //reverse lookup
}
if(caller != null) {
//IPv4 or IPv6 (compressed and without scope identifiers)
final String ipAddress = caller.getAddress();
if (WildcardMatcher.matchAny(roleMapSettings.getAsList(".hosts"), ipAddress)) {
securityRoles.add(roleMap);
continue;
}
final String hostResolverMode = getConfigSettings().get("opendistro_security.dynamic.hosts_resolver_mode","ip-only");
if(caller.address() != null && (hostResolverMode.equalsIgnoreCase("ip-hostname") || hostResolverMode.equalsIgnoreCase("ip-hostname-lookup"))){
final String hostName = caller.address().getHostString();
if (WildcardMatcher.matchAny(roleMapSettings.getAsList(".hosts"), hostName)) {
securityRoles.add(roleMap);
continue;
}
}
if(caller.address() != null && hostResolverMode.equalsIgnoreCase("ip-hostname-lookup")){
final String resolvedHostName = caller.address().getHostName();
if (WildcardMatcher.matchAny(roleMapSettings.getAsList(".hosts"), resolvedHostName)) {
securityRoles.add(roleMap);
continue;
}
}
}
}
}
return Collections.unmodifiableSet(securityRoles);
}
public Map<String, Boolean> mapTenants(final User user, final TransportAddress caller) {
if(user == null) {
return Collections.emptyMap();
}
final Map<String, Boolean> result = new HashMap<>();
result.put(user.getName(), true);
for(String securityRole: mapSecurityRoles(user, caller)) {
Settings tenants = getRolesSettings().getByPrefix(securityRole+".tenants.");
if(tenants != null) {
for(String tenant: tenants.names()) {
if(tenant.equals(user.getName())) {
continue;
}
if("RW".equalsIgnoreCase(tenants.get(tenant, "RO"))) {
result.put(tenant, true);
} else {
if(!result.containsKey(tenant)) { //RW outperforms RO
result.put(tenant, false);
}
}
}
}
}
return Collections.unmodifiableMap(result);
}
public Set<String> getAllConfiguredTenantNames() {
final Settings roles = getRolesSettings();
if(roles == null || roles.isEmpty()) {
return Collections.emptySet();
}
final Set<String> configuredTenants = new HashSet<>();
for(String securityRole: roles.names()) {
Settings tenants = roles.getByPrefix(securityRole+".tenants.");
if(tenants != null) {
configuredTenants.addAll(tenants.names());
}
}
return Collections.unmodifiableSet(configuredTenants);
}
public boolean multitenancyEnabled() {
return privilegesInterceptor.getClass() != PrivilegesInterceptor.class
&& getConfigSettings().getAsBoolean("opendistro_security.dynamic.kibana.multitenancy_enabled", true);
}
public boolean notFailOnForbiddenEnabled() {
return privilegesInterceptor.getClass() != PrivilegesInterceptor.class
&& getConfigSettings().getAsBoolean("opendistro_security.dynamic.kibana.do_not_fail_on_forbidden", false);
}
public String kibanaIndex() {
return getConfigSettings().get("opendistro_security.dynamic.kibana.index",".kibana");
}
public String kibanaServerUsername() {
return getConfigSettings().get("opendistro_security.dynamic.kibana.server_username","kibanaserver");
}
private Set<String> evaluateAdditionalIndexPermissions(final ActionRequest request, final String originalAction) {
//--- check inner bulk requests
final Set<String> additionalPermissionsRequired = new HashSet<>();
if(!isClusterPerm(originalAction)) {
additionalPermissionsRequired.add(originalAction);
}
if (request instanceof BulkShardRequest) {
BulkShardRequest bsr = (BulkShardRequest) request;
for (BulkItemRequest bir : bsr.items()) {
switch (bir.request().opType()) {
case CREATE:
case INDEX:
additionalPermissionsRequired.add(IndexAction.NAME);
break;
case DELETE:
additionalPermissionsRequired.add(DeleteAction.NAME);
break;
case UPDATE:
additionalPermissionsRequired.add(UpdateAction.NAME);
break;
}
}
}
if (request instanceof IndicesAliasesRequest) {
IndicesAliasesRequest bsr = (IndicesAliasesRequest) request;
for (AliasActions bir : bsr.getAliasActions()) {
switch (bir.actionType()) {
case REMOVE_INDEX:
additionalPermissionsRequired.add(DeleteIndexAction.NAME);
break;
default:
break;
}
}
}
if (request instanceof CreateIndexRequest) {
CreateIndexRequest cir = (CreateIndexRequest) request;
if(cir.aliases() != null && !cir.aliases().isEmpty()) {
additionalPermissionsRequired.add(IndicesAliasesAction.NAME);
}
}
if(request instanceof RestoreSnapshotRequest && checkSnapshotRestoreWritePrivileges) {
additionalPermissionsRequired.addAll(ConfigConstants.OPENDISTRO_SECURITY_SNAPSHOT_RESTORE_NEEDED_WRITE_PRIVILEGES);
}
if(actionTrace.isTraceEnabled() && additionalPermissionsRequired.size() > 1) {
actionTrace.trace("Additional permissions required: "+additionalPermissionsRequired);
}
if(log.isDebugEnabled() && additionalPermissionsRequired.size() > 1) {
log.debug("Additional permissions required: "+additionalPermissionsRequired);
}
return Collections.unmodifiableSet(additionalPermissionsRequired);
}
private static boolean isClusterPerm(String action0) {
return ( action0.startsWith("cluster:")
|| action0.startsWith("indices:admin/template/")
|| action0.startsWith(SearchScrollAction.NAME)
|| (action0.equals(BulkAction.NAME))
|| (action0.equals(MultiGetAction.NAME))
|| (action0.equals(MultiSearchAction.NAME))
|| (action0.equals(MultiTermVectorsAction.NAME))
|| (action0.equals("indices:data/read/coordinate-msearch"))
|| (action0.equals(ReindexAction.NAME))
) ;
}
private boolean checkFilteredAliases(Set<String> requestedResolvedIndices, String action) {
//check filtered aliases
for(String requestAliasOrIndex: requestedResolvedIndices) {
final List<AliasMetaData> filteredAliases = new ArrayList<AliasMetaData>();
final IndexMetaData indexMetaData = clusterService.state().metaData().getIndices().get(requestAliasOrIndex);
if(indexMetaData == null) {
log.debug("{} does not exist in cluster metadata", requestAliasOrIndex);
continue;
}
final ImmutableOpenMap<String, AliasMetaData> aliases = indexMetaData.getAliases();
if(aliases != null && aliases.size() > 0) {
if(log.isDebugEnabled()) {
log.debug("Aliases for {}: {}", requestAliasOrIndex, aliases);
}
final Iterator<String> it = aliases.keysIt();
while(it.hasNext()) {
final String alias = it.next();
final AliasMetaData aliasMetaData = aliases.get(alias);
if(aliasMetaData != null && aliasMetaData.filteringRequired()) {
filteredAliases.add(aliasMetaData);
if(log.isDebugEnabled()) {
log.debug(alias+" is a filtered alias "+aliasMetaData.getFilter());
}
} else {
if(log.isDebugEnabled()) {
log.debug(alias+" is not an alias or does not have a filter");
}
}
}
}
if(filteredAliases.size() > 1 && WildcardMatcher.match("indices:data/read/*search*", action)) {
//TODO add queries as dls queries (works only if dls module is installed)
final String faMode = getConfigSettings().get("opendistro_security.dynamic.filtered_alias_mode","warn");
if(faMode.equals("warn")) {
log.warn("More than one ({}) filtered alias found for same index ({}). This is currently not recommended. Aliases: {}", filteredAliases.size(), requestAliasOrIndex, toString(filteredAliases));
} else if (faMode.equals("disallow")) {
log.error("More than one ({}) filtered alias found for same index ({}). This is currently not supported. Aliases: {}", filteredAliases.size(), requestAliasOrIndex, toString(filteredAliases));
return true;
} else {
if (log.isDebugEnabled()) {
log.debug("More than one ({}) filtered alias found for same index ({}). Aliases: {}", filteredAliases.size(), requestAliasOrIndex, toString(filteredAliases));
}
}
}
} //end-for
return false;
}
private List<String> toString(List<AliasMetaData> aliases) {
if(aliases == null || aliases.size() == 0) {
return Collections.emptyList();
}
final List<String> ret = new ArrayList<>(aliases.size());
for(final AliasMetaData amd: aliases) {
if(amd != null) {
ret.add(amd.alias());
}
}
return Collections.unmodifiableList(ret);
}
}
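The role mapping in mapSecurityRoles above resolves a user's security roles from several sources per mapping entry: backend-role patterns (`.backendroles`, plus an all-must-match `.and_backendroles` variant), user-name patterns (`.users`), and host patterns (`.hosts`), all matched with wildcards. A minimal, self-contained sketch of that matching logic (the glob-to-regex conversion and the class and method names here are illustrative, not the plugin's actual WildcardMatcher API):

```java
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.regex.Pattern;

public class RoleMappingSketch {
    // Convert a simple glob ('*' and '?') to a regex; illustrative, not the plugin's WildcardMatcher.
    static boolean globMatch(String pattern, String candidate) {
        String regex = "\\Q" + pattern.replace("*", "\\E.*\\Q").replace("?", "\\E.\\Q") + "\\E";
        return Pattern.matches(regex, candidate);
    }

    // True if any pattern matches any candidate (the ".backendroles" / ".users" / ".hosts" semantics).
    static boolean matchAny(List<String> patterns, Collection<String> candidates) {
        for (String p : patterns)
            for (String c : candidates)
                if (globMatch(p, c)) return true;
        return false;
    }

    // Every pattern must match at least one candidate (the ".and_backendroles" semantics).
    static boolean allPatternsMatched(List<String> patterns, Collection<String> candidates) {
        if (patterns.isEmpty()) return false;
        for (String p : patterns)
            if (!matchAny(List.of(p), candidates)) return false;
        return true;
    }

    public static void main(String[] args) {
        Set<String> backendRoles = Set.of("ldap-admins", "ldap-devs");
        System.out.println(matchAny(List.of("ldap-*"), backendRoles));                            // true
        System.out.println(allPatternsMatched(List.of("ldap-admins", "ldap-qa"), backendRoles));  // false
    }
}
```

A role mapping entry contributes its role name once any of its pattern lists matches, which is why the loop above uses `continue` after the first hit per entry.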


@ -0,0 +1,93 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.privileges;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
public class PrivilegesEvaluatorResponse {
boolean allowed = false;
Set<String> missingPrivileges = new HashSet<String>();
Map<String,Set<String>> allowedFlsFields;
Map<String,Set<String>> maskedFields;
Map<String,Set<String>> queries;
PrivilegesEvaluatorResponseState state = PrivilegesEvaluatorResponseState.PENDING;
public boolean isAllowed() {
return allowed;
}
public Set<String> getMissingPrivileges() {
return new HashSet<String>(missingPrivileges);
}
public Map<String,Set<String>> getAllowedFlsFields() {
return allowedFlsFields;
}
public Map<String,Set<String>> getMaskedFields() {
return maskedFields;
}
public Map<String,Set<String>> getQueries() {
return queries;
}
public PrivilegesEvaluatorResponse markComplete() {
this.state = PrivilegesEvaluatorResponseState.COMPLETE;
return this;
}
public PrivilegesEvaluatorResponse markPending() {
this.state = PrivilegesEvaluatorResponseState.PENDING;
return this;
}
public boolean isComplete() {
return this.state == PrivilegesEvaluatorResponseState.COMPLETE;
}
public boolean isPending() {
return this.state == PrivilegesEvaluatorResponseState.PENDING;
}
@Override
public String toString() {
return "PrivEvalResponse [allowed=" + allowed + ", missingPrivileges=" + missingPrivileges
+ ", allowedFlsFields=" + allowedFlsFields + ", maskedFields=" + maskedFields + ", queries=" + queries + "]";
}
public enum PrivilegesEvaluatorResponseState {
PENDING,
COMPLETE;
}
}
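PrivilegesEvaluator drives its sub-evaluators (DLS/FLS, snapshot/restore, security-index access, terms aggregation) through this PENDING/COMPLETE state: each is called in order, and the first to call markComplete() ends the evaluation with whatever `allowed` value it set. A stripped-down sketch of that short-circuit chain (class and method names here are illustrative):

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class EvaluatorChainSketch {
    enum State { PENDING, COMPLETE }

    static final class Response {
        boolean allowed = false;
        State state = State.PENDING;
        Response complete(boolean allowed) { this.allowed = allowed; this.state = State.COMPLETE; return this; }
        boolean isComplete() { return state == State.COMPLETE; }
    }

    // Run sub-evaluators in order; the first one to mark the response COMPLETE decides the outcome.
    static Response evaluate(List<UnaryOperator<Response>> evaluators, Response presponse) {
        for (UnaryOperator<Response> e : evaluators) {
            if (e.apply(presponse).isComplete()) {
                return presponse; // short-circuit: later evaluators never run
            }
        }
        return presponse; // still PENDING: caller falls through to full index/cluster checks
    }

    public static void main(String[] args) {
        UnaryOperator<Response> passThrough = r -> r;              // leaves the response PENDING
        UnaryOperator<Response> allowAll = r -> r.complete(true);  // decides: allowed
        UnaryOperator<Response> denyAll = r -> r.complete(false);  // would decide: denied
        Response r = evaluate(List.of(passThrough, allowAll, denyAll), new Response());
        System.out.println(r.allowed); // true: allowAll completed first, denyAll never ran
    }
}
```

This is why each evaluator's `evaluate(...)` returns the response itself: the caller can chain `.isComplete()` directly, as PrivilegesEvaluator.evaluate does.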


@ -0,0 +1,68 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.privileges;
import java.util.Map;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.resolver.IndexResolverReplacer.Resolved;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class PrivilegesInterceptor {
protected final IndexNameExpressionResolver resolver;
protected final ClusterService clusterService;
protected final Client client;
protected final ThreadPool threadPool;
public PrivilegesInterceptor(final IndexNameExpressionResolver resolver, final ClusterService clusterService,
final Client client, ThreadPool threadPool) {
this.resolver = resolver;
this.clusterService = clusterService;
this.client = client;
this.threadPool = threadPool;
}
public Boolean replaceKibanaIndex(final ActionRequest request, final String action, final User user, final Settings config, final Resolved requestedResolved, final Map<String, Boolean> tenants) {
throw new RuntimeException("not implemented");
}
protected final ThreadContext getThreadContext() {
return threadPool.getThreadContext();
}
}
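replaceKibanaIndex is used by PrivilegesEvaluator as a tri-state hook: Boolean.TRUE means the interceptor decided the request must be denied (and the missing privilege audited), Boolean.FALSE means it already rewrote the request and the call is allowed as-is, and null means fall through to normal privilege evaluation. A minimal sketch of a caller honoring that convention (names are illustrative):

```java
public class InterceptorContractSketch {
    /** TRUE = deny and audit, FALSE = allow as rewritten, null = keep evaluating. */
    static String dispatch(Boolean replaceResult) {
        if (replaceResult == Boolean.TRUE) return "denied";
        if (replaceResult == Boolean.FALSE) return "allowed";
        return "continue"; // null: no interception, run the regular permission checks
    }

    public static void main(String[] args) {
        System.out.println(dispatch(Boolean.TRUE));   // denied
        System.out.println(dispatch(Boolean.FALSE));  // allowed
        System.out.println(dispatch(null));           // continue
    }
}
```

The boxed `Boolean` return (rather than `boolean`) is what makes the third "not handled" state possible; the identity comparisons against `Boolean.TRUE`/`Boolean.FALSE` in the evaluator are null-safe for the same reason.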


@ -0,0 +1,111 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.privileges;
import java.util.List;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.tasks.Task;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.configuration.ClusterInfoHolder;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.support.SnapshotRestoreHelper;
public class SnapshotRestoreEvaluator {
protected final Logger log = LogManager.getLogger(this.getClass());
private final boolean enableSnapshotRestorePrivilege;
private final String opendistrosecurityIndex;
private final AuditLog auditLog;
private final boolean restoreSecurityIndexEnabled;
public SnapshotRestoreEvaluator(final Settings settings, AuditLog auditLog) {
this.enableSnapshotRestorePrivilege = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_ENABLE_SNAPSHOT_RESTORE_PRIVILEGE,
ConfigConstants.OPENDISTRO_SECURITY_DEFAULT_ENABLE_SNAPSHOT_RESTORE_PRIVILEGE);
this.restoreSecurityIndexEnabled = settings.getAsBoolean(ConfigConstants.OPENDISTRO_SECURITY_UNSUPPORTED_RESTORE_SECURITYINDEX_ENABLED, false);
this.opendistrosecurityIndex = settings.get(ConfigConstants.OPENDISTRO_SECURITY_CONFIG_INDEX_NAME, ConfigConstants.OPENDISTRO_SECURITY_DEFAULT_CONFIG_INDEX);
this.auditLog = auditLog;
}
public PrivilegesEvaluatorResponse evaluate(final ActionRequest request, final Task task, final String action, final ClusterInfoHolder clusterInfoHolder,
final PrivilegesEvaluatorResponse presponse) {
if (!(request instanceof RestoreSnapshotRequest)) {
return presponse;
}
// snapshot restore for regular users not enabled
if (!enableSnapshotRestorePrivilege) {
log.warn("{} is not allowed for a regular user", action);
presponse.allowed = false;
return presponse.markComplete();
}
// if this feature is enabled, users can also snapshot and restore
// the Security index and the global state
if (restoreSecurityIndexEnabled) {
presponse.allowed = true;
return presponse;
}
if (clusterInfoHolder.isLocalNodeElectedMaster() == Boolean.FALSE) {
presponse.allowed = true;
return presponse.markComplete();
}
final RestoreSnapshotRequest restoreRequest = (RestoreSnapshotRequest) request;
// Do not allow restore of global state
if (restoreRequest.includeGlobalState()) {
auditLog.logSecurityIndexAttempt(request, action, task);
log.warn("{} with 'include_global_state' enabled is not allowed", action);
presponse.allowed = false;
return presponse.markComplete();
}
final List<String> rs = SnapshotRestoreHelper.resolveOriginalIndices(restoreRequest);
if (rs != null && (rs.contains(opendistrosecurityIndex) || rs.contains("_all") || rs.contains("*"))) {
auditLog.logSecurityIndexAttempt(request, action, task);
log.warn("{} for '{}' as source index is not allowed", action, opendistrosecurityIndex);
presponse.allowed = false;
return presponse.markComplete();
}
return presponse;
}
}
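The restore checks above deny early in a fixed order: restore is refused outright when the snapshot-restore privilege is disabled, when the request includes the global cluster state, and when the source indices could touch the security index (directly, via `_all`, or via `*`). A simplified sketch of that decision order, without the Elasticsearch request types (names and return strings are illustrative, and the master-node and restore-security-index escape hatches are omitted):

```java
import java.util.List;

public class RestoreGuardSketch {
    /** Mirrors the deny order of SnapshotRestoreEvaluator, simplified. */
    static String check(boolean privilegeEnabled, boolean includeGlobalState,
                        List<String> sourceIndices, String securityIndex) {
        if (!privilegeEnabled) return "deny:restore-disabled";   // regular users may not restore at all
        if (includeGlobalState) return "deny:global-state";      // global state could overwrite security config
        if (sourceIndices.contains(securityIndex)
                || sourceIndices.contains("_all")
                || sourceIndices.contains("*")) return "deny:security-index";
        return "continue";                                        // fall through to write-privilege checks
    }

    public static void main(String[] args) {
        System.out.println(check(true, false, List.of("logs-*"), ".opendistro_security")); // continue
        System.out.println(check(true, true, List.of("logs-*"), ".opendistro_security"));  // deny:global-state
        System.out.println(check(true, false, List.of("_all"), ".opendistro_security"));   // deny:security-index
    }
}
```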


@ -0,0 +1,108 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.privileges;
import java.util.Set;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.index.query.MatchNoneQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.TermsQueryBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
import com.amazon.opendistroforelasticsearch.security.securityconf.ConfigModel.SecurityRoles;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class TermsAggregationEvaluator {
protected final Logger log = LogManager.getLogger(this.getClass());
private static final String[] READ_ACTIONS = new String[]{
"indices:data/read/msearch",
"indices:data/read/mget",
"indices:data/read/get",
"indices:data/read/search",
"indices:data/read/field_caps*"
//"indices:admin/mappings/fields/get*"
};
private static final QueryBuilder NONE_QUERY = new MatchNoneQueryBuilder();
public TermsAggregationEvaluator() {
}
public PrivilegesEvaluatorResponse evaluate(final ActionRequest request, ClusterService clusterService, User user, SecurityRoles securityRoles, IndexNameExpressionResolver resolver, PrivilegesEvaluatorResponse presponse) {
try {
if(request instanceof SearchRequest) {
SearchRequest sr = (SearchRequest) request;
if( sr.source() != null
&& sr.source().query() == null
&& sr.source().aggregations() != null
&& sr.source().aggregations().getAggregatorFactories() != null
&& sr.source().aggregations().getAggregatorFactories().size() == 1
&& sr.source().size() == 0) {
AggregationBuilder ab = sr.source().aggregations().getAggregatorFactories().iterator().next();
if( ab instanceof TermsAggregationBuilder
&& "terms".equals(ab.getType())
&& "indices".equals(ab.getName())) {
if("_index".equals(((TermsAggregationBuilder) ab).field())
&& ab.getPipelineAggregations().isEmpty()
&& ab.getSubAggregations().isEmpty()) {
final Set<String> allPermittedIndices = securityRoles.getAllPermittedIndices(user, READ_ACTIONS, resolver, clusterService);
if(allPermittedIndices == null || allPermittedIndices.isEmpty()) {
sr.source().query(NONE_QUERY);
} else {
sr.source().query(new TermsQueryBuilder("_index", allPermittedIndices));
}
presponse.allowed = true;
return presponse.markComplete();
}
}
}
}
} catch (Exception e) {
log.warn("Unable to evaluate terms aggregation", e);
return presponse;
}
return presponse;
}
}


@ -0,0 +1,904 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.resolver;
import java.io.IOException;
import java.io.Serializable;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.ListIterator;
import java.util.Map;
import java.util.Set;
import java.util.SortedMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.stream.Collectors;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionRequest;
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.action.IndicesRequest.Replaceable;
import org.elasticsearch.action.OriginalIndices;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;
import org.elasticsearch.action.bulk.BulkItemRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkShardRequest;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequest;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.MultiGetRequest;
import org.elasticsearch.action.get.MultiGetRequest.Item;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.main.MainRequest;
import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.action.search.MultiSearchRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.support.nodes.BaseNodesRequest;
import org.elasticsearch.action.support.replication.ReplicationRequest;
import org.elasticsearch.action.support.single.shard.SingleShardRequest;
import org.elasticsearch.action.termvectors.MultiTermVectorsRequest;
import org.elasticsearch.action.termvectors.TermVectorsRequest;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.AliasOrIndex;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.index.Index;
import org.elasticsearch.index.IndexNotFoundException;
import org.elasticsearch.index.reindex.ReindexRequest;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotUtils;
import org.elasticsearch.transport.RemoteClusterAware;
import org.elasticsearch.transport.RemoteClusterService;
import org.elasticsearch.transport.TransportRequest;
import com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin;
import com.amazon.opendistroforelasticsearch.security.configuration.ClusterInfoHolder;
import com.amazon.opendistroforelasticsearch.security.support.SnapshotRestoreHelper;
import com.amazon.opendistroforelasticsearch.security.support.WildcardMatcher;
import com.google.common.collect.Sets;
public final class IndexResolverReplacer {
//private final static IndicesOptions DEFAULT_INDICES_OPTIONS = IndicesOptions.lenientExpandOpen();
//private static final String[] NO_INDICES_SET = Sets.newHashSet("\\",";",",","/","|").toArray(new String[0]);
private static final Set<String> NULL_SET = Sets.newHashSet((String)null);
private final Map<Class<?>, Method> typeCache = Collections.synchronizedMap(new HashMap<Class<?>, Method>(100));
private final Map<Class<?>, Method> typesCache = Collections.synchronizedMap(new HashMap<Class<?>, Method>(100));
private final Logger log = LogManager.getLogger(this.getClass());
private final IndexNameExpressionResolver resolver;
private final ClusterService clusterService;
private final ClusterInfoHolder clusterInfoHolder;
public IndexResolverReplacer(IndexNameExpressionResolver resolver, ClusterService clusterService, ClusterInfoHolder clusterInfoHolder) {
super();
this.resolver = resolver;
this.clusterService = clusterService;
this.clusterInfoHolder = clusterInfoHolder;
}
public static final boolean isAll(final String... requestedPatterns) {
final List<String> patterns = requestedPatterns==null?null:Arrays.asList(requestedPatterns);
if(IndexNameExpressionResolver.isAllIndices(patterns)) {
return true;
}
if(patterns.contains("*")) {
return true;
}
if(patterns.contains("_all")) {
return true;
}
if(new HashSet<String>(patterns).equals(NULL_SET)) {
return true;
}
return false;
}
private Resolved resolveIndexPatterns(final String... requestedPatterns) {
if(log.isTraceEnabled()) {
log.trace("resolve requestedPatterns: "+Arrays.toString(requestedPatterns));
}
if(isAll(requestedPatterns)) {
return Resolved._ALL;
}
ClusterState state = clusterService.state();
final SortedMap<String, AliasOrIndex> lookup = state.metaData().getAliasAndIndexLookup();
final Set<String> aliases = lookup.entrySet().stream().filter(e->e.getValue().isAlias()).map(e->e.getKey()).collect(Collectors.toSet());
final Set<String> matchingAliases = new HashSet<>(requestedPatterns.length*10);
final Set<String> matchingIndices = new HashSet<>(requestedPatterns.length*10);
final Set<String> matchingAllIndices = new HashSet<>(requestedPatterns.length*10);
//fill matchingAliases
for (int i = 0; i < requestedPatterns.length; i++) {
final String requestedPattern = resolver.resolveDateMathExpression(requestedPatterns[i]);
final List<String> _aliases = WildcardMatcher.getMatchAny(requestedPattern, aliases);
matchingAliases.addAll(_aliases);
}
//-alias not possible
{
//final String requestedPattern = resolver.resolveDateMathExpression(requestedPatterns[i]);
//final List<String> _aliases = WildcardMatcher.getMatchAny(requestedPattern, aliases);
//matchingAliases.addAll(_aliases);
List<String> _indices;
try {
_indices = new ArrayList<>(Arrays.asList(resolver.concreteIndexNames(state, IndicesOptions.fromOptions(false, true, true, false), requestedPatterns)));
if (log.isDebugEnabled()) {
log.debug("Resolved pattern {} to {}", requestedPatterns, _indices);
}
} catch (IndexNotFoundException e1) {
if (log.isDebugEnabled()) {
log.debug("No such indices for pattern {}, using raw value", (Object[]) requestedPatterns);
}
_indices = new ArrayList<>(requestedPatterns.length);
for (int i = 0; i < requestedPatterns.length; i++) {
String requestedPattern = requestedPatterns[i];
_indices.add(resolver.resolveDateMathExpression(requestedPattern));
}
/*if(requestedPatterns.length == 1) {
_indices = Collections.singletonList(resolver.resolveDateMathExpression(requestedPatterns[0]));
} else {
log.warn("Multiple ({}) index patterns {} cannot be resolved, assume _all", requestedPatterns.length, requestedPatterns);
//_indices = Collections.singletonList("*");
_indices = Arrays.asList(requestedPatterns); //date math not handled
}*/
}
final List<String> _aliases = WildcardMatcher.getMatchAny(requestedPatterns, aliases);
matchingAllIndices.addAll(_indices);
if(_aliases.isEmpty()) {
matchingIndices.addAll(_indices); //date math resolved?
} else {
if(!_indices.isEmpty()) {
for(String al:_aliases) {
Set<String> doubleIndices = lookup.get(al).getIndices().stream().map(a->a.getIndex().getName()).collect(Collectors.toSet());
_indices.removeAll(doubleIndices);
}
matchingIndices.addAll(_indices);
}
}
}
return new Resolved.Builder(matchingAliases, matchingIndices, matchingAllIndices, null).build();
}
@SuppressWarnings("rawtypes")
private Set<String> resolveTypes(final Object request) {
// type resolution is always performed here (the former "type security enabled" check is gone)
final Class<?> requestClass = request.getClass();
final Set<String> requestTypes = new HashSet<String>();
if (true) {
if (request instanceof BulkShardRequest) {
BulkShardRequest bsr = (BulkShardRequest) request;
for (BulkItemRequest bir : bsr.items()) {
requestTypes.add(bir.request().type());
}
} else if (request instanceof DocWriteRequest) {
requestTypes.add(((DocWriteRequest) request).type());
} else if (request instanceof SearchRequest) {
requestTypes.addAll(Arrays.asList(((SearchRequest) request).types()));
} else if (request instanceof GetRequest) {
requestTypes.add(((GetRequest) request).type());
} else {
Method typeMethod = null;
if (typeCache.containsKey(requestClass)) {
typeMethod = typeCache.get(requestClass);
} else {
try {
typeMethod = requestClass.getMethod("type");
typeCache.put(requestClass, typeMethod);
} catch (NoSuchMethodException e) {
typeCache.put(requestClass, null);
} catch (SecurityException e) {
log.error("Cannot evaluate type() for {} due to {}", requestClass, e, e);
}
}
Method typesMethod = null;
if (typesCache.containsKey(requestClass)) {
typesMethod = typesCache.get(requestClass);
} else {
try {
typesMethod = requestClass.getMethod("types");
typesCache.put(requestClass, typesMethod);
} catch (NoSuchMethodException e) {
typesCache.put(requestClass, null);
} catch (SecurityException e) {
log.error("Cannot evaluate types() for {} due to {}", requestClass, e, e);
}
}
if (typeMethod != null) {
try {
String type = (String) typeMethod.invoke(request);
if (type != null) {
requestTypes.add(type);
}
} catch (Exception e) {
log.error("Unable to invoke type() for {} due to", requestClass, e);
}
}
if (typesMethod != null) {
try {
final String[] types = (String[]) typesMethod.invoke(request);
if (types != null) {
requestTypes.addAll(Arrays.asList(types));
}
} catch (Exception e) {
log.error("Unable to invoke types() for {} due to", requestClass, e);
}
}
}
}
if (log.isTraceEnabled()) {
log.trace("requestTypes {} for {}", requestTypes, request.getClass());
}
return Collections.unmodifiableSet(requestTypes);
}
/*public boolean exclude(final TransportRequest request, String... exclude) {
return getOrReplaceAllIndices(request, new IndicesProvider() {
@Override
public String[] provide(final String[] original, final Object request, final boolean supportsReplace) {
if(supportsReplace) {
final List<String> result = new ArrayList<String>(Arrays.asList(original));
// if(isAll(original)) {
// result = new ArrayList<String>(Collections.singletonList("*"));
// } else {
// result = new ArrayList<String>(Arrays.asList(original));
// }
final Set<String> preliminary = new HashSet<>(resolveIndexPatterns(result.toArray(new String[0])).allIndices);
if(log.isTraceEnabled()) {
log.trace("resolved original {}, excludes {}",preliminary, Arrays.toString(exclude));
}
WildcardMatcher.wildcardRetainInSet(preliminary, exclude);
if(log.isTraceEnabled()) {
log.trace("modified original {}",preliminary);
}
result.addAll(preliminary.stream().map(a->"-"+a).collect(Collectors.toList()));
if(log.isTraceEnabled()) {
log.trace("exclude for {}: replaced {} with {}", request.getClass().getSimpleName(), Arrays.toString(original) ,result);
}
return result.toArray(new String[0]);
} else {
return NOOP;
}
}
}, false);
}*/
//dnfof
public boolean replace(final TransportRequest request, boolean retainMode, String... replacements) {
return getOrReplaceAllIndices(request, new IndicesProvider() {
@Override
public String[] provide(String[] original, Object request, boolean supportsReplace) {
if(supportsReplace) {
if(retainMode && original != null && original.length > 0) {
//TODO datemath?
List<String> originalAsList = Arrays.asList(original);
if(originalAsList.contains("*") || originalAsList.contains("_all")) {
return replacements;
}
original = resolver.concreteIndexNames(clusterService.state(), IndicesOptions.lenientExpandOpen(), original);
final String[] retained = WildcardMatcher.getMatchAny(original, replacements).toArray(new String[0]);
return retained;
}
return replacements;
} else {
return NOOP;
}
}
}, false);
}
public Resolved resolveRequest(final Object request) {
if(log.isDebugEnabled()) {
log.debug("Resolve aliases, indices and types from {}", request.getClass().getSimpleName());
}
Resolved.Builder resolvedBuilder = new Resolved.Builder();
final AtomicBoolean returnEmpty = new AtomicBoolean();
getOrReplaceAllIndices(request, new IndicesProvider() {
@Override
public String[] provide(String[] original, Object localRequest, boolean supportsReplace) {
//CCS
if((localRequest instanceof FieldCapabilitiesRequest || localRequest instanceof SearchRequest)
&& (request instanceof FieldCapabilitiesRequest || request instanceof SearchRequest)) {
assert supportsReplace: localRequest.getClass().getName()+" does not support replace";
final Tuple<Boolean, String[]> ccsResult = handleCcs((Replaceable) localRequest);
if(ccsResult.v1() == Boolean.TRUE) {
if(ccsResult.v2() == null || ccsResult.v2().length == 0) {
returnEmpty.set(true);
}
original = ccsResult.v2();
}
}
if(returnEmpty.get()) {
if(log.isTraceEnabled()) {
log.trace("CCS return empty indices for local node");
}
} else {
final Resolved iResolved = resolveIndexPatterns(original);
if(log.isTraceEnabled()) {
log.trace("Resolved patterns {} for {} ({}) to {}", original, localRequest.getClass().getSimpleName(), request.getClass().getSimpleName(), iResolved);
}
resolvedBuilder.add(iResolved);
resolvedBuilder.addTypes(resolveTypes(localRequest));
}
return IndicesProvider.NOOP;
}
}, false);
if(log.isTraceEnabled()) {
log.trace("Finally resolved for {}: {}", request.getClass().getSimpleName(), resolvedBuilder.build());
}
if(returnEmpty.get()) {
return Resolved._EMPTY;
}
return resolvedBuilder.build();
}
private Tuple<Boolean, String[]> handleCcs(final IndicesRequest.Replaceable request) {
Boolean modified = Boolean.FALSE;
String[] localIndices = request.indices();
final RemoteClusterService remoteClusterService = OpenDistroSecurityPlugin.GuiceHolder.getRemoteClusterService();
// handle CCS
// TODO how to handle aliases with CCS??
if (remoteClusterService.isCrossClusterSearchEnabled() && (request instanceof FieldCapabilitiesRequest || request instanceof SearchRequest)) {
IndicesRequest.Replaceable searchRequest = request;
final Map<String, OriginalIndices> remoteClusterIndices = OpenDistroSecurityPlugin.GuiceHolder.getRemoteClusterService().groupIndices(
searchRequest.indicesOptions(), searchRequest.indices(), idx -> resolver.hasIndexOrAlias(idx, clusterService.state()));
assert remoteClusterIndices.size() > 0:"Remote cluster size must not be zero";
// check permissions?
if (log.isDebugEnabled()) {
log.debug("CCS case, original indices: " + Arrays.toString(localIndices));
log.debug("remoteClusterIndices ({}): {}", remoteClusterIndices.size(), remoteClusterIndices);
}
final OriginalIndices originalLocalIndices = remoteClusterIndices.get(RemoteClusterAware.LOCAL_CLUSTER_GROUP_KEY);
if(originalLocalIndices == null) {
localIndices = null;
} else {
localIndices = originalLocalIndices.indices();
}
modified = Boolean.TRUE;
if (log.isDebugEnabled()) {
log.debug("remoteClusterIndices keys: " + remoteClusterIndices.keySet() + " // remoteClusterIndices: "
+ remoteClusterIndices);
log.debug("modified local indices: " + Arrays.toString(localIndices));
}
}
return new Tuple<Boolean, String[]>(modified, localIndices);
}
public final static class Resolved implements Serializable, Writeable {
private static final Set<String> All_SET = Sets.newHashSet("*");
private static final long serialVersionUID = 1L;
public final static Resolved _ALL = new Resolved(All_SET, All_SET, All_SET, All_SET);
public final static Resolved _EMPTY = new Builder().build();
private final Set<String> aliases;
private final Set<String> indices;
private final Set<String> allIndices;
private final Set<String> types;
private Resolved(final Set<String> aliases, final Set<String> indices, final Set<String> allIndices, final Set<String> types) {
super();
this.aliases = aliases;
this.indices = indices;
this.allIndices = allIndices;
this.types = types;
if(!aliases.isEmpty() || !indices.isEmpty() || !allIndices.isEmpty()) {
if(types.isEmpty()) {
throw new ElasticsearchException("Empty types for non-empty indices or aliases");
}
}
}
public boolean isAll() {
return aliases.contains("*") && indices.contains("*") && allIndices.contains("*") && types.contains("*");
}
public boolean isEmpty() {
return aliases.isEmpty() && indices.isEmpty() && allIndices.isEmpty() && types.isEmpty();
}
public Set<String> getAliases() {
return Collections.unmodifiableSet(aliases);
}
public Set<String> getIndices() {
return Collections.unmodifiableSet(indices);
}
public Set<String> getAllIndices() {
return Collections.unmodifiableSet(allIndices);
}
public Set<String> getTypes() {
return Collections.unmodifiableSet(types);
}
//TODO equals and hashcode??
@Override
public String toString() {
return "Resolved [aliases=" + aliases + ", indices=" + indices + ", allIndices=" + allIndices + ", types=" + types
+ ", isAll()=" + isAll() + ", isEmpty()=" + isEmpty() + "]";
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((aliases == null) ? 0 : aliases.hashCode());
result = prime * result + ((allIndices == null) ? 0 : allIndices.hashCode());
result = prime * result + ((indices == null) ? 0 : indices.hashCode());
result = prime * result + ((types == null) ? 0 : types.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
Resolved other = (Resolved) obj;
if (aliases == null) {
if (other.aliases != null)
return false;
} else if (!aliases.equals(other.aliases))
return false;
if (allIndices == null) {
if (other.allIndices != null)
return false;
} else if (!allIndices.equals(other.allIndices))
return false;
if (indices == null) {
if (other.indices != null)
return false;
} else if (!indices.equals(other.indices))
return false;
if (types == null) {
if (other.types != null)
return false;
} else if (!types.equals(other.types))
return false;
return true;
}
private static class Builder {
final Set<String> aliases = new HashSet<String>();
final Set<String> indices = new HashSet<String>();
final Set<String> allIndices = new HashSet<String>();
final Set<String> types = new HashSet<String>();
public Builder() {
this(null, null, null, null);
}
public Builder(Collection<String> aliases, Collection<String> indices, Collection<String> allIndices, Collection<String> types) {
if(aliases != null) {
this.aliases.addAll(aliases);
}
if(indices != null) {
this.indices.addAll(indices);
}
if(allIndices != null) {
this.allIndices.addAll(allIndices);
}
if(types != null) {
this.types.addAll(types);
}
}
public Builder addTypes(Collection<String> types) {
if(types != null && types.size() > 0) {
if(this.types.contains("*")) {
this.types.remove("*");
}
this.types.addAll(types);
}
return this;
}
public Builder add(Resolved r) {
this.aliases.addAll(r.aliases);
this.indices.addAll(r.indices);
this.allIndices.addAll(r.allIndices);
addTypes(r.types);
return this;
}
public Resolved build() {
if(types.isEmpty()) {
types.add("*");
}
return new Resolved(new HashSet<String>(aliases), new HashSet<String>(indices), new HashSet<String>(allIndices), new HashSet<String>(types));
}
}
public Resolved(final StreamInput in) throws IOException {
aliases = new HashSet<String>(in.readList(StreamInput::readString));
indices = new HashSet<String>(in.readList(StreamInput::readString));
allIndices = new HashSet<String>(in.readList(StreamInput::readString));
types = new HashSet<String>(in.readList(StreamInput::readString));
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeStringList(new ArrayList<>(aliases));
out.writeStringList(new ArrayList<>(indices));
out.writeStringList(new ArrayList<>(allIndices));
out.writeStringList(new ArrayList<>(types));
}
}
private List<String> renamedIndices(final RestoreSnapshotRequest request, final List<String> filteredIndices) {
final List<String> renamedIndices = new ArrayList<>();
for (final String index : filteredIndices) {
String renamedIndex = index;
if (request.renameReplacement() != null && request.renamePattern() != null) {
renamedIndex = index.replaceAll(request.renamePattern(), request.renameReplacement());
}
renamedIndices.add(renamedIndex);
}
return renamedIndices;
}
//--
@FunctionalInterface
public interface IndicesProvider {
public static final String[] NOOP = new String[0];
String[] provide(String[] original, Object request, boolean supportsReplace);
}
private boolean checkIndices(Object request, String[] indices, boolean needsToBeSizeOne, boolean allowEmpty) {
if(indices == IndicesProvider.NOOP) {
return false;
}
if(!allowEmpty && (indices == null || indices.length == 0)) {
if(log.isTraceEnabled() && request != null) {
log.trace("Null or empty indices for "+request.getClass().getName());
}
return false;
}
if(!allowEmpty && needsToBeSizeOne && indices.length != 1) {
if(log.isTraceEnabled() && request != null) {
log.trace("Too many indices for "+request.getClass().getName());
}
return false;
}
for (int i = 0; i < indices.length; i++) {
final String index = indices[i];
if(index == null || index.isEmpty()) {
//not allowed
if(log.isTraceEnabled() && request != null) {
log.trace("At least one null or empty index for "+request.getClass().getName());
}
return false;
}
}
return true;
}
/**
 * Resolves or replaces all indices referenced by the given request.
 * @param request the action request to inspect
 * @param provider callback that receives the original indices and may supply replacements
 * @param allowEmptyIndices whether an empty index list is acceptable
 * @return true if the request type was handled
 */
@SuppressWarnings("rawtypes")
private boolean getOrReplaceAllIndices(final Object request, final IndicesProvider provider, boolean allowEmptyIndices) {
if(log.isTraceEnabled()) {
log.trace("getOrReplaceAllIndices() for "+request.getClass());
}
boolean result = true;
if (request instanceof BulkRequest) {
for (DocWriteRequest ar : ((BulkRequest) request).requests()) {
result = getOrReplaceAllIndices(ar, provider, false) && result;
}
} else if (request instanceof MultiGetRequest) {
for (ListIterator<Item> it = ((MultiGetRequest) request).getItems().listIterator(); it.hasNext();){
Item item = it.next();
result = getOrReplaceAllIndices(item, provider, false) && result;
/*if(item.index() == null || item.indices() == null || item.indices().length == 0) {
it.remove();
}*/
}
} else if (request instanceof MultiSearchRequest) {
for (ListIterator<SearchRequest> it = ((MultiSearchRequest) request).requests().listIterator(); it.hasNext();) {
SearchRequest ar = it.next();
result = getOrReplaceAllIndices(ar, provider, false) && result;
/*if(ar.indices() == null || ar.indices().length == 0) {
it.remove();
}*/
}
} else if (request instanceof MultiTermVectorsRequest) {
for (ActionRequest ar : (Iterable<TermVectorsRequest>) () -> ((MultiTermVectorsRequest) request).iterator()) {
result = getOrReplaceAllIndices(ar, provider, false) && result;
}
} else if(request instanceof PutMappingRequest) {
PutMappingRequest pmr = (PutMappingRequest) request;
Index concreteIndex = pmr.getConcreteIndex();
if(concreteIndex != null && (pmr.indices() == null || pmr.indices().length == 0)) {
String[] newIndices = provider.provide(new String[]{concreteIndex.getName()}, request, true);
if(checkIndices(request, newIndices, true, allowEmptyIndices) == false) {
return false;
}
((PutMappingRequest) request).indices(newIndices);
((PutMappingRequest) request).setConcreteIndex(null);
} else {
String[] newIndices = provider.provide(((PutMappingRequest) request).indices(), request, true);
if(checkIndices(request, newIndices, false, allowEmptyIndices) == false) {
return false;
}
((PutMappingRequest) request).indices(newIndices);
}
} else if(request instanceof RestoreSnapshotRequest) {
if(clusterInfoHolder.isLocalNodeElectedMaster() == Boolean.FALSE) {
return true;
}
final RestoreSnapshotRequest restoreRequest = (RestoreSnapshotRequest) request;
final SnapshotInfo snapshotInfo = SnapshotRestoreHelper.getSnapshotInfo(restoreRequest);
if (snapshotInfo == null) {
log.warn("snapshot repository '" + restoreRequest.repository() + "', snapshot '" + restoreRequest.snapshot() + "' not found");
provider.provide(new String[]{"*"}, request, false);
} else {
final List<String> requestedResolvedIndices = SnapshotUtils.filterIndices(snapshotInfo.indices(), restoreRequest.indices(), restoreRequest.indicesOptions());
final List<String> renamedTargetIndices = renamedIndices(restoreRequest, requestedResolvedIndices);
//final Set<String> indices = new HashSet<>(requestedResolvedIndices);
//indices.addAll(renamedTargetIndices);
if(log.isDebugEnabled()) {
log.debug("snapshot: {} contains these indices: {}", snapshotInfo.snapshotId().getName(), renamedTargetIndices);
}
provider.provide(renamedTargetIndices.toArray(new String[0]), request, false);
}
} else if (request instanceof IndicesAliasesRequest) {
for(AliasActions ar: ((IndicesAliasesRequest) request).getAliasActions()) {
result = getOrReplaceAllIndices(ar, provider, false) && result;
}
} else if (request instanceof DeleteRequest) {
String[] newIndices = provider.provide(((DeleteRequest) request).indices(), request, true);
if(checkIndices(request, newIndices, true, allowEmptyIndices) == false) {
return false;
}
((DeleteRequest) request).index(newIndices.length!=1?null:newIndices[0]);
} else if (request instanceof UpdateRequest) {
String[] newIndices = provider.provide(((UpdateRequest) request).indices(), request, true);
if(checkIndices(request, newIndices, true, allowEmptyIndices) == false) {
return false;
}
((UpdateRequest) request).index(newIndices.length!=1?null:newIndices[0]);
} else if (request instanceof SingleShardRequest) {
final SingleShardRequest<?> gr = (SingleShardRequest<?>) request;
final String[] indices = gr.indices();
final String index = gr.index();
final List<String> indicesL = new ArrayList<String>();
if (index != null) {
indicesL.add(index);
}
if (indices != null && indices.length > 0) {
indicesL.addAll(Arrays.asList(indices));
}
String[] newIndices = provider.provide(indicesL.toArray(new String[0]), request, true);
if(checkIndices(request, newIndices, true, allowEmptyIndices) == false) {
return false;
}
((SingleShardRequest) request).index(newIndices.length!=1?null:newIndices[0]);
} else if (request instanceof IndexRequest) {
String[] newIndices = provider.provide(((IndexRequest) request).indices(), request, true);
if(checkIndices(request, newIndices, true, allowEmptyIndices) == false) {
return false;
}
((IndexRequest) request).index(newIndices.length!=1?null:newIndices[0]);
} else if (request instanceof Replaceable) {
String[] newIndices = provider.provide(((Replaceable) request).indices(), request, true);
if(checkIndices(request, newIndices, false, allowEmptyIndices) == false) {
return false;
}
((Replaceable) request).indices(newIndices);
} else if (request instanceof BulkShardRequest) {
provider.provide(((ReplicationRequest) request).indices(), request, false);
//replace not supported?
} else if (request instanceof ReplicationRequest) {
String[] newIndices = provider.provide(((ReplicationRequest) request).indices(), request, true);
if(checkIndices(request, newIndices, true, allowEmptyIndices) == false) {
return false;
}
((ReplicationRequest) request).index(newIndices.length!=1?null:newIndices[0]);
} else if (request instanceof MultiGetRequest.Item) {
String[] newIndices = provider.provide(((MultiGetRequest.Item) request).indices(), request, true);
if(checkIndices(request, newIndices, true, allowEmptyIndices) == false) {
return false;
}
((MultiGetRequest.Item) request).index(newIndices.length!=1?null:newIndices[0]);
} else if (request instanceof CreateIndexRequest) {
String[] newIndices = provider.provide(((CreateIndexRequest) request).indices(), request, true);
if(checkIndices(request, newIndices, true, allowEmptyIndices) == false) {
return false;
}
((CreateIndexRequest) request).index(newIndices.length!=1?null:newIndices[0]);
} else if (request instanceof ReindexRequest) {
result = getOrReplaceAllIndices(((ReindexRequest) request).getDestination(), provider, false) && result;
result = getOrReplaceAllIndices(((ReindexRequest) request).getSearchRequest(), provider, false) && result;
} else if (request instanceof BaseNodesRequest) {
//do nothing
} else if (request instanceof MainRequest) {
//do nothing
} else if (request instanceof ClearScrollRequest) {
//do nothing
} else if (request instanceof SearchScrollRequest) {
//do nothing
} else {
if(log.isDebugEnabled()) {
log.debug(request.getClass() + " not supported (it is likely not an indices-related request)");
}
result = false;
}
return result;
}
}


@ -0,0 +1,119 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.rest;
import static org.elasticsearch.rest.RestRequest.Method.GET;
import static org.elasticsearch.rest.RestRequest.Method.POST;
import java.io.IOException;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.BytesRestResponse;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.privileges.PrivilegesEvaluator;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class KibanaInfoAction extends BaseRestHandler {
private final Logger log = LogManager.getLogger(this.getClass());
private final PrivilegesEvaluator evaluator;
private final ThreadContext threadContext;
public KibanaInfoAction(final Settings settings, final RestController controller, final PrivilegesEvaluator evaluator, final ThreadPool threadPool) {
super(settings);
this.threadContext = threadPool.getThreadContext();
this.evaluator = evaluator;
controller.registerHandler(GET, "/_opendistro/_security/kibanainfo", this);
controller.registerHandler(POST, "/_opendistro/_security/kibanainfo", this);
}
@Override
protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {
return new RestChannelConsumer() {
@Override
public void accept(RestChannel channel) throws Exception {
XContentBuilder builder = channel.newBuilder(); //NOSONAR
BytesRestResponse response = null;
try {
final User user = (User)threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_USER);
final TransportAddress remoteAddress = (TransportAddress) threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_REMOTE_ADDRESS);
builder.startObject();
builder.field("user_name", user==null?null:user.getName());
builder.field("not_fail_on_forbidden_enabled", evaluator.notFailOnForbiddenEnabled());
builder.field("kibana_mt_enabled", evaluator.multitenancyEnabled());
builder.field("kibana_index", evaluator.kibanaIndex());
builder.field("kibana_server_user", evaluator.kibanaServerUsername());
//builder.field("kibana_index_readonly", evaluator.kibanaIndexReadonly(user, remoteAddress));
builder.endObject();
response = new BytesRestResponse(RestStatus.OK, builder);
} catch (final Exception e1) {
log.error(e1.toString(),e1);
builder = channel.newBuilder(); //NOSONAR
builder.startObject();
builder.field("error", e1.toString());
builder.endObject();
response = new BytesRestResponse(RestStatus.INTERNAL_SERVER_ERROR, builder);
} finally {
if(builder != null) {
builder.close();
}
}
channel.sendResponse(response);
}
};
}
@Override
public String getName() {
return "Kibana Info Action";
}
}
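KibanaInfoAction assembles a small JSON document describing the Kibana-related security settings for the calling user. A minimal Python sketch of the response shape (field values here are illustrative; the real handler reads them from the PrivilegesEvaluator and the thread context, and the function name is ours):

```python
import json

def kibana_info_response(user_name, not_fail_on_forbidden, mt_enabled,
                         kibana_index, kibana_server_user):
    # Shape of the JSON document KibanaInfoAction builds; the real handler
    # pulls these values from the PrivilegesEvaluator and thread context.
    return {
        "user_name": user_name,  # None when no authenticated user is present
        "not_fail_on_forbidden_enabled": not_fail_on_forbidden,
        "kibana_mt_enabled": mt_enabled,
        "kibana_index": kibana_index,
        "kibana_server_user": kibana_server_user,
    }

print(json.dumps(kibana_info_response("jdoe", False, True, ".kibana", "kibanaserver")))
```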

109
src/main/java/com/amazon/opendistroforelasticsearch/security/rest/OpenDistroSecurityHealthAction.java Normal file
View File

@ -0,0 +1,109 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.rest;
import static org.elasticsearch.rest.RestRequest.Method.GET;
import static org.elasticsearch.rest.RestRequest.Method.POST;
import java.io.IOException;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.BytesRestResponse;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestStatus;
import com.amazon.opendistroforelasticsearch.security.auth.BackendRegistry;
public class OpenDistroSecurityHealthAction extends BaseRestHandler {
private final BackendRegistry registry;
public OpenDistroSecurityHealthAction(final Settings settings, final RestController controller, final BackendRegistry registry) {
super(settings);
this.registry = registry;
controller.registerHandler(GET, "/_opendistro/_security/health", this);
controller.registerHandler(POST, "/_opendistro/_security/health", this);
}
@Override
protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {
return new RestChannelConsumer() {
final String mode = request.param("mode","strict");
@Override
public void accept(RestChannel channel) throws Exception {
XContentBuilder builder = channel.newBuilder();
RestStatus restStatus = RestStatus.OK;
BytesRestResponse response = null;
try {
String status = "UP";
String message = null;
builder.startObject();
if ("strict".equalsIgnoreCase(mode) && registry.isInitialized() == false) {
status = "DOWN";
message = "Not initialized";
restStatus = RestStatus.SERVICE_UNAVAILABLE;
}
builder.field("message", message);
builder.field("mode", mode);
builder.field("status", status);
builder.endObject();
response = new BytesRestResponse(restStatus, builder);
} finally {
builder.close();
}
channel.sendResponse(response);
}
};
}
@Override
public String getName() {
return "Open Distro Security Health Check";
}
}
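The health endpoint's behavior reduces to: in the default "strict" mode (matched case-insensitively) it reports DOWN with HTTP 503 until the security configuration index is initialized; any other `mode` value reports UP unconditionally. A small Python sketch of that decision (the function name is ours, not part of the plugin):

```python
from http import HTTPStatus

def security_health(mode, initialized):
    # Decision logic of OpenDistroSecurityHealthAction: only "strict" mode
    # (case-insensitive) can report DOWN, and only while the security
    # configuration has not yet been initialized.
    if mode.lower() == "strict" and not initialized:
        return ({"message": "Not initialized", "mode": mode, "status": "DOWN"},
                HTTPStatus.SERVICE_UNAVAILABLE)
    return ({"message": None, "mode": mode, "status": "UP"}, HTTPStatus.OK)
```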

143
src/main/java/com/amazon/opendistroforelasticsearch/security/rest/OpenDistroSecurityInfoAction.java Normal file
View File

@ -0,0 +1,143 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.rest;
import static org.elasticsearch.rest.RestRequest.Method.GET;
import static org.elasticsearch.rest.RestRequest.Method.POST;
import java.io.IOException;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;
import java.security.cert.X509Certificate;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.lucene.util.RamUsageEstimator;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.BytesRestResponse;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.privileges.PrivilegesEvaluator;
import com.amazon.opendistroforelasticsearch.security.support.Base64Helper;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class OpenDistroSecurityInfoAction extends BaseRestHandler {
private final Logger log = LogManager.getLogger(this.getClass());
private final PrivilegesEvaluator evaluator;
private final ThreadContext threadContext;
public OpenDistroSecurityInfoAction(final Settings settings, final RestController controller, final PrivilegesEvaluator evaluator, final ThreadPool threadPool) {
super(settings);
this.threadContext = threadPool.getThreadContext();
this.evaluator = evaluator;
controller.registerHandler(GET, "/_opendistro/_security/authinfo", this);
controller.registerHandler(POST, "/_opendistro/_security/authinfo", this);
}
@Override
protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {
return new RestChannelConsumer() {
@Override
public void accept(RestChannel channel) throws Exception {
XContentBuilder builder = channel.newBuilder(); //NOSONAR
BytesRestResponse response = null;
try {
final boolean verbose = request.paramAsBoolean("verbose", false);
final X509Certificate[] certs = threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_SSL_PEER_CERTIFICATES);
final User user = (User)threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_USER);
final TransportAddress remoteAddress = (TransportAddress) threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_REMOTE_ADDRESS);
builder.startObject();
builder.field("user", user==null?null:user.toString());
builder.field("user_name", user==null?null:user.getName());
builder.field("user_requested_tenant", user==null?null:user.getRequestedTenant());
builder.field("remote_address", remoteAddress);
builder.field("backend_roles", user==null?null:user.getRoles());
builder.field("custom_attribute_names", user==null?null:user.getCustomAttributesMap().keySet());
builder.field("roles", evaluator.mapSecurityRoles(user, remoteAddress));
builder.field("tenants", evaluator.mapTenants(user, remoteAddress));
builder.field("principal", (String)threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_SSL_PRINCIPAL));
builder.field("peer_certificates", certs == null ? "0" : String.valueOf(certs.length));
builder.field("sso_logout_url", (String)threadContext.getTransient(ConfigConstants.SSO_LOGOUT_URL));
if(user != null && verbose) {
try {
builder.field("size_of_user", RamUsageEstimator.humanReadableUnits(Base64Helper.serializeObject(user).length()));
builder.field("size_of_custom_attributes", RamUsageEstimator.humanReadableUnits(Base64Helper.serializeObject((Serializable) user.getCustomAttributesMap()).getBytes(StandardCharsets.UTF_8).length));
builder.field("size_of_backendroles", RamUsageEstimator.humanReadableUnits(Base64Helper.serializeObject((Serializable)user.getRoles()).getBytes(StandardCharsets.UTF_8).length));
} catch (Throwable e) {
//ignore
}
}
builder.endObject();
response = new BytesRestResponse(RestStatus.OK, builder);
} catch (final Exception e1) {
log.error(e1.toString(),e1);
builder = channel.newBuilder(); //NOSONAR
builder.startObject();
builder.field("error", e1.toString());
builder.endObject();
response = new BytesRestResponse(RestStatus.INTERNAL_SERVER_ERROR, builder);
} finally {
if(builder != null) {
builder.close();
}
}
channel.sendResponse(response);
}
};
}
@Override
public String getName() {
return "Open Distro Security Info Action";
}
}
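The optional `verbose=true` output above reports the approximate serialized size of the user object and its attributes. A rough Python analogue of that idea (pickle stands in for the plugin's Java serialization via Base64Helper.serializeObject, and the unit formatting only approximates Lucene's RamUsageEstimator.humanReadableUnits):

```python
import pickle

def human_readable_units(n_bytes):
    # Rough analogue of Lucene's RamUsageEstimator.humanReadableUnits:
    # scale by powers of 1024 (Lucene's exact formatting differs slightly).
    if n_bytes < 1024:
        return f"{n_bytes} bytes"
    for unit in ("KB", "MB", "GB"):
        n_bytes /= 1024
        if n_bytes < 1024 or unit == "GB":
            return f"{n_bytes:.1f} {unit}"

def size_of_user(user):
    # Stand-in for the "size_of_user" field: serialize the user object and
    # report the humanized byte count. The plugin uses Java serialization,
    # not pickle; this is an illustration of the reported quantity only.
    return human_readable_units(len(pickle.dumps(user)))
```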

167
src/main/java/com/amazon/opendistroforelasticsearch/security/rest/TenantInfoAction.java Normal file
View File

@ -0,0 +1,167 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.rest;
import static org.elasticsearch.rest.RestRequest.Method.GET;
import static org.elasticsearch.rest.RestRequest.Method.POST;
import java.io.IOException;
import java.util.SortedMap;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.client.node.NodeClient;
import org.elasticsearch.cluster.metadata.AliasOrIndex;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.rest.BaseRestHandler;
import org.elasticsearch.rest.BytesRestResponse;
import org.elasticsearch.rest.RestChannel;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.configuration.AdminDNs;
import com.amazon.opendistroforelasticsearch.security.privileges.PrivilegesEvaluator;
import com.amazon.opendistroforelasticsearch.security.support.ConfigConstants;
import com.amazon.opendistroforelasticsearch.security.user.User;
public class TenantInfoAction extends BaseRestHandler {
private final Logger log = LogManager.getLogger(this.getClass());
private final PrivilegesEvaluator evaluator;
private final ThreadContext threadContext;
private final ClusterService clusterService;
private final AdminDNs adminDns;
public TenantInfoAction(final Settings settings, final RestController controller,
final PrivilegesEvaluator evaluator, final ThreadPool threadPool, final ClusterService clusterService, final AdminDNs adminDns) {
super(settings);
this.threadContext = threadPool.getThreadContext();
this.evaluator = evaluator;
this.clusterService = clusterService;
this.adminDns = adminDns;
controller.registerHandler(GET, "/_opendistro/_security/tenantinfo", this);
controller.registerHandler(POST, "/_opendistro/_security/tenantinfo", this);
}
@Override
protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException {
return new RestChannelConsumer() {
@Override
public void accept(RestChannel channel) throws Exception {
XContentBuilder builder = channel.newBuilder(); //NOSONAR
BytesRestResponse response = null;
try {
final User user = (User)threadContext.getTransient(ConfigConstants.OPENDISTRO_SECURITY_USER);
//only allowed for admins or the kibanaserveruser
if (user == null
        || (!user.getName().equals(evaluator.kibanaServerUsername()) && !adminDns.isAdmin(user))) {
response = new BytesRestResponse(RestStatus.FORBIDDEN,"");
} else {
builder.startObject();
final SortedMap<String, AliasOrIndex> lookup = clusterService.state().metaData().getAliasAndIndexLookup();
for(final String indexOrAlias: lookup.keySet()) {
final String tenant = tenantNameForIndex(indexOrAlias);
if(tenant != null) {
builder.field(indexOrAlias, tenant);
}
}
builder.endObject();
response = new BytesRestResponse(RestStatus.OK, builder);
}
} catch (final Exception e1) {
log.error(e1.toString(),e1);
builder = channel.newBuilder(); //NOSONAR
builder.startObject();
builder.field("error", e1.toString());
builder.endObject();
response = new BytesRestResponse(RestStatus.INTERNAL_SERVER_ERROR, builder);
} finally {
if(builder != null) {
builder.close();
}
}
channel.sendResponse(response);
}
};
}
private String tenantNameForIndex(String index) {
String[] indexParts;
if(index == null
|| (indexParts = index.split("_")).length != 3
) {
return null;
}
if(!indexParts[0].equals(evaluator.kibanaIndex())) {
return null;
}
try {
final int expectedHash = Integer.parseInt(indexParts[1]);
final String sanitizedName = indexParts[2];
for(String tenant: evaluator.getAllConfiguredTenantNames()) {
if(tenant.hashCode() == expectedHash && sanitizedName.equals(tenant.toLowerCase().replaceAll("[^a-z0-9]+",""))) {
return tenant;
}
}
return "__private__";
} catch (NumberFormatException e) {
log.warn("Index {} looks like a Security tenant index, but its hash code cannot be parsed; ignoring it.", index);
return null;
}
}
@Override
public String getName() {
return "Tenant Info Action";
}
}
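`tenantNameForIndex` reverses the tenant index naming scheme: a tenant named T is stored in an index called `<kibana_index>_<T.hashCode()>_<T lowercased with non-[a-z0-9] stripped>`. A Python sketch of both directions, with Java's `String.hashCode` reimplemented (helper names are ours; BMP-only tenant names assumed):

```python
import re

def java_string_hash(s):
    # Java's String.hashCode: h = 31*h + ch over UTF-16 code units,
    # wrapped to a signed 32-bit int (BMP-only strings assumed here).
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def tenant_index_name(kibana_index, tenant):
    # The naming scheme that tenantNameForIndex() reverses:
    # <kibana_index>_<hashCode(tenant)>_<tenant lowercased, non-[a-z0-9] stripped>
    sanitized = re.sub(r"[^a-z0-9]+", "", tenant.lower())
    return f"{kibana_index}_{java_string_hash(tenant)}_{sanitized}"

def tenant_name_for_index(index, kibana_index, configured_tenants):
    # Python sketch of TenantInfoAction.tenantNameForIndex()
    parts = index.split("_")
    if len(parts) != 3 or parts[0] != kibana_index:
        return None
    try:
        expected_hash = int(parts[1])
    except ValueError:
        return None  # looks like a tenant index, but the hash is unparseable
    for tenant in configured_tenants:
        if (java_string_hash(tenant) == expected_hash
                and parts[2] == re.sub(r"[^a-z0-9]+", "", tenant.lower())):
            return tenant
    return "__private__"
```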

830
src/main/java/com/amazon/opendistroforelasticsearch/security/securityconf/ConfigModel.java Normal file
View File

@ -0,0 +1,830 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.securityconf;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Objects;
import java.util.Set;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.settings.Settings;
import com.amazon.opendistroforelasticsearch.security.configuration.ActionGroupHolder;
import com.amazon.opendistroforelasticsearch.security.configuration.ConfigurationRepository;
import com.amazon.opendistroforelasticsearch.security.resolver.IndexResolverReplacer.Resolved;
import com.amazon.opendistroforelasticsearch.security.support.WildcardMatcher;
import com.amazon.opendistroforelasticsearch.security.user.User;
import com.google.common.base.Joiner;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Iterables;
import com.google.common.collect.Sets;
public class ConfigModel {
protected final Logger log = LogManager.getLogger(this.getClass());
private static final Set<String> IGNORED_TYPES = ImmutableSet.of("_dls_", "_fls_","_masked_fields_");
private final ActionGroupHolder ah;
private final ConfigurationRepository configurationRepository;
public ConfigModel(final ActionGroupHolder ah,
final ConfigurationRepository configurationRepository) {
super();
this.ah = ah;
this.configurationRepository = configurationRepository;
}
public SecurityRoles load() {
final Settings settings = configurationRepository.getConfiguration("roles", false);
SecurityRoles _securityRoles = new SecurityRoles();
Set<String> securityRoles = settings.names();
for(String securityRole: securityRoles) {
SecurityRole _securityRole = new SecurityRole(securityRole);
final Settings securityRoleSettings = settings.getByPrefix(securityRole);
if (securityRoleSettings.names().isEmpty()) {
continue;
}
final Set<String> permittedClusterActions = ah.resolvedActions(securityRoleSettings.getAsList(".cluster", Collections.emptyList()));
_securityRole.addClusterPerms(permittedClusterActions);
Settings tenants = settings.getByPrefix(securityRole+".tenants.");
if(tenants != null) {
for(String tenant: tenants.names()) {
//if(tenant.equals(user.getName())) {
// continue;
//}
if("RW".equalsIgnoreCase(tenants.get(tenant, "RO"))) {
_securityRole.addTenant(new Tenant(tenant, true));
} else {
_securityRole.addTenant(new Tenant(tenant, false));
//if(_securityRole.tenants.stream().filter(t->t.tenant.equals(tenant)).count() > 0) { //RW outperforms RO
// _securityRole.addTenant(new Tenant(tenant, false));
//}
}
}
}
final Map<String, Settings> permittedAliasesIndices = securityRoleSettings.getGroups(".indices");
for (final String permittedAliasesIndex : permittedAliasesIndices.keySet()) {
final String resolvedRole = securityRole;
final String indexPattern = permittedAliasesIndex;
final String dls = settings.get(resolvedRole+".indices."+indexPattern+"._dls_");
final List<String> fls = settings.getAsList(resolvedRole+".indices."+indexPattern+"._fls_");
final List<String> maskedFields = settings.getAsList(resolvedRole+".indices."+indexPattern+"._masked_fields_");
IndexPattern _indexPattern = new IndexPattern(indexPattern);
_indexPattern.setDlsQuery(dls);
_indexPattern.addFlsFields(fls);
_indexPattern.addMaskedFields(maskedFields);
for(String type: permittedAliasesIndices.get(indexPattern).names()) {
if(IGNORED_TYPES.contains(type)) {
continue;
}
TypePerm typePerm = new TypePerm(type);
final List<String> perms = settings.getAsList(resolvedRole+".indices."+indexPattern+"."+type);
typePerm.addPerms(ah.resolvedActions(perms));
_indexPattern.addTypePerms(typePerm);
}
_securityRole.addIndexPattern(_indexPattern);
}
_securityRoles.addSecurityRole(_securityRole);
}
return _securityRoles;
}
//beans
public static class SecurityRoles {
protected final Logger log = LogManager.getLogger(this.getClass());
final Set<SecurityRole> roles = new HashSet<>(100);
private SecurityRoles() {
}
private SecurityRoles addSecurityRole(SecurityRole securityRole) {
if(securityRole != null) {
this.roles.add(securityRole);
}
return this;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((roles == null) ? 0 : roles.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
SecurityRoles other = (SecurityRoles) obj;
if (roles == null) {
if (other.roles != null)
return false;
} else if (!roles.equals(other.roles))
return false;
return true;
}
@Override
public String toString() {
return "roles=" + roles;
}
public Set<SecurityRole> getRoles() {
return Collections.unmodifiableSet(roles);
}
public SecurityRoles filter(Set<String> keep) {
final SecurityRoles retVal = new SecurityRoles();
for(SecurityRole sgr: roles) {
if(keep.contains(sgr.getName())) {
retVal.addSecurityRole(sgr);
}
}
return retVal;
}
public Map<String,Set<String>> getMaskedFields(User user, IndexNameExpressionResolver resolver, ClusterService cs) {
final Map<String,Set<String>> maskedFieldsMap = new HashMap<String, Set<String>>();
for(SecurityRole sgr: roles) {
for(IndexPattern ip: sgr.getIpatterns()) {
final Set<String> maskedFields = ip.getMaskedFields();
final String indexPattern = ip.getUnresolvedIndexPattern(user);
if (maskedFields != null && !maskedFields.isEmpty()) {
    final String[] concreteIndices = ip.getResolvedIndexPattern(user, resolver, cs);
    maskedFieldsMap.computeIfAbsent(indexPattern, k -> new HashSet<>()).addAll(maskedFields);
    for (final String ci : concreteIndices) {
        maskedFieldsMap.computeIfAbsent(ci, k -> new HashSet<>()).addAll(maskedFields);
    }
}
}
}
return maskedFieldsMap;
}
public Tuple<Map<String,Set<String>>,Map<String,Set<String>>> getDlsFls(User user, IndexNameExpressionResolver resolver, ClusterService cs) {
final Map<String,Set<String>> dlsQueries = new HashMap<String, Set<String>>();
final Map<String,Set<String>> flsFields = new HashMap<String, Set<String>>();
for(SecurityRole sgr: roles) {
for(IndexPattern ip: sgr.getIpatterns()) {
final Set<String> fls = ip.getFls();
final String dls = ip.getDlsQuery(user);
final String indexPattern = ip.getUnresolvedIndexPattern(user);
String[] concreteIndices = new String[0];
if ((dls != null && dls.length() > 0) || (fls != null && !fls.isEmpty())) {
    concreteIndices = ip.getResolvedIndexPattern(user, resolver, cs);
}
if (dls != null && dls.length() > 0) {
    dlsQueries.computeIfAbsent(indexPattern, k -> new HashSet<>()).add(dls);
    for (final String ci : concreteIndices) {
        dlsQueries.computeIfAbsent(ci, k -> new HashSet<>()).add(dls);
    }
}
if (fls != null && !fls.isEmpty()) {
    flsFields.computeIfAbsent(indexPattern, k -> new HashSet<>()).addAll(fls);
    for (final String ci : concreteIndices) {
        flsFields.computeIfAbsent(ci, k -> new HashSet<>()).addAll(fls);
    }
}
}
}
return new Tuple<Map<String,Set<String>>, Map<String,Set<String>>>(dlsQueries, flsFields);
}
//kibana special only
public Set<String> getAllPermittedIndices(User user, String[] actions, IndexNameExpressionResolver resolver, ClusterService cs) {
Set<String> retVal = new HashSet<>();
for(SecurityRole sgr: roles) {
retVal.addAll(sgr.getAllResolvedPermittedIndices(Resolved._ALL, user, actions, resolver, cs));
}
return Collections.unmodifiableSet(retVal);
}
//dnfof only
public Set<String> reduce(Resolved resolved, User user, String[] actions, IndexNameExpressionResolver resolver, ClusterService cs) {
Set<String> retVal = new HashSet<>();
for(SecurityRole sgr: roles) {
retVal.addAll(sgr.getAllResolvedPermittedIndices(resolved, user, actions, resolver, cs));
}
if(log.isDebugEnabled()) {
log.debug("Reduced requested resolved indices {} to permitted indices {}.", resolved, retVal.toString());
}
return Collections.unmodifiableSet(retVal);
}
//return true on success
public boolean get(Resolved resolved, User user, String[] actions, IndexNameExpressionResolver resolver, ClusterService cs) {
for(SecurityRole sgr: roles) {
if(ConfigModel.impliesTypePerm(sgr.getIpatterns(), resolved, user, actions, resolver, cs)) {
return true;
}
}
return false;
}
public boolean impliesClusterPermissionPermission(String action) {
    return roles.stream().anyMatch(r -> r.impliesClusterPermission(action));
}
//rolespan
public boolean impliesTypePermGlobal(Resolved resolved, User user, String[] actions, IndexNameExpressionResolver resolver, ClusterService cs) {
final Set<IndexPattern> ipatterns = new HashSet<>();
roles.forEach(p -> ipatterns.addAll(p.getIpatterns()));
return ConfigModel.impliesTypePerm(ipatterns, resolved, user, actions, resolver, cs);
}
}
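The per-role cluster check used above, SecurityRole.impliesClusterPermission (defined just below), reduces to "does any configured permission pattern match the requested action". A Python sketch using glob matching (the plugin's WildcardMatcher additionally supports regex patterns written as /.../, which fnmatch does not cover; the function name is ours):

```python
from fnmatch import fnmatchcase

def implies_cluster_permission(cluster_perms, action):
    # An action is permitted when any configured pattern matches it;
    # glob-style matching stands in for the plugin's WildcardMatcher.
    return any(fnmatchcase(action, pattern) for pattern in cluster_perms)
```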
public static class SecurityRole {
private final String name;
private final Set<Tenant> tenants = new HashSet<>();
private final Set<IndexPattern> ipatterns = new HashSet<>();
private final Set<String> clusterPerms = new HashSet<>();
private SecurityRole(String name) {
super();
this.name = Objects.requireNonNull(name);
}
private boolean impliesClusterPermission(String action) {
return WildcardMatcher.matchAny(clusterPerms, action);
}
//get indices which are permitted for the given types and actions
//dnfof + kibana special only
private Set<String> getAllResolvedPermittedIndices(Resolved resolved, User user, String[] actions, IndexNameExpressionResolver resolver, ClusterService cs) {
final Set<String> retVal = new HashSet<>();
for(IndexPattern p: ipatterns) {
//what if we cannot resolve one (for create purposes)
boolean patternMatch = false;
final Set<TypePerm> tperms = p.getTypePerms();
for(TypePerm tp: tperms) {
if(WildcardMatcher.matchAny(tp.typePattern, resolved.getTypes().toArray(new String[0]))) {
patternMatch = WildcardMatcher.matchAll(tp.perms.toArray(new String[0]), actions);
}
}
if(patternMatch) {
//resolved but can contain patterns for nonexistent indices
final String[] permitted = p.getResolvedIndexPattern(user, resolver, cs); //maybe they do not exists
final Set<String> res = new HashSet<>();
if(!resolved.isAll() && !resolved.getAllIndices().contains("*") && !resolved.getAllIndices().contains("_all")) {
final Set<String> wanted = new HashSet<>(resolved.getAllIndices());
//resolved but can contain patterns for nonexistent indices
WildcardMatcher.wildcardRetainInSet(wanted, permitted);
res.addAll(wanted);
} else {
//we want all indices so just return what's permitted
//#557
final String[] allIndices = resolver.concreteIndexNames(cs.state(), IndicesOptions.lenientExpandOpen(), "*");
final Set<String> wanted = new HashSet<>(Arrays.asList(allIndices));
WildcardMatcher.wildcardRetainInSet(wanted, permitted);
res.addAll(wanted);
//res.addAll(Arrays.asList(resolver.concreteIndexNames(cs.state(), IndicesOptions.lenientExpandOpen(), permitted)));
}
retVal.addAll(res);
}
}
//all that we want and all thats permitted of them
return Collections.unmodifiableSet(retVal);
}
private SecurityRole addTenant(Tenant tenant) {
if(tenant != null) {
this.tenants.add(tenant);
}
return this;
}
private SecurityRole addIndexPattern(IndexPattern indexPattern) {
if(indexPattern != null) {
this.ipatterns.add(indexPattern);
}
return this;
}
private SecurityRole addClusterPerms(Collection<String> clusterPerms) {
if(clusterPerms != null) {
this.clusterPerms.addAll(clusterPerms);
}
return this;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((clusterPerms == null) ? 0 : clusterPerms.hashCode());
result = prime * result + ((ipatterns == null) ? 0 : ipatterns.hashCode());
result = prime * result + ((name == null) ? 0 : name.hashCode());
result = prime * result + ((tenants == null) ? 0 : tenants.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
SecurityRole other = (SecurityRole) obj;
if (clusterPerms == null) {
if (other.clusterPerms != null)
return false;
} else if (!clusterPerms.equals(other.clusterPerms))
return false;
if (ipatterns == null) {
if (other.ipatterns != null)
return false;
} else if (!ipatterns.equals(other.ipatterns))
return false;
if (name == null) {
if (other.name != null)
return false;
} else if (!name.equals(other.name))
return false;
if (tenants == null) {
if (other.tenants != null)
return false;
} else if (!tenants.equals(other.tenants))
return false;
return true;
}
@Override
public String toString() {
return System.lineSeparator()+" "+name+System.lineSeparator()+" tenants=" + tenants + System.lineSeparator()+ " ipatterns=" + ipatterns + System.lineSeparator()+ " clusterPerms=" + clusterPerms;
}
public Set<Tenant> getTenants(User user) {
//TODO filter out user tenants
return Collections.unmodifiableSet(tenants);
}
public Set<IndexPattern> getIpatterns() {
return Collections.unmodifiableSet(ipatterns);
}
public Set<String> getClusterPerms() {
return Collections.unmodifiableSet(clusterPerms);
}
public String getName() {
return name;
}
}
//Security roles
public static class IndexPattern {
private final String indexPattern;
private String dlsQuery;
private final Set<String> fls = new HashSet<>();
private final Set<String> maskedFields = new HashSet<>();
private final Set<TypePerm> typePerms = new HashSet<>();
public IndexPattern(String indexPattern) {
super();
this.indexPattern = Objects.requireNonNull(indexPattern);
}
public IndexPattern addFlsFields(List<String> flsFields) {
if(flsFields != null) {
this.fls.addAll(flsFields);
}
return this;
}
public IndexPattern addMaskedFields(List<String> maskedFields) {
if(maskedFields != null) {
this.maskedFields.addAll(maskedFields);
}
return this;
}
public IndexPattern addTypePerms(TypePerm typePerm) {
if(typePerm != null) {
this.typePerms.add(typePerm);
}
return this;
}
public IndexPattern setDlsQuery(String dlsQuery) {
if(dlsQuery != null) {
this.dlsQuery = dlsQuery;
}
return this;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((dlsQuery == null) ? 0 : dlsQuery.hashCode());
result = prime * result + ((fls == null) ? 0 : fls.hashCode());
result = prime * result + ((maskedFields == null) ? 0 : maskedFields.hashCode());
result = prime * result + ((indexPattern == null) ? 0 : indexPattern.hashCode());
result = prime * result + ((typePerms == null) ? 0 : typePerms.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
IndexPattern other = (IndexPattern) obj;
if (dlsQuery == null) {
if (other.dlsQuery != null)
return false;
} else if (!dlsQuery.equals(other.dlsQuery))
return false;
if (fls == null) {
if (other.fls != null)
return false;
} else if (!fls.equals(other.fls))
return false;
if (maskedFields == null) {
if (other.maskedFields != null)
return false;
} else if (!maskedFields.equals(other.maskedFields))
return false;
if (indexPattern == null) {
if (other.indexPattern != null)
return false;
} else if (!indexPattern.equals(other.indexPattern))
return false;
if (typePerms == null) {
if (other.typePerms != null)
return false;
} else if (!typePerms.equals(other.typePerms))
return false;
return true;
}
@Override
public String toString() {
return System.lineSeparator()+" indexPattern=" + indexPattern + System.lineSeparator()+" dlsQuery=" + dlsQuery + System.lineSeparator()+ " fls=" + fls + System.lineSeparator()+ " typePerms=" + typePerms;
}
public String getUnresolvedIndexPattern(User user) {
return replaceProperties(indexPattern, user);
}
private String[] getResolvedIndexPattern(User user, IndexNameExpressionResolver resolver, ClusterService cs) {
String unresolved = getUnresolvedIndexPattern(user);
String[] resolved = null;
if(WildcardMatcher.containsWildcard(unresolved)) {
final String[] aliasesForPermittedPattern = cs.state().getMetaData().getAliasAndIndexLookup()
.entrySet().stream()
.filter(e->e.getValue().isAlias())
.filter(e->WildcardMatcher.match(unresolved, e.getKey()))
.map(e->e.getKey()).toArray(String[]::new);
if(aliasesForPermittedPattern != null && aliasesForPermittedPattern.length > 0) {
resolved = resolver.concreteIndexNames(cs.state(), IndicesOptions.lenientExpandOpen(), aliasesForPermittedPattern);
}
}
if(resolved == null && !unresolved.isEmpty()) {
resolved = resolver.concreteIndexNames(cs.state(), IndicesOptions.lenientExpandOpen(), unresolved);
}
if(resolved == null || resolved.length == 0) {
return new String[]{unresolved};
} else {
//append unresolved value for pattern matching
String[] retval = Arrays.copyOf(resolved, resolved.length +1);
retval[retval.length-1] = unresolved;
return retval;
}
}
public String getDlsQuery(User user) {
return replaceProperties(dlsQuery, user);
}
public Set<String> getFls() {
return Collections.unmodifiableSet(fls);
}
public Set<String> getMaskedFields() {
return Collections.unmodifiableSet(maskedFields);
}
public Set<TypePerm> getTypePerms() {
return Collections.unmodifiableSet(typePerms);
}
}
public static class TypePerm {
private final String typePattern;
private final Set<String> perms = new HashSet<>();
private TypePerm(String typePattern) {
super();
this.typePattern = Objects.requireNonNull(typePattern);
if(IGNORED_TYPES.contains(typePattern)) {
throw new RuntimeException("typepattern '"+typePattern+"' not allowed");
}
}
private TypePerm addPerms(Collection<String> perms) {
if(perms != null) {
this.perms.addAll(perms);
}
return this;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((perms == null) ? 0 : perms.hashCode());
result = prime * result + ((typePattern == null) ? 0 : typePattern.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
TypePerm other = (TypePerm) obj;
if (perms == null) {
if (other.perms != null)
return false;
} else if (!perms.equals(other.perms))
return false;
if (typePattern == null) {
if (other.typePattern != null)
return false;
} else if (!typePattern.equals(other.typePattern))
return false;
return true;
}
@Override
public String toString() {
return System.lineSeparator()+" typePattern=" + typePattern + System.lineSeparator()+ " perms=" + perms;
}
public String getTypePattern() {
return typePattern;
}
public Set<String> getPerms() {
return Collections.unmodifiableSet(perms);
}
}
public static class Tenant {
private final String tenant;
private final boolean readWrite;
private Tenant(String tenant, boolean readWrite) {
super();
this.tenant = tenant;
this.readWrite = readWrite;
}
public String getTenant() {
return tenant;
}
public boolean isReadWrite() {
return readWrite;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + (readWrite ? 1231 : 1237);
result = prime * result + ((tenant == null) ? 0 : tenant.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
Tenant other = (Tenant) obj;
if (readWrite != other.readWrite)
return false;
if (tenant == null) {
if (other.tenant != null)
return false;
} else if (!tenant.equals(other.tenant))
return false;
return true;
}
@Override
public String toString() {
return System.lineSeparator()+" tenant=" + tenant + System.lineSeparator() +" readWrite=" + readWrite;
}
}
private static String replaceProperties(String orig, User user) {
if(user == null || orig == null) {
return orig;
}
orig = orig.replace("${user.name}", user.getName()).replace("${user_name}", user.getName());
orig = replaceRoles(orig, user);
for(Entry<String, String> entry: user.getCustomAttributesMap().entrySet()) {
if(entry == null || entry.getKey() == null || entry.getValue() == null) {
continue;
}
orig = orig.replace("${"+entry.getKey()+"}", entry.getValue());
orig = orig.replace("${"+entry.getKey().replace('.', '_')+"}", entry.getValue());
}
return orig;
}
private static String replaceRoles(final String orig, final User user) {
String retVal = orig;
if(orig.contains("${user.roles}") || orig.contains("${user_roles}")) {
final String commaSeparatedRoles = toQuotedCommaSeparatedString(user.getRoles());
retVal = orig.replace("${user.roles}", commaSeparatedRoles).replace("${user_roles}", commaSeparatedRoles);
}
return retVal;
}
private static String toQuotedCommaSeparatedString(final Set<String> roles) {
return Joiner.on(',').join(Iterables.transform(roles, s->{
return new StringBuilder(s.length()+2).append('"').append(s).append('"').toString();
}));
}
private static boolean impliesTypePerm(Set<IndexPattern> ipatterns, Resolved resolved, User user, String[] actions, IndexNameExpressionResolver resolver, ClusterService cs) {
Set<String> matchingIndex = new HashSet<>(resolved.getAllIndices());
for(String in: resolved.getAllIndices()) {
//find index patterns that match
Set<String> matchingActions = new HashSet<>(Arrays.asList(actions));
Set<String> matchingTypes = new HashSet<>(resolved.getTypes());
for(IndexPattern p: ipatterns) {
if(WildcardMatcher.matchAny(p.getResolvedIndexPattern(user, resolver, cs), in)) {
//per resolved index per pattern
for(String t: resolved.getTypes()) {
for(TypePerm tp: p.typePerms) {
if(WildcardMatcher.match(tp.typePattern, t)) {
matchingTypes.remove(t);
for(String a: Arrays.asList(actions)) {
if(WildcardMatcher.matchAny(tp.perms, a)) {
matchingActions.remove(a);
}
}
}
}
}
}
}
if(matchingActions.isEmpty() && matchingTypes.isEmpty()) {
matchingIndex.remove(in);
}
}
return matchingIndex.isEmpty();
}
}
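The `replaceProperties`/`replaceRoles` helpers above substitute user attributes such as `${user.name}`, `${user_roles}`, and custom attributes into index patterns and DLS queries. A minimal, self-contained sketch of the same substitution scheme (the class and method names here are illustrative, not part of the plugin):

```java
import java.util.Map;
import java.util.Set;
import java.util.StringJoiner;

public class PropertySubstitutionSketch {

    // Replace ${user.name}/${user_name}, ${user.roles}/${user_roles}, and
    // custom attributes such as ${attr.internal.department} in a template.
    public static String replaceProperties(String orig, String userName,
                                           Set<String> roles,
                                           Map<String, String> attrs) {
        if (orig == null) {
            return null;
        }
        orig = orig.replace("${user.name}", userName).replace("${user_name}", userName);
        if (orig.contains("${user.roles}") || orig.contains("${user_roles}")) {
            // roles are emitted as a quoted, comma-separated list, e.g. "admin","dev"
            StringJoiner joiner = new StringJoiner(",");
            for (String r : roles) {
                joiner.add('"' + r + '"');
            }
            String quoted = joiner.toString();
            orig = orig.replace("${user.roles}", quoted).replace("${user_roles}", quoted);
        }
        for (Map.Entry<String, String> e : attrs.entrySet()) {
            orig = orig.replace("${" + e.getKey() + "}", e.getValue());
            // dots in attribute names may also be written as underscores
            orig = orig.replace("${" + e.getKey().replace('.', '_') + "}", e.getValue());
        }
        return orig;
    }
}
```

For example, a DLS query template of `{"term": {"owner": "${user.name}"}}` resolves per user at evaluation time, which is why the raw template (not the resolved string) is what gets stored in the role.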


@ -0,0 +1,145 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;
import java.io.Serializable;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import org.elasticsearch.ElasticsearchException;
import com.amazon.opendistroforelasticsearch.security.resolver.IndexResolverReplacer;
import com.amazon.opendistroforelasticsearch.security.user.User;
import com.google.common.io.BaseEncoding;
public class Base64Helper {
public static String serializeObject(final Serializable object) {
if (object == null) {
throw new IllegalArgumentException("object must not be null");
}
try {
final ByteArrayOutputStream bos = new ByteArrayOutputStream();
final ObjectOutputStream out = new ObjectOutputStream(bos);
out.writeObject(object);
final byte[] bytes = bos.toByteArray();
return BaseEncoding.base64().encode(bytes);
} catch (final Exception e) {
throw new ElasticsearchException(e.toString());
}
}
public static Serializable deserializeObject(final String string) {
if (string == null) {
throw new IllegalArgumentException("string must not be null");
}
SafeObjectInputStream in = null;
try {
final byte[] userr = BaseEncoding.base64().decode(string);
final ByteArrayInputStream bis = new ByteArrayInputStream(userr); //NOSONAR
in = new SafeObjectInputStream(bis); //NOSONAR
return (Serializable) in.readObject();
} catch (final Exception e) {
throw new ElasticsearchException(e);
} finally {
if (in != null) {
try {
in.close();
} catch (IOException e) {
// ignore
}
}
}
}
private final static class SafeObjectInputStream extends ObjectInputStream {
private static final List<String> SAFE_CLASSES = new ArrayList<>();
static {
SAFE_CLASSES.add("com.amazon.dlic.auth.ldap.LdapUser");
SAFE_CLASSES.add("org.ldaptive.SearchEntry");
SAFE_CLASSES.add("org.ldaptive.LdapEntry");
SAFE_CLASSES.add("org.ldaptive.AbstractLdapBean");
SAFE_CLASSES.add("org.ldaptive.LdapAttribute");
SAFE_CLASSES.add("org.ldaptive.LdapAttribute$LdapAttributeValues");
}
public SafeObjectInputStream(InputStream in) throws IOException {
super(in);
}
@Override
protected Class<?> resolveClass(ObjectStreamClass desc) throws IOException, ClassNotFoundException {
Class<?> clazz = super.resolveClass(desc);
if (
clazz.isArray() ||
clazz.equals(String.class) ||
clazz.equals(SocketAddress.class) ||
clazz.equals(InetSocketAddress.class) ||
InetAddress.class.isAssignableFrom(clazz) ||
Number.class.isAssignableFrom(clazz) ||
Collection.class.isAssignableFrom(clazz) ||
Map.class.isAssignableFrom(clazz) ||
Enum.class.isAssignableFrom(clazz) ||
clazz.equals(User.class) ||
clazz.equals(IndexResolverReplacer.Resolved.class) ||
clazz.equals(SourceFieldsContext.class) ||
SAFE_CLASSES.contains(clazz.getName())
) {
return clazz;
}
throw new InvalidClassException("Unauthorized deserialization attempt", clazz.getName());
}
}
}
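`Base64Helper` guards Java deserialization with a look-ahead class whitelist in `resolveClass`, rejecting gadget classes before they are instantiated. A minimal, self-contained sketch of the same pattern using only the JDK (class name and whitelist contents are illustrative, not the plugin's):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;
import java.io.Serializable;
import java.util.Arrays;
import java.util.Base64;
import java.util.HashSet;
import java.util.Set;

public class SafeSerializationSketch {

    // Only these classes (plus arrays) may be deserialized.
    private static final Set<String> ALLOWED = new HashSet<>(Arrays.asList(
            "java.lang.String", "java.lang.Integer", "java.lang.Number"));

    public static String serialize(Serializable obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeObject(obj);
            }
            return Base64.getEncoder().encodeToString(bos.toByteArray());
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }

    public static Serializable deserialize(String s) {
        InputStream bis = new ByteArrayInputStream(Base64.getDecoder().decode(s));
        try (ObjectInputStream in = new ObjectInputStream(bis) {
            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc)
                    throws IOException, ClassNotFoundException {
                Class<?> clazz = super.resolveClass(desc);
                // reject anything outside the whitelist before it is instantiated
                if (clazz.isArray() || ALLOWED.contains(clazz.getName())) {
                    return clazz;
                }
                throw new InvalidClassException("Unauthorized deserialization attempt",
                        clazz.getName());
            }
        }) {
            return (Serializable) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Overriding `resolveClass` runs the check on the class descriptor before any object of that class is constructed, which is what makes this safer than validating after `readObject` returns.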


@ -0,0 +1,242 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
public class ConfigConstants {
public static final String OPENDISTRO_SECURITY_CONFIG_PREFIX = "_opendistro_security_";
public static final String OPENDISTRO_SECURITY_CHANNEL_TYPE = OPENDISTRO_SECURITY_CONFIG_PREFIX+"channel_type";
public static final String OPENDISTRO_SECURITY_ORIGIN = OPENDISTRO_SECURITY_CONFIG_PREFIX+"origin";
public static final String OPENDISTRO_SECURITY_ORIGIN_HEADER = OPENDISTRO_SECURITY_CONFIG_PREFIX+"origin_header";
public static final String OPENDISTRO_SECURITY_DLS_QUERY_HEADER = OPENDISTRO_SECURITY_CONFIG_PREFIX+"dls_query";
public static final String OPENDISTRO_SECURITY_FLS_FIELDS_HEADER = OPENDISTRO_SECURITY_CONFIG_PREFIX+"fls_fields";
public static final String OPENDISTRO_SECURITY_MASKED_FIELD_HEADER = OPENDISTRO_SECURITY_CONFIG_PREFIX+"masked_fields";
public static final String OPENDISTRO_SECURITY_CONF_REQUEST_HEADER = OPENDISTRO_SECURITY_CONFIG_PREFIX+"conf_request";
public static final String OPENDISTRO_SECURITY_REMOTE_ADDRESS = OPENDISTRO_SECURITY_CONFIG_PREFIX+"remote_address";
public static final String OPENDISTRO_SECURITY_REMOTE_ADDRESS_HEADER = OPENDISTRO_SECURITY_CONFIG_PREFIX+"remote_address_header";
public static final String OPENDISTRO_SECURITY_INITIAL_ACTION_CLASS_HEADER = OPENDISTRO_SECURITY_CONFIG_PREFIX+"initial_action_class_header";
/**
* Set by SSL plugin for https requests only
*/
public static final String OPENDISTRO_SECURITY_SSL_PEER_CERTIFICATES = OPENDISTRO_SECURITY_CONFIG_PREFIX+"ssl_peer_certificates";
/**
* Set by SSL plugin for https requests only
*/
public static final String OPENDISTRO_SECURITY_SSL_PRINCIPAL = OPENDISTRO_SECURITY_CONFIG_PREFIX+"ssl_principal";
/**
* If this is set to TRUE then the request comes from a server node (fully trusted).
* It is expected that a _opendistro_security_user header is attached.
*/
public static final String OPENDISTRO_SECURITY_SSL_TRANSPORT_INTERCLUSTER_REQUEST = OPENDISTRO_SECURITY_CONFIG_PREFIX+"ssl_transport_intercluster_request";
public static final String OPENDISTRO_SECURITY_SSL_TRANSPORT_TRUSTED_CLUSTER_REQUEST = OPENDISTRO_SECURITY_CONFIG_PREFIX+"ssl_transport_trustedcluster_request";
/**
* Set by the SSL plugin, this is the peer node certificate on the transport layer
*/
public static final String OPENDISTRO_SECURITY_SSL_TRANSPORT_PRINCIPAL = OPENDISTRO_SECURITY_CONFIG_PREFIX+"ssl_transport_principal";
public static final String OPENDISTRO_SECURITY_USER = OPENDISTRO_SECURITY_CONFIG_PREFIX+"user";
public static final String OPENDISTRO_SECURITY_USER_HEADER = OPENDISTRO_SECURITY_CONFIG_PREFIX+"user_header";
public static final String OPENDISTRO_SECURITY_INJECTED_USER = "injected_user";
public static final String OPENDISTRO_SECURITY_XFF_DONE = OPENDISTRO_SECURITY_CONFIG_PREFIX+"xff_done";
public static final String SSO_LOGOUT_URL = OPENDISTRO_SECURITY_CONFIG_PREFIX+"sso_logout_url";
public static final String OPENDISTRO_SECURITY_DEFAULT_CONFIG_INDEX = ".opendistro_security";
public static final String OPENDISTRO_SECURITY_ENABLE_SNAPSHOT_RESTORE_PRIVILEGE = "opendistro_security.enable_snapshot_restore_privilege";
public static final boolean OPENDISTRO_SECURITY_DEFAULT_ENABLE_SNAPSHOT_RESTORE_PRIVILEGE = false;
public static final String OPENDISTRO_SECURITY_CHECK_SNAPSHOT_RESTORE_WRITE_PRIVILEGES = "opendistro_security.check_snapshot_restore_write_privileges";
public static final boolean OPENDISTRO_SECURITY_DEFAULT_CHECK_SNAPSHOT_RESTORE_WRITE_PRIVILEGES = true;
public static final Set<String> OPENDISTRO_SECURITY_SNAPSHOT_RESTORE_NEEDED_WRITE_PRIVILEGES = Collections.unmodifiableSet(
new HashSet<String>(Arrays.asList(
"indices:admin/create",
"indices:data/write/index"
// "indices:data/write/bulk"
)));
public final static String CONFIGNAME_ROLES = "roles";
public final static String CONFIGNAME_ROLES_MAPPING = "rolesmapping";
public final static String CONFIGNAME_ACTION_GROUPS = "actiongroups";
public final static String CONFIGNAME_INTERNAL_USERS = "internalusers";
public final static String CONFIGNAME_CONFIG = "config";
public final static String CONFIGKEY_ACTION_GROUPS_PERMISSIONS = "permissions";
public final static String CONFIGKEY_READONLY = "readonly";
public final static String CONFIGKEY_HIDDEN = "hidden";
public final static List<String> CONFIG_NAMES = Collections.unmodifiableList(Arrays.asList(new String[] {CONFIGNAME_ROLES, CONFIGNAME_ROLES_MAPPING,
CONFIGNAME_ACTION_GROUPS, CONFIGNAME_INTERNAL_USERS, CONFIGNAME_CONFIG}));
public static final String OPENDISTRO_SECURITY_INTERCLUSTER_REQUEST_EVALUATOR_CLASS = "opendistro_security.cert.intercluster_request_evaluator_class";
public static final String OPENDISTRO_SECURITY_ACTION_NAME = OPENDISTRO_SECURITY_CONFIG_PREFIX+"action_name";
public static final String OPENDISTRO_SECURITY_AUTHCZ_ADMIN_DN = "opendistro_security.authcz.admin_dn";
public static final String OPENDISTRO_SECURITY_CONFIG_INDEX_NAME = "opendistro_security.config_index_name";
public static final String OPENDISTRO_SECURITY_AUTHCZ_IMPERSONATION_DN = "opendistro_security.authcz.impersonation_dn";
public static final String OPENDISTRO_SECURITY_AUTHCZ_REST_IMPERSONATION_USERS="opendistro_security.authcz.rest_impersonation_user";
public static final String OPENDISTRO_SECURITY_AUDIT_TYPE_DEFAULT = "opendistro_security.audit.type";
public static final String OPENDISTRO_SECURITY_AUDIT_CONFIG_DEFAULT = "opendistro_security.audit.config";
public static final String OPENDISTRO_SECURITY_AUDIT_CONFIG_ROUTES = "opendistro_security.audit.routes";
public static final String OPENDISTRO_SECURITY_AUDIT_CONFIG_ENDPOINTS = "opendistro_security.audit.endpoints";
public static final String OPENDISTRO_SECURITY_AUDIT_THREADPOOL_SIZE = "opendistro_security.audit.threadpool.size";
public static final String OPENDISTRO_SECURITY_AUDIT_THREADPOOL_MAX_QUEUE_LEN = "opendistro_security.audit.threadpool.max_queue_len";
public static final String OPENDISTRO_SECURITY_AUDIT_LOG_REQUEST_BODY = "opendistro_security.audit.log_request_body";
public static final String OPENDISTRO_SECURITY_AUDIT_RESOLVE_INDICES = "opendistro_security.audit.resolve_indices";
public static final String OPENDISTRO_SECURITY_AUDIT_ENABLE_REST = "opendistro_security.audit.enable_rest";
public static final String OPENDISTRO_SECURITY_AUDIT_ENABLE_TRANSPORT = "opendistro_security.audit.enable_transport";
public static final String OPENDISTRO_SECURITY_AUDIT_CONFIG_DISABLED_TRANSPORT_CATEGORIES = "opendistro_security.audit.config.disabled_transport_categories";
public static final String OPENDISTRO_SECURITY_AUDIT_CONFIG_DISABLED_REST_CATEGORIES = "opendistro_security.audit.config.disabled_rest_categories";
public static final String OPENDISTRO_SECURITY_AUDIT_IGNORE_USERS = "opendistro_security.audit.ignore_users";
public static final String OPENDISTRO_SECURITY_AUDIT_IGNORE_REQUESTS = "opendistro_security.audit.ignore_requests";
public static final String OPENDISTRO_SECURITY_AUDIT_RESOLVE_BULK_REQUESTS = "opendistro_security.audit.resolve_bulk_requests";
public static final boolean OPENDISTRO_SECURITY_AUDIT_SSL_VERIFY_HOSTNAMES_DEFAULT = true;
public static final boolean OPENDISTRO_SECURITY_AUDIT_SSL_ENABLE_SSL_CLIENT_AUTH_DEFAULT = false;
public static final String OPENDISTRO_SECURITY_AUDIT_EXCLUDE_SENSITIVE_HEADERS = "opendistro_security.audit.exclude_sensitive_headers";
public static final String OPENDISTRO_SECURITY_AUDIT_CONFIG_DEFAULT_PREFIX = "opendistro_security.audit.config.";
// Internal / External ES
public static final String OPENDISTRO_SECURITY_AUDIT_ES_INDEX = "index";
public static final String OPENDISTRO_SECURITY_AUDIT_ES_TYPE = "type";
// External ES
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_HTTP_ENDPOINTS = "http_endpoints";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_USERNAME = "username";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_PASSWORD = "password";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_ENABLE_SSL = "enable_ssl";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_VERIFY_HOSTNAMES = "verify_hostnames";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_ENABLE_SSL_CLIENT_AUTH = "enable_ssl_client_auth";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_PEMKEY_FILEPATH = "pemkey_filepath";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_PEMKEY_CONTENT = "pemkey_content";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_PEMKEY_PASSWORD = "pemkey_password";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_PEMCERT_FILEPATH = "pemcert_filepath";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_PEMCERT_CONTENT = "pemcert_content";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_PEMTRUSTEDCAS_FILEPATH = "pemtrustedcas_filepath";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_PEMTRUSTEDCAS_CONTENT = "pemtrustedcas_content";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_JKS_CERT_ALIAS = "cert_alias";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_ENABLED_SSL_CIPHERS = "enabled_ssl_ciphers";
public static final String OPENDISTRO_SECURITY_AUDIT_EXTERNAL_ES_ENABLED_SSL_PROTOCOLS = "enabled_ssl_protocols";
// Webhooks
public static final String OPENDISTRO_SECURITY_AUDIT_WEBHOOK_URL = "webhook.url";
public static final String OPENDISTRO_SECURITY_AUDIT_WEBHOOK_FORMAT = "webhook.format";
public static final String OPENDISTRO_SECURITY_AUDIT_WEBHOOK_SSL_VERIFY = "webhook.ssl.verify";
public static final String OPENDISTRO_SECURITY_AUDIT_WEBHOOK_PEMTRUSTEDCAS_FILEPATH = "webhook.ssl.pemtrustedcas_filepath";
public static final String OPENDISTRO_SECURITY_AUDIT_WEBHOOK_PEMTRUSTEDCAS_CONTENT = "webhook.ssl.pemtrustedcas_content";
// Log4j
public static final String OPENDISTRO_SECURITY_AUDIT_LOG4J_LOGGER_NAME = "log4j.logger_name";
public static final String OPENDISTRO_SECURITY_AUDIT_LOG4J_LEVEL = "log4j.level";
//retry
public static final String OPENDISTRO_SECURITY_AUDIT_RETRY_COUNT = "opendistro_security.audit.config.retry_count";
public static final String OPENDISTRO_SECURITY_AUDIT_RETRY_DELAY_MS = "opendistro_security.audit.config.retry_delay_ms";
public static final String OPENDISTRO_SECURITY_KERBEROS_KRB5_FILEPATH = "opendistro_security.kerberos.krb5_filepath";
public static final String OPENDISTRO_SECURITY_KERBEROS_ACCEPTOR_KEYTAB_FILEPATH = "opendistro_security.kerberos.acceptor_keytab_filepath";
public static final String OPENDISTRO_SECURITY_KERBEROS_ACCEPTOR_PRINCIPAL = "opendistro_security.kerberos.acceptor_principal";
public static final String OPENDISTRO_SECURITY_CERT_OID = "opendistro_security.cert.oid";
public static final String OPENDISTRO_SECURITY_CERT_INTERCLUSTER_REQUEST_EVALUATOR_CLASS = "opendistro_security.cert.intercluster_request_evaluator_class";
public static final String OPENDISTRO_SECURITY_ENTERPRISE_MODULES_ENABLED = "opendistro_security.enterprise_modules_enabled";
public static final String OPENDISTRO_SECURITY_NODES_DN = "opendistro_security.nodes_dn";
public static final String OPENDISTRO_SECURITY_DISABLED = "opendistro_security.disabled";
public static final String OPENDISTRO_SECURITY_CACHE_TTL_MINUTES = "opendistro_security.cache.ttl_minutes";
public static final String OPENDISTRO_SECURITY_ALLOW_UNSAFE_DEMOCERTIFICATES = "opendistro_security.allow_unsafe_democertificates";
public static final String OPENDISTRO_SECURITY_ALLOW_DEFAULT_INIT_SECURITYINDEX = "opendistro_security.allow_default_init_securityindex";
public static final String OPENDISTRO_SECURITY_ROLES_MAPPING_RESOLUTION = "opendistro_security.roles_mapping_resolution";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_WRITE_METADATA_ONLY = "opendistro_security.compliance.history.write.metadata_only";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_READ_METADATA_ONLY = "opendistro_security.compliance.history.read.metadata_only";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_READ_WATCHED_FIELDS = "opendistro_security.compliance.history.read.watched_fields";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_WRITE_WATCHED_INDICES = "opendistro_security.compliance.history.write.watched_indices";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_WRITE_LOG_DIFFS = "opendistro_security.compliance.history.write.log_diffs";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_READ_IGNORE_USERS = "opendistro_security.compliance.history.read.ignore_users";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_WRITE_IGNORE_USERS = "opendistro_security.compliance.history.write.ignore_users";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_EXTERNAL_CONFIG_ENABLED = "opendistro_security.compliance.history.external_config_enabled";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_DISABLE_ANONYMOUS_AUTHENTICATION = "opendistro_security.compliance.disable_anonymous_authentication";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_IMMUTABLE_INDICES = "opendistro_security.compliance.immutable_indices";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_SALT = "opendistro_security.compliance.salt";
public static final String OPENDISTRO_SECURITY_COMPLIANCE_SALT_DEFAULT = "e1ukloTsQlOgPquJ";//16 chars
public static final String OPENDISTRO_SECURITY_COMPLIANCE_HISTORY_INTERNAL_CONFIG_ENABLED = "opendistro_security.compliance.history.internal_config_enabled";
public static final String OPENDISTRO_SECURITY_SSL_ONLY = "opendistro_security.ssl_only";
public enum RolesMappingResolution {
MAPPING_ONLY,
BACKENDROLES_ONLY,
BOTH
}
//public static final String OPENDISTRO_SECURITY_TRIBE_CLUSTERNAME = "opendistro_security.tribe.clustername";
//public static final String OPENDISTRO_SECURITY_DISABLE_TYPE_SECURITY = "opendistro_security.disable_type_security";
// REST API
public static final String OPENDISTRO_SECURITY_RESTAPI_ROLES_ENABLED = "opendistro_security.restapi.roles_enabled";
public static final String OPENDISTRO_SECURITY_RESTAPI_ENDPOINTS_DISABLED = "opendistro_security.restapi.endpoints_disabled";
public static final String OPENDISTRO_SECURITY_RESTAPI_PASSWORD_VALIDATION_REGEX = "opendistro_security.restapi.password_validation_regex";
public static final String OPENDISTRO_SECURITY_RESTAPI_PASSWORD_VALIDATION_ERROR_MESSAGE = "opendistro_security.restapi.password_validation_error_message";
// Unsupported / expert-only settings from here on
public static final String OPENDISTRO_SECURITY_UNSUPPORTED_DISABLE_REST_AUTH_INITIALLY = "opendistro_security.unsupported.disable_rest_auth_initially";
public static final String OPENDISTRO_SECURITY_UNSUPPORTED_DISABLE_INTERTRANSPORT_AUTH_INITIALLY = "opendistro_security.unsupported.disable_intertransport_auth_initially";
public static final String OPENDISTRO_SECURITY_UNSUPPORTED_RESTORE_SECURITYINDEX_ENABLED = "opendistro_security.unsupported.restore.securityindex.enabled";
public static final String OPENDISTRO_SECURITY_UNSUPPORTED_INJECT_USER_ENABLED = "opendistro_security.unsupported.inject_user.enabled";
public static final String OPENDISTRO_SECURITY_UNSUPPORTED_INJECT_ADMIN_USER_ENABLED = "opendistro_security.unsupported.inject_user.admin.enabled";
}


@ -0,0 +1,92 @@
/*
* Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.io.ByteArrayInputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.support.WriteRequest.RefreshPolicy;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
public class ConfigHelper {
private static final Logger LOGGER = LogManager.getLogger(ConfigHelper.class);
public static void uploadFile(Client tc, String filepath, String index, String id) throws Exception {
LOGGER.info("Will update '" + id + "' with " + filepath);
try (Reader reader = new FileReader(filepath)) {
final String res = tc
.index(new IndexRequest(index).type("security").id(id).setRefreshPolicy(RefreshPolicy.IMMEDIATE)
.source(id, readXContent(reader, XContentType.YAML))).actionGet().getId();
if (!id.equals(res)) {
throw new Exception(" FAIL: Configuration for '" + id
+ "' failed for unknown reasons. Pls. consult logfile of elasticsearch");
}
} catch (Exception e) {
throw e;
}
}
public static BytesReference readXContent(final Reader reader, final XContentType xContentType) throws IOException {
BytesReference retVal;
XContentParser parser = null;
try {
parser = XContentFactory.xContent(xContentType).createParser(NamedXContentRegistry.EMPTY, OpenDistroSecurityDeprecationHandler.INSTANCE, reader);
parser.nextToken();
final XContentBuilder builder = XContentFactory.jsonBuilder();
builder.copyCurrentStructure(parser);
retVal = BytesReference.bytes(builder);
} finally {
if (parser != null) {
parser.close();
}
}
//validate
Settings.builder().loadFromStream("dummy.json", new ByteArrayInputStream(BytesReference.toBytes(retVal)), true).build();
return retVal;
}
}


@ -0,0 +1,106 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.List;
import java.util.Map;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.rest.RestRequest;
import com.amazon.opendistroforelasticsearch.security.user.AuthCredentials;
public class HTTPHelper {
public static AuthCredentials extractCredentials(String authorizationHeader, Logger log) {
if (authorizationHeader != null) {
if (!authorizationHeader.trim().toLowerCase().startsWith("basic ")) {
log.warn("No 'Basic Authorization' header, send 401 and 'WWW-Authenticate Basic'");
return null;
} else {
final String decodedBasicHeader = new String(Base64.getDecoder().decode(authorizationHeader.split(" ")[1]),
StandardCharsets.UTF_8);
//username:password
//special case
//username must not contain a :, but password is allowed to do so
// username:pass:word
//blank password
// username:
final int firstColonIndex = decodedBasicHeader.indexOf(':');
String username = null;
String password = null;
if (firstColonIndex > 0) {
username = decodedBasicHeader.substring(0, firstColonIndex);
if(decodedBasicHeader.length() - 1 != firstColonIndex) {
password = decodedBasicHeader.substring(firstColonIndex + 1);
} else {
//blank password
password="";
}
}
if (username == null || password == null) {
log.warn("Invalid 'Authorization' header, send 401 and 'WWW-Authenticate Basic'");
return null;
} else {
return new AuthCredentials(username, password.getBytes(StandardCharsets.UTF_8)).markComplete();
}
}
} else {
return null;
}
}
public static boolean containsBadHeader(final RestRequest request) {
final Map<String, List<String>> headers;
if (request != null && ( headers = request.getHeaders()) != null) {
for (final String key: headers.keySet()) {
if ( key != null
&& key.trim().toLowerCase().startsWith(ConfigConstants.OPENDISTRO_SECURITY_CONFIG_PREFIX.toLowerCase())) {
return true;
}
}
}
return false;
}
}
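The first-colon parsing rules documented in the comments above (the username must not contain a `:`, the password may, and a trailing `:` means a blank password) can be exercised in isolation. `BasicParser` below is a hypothetical standalone mirror of `HTTPHelper.extractCredentials`, not code from this commit:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicParser {
    // Returns {username, password} or null, mirroring HTTPHelper's rules:
    // split on the FIRST colon only, so passwords may contain ':';
    // a trailing colon means a blank password.
    public static String[] parse(String authorizationHeader) {
        if (authorizationHeader == null
                || !authorizationHeader.trim().toLowerCase().startsWith("basic ")) {
            return null;
        }
        String[] parts = authorizationHeader.split(" ");
        if (parts.length < 2) {
            return null; // "Basic" with no credential part
        }
        String decoded = new String(Base64.getDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        int firstColon = decoded.indexOf(':');
        if (firstColon <= 0) {
            return null; // missing colon or empty username
        }
        String username = decoded.substring(0, firstColon);
        String password = (decoded.length() - 1 == firstColon) ? "" : decoded.substring(firstColon + 1);
        return new String[] { username, password };
    }

    public static void main(String[] args) {
        String header = "Basic " + Base64.getEncoder()
                .encodeToString("user:pass:word".getBytes(StandardCharsets.UTF_8));
        String[] creds = parse(header);
        System.out.println(creds[0] + " / " + creds[1]); // user / pass:word
    }
}
```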


@ -0,0 +1,87 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.io.Serializable;
import java.util.Map;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import com.google.common.base.Strings;
public class HeaderHelper {
public static boolean isInterClusterRequest(final ThreadContext context) {
return context.getTransient(ConfigConstants.OPENDISTRO_SECURITY_SSL_TRANSPORT_INTERCLUSTER_REQUEST) == Boolean.TRUE;
}
public static boolean isDirectRequest(final ThreadContext context) {
return "direct".equals(context.getTransient(ConfigConstants.OPENDISTRO_SECURITY_CHANNEL_TYPE))
|| context.getTransient(ConfigConstants.OPENDISTRO_SECURITY_CHANNEL_TYPE) == null;
}
public static String getSafeFromHeader(final ThreadContext context, final String headerName) {
if (context == null || headerName == null || headerName.isEmpty()) {
return null;
}
String headerValue = null;
Map<String, String> headers = context.getHeaders();
if (!headers.containsKey(headerName) || (headerValue = headers.get(headerName)) == null) {
return null;
}
if (isInterClusterRequest(context) || isTrustedClusterRequest(context) || isDirectRequest(context)) {
return headerValue;
}
return null;
}
public static Serializable deserializeSafeFromHeader(final ThreadContext context, final String headerName) {
final String objectAsBase64 = getSafeFromHeader(context, headerName);
if (!Strings.isNullOrEmpty(objectAsBase64)) {
return Base64Helper.deserializeObject(objectAsBase64);
}
return null;
}
public static boolean isTrustedClusterRequest(final ThreadContext context) {
return context.getTransient(ConfigConstants.OPENDISTRO_SECURITY_SSL_TRANSPORT_TRUSTED_CLUSTER_REQUEST) == Boolean.TRUE;
}
}
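The gating in `getSafeFromHeader` — a header value is only honored when the request arrived over an inter-cluster, trusted-cluster, or direct channel — can be restated without the Elasticsearch `ThreadContext` dependency. `SafeHeaderSketch` is a hypothetical condensed sketch of that rule (Java 16+ for the record syntax):

```java
import java.util.Map;

public class SafeHeaderSketch {
    // Minimal stand-in for ThreadContext: a header map plus trust flags.
    public record Ctx(Map<String, String> headers, boolean interCluster,
                      boolean trustedCluster, boolean direct) {}

    // Mirrors HeaderHelper.getSafeFromHeader: a header value is returned
    // only when the request came over a trusted channel.
    public static String getSafe(Ctx ctx, String name) {
        if (ctx == null || name == null || name.isEmpty()) {
            return null;
        }
        String value = ctx.headers().get(name);
        if (value == null) {
            return null;
        }
        return (ctx.interCluster() || ctx.trustedCluster() || ctx.direct()) ? value : null;
    }

    public static void main(String[] args) {
        Ctx untrusted = new Ctx(Map.of("_security_user", "admin"), false, false, false);
        Ctx direct = new Ctx(Map.of("_security_user", "admin"), false, false, true);
        System.out.println(getSafe(untrusted, "_security_user")); // null
        System.out.println(getSafe(direct, "_security_user"));    // admin
    }
}
```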


@ -0,0 +1,69 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
public class MapUtils {
public static void deepTraverseMap(final Map<String, Object> map, final Callback cb) {
deepTraverseMap(map, cb, null);
}
private static void deepTraverseMap(final Map<String, Object> map, final Callback cb, final List<String> stack) {
final List<String> localStack;
if(stack == null) {
localStack = new ArrayList<String>(30);
} else {
localStack = stack;
}
for(Map.Entry<String, Object> entry: map.entrySet()) {
if(entry.getValue() != null && entry.getValue() instanceof Map) {
@SuppressWarnings("unchecked")
final Map<String, Object> inner = (Map<String, Object>) entry.getValue();
localStack.add(entry.getKey());
deepTraverseMap(inner, cb, localStack);
if(!localStack.isEmpty()) {
localStack.remove(localStack.size()-1);
}
} else {
cb.call(entry.getKey(), map, Collections.unmodifiableList(localStack));
}
}
}
public static interface Callback {
public void call(String key, Map<String, Object> map, List<String> stack);
}
}
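`MapUtils` is plain Java with no Elasticsearch dependencies, so its depth-first contract (the callback fires only on leaf entries, receiving the key path of the enclosing maps) is easy to verify standalone. `DeepWalk` is a hypothetical condensed restatement of the same algorithm, collecting dotted leaf paths instead of invoking a callback:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DeepWalk {
    // Visit every leaf entry; the deque holds the keys of the enclosing
    // maps, matching the stack MapUtils.Callback receives.
    public static List<String> leafPaths(Map<String, Object> map) {
        List<String> out = new ArrayList<>();
        walk(map, new ArrayDeque<>(), out);
        return out;
    }

    @SuppressWarnings("unchecked")
    private static void walk(Map<String, Object> map, Deque<String> path, List<String> out) {
        for (Map.Entry<String, Object> e : map.entrySet()) {
            if (e.getValue() instanceof Map) {
                path.addLast(e.getKey());                       // descend
                walk((Map<String, Object>) e.getValue(), path, out);
                path.removeLast();                              // unwind, like MapUtils' localStack
            } else {
                out.add(String.join(".", path) + (path.isEmpty() ? "" : ".") + e.getKey());
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Object> inner = new LinkedHashMap<>();
        inner.put("b", 1);
        Map<String, Object> root = new LinkedHashMap<>();
        root.put("a", inner);
        root.put("c", 2);
        System.out.println(leafPaths(root)); // [a.b, c]
    }
}
```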


@ -0,0 +1,188 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.io.IOException;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
public class ModuleInfo implements Serializable, Writeable{
private static final long serialVersionUID = -1077651823194285138L;
private ModuleType moduleType;
private String classname;
private String classpath = "";
private String version = "";
private String buildTime = "";
private String gitsha1 = "";
public ModuleInfo(ModuleType moduleType, String classname) {
assert(moduleType != null);
this.moduleType = moduleType;
this.classname = classname;
}
public ModuleInfo(final StreamInput in) throws IOException {
moduleType = in.readEnum(ModuleType.class);
classname = in.readString();
classpath = in.readString();
version = in.readString();
buildTime = in.readString();
gitsha1 = in.readString();
assert(moduleType != null);
}
public void setClasspath(String classpath) {
this.classpath = classpath;
}
public void setVersion(String version) {
this.version = version;
}
public void setBuildTime(String buildTime) {
this.buildTime = buildTime;
}
public String getGitsha1() {
return gitsha1;
}
public void setGitsha1(String gitsha1) {
this.gitsha1 = gitsha1;
}
public ModuleType getModuleType() {
return moduleType;
}
public Map<String, String> getAsMap() {
Map<String, String> infoMap = new HashMap<>();
infoMap.put("type", moduleType.name());
infoMap.put("description", moduleType.getDescription());
infoMap.put("is_enterprise", moduleType.isEnterprise().toString());
infoMap.put("default_implementation", moduleType.getDefaultImplClass());
infoMap.put("actual_implementation", this.classname);
//infoMap.put("classpath", this.classpath); //this can disclose file locations
infoMap.put("version", this.version);
infoMap.put("buildTime", this.buildTime);
infoMap.put("gitsha1", this.gitsha1);
return infoMap;
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeEnum(moduleType);
out.writeString(classname);
out.writeString(classpath);
out.writeString(version);
out.writeString(buildTime);
out.writeString(gitsha1);
}
/* (non-Javadoc)
* @see java.lang.Object#hashCode()
*/
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((buildTime == null) ? 0 : buildTime.hashCode());
result = prime * result + ((classname == null) ? 0 : classname.hashCode());
result = prime * result + ((moduleType == null) ? 0 : moduleType.hashCode());
result = prime * result + ((version == null) ? 0 : version.hashCode());
result = prime * result + ((gitsha1 == null) ? 0 : gitsha1.hashCode());
return result;
}
/* (non-Javadoc)
* @see java.lang.Object#equals(java.lang.Object)
*/
@Override
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
if (obj == null) {
return false;
}
if (!(obj instanceof ModuleInfo)) {
return false;
}
ModuleInfo other = (ModuleInfo) obj;
if (buildTime == null) {
if (other.buildTime != null) {
return false;
}
} else if (!buildTime.equals(other.buildTime)) {
return false;
}
if (classname == null) {
if (other.classname != null) {
return false;
}
} else if (!classname.equals(other.classname)) {
return false;
}
if (!moduleType.equals(other.moduleType)) {
return false;
}
if (version == null) {
if (other.version != null) {
return false;
}
} else if (!version.equals(other.version)) {
return false;
}
if (gitsha1 == null) {
if (other.gitsha1 != null) {
return false;
}
} else if (!gitsha1.equals(other.gitsha1)) {
return false;
}
return true;
}
@Override
public String toString() {
return "Module [type=" + this.moduleType.name() + ", implementing class=" + this.classname + "]";
}
}


@ -0,0 +1,140 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import com.amazon.opendistroforelasticsearch.security.auth.AuthenticationBackend;
import com.amazon.opendistroforelasticsearch.security.auth.AuthorizationBackend;
import com.amazon.opendistroforelasticsearch.security.auth.HTTPAuthenticator;
import com.amazon.opendistroforelasticsearch.security.auth.internal.InternalAuthenticationBackend;
import com.amazon.opendistroforelasticsearch.security.auth.internal.NoOpAuthenticationBackend;
import com.amazon.opendistroforelasticsearch.security.auth.internal.NoOpAuthorizationBackend;
import com.amazon.opendistroforelasticsearch.security.http.HTTPBasicAuthenticator;
import com.amazon.opendistroforelasticsearch.security.http.HTTPClientCertAuthenticator;
import com.amazon.opendistroforelasticsearch.security.http.HTTPProxyAuthenticator;
import com.amazon.opendistroforelasticsearch.security.ssl.transport.PrincipalExtractor;
import com.amazon.opendistroforelasticsearch.security.transport.InterClusterRequestEvaluator;
public enum ModuleType implements Serializable {
REST_MANAGEMENT_API("REST Management API", "com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions", Boolean.TRUE),
DLSFLS("Document- and Field-Level Security", "com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper", Boolean.TRUE),
AUDITLOG("Audit Logging", "com.amazon.opendistroforelasticsearch.security.auditlog.impl.AuditLogImpl", Boolean.TRUE),
MULTITENANCY("Kibana Multitenancy", "com.amazon.opendistroforelasticsearch.security.configuration.PrivilegesInterceptorImpl", Boolean.TRUE),
LDAP_AUTHENTICATION_BACKEND("LDAP authentication backend", "com.amazon.dlic.auth.ldap.backend.LDAPAuthenticationBackend", Boolean.TRUE),
LDAP_AUTHORIZATION_BACKEND("LDAP authorization backend", "com.amazon.dlic.auth.ldap.backend.LDAPAuthorizationBackend", Boolean.TRUE),
KERBEROS_AUTHENTICATION_BACKEND("Kerberos authentication backend", "com.amazon.dlic.auth.http.kerberos.HTTPSpnegoAuthenticator", Boolean.TRUE),
JWT_AUTHENTICATION_BACKEND("JWT authentication backend", "com.amazon.dlic.auth.http.jwt.HTTPJwtAuthenticator", Boolean.TRUE),
OPENID_AUTHENTICATION_BACKEND("OpenID authentication backend", "com.amazon.dlic.auth.http.jwt.keybyoidc.HTTPJwtKeyByOpenIdConnectAuthenticator", Boolean.TRUE),
SAML_AUTHENTICATION_BACKEND("SAML authentication backend", "com.amazon.dlic.auth.http.saml.HTTPSamlAuthenticator", Boolean.TRUE),
INTERNAL_USERS_AUTHENTICATION_BACKEND("Internal users authentication backend", InternalAuthenticationBackend.class.getName(), Boolean.FALSE),
NOOP_AUTHENTICATION_BACKEND("Noop authentication backend", NoOpAuthenticationBackend.class.getName(), Boolean.FALSE),
NOOP_AUTHORIZATION_BACKEND("Noop authorization backend", NoOpAuthorizationBackend.class.getName(), Boolean.FALSE),
HTTP_BASIC_AUTHENTICATOR("HTTP Basic Authenticator", HTTPBasicAuthenticator.class.getName(), Boolean.FALSE),
HTTP_PROXY_AUTHENTICATOR("HTTP Proxy Authenticator", HTTPProxyAuthenticator.class.getName(), Boolean.FALSE),
HTTP_CLIENTCERT_AUTHENTICATOR("HTTP Client Certificate Authenticator", HTTPClientCertAuthenticator.class.getName(), Boolean.FALSE),
CUSTOM_HTTP_AUTHENTICATOR("Custom HTTP authenticator", null, Boolean.TRUE),
CUSTOM_AUTHENTICATION_BACKEND("Custom authentication backend", null, Boolean.TRUE),
CUSTOM_AUTHORIZATION_BACKEND("Custom authorization backend", null, Boolean.TRUE),
CUSTOM_INTERCLUSTER_REQUEST_EVALUATOR("Intercluster Request Evaluator", null, Boolean.FALSE),
CUSTOM_PRINCIPAL_EXTRACTOR("TLS Principal Extractor", null, Boolean.FALSE),
COMPLIANCE("Compliance", "com.amazon.opendistroforelasticsearch.security.compliance.ComplianceIndexingOperationListenerImpl", Boolean.TRUE),
UNKNOWN("Unknown type", null, Boolean.TRUE);
private String description;
private String defaultImplClass;
private Boolean isEnterprise = Boolean.TRUE;
private static Map<String, ModuleType> modulesMap = new HashMap<>();
static{
for(ModuleType module : ModuleType.values()) {
if (module.defaultImplClass != null) {
modulesMap.put(module.getDefaultImplClass(), module);
}
}
}
private ModuleType(String description, String defaultImplClass, Boolean isEnterprise) {
this.description = description;
this.defaultImplClass = defaultImplClass;
this.isEnterprise = isEnterprise;
}
public static ModuleType getByDefaultImplClass(Class<?> clazz) {
ModuleType moduleType = modulesMap.get(clazz.getName());
if(moduleType == null) {
if(HTTPAuthenticator.class.isAssignableFrom(clazz)) {
moduleType = ModuleType.CUSTOM_HTTP_AUTHENTICATOR;
}
if(AuthenticationBackend.class.isAssignableFrom(clazz)) {
moduleType = ModuleType.CUSTOM_AUTHENTICATION_BACKEND;
}
if(AuthorizationBackend.class.isAssignableFrom(clazz)) {
moduleType = ModuleType.CUSTOM_AUTHORIZATION_BACKEND;
}
if(InterClusterRequestEvaluator.class.isAssignableFrom(clazz)) {
moduleType = ModuleType.CUSTOM_INTERCLUSTER_REQUEST_EVALUATOR;
}
if(PrincipalExtractor.class.isAssignableFrom(clazz)) {
moduleType = ModuleType.CUSTOM_PRINCIPAL_EXTRACTOR;
}
}
if(moduleType == null) {
moduleType = ModuleType.UNKNOWN;
}
return moduleType;
}
public String getDescription() {
return this.description;
}
public String getDefaultImplClass() {
return defaultImplClass;
}
public Boolean isEnterprise() {
return isEnterprise;
}
}
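The lookup strategy in `getByDefaultImplClass` — an exact class-name map built once in a static initializer, then an interface-based fallback for custom implementations, then `UNKNOWN` — is a reusable pattern. `TypeLookup` is a hypothetical miniature of it with stand-in interfaces, not the plugin's actual SPI:

```java
import java.util.HashMap;
import java.util.Map;

public class TypeLookup {
    public interface Authenticator {}               // stand-ins for the plugin's SPI interfaces
    public interface Authorizer {}
    public static class BuiltInAuth implements Authenticator {}
    public static class ThirdPartyAuthz implements Authorizer {}

    public enum Kind { BUILT_IN_AUTH, CUSTOM_AUTHENTICATOR, CUSTOM_AUTHORIZER, UNKNOWN }

    private static final Map<String, Kind> BY_CLASS_NAME = new HashMap<>();
    static {
        // Like ModuleType's modulesMap: register known default implementations once.
        BY_CLASS_NAME.put(BuiltInAuth.class.getName(), Kind.BUILT_IN_AUTH);
    }

    // Mirrors ModuleType.getByDefaultImplClass: exact name match first,
    // then classify unknown classes by the interface they implement.
    public static Kind byImplClass(Class<?> clazz) {
        Kind k = BY_CLASS_NAME.get(clazz.getName());
        if (k == null && Authenticator.class.isAssignableFrom(clazz)) {
            k = Kind.CUSTOM_AUTHENTICATOR;
        }
        if (k == null && Authorizer.class.isAssignableFrom(clazz)) {
            k = Kind.CUSTOM_AUTHORIZER;
        }
        return k == null ? Kind.UNKNOWN : k;
    }

    public static void main(String[] args) {
        System.out.println(byImplClass(BuiltInAuth.class));     // BUILT_IN_AUTH
        System.out.println(byImplClass(ThirdPartyAuthz.class)); // CUSTOM_AUTHORIZER
        System.out.println(byImplClass(String.class));          // UNKNOWN
    }
}
```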


@ -0,0 +1,50 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import org.elasticsearch.common.xcontent.DeprecationHandler;
public class OpenDistroSecurityDeprecationHandler {
public final static DeprecationHandler INSTANCE = new DeprecationHandler() {
@Override
public void usedDeprecatedField(String usedName, String replacedWith) {
throw new UnsupportedOperationException("deprecated fields not supported here but got ["
+ usedName + "] which is a deprecated name for [" + replacedWith + "]");
}
@Override
public void usedDeprecatedName(String usedName, String modernName) {
throw new UnsupportedOperationException("deprecated fields not supported here but got ["
+ usedName + "] which has been replaced with [" + modernName + "]");
}
};
}


@ -0,0 +1,91 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
public final class OpenDistroSecurityUtils {
protected final static Logger log = LogManager.getLogger(OpenDistroSecurityUtils.class);
private OpenDistroSecurityUtils() {
}
public static String evalMap(final Map<String,Set<String>> map, final String index) {
if (map == null) {
return null;
}
if (map.get(index) != null) {
return index;
} else if (map.get("*") != null) {
return "*";
}
if (map.get("_all") != null) {
return "_all";
}
//regex
for(final String key: map.keySet()) {
if(WildcardMatcher.containsWildcard(key)
&& WildcardMatcher.match(key, index)) {
return key;
}
}
return null;
}
@SafeVarargs
public static <T> Map<T, T> mapFromArray(T ... keyValues) {
if(keyValues == null) {
return Collections.emptyMap();
}
if (keyValues.length % 2 != 0) {
log.error("Expected even number of key/value pairs, got {}.", Arrays.toString(keyValues));
return null;
}
Map<T, T> map = new HashMap<>();
for(int i = 0; i<keyValues.length; i+=2) {
map.put(keyValues[i], keyValues[i+1]);
}
return map;
}
}
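The precedence in `evalMap` — exact index name, then `"*"`, then `"_all"`, then the first wildcard key that matches — can be demonstrated without the plugin's `WildcardMatcher`. `EvalMapSketch` is a hypothetical sketch in which a tiny glob-to-regex translation stands in for it:

```java
import java.util.Map;
import java.util.Set;
import java.util.regex.Pattern;

public class EvalMapSketch {
    // Tiny stand-in for WildcardMatcher: translate '*'/'?' globs to regex.
    static boolean globMatch(String pattern, String candidate) {
        String regex = ("\\Q" + pattern + "\\E").replace("*", "\\E.*\\Q").replace("?", "\\E.\\Q");
        return Pattern.matches(regex, candidate);
    }

    // Same precedence as OpenDistroSecurityUtils.evalMap:
    // exact index > "*" > "_all" > first wildcard key that matches.
    public static String evalMap(Map<String, Set<String>> map, String index) {
        if (map == null) {
            return null;
        }
        if (map.containsKey(index)) {
            return index;
        }
        if (map.containsKey("*")) {
            return "*";
        }
        if (map.containsKey("_all")) {
            return "_all";
        }
        for (String key : map.keySet()) {
            if ((key.contains("*") || key.contains("?")) && globMatch(key, index)) {
                return key;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> m = Map.of(
                "logs-*", Set.of("read"),
                "metrics", Set.of("write"));
        System.out.println(evalMap(m, "metrics"));   // metrics (exact match wins)
        System.out.println(evalMap(m, "logs-2019")); // logs-*  (wildcard fallback)
    }
}
```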


@ -0,0 +1,388 @@
/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.InvalidAlgorithmParameterException;
import java.security.InvalidKeyException;
import java.security.KeyException;
import java.security.KeyFactory;
import java.security.KeyStore;
import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.SecureRandom;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.Collection;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.crypto.Cipher;
import javax.crypto.EncryptedPrivateKeyInfo;
import javax.crypto.NoSuchPaddingException;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.bouncycastle.util.encoders.Base64;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
public final class PemKeyReader {
//private static final String[] EMPTY_STRING_ARRAY = new String[0];
protected static final Logger log = LogManager.getLogger(PemKeyReader.class);
static final String JKS = "JKS";
static final String PKCS12 = "PKCS12";
private static final Pattern KEY_PATTERN = Pattern.compile(
"-+BEGIN\\s+.*PRIVATE\\s+KEY[^-]*-+(?:\\s|\\r|\\n)+" + // Header
"([a-z0-9+/=\\r\\n]+)" + // Base64 text
"-+END\\s+.*PRIVATE\\s+KEY[^-]*-+", // Footer
Pattern.CASE_INSENSITIVE);
private static byte[] readPrivateKey(File file) throws KeyException {
try {
InputStream in = new FileInputStream(file);
try {
return readPrivateKey(in);
} finally {
safeClose(in);
}
} catch (FileNotFoundException e) {
throw new KeyException("could not find key file: " + file);
}
}
private static byte[] readPrivateKey(InputStream in) throws KeyException {
String content;
try {
content = readContent(in);
} catch (IOException e) {
throw new KeyException("failed to read key input stream", e);
}
Matcher m = KEY_PATTERN.matcher(content);
if (!m.find()) {
throw new KeyException("could not find a PKCS #8 private key in input stream" +
" (see http://netty.io/wiki/sslcontextbuilder-and-private-key.html for more information)");
}
return Base64.decode(m.group(1));
}
private static String readContent(InputStream in) throws IOException {
ByteArrayOutputStream out = new ByteArrayOutputStream();
try {
byte[] buf = new byte[8192];
for (;;) {
int ret = in.read(buf);
if (ret < 0) {
break;
}
out.write(buf, 0, ret);
}
return out.toString(StandardCharsets.US_ASCII.name());
} finally {
safeClose(out);
}
}
private static void safeClose(InputStream in) {
try {
in.close();
} catch (IOException e) {
//ignore
}
}
private static void safeClose(OutputStream out) {
try {
out.close();
} catch (IOException e) {
//ignore
}
}
public static PrivateKey toPrivateKey(File keyFile, String keyPassword) throws NoSuchAlgorithmException, NoSuchPaddingException,
InvalidKeySpecException, InvalidAlgorithmParameterException, KeyException, IOException {
if (keyFile == null) {
return null;
}
return getPrivateKeyFromByteBuffer(PemKeyReader.readPrivateKey(keyFile), keyPassword);
}
public static PrivateKey toPrivateKey(InputStream in, String keyPassword) throws NoSuchAlgorithmException, NoSuchPaddingException,
InvalidKeySpecException, InvalidAlgorithmParameterException, KeyException, IOException {
if (in == null) {
return null;
}
return getPrivateKeyFromByteBuffer(PemKeyReader.readPrivateKey(in), keyPassword);
}
private static PrivateKey getPrivateKeyFromByteBuffer(byte[] encodedKey, String keyPassword) throws NoSuchAlgorithmException,
NoSuchPaddingException, InvalidKeySpecException, InvalidAlgorithmParameterException, KeyException, IOException {
PKCS8EncodedKeySpec encodedKeySpec = generateKeySpec(keyPassword == null ? null : keyPassword.toCharArray(), encodedKey);
try {
return KeyFactory.getInstance("RSA").generatePrivate(encodedKeySpec);
} catch (InvalidKeySpecException ignore) {
try {
return KeyFactory.getInstance("DSA").generatePrivate(encodedKeySpec);
} catch (InvalidKeySpecException ignore2) {
try {
return KeyFactory.getInstance("EC").generatePrivate(encodedKeySpec);
} catch (InvalidKeySpecException e) {
throw new InvalidKeySpecException("Neither RSA, DSA nor EC worked", e);
}
}
}
}
private static PKCS8EncodedKeySpec generateKeySpec(char[] password, byte[] key)
throws IOException, NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeySpecException,
InvalidKeyException, InvalidAlgorithmParameterException {
if (password == null) {
return new PKCS8EncodedKeySpec(key);
}
EncryptedPrivateKeyInfo encryptedPrivateKeyInfo = new EncryptedPrivateKeyInfo(key);
SecretKeyFactory keyFactory = SecretKeyFactory.getInstance(encryptedPrivateKeyInfo.getAlgName());
PBEKeySpec pbeKeySpec = new PBEKeySpec(password);
SecretKey pbeKey = keyFactory.generateSecret(pbeKeySpec);
Cipher cipher = Cipher.getInstance(encryptedPrivateKeyInfo.getAlgName());
cipher.init(Cipher.DECRYPT_MODE, pbeKey, encryptedPrivateKeyInfo.getAlgParameters());
return encryptedPrivateKeyInfo.getKeySpec(cipher);
}
public static X509Certificate loadCertificateFromFile(String file) throws Exception {
if(file == null) {
return null;
}
CertificateFactory fact = CertificateFactory.getInstance("X.509");
try(FileInputStream is = new FileInputStream(file)) {
return (X509Certificate) fact.generateCertificate(is);
}
}
public static X509Certificate loadCertificateFromStream(InputStream in) throws Exception {
if(in == null) {
return null;
}
CertificateFactory fact = CertificateFactory.getInstance("X.509");
return (X509Certificate) fact.generateCertificate(in);
}
public static KeyStore loadKeyStore(String storePath, String keyStorePassword, String type) throws Exception {
if(storePath == null) {
return null;
}
if(type == null || (!type.toUpperCase().equals(JKS) && !type.toUpperCase().equals(PKCS12))) {
type = JKS;
}
final KeyStore store = KeyStore.getInstance(type.toUpperCase());
store.load(new FileInputStream(storePath), keyStorePassword==null?null:keyStorePassword.toCharArray());
return store;
}
public static PrivateKey loadKeyFromFile(String password, String keyFile) throws Exception {
if(keyFile == null) {
return null;
}
return PemKeyReader.toPrivateKey(new File(keyFile), password);
}
public static PrivateKey loadKeyFromStream(String password, InputStream in) throws Exception {
if(in == null) {
return null;
}
return PemKeyReader.toPrivateKey(in, password);
}
public static void checkPath(String keystoreFilePath, String fileNameLogOnly) {
if (keystoreFilePath == null || keystoreFilePath.length() == 0) {
throw new ElasticsearchException("Empty file path for "+fileNameLogOnly);
}
if (Files.isDirectory(Paths.get(keystoreFilePath), LinkOption.NOFOLLOW_LINKS)) {
throw new ElasticsearchException("Is a directory: " + keystoreFilePath + ". Expected a file for " + fileNameLogOnly);
}
if(!Files.isReadable(Paths.get(keystoreFilePath))) {
throw new ElasticsearchException("Unable to read " + keystoreFilePath + " ("+Paths.get(keystoreFilePath)+"). Please make sure this file exists and is readable with regard to permissions. Property: "+fileNameLogOnly);
}
}
public static X509Certificate[] loadCertificatesFromFile(String file) throws Exception {
if(file == null) {
return null;
}
CertificateFactory fact = CertificateFactory.getInstance("X.509");
try(FileInputStream is = new FileInputStream(file)) {
Collection<? extends Certificate> certs = fact.generateCertificates(is);
X509Certificate[] x509Certs = new X509Certificate[certs.size()];
int i=0;
for(Certificate cert: certs) {
x509Certs[i++] = (X509Certificate) cert;
}
return x509Certs;
}
}
public static X509Certificate[] loadCertificatesFromStream(InputStream in) throws Exception {
if(in == null) {
return null;
}
CertificateFactory fact = CertificateFactory.getInstance("X.509");
Collection<? extends Certificate> certs = fact.generateCertificates(in);
X509Certificate[] x509Certs = new X509Certificate[certs.size()];
int i=0;
for(Certificate cert: certs) {
x509Certs[i++] = (X509Certificate) cert;
}
return x509Certs;
}
public static InputStream resolveStream(String propName, Settings settings) {
final String content = settings.get(propName, null);
if(content == null) {
return null;
}
return new ByteArrayInputStream(content.getBytes(StandardCharsets.US_ASCII));
}
public static String resolve(String propName, Settings settings, Path configPath, boolean mustBeValid) {
final String originalPath = settings.get(propName, null);
return resolve(originalPath, propName, settings, configPath, mustBeValid);
}
public static String resolve(String originalPath, String propName, Settings settings, Path configPath, boolean mustBeValid) {
log.debug("Path is {}", originalPath);
String path = originalPath;
final Environment env = new Environment(settings, configPath);
if(originalPath != null && originalPath.length() > 0) {
path = env.configFile().resolve(originalPath).toAbsolutePath().toString();
log.debug("Resolved {} to {} against {}", originalPath, path, env.configFile().toAbsolutePath().toString());
}
if(mustBeValid) {
checkPath(path, propName);
}
if("".equals(path)) {
path = null;
}
return path;
}
public static KeyStore toTruststore(final String trustCertificatesAliasPrefix, final X509Certificate[] trustCertificates) throws Exception {
if(trustCertificates == null) {
return null;
}
KeyStore ks = KeyStore.getInstance(JKS);
ks.load(null);
if(trustCertificates.length > 0) {
for (int i = 0; i < trustCertificates.length; i++) {
X509Certificate x509Certificate = trustCertificates[i];
ks.setCertificateEntry(trustCertificatesAliasPrefix+"_"+i, x509Certificate);
}
}
return ks;
}
public static KeyStore toKeystore(final String authenticationCertificateAlias, final char[] password, final X509Certificate authenticationCertificate[], final PrivateKey authenticationKey) throws Exception {
if(authenticationCertificateAlias != null && authenticationCertificate != null && authenticationKey != null) {
KeyStore ks = KeyStore.getInstance(JKS);
ks.load(null, null);
ks.setKeyEntry(authenticationCertificateAlias, authenticationKey, password, authenticationCertificate);
return ks;
} else {
return null;
}
}
public static char[] randomChars(int len) {
final SecureRandom r = new SecureRandom();
final char[] ret = new char[len];
for(int i=0; i<len;i++) {
ret[i] = (char)(r.nextInt(26) + 'a');
}
return ret;
}
private PemKeyReader() { }
}
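The RSA/DSA/EC fallback in `getPrivateKeyFromByteBuffer` above can be sketched in isolation with plain JDK crypto. This is a minimal, self-contained illustration of the technique, not part of the plugin; the class and method names (`KeyAlgoFallbackDemo`, `decode`, `roundTrip`) are invented for the example.

```java
import java.security.KeyFactory;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.PKCS8EncodedKeySpec;

public class KeyAlgoFallbackDemo {
    // Mirrors the fallback idea: try RSA first, then DSA, then EC,
    // since a PKCS#8 blob does not announce its algorithm to KeyFactory.
    static PrivateKey decode(byte[] pkcs8) throws Exception {
        PKCS8EncodedKeySpec spec = new PKCS8EncodedKeySpec(pkcs8);
        for (String alg : new String[] {"RSA", "DSA", "EC"}) {
            try {
                return KeyFactory.getInstance(alg).generatePrivate(spec);
            } catch (InvalidKeySpecException ignore) {
                // wrong algorithm for this key material; try the next one
            }
        }
        throw new InvalidKeySpecException("Neither RSA, DSA nor EC worked");
    }

    static String roundTrip(String alg) throws Exception {
        // Generate a throwaway key, encode it to PKCS#8, decode it back.
        byte[] encoded = KeyPairGenerator.getInstance(alg).generateKeyPair()
                .getPrivate().getEncoded();
        return decode(encoded).getAlgorithm();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("RSA")); // RSA
        System.out.println(roundTrip("EC"));  // EC
    }
}
```

The real method also handles encrypted keys via `generateKeySpec` before this fallback runs; the sketch only covers the unencrypted path.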


@@ -0,0 +1,378 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.io.InputStream;
import java.lang.reflect.Constructor;
import java.net.URL;
import java.nio.file.Path;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.jar.Attributes;
import java.util.jar.Manifest;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.index.IndexService;
import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestHandler;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.auditlog.AuditLog;
import com.amazon.opendistroforelasticsearch.security.auditlog.NullAuditLog;
import com.amazon.opendistroforelasticsearch.security.compliance.ComplianceConfig;
import com.amazon.opendistroforelasticsearch.security.compliance.ComplianceIndexingOperationListener;
import com.amazon.opendistroforelasticsearch.security.configuration.AdminDNs;
import com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsRequestValve;
import com.amazon.opendistroforelasticsearch.security.configuration.IndexBaseConfigurationRepository;
import com.amazon.opendistroforelasticsearch.security.privileges.PrivilegesEvaluator;
import com.amazon.opendistroforelasticsearch.security.privileges.PrivilegesInterceptor;
import com.amazon.opendistroforelasticsearch.security.ssl.transport.DefaultPrincipalExtractor;
import com.amazon.opendistroforelasticsearch.security.ssl.transport.PrincipalExtractor;
import com.amazon.opendistroforelasticsearch.security.transport.DefaultInterClusterRequestEvaluator;
import com.amazon.opendistroforelasticsearch.security.transport.InterClusterRequestEvaluator;
public class ReflectionHelper {
protected static final Logger log = LogManager.getLogger(ReflectionHelper.class);
private static Set<ModuleInfo> modulesLoaded = new HashSet<>();
public static Set<ModuleInfo> getModulesLoaded() {
return Collections.unmodifiableSet(modulesLoaded);
}
private static boolean enterpriseModulesDisabled() {
return !enterpriseModulesEnabled;
}
public static void registerMngtRestApiHandler(final Settings settings) {
if (enterpriseModulesDisabled()) {
return;
}
if(!settings.getAsBoolean("http.enabled", true)) {
try {
final Class<?> clazz = Class.forName("com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions");
addLoadedModule(clazz);
} catch (final Throwable e) {
log.warn("Unable to register Rest Management Api Module due to {}", e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
}
}
}
@SuppressWarnings("unchecked")
public static Collection<RestHandler> instantiateMngtRestApiHandler(final Settings settings, final Path configPath, final RestController restController,
final Client localClient, final AdminDNs adminDns, final IndexBaseConfigurationRepository cr, final ClusterService cs, final PrincipalExtractor principalExtractor,
final PrivilegesEvaluator evaluator, final ThreadPool threadPool, final AuditLog auditlog) {
if (enterpriseModulesDisabled()) {
return Collections.emptyList();
}
try {
final Class<?> clazz = Class.forName("com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions");
final Collection<RestHandler> ret = (Collection<RestHandler>) clazz
.getDeclaredMethod("getHandler", Settings.class, Path.class, RestController.class, Client.class, AdminDNs.class, IndexBaseConfigurationRepository.class,
ClusterService.class, PrincipalExtractor.class, PrivilegesEvaluator.class, ThreadPool.class, AuditLog.class)
.invoke(null, settings, configPath, restController, localClient, adminDns, cr, cs, principalExtractor, evaluator, threadPool, auditlog);
addLoadedModule(clazz);
return ret;
} catch (final Throwable e) {
log.warn("Unable to enable Rest Management Api Module due to {}", e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
return Collections.emptyList();
}
}
@SuppressWarnings("rawtypes")
public static Constructor instantiateDlsFlsConstructor() {
if (enterpriseModulesDisabled()) {
return null;
}
try {
final Class<?> clazz = Class.forName("com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper");
final Constructor<?> ret = clazz.getConstructor(IndexService.class,
Settings.class, AdminDNs.class, ClusterService.class, AuditLog.class,
ComplianceIndexingOperationListener.class, ComplianceConfig.class);
addLoadedModule(clazz);
return ret;
} catch (final Throwable e) {
log.warn("Unable to enable DLS/FLS Module due to {}", e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
return null;
}
}
public static DlsFlsRequestValve instantiateDlsFlsValve() {
if (enterpriseModulesDisabled()) {
return new DlsFlsRequestValve.NoopDlsFlsRequestValve();
}
try {
final Class<?> clazz = Class.forName("com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsValveImpl");
final DlsFlsRequestValve ret = (DlsFlsRequestValve) clazz.newInstance();
return ret;
} catch (final Throwable e) {
log.warn("Unable to enable DLS/FLS Valve Module due to {}", e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
return new DlsFlsRequestValve.NoopDlsFlsRequestValve();
}
}
public static AuditLog instantiateAuditLog(final Settings settings, final Path configPath, final Client localClient, final ThreadPool threadPool,
final IndexNameExpressionResolver resolver, final ClusterService clusterService) {
if (enterpriseModulesDisabled()) {
return new NullAuditLog();
}
try {
final Class<?> clazz = Class.forName("com.amazon.opendistroforelasticsearch.security.auditlog.impl.AuditLogImpl");
final AuditLog impl = (AuditLog) clazz
.getConstructor(Settings.class, Path.class, Client.class, ThreadPool.class, IndexNameExpressionResolver.class, ClusterService.class)
.newInstance(settings, configPath, localClient, threadPool, resolver, clusterService);
addLoadedModule(clazz);
return impl;
} catch (final Throwable e) {
log.warn("Unable to enable Auditlog Module due to {}", e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
return new NullAuditLog();
}
}
public static ComplianceIndexingOperationListener instantiateComplianceListener(ComplianceConfig complianceConfig, AuditLog auditlog) {
if (enterpriseModulesDisabled()) {
return new ComplianceIndexingOperationListener();
}
try {
final Class<?> clazz = Class.forName("com.amazon.opendistroforelasticsearch.security.compliance.ComplianceIndexingOperationListenerImpl");
final ComplianceIndexingOperationListener impl = (ComplianceIndexingOperationListener) clazz
.getConstructor(ComplianceConfig.class, AuditLog.class)
.newInstance(complianceConfig, auditlog);
addLoadedModule(clazz);
return impl;
} catch (final ClassNotFoundException e) {
//TODO produce a single warn msg, this here is issued for every index
log.debug("Unable to enable Compliance Module due to {}", e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
return new ComplianceIndexingOperationListener();
} catch (final Throwable e) {
log.error("Unable to enable Compliance Module due to {}", e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
return new ComplianceIndexingOperationListener();
}
}
public static PrivilegesInterceptor instantiatePrivilegesInterceptorImpl(final IndexNameExpressionResolver resolver, final ClusterService clusterService,
final Client localClient, final ThreadPool threadPool) {
final PrivilegesInterceptor noop = new PrivilegesInterceptor(resolver, clusterService, localClient, threadPool);
if (enterpriseModulesDisabled()) {
return noop;
}
try {
final Class<?> clazz = Class.forName("com.amazon.opendistroforelasticsearch.security.configuration.PrivilegesInterceptorImpl");
final PrivilegesInterceptor ret = (PrivilegesInterceptor) clazz.getConstructor(IndexNameExpressionResolver.class, ClusterService.class, Client.class, ThreadPool.class)
.newInstance(resolver, clusterService, localClient, threadPool);
addLoadedModule(clazz);
return ret;
} catch (final Throwable e) {
log.warn("Unable to enable Kibana Module due to {}", e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
return noop;
}
}
@SuppressWarnings("unchecked")
public static <T> T instantiateAAA(final String clazz, final Settings settings, final Path configPath, final boolean checkEnterprise) {
if (checkEnterprise && enterpriseModulesDisabled()) {
throw new ElasticsearchException("Cannot load '{}' because enterprise modules are disabled", clazz);
}
try {
final Class<?> clazz0 = Class.forName(clazz);
final T ret = (T) clazz0.getConstructor(Settings.class, Path.class).newInstance(settings, configPath);
addLoadedModule(clazz0);
return ret;
} catch (final Throwable e) {
log.warn("Unable to enable '{}' due to {}", clazz, e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
throw new ElasticsearchException(e);
}
}
public static InterClusterRequestEvaluator instantiateInterClusterRequestEvaluator(final String clazz, final Settings settings) {
try {
final Class<?> clazz0 = Class.forName(clazz);
final InterClusterRequestEvaluator ret = (InterClusterRequestEvaluator) clazz0.getConstructor(Settings.class).newInstance(settings);
addLoadedModule(clazz0);
return ret;
} catch (final Throwable e) {
log.warn("Unable to load inter cluster request evaluator '{}' due to {}", clazz, e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
return new DefaultInterClusterRequestEvaluator(settings);
}
}
public static PrincipalExtractor instantiatePrincipalExtractor(final String clazz) {
try {
final Class<?> clazz0 = Class.forName(clazz);
final PrincipalExtractor ret = (PrincipalExtractor) clazz0.newInstance();
addLoadedModule(clazz0);
return ret;
} catch (final Throwable e) {
log.warn("Unable to load principal extractor '{}' due to {}", clazz, e.toString());
if(log.isDebugEnabled()) {
log.debug("Stacktrace: ",e);
}
return new DefaultPrincipalExtractor();
}
}
public static boolean isEnterpriseAAAModule(final String clazz) {
boolean enterpriseModuleInstalled = false;
if (clazz.equalsIgnoreCase("com.amazon.dlic.auth.ldap.backend.LDAPAuthorizationBackend")) {
enterpriseModuleInstalled = true;
}
if (clazz.equalsIgnoreCase("com.amazon.dlic.auth.ldap.backend.LDAPAuthenticationBackend")) {
enterpriseModuleInstalled = true;
}
if (clazz.equalsIgnoreCase("com.amazon.dlic.auth.http.jwt.HTTPJwtAuthenticator")) {
enterpriseModuleInstalled = true;
}
if (clazz.equalsIgnoreCase("com.amazon.dlic.auth.http.jwt.keybyoidc.HTTPJwtKeyByOpenIdConnectAuthenticator")) {
enterpriseModuleInstalled = true;
}
if (clazz.equalsIgnoreCase("com.amazon.dlic.auth.http.kerberos.HTTPSpnegoAuthenticator")) {
enterpriseModuleInstalled = true;
}
if (clazz.equalsIgnoreCase("com.amazon.dlic.auth.http.saml.HTTPSamlAuthenticator")) {
enterpriseModuleInstalled = true;
}
return enterpriseModuleInstalled;
}
public static boolean addLoadedModule(Class<?> clazz) {
ModuleInfo moduleInfo = getModuleInfo(clazz);
if (log.isDebugEnabled()) {
log.debug("Loaded module {}", moduleInfo);
}
return modulesLoaded.add(moduleInfo);
}
private static boolean enterpriseModulesEnabled;
// TODO static hack
public static void init(final boolean enterpriseModulesEnabled) {
ReflectionHelper.enterpriseModulesEnabled = enterpriseModulesEnabled;
}
private static ModuleInfo getModuleInfo(final Class<?> impl) {
ModuleType moduleType = ModuleType.getByDefaultImplClass(impl);
ModuleInfo moduleInfo = new ModuleInfo(moduleType, impl.getName());
try {
final String classPath = impl.getResource(impl.getSimpleName() + ".class").toString();
moduleInfo.setClasspath(classPath);
if (!classPath.startsWith("jar")) {
return moduleInfo;
}
final String manifestPath = classPath.substring(0, classPath.lastIndexOf("!") + 1) + "/META-INF/MANIFEST.MF";
try (InputStream stream = new URL(manifestPath).openStream()) {
final Manifest manifest = new Manifest(stream);
final Attributes attr = manifest.getMainAttributes();
moduleInfo.setVersion(attr.getValue("Implementation-Version"));
moduleInfo.setBuildTime(attr.getValue("Build-Time"));
moduleInfo.setGitsha1(attr.getValue("git-sha1"));
}
} catch (final Throwable e) {
log.error("Unable to retrieve module info for " + impl, e);
}
return moduleInfo;
}
}
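The pattern `ReflectionHelper` repeats throughout (load an optional class by name, instantiate it reflectively, fall back to a no-op on any failure) can be sketched self-contained, as in `instantiateDlsFlsValve`. The names below (`ReflectionFallbackDemo`, `Valve`, `NoopValve`) are illustrative, not plugin types.

```java
import java.util.function.Supplier;

public class ReflectionFallbackDemo {
    interface Valve { String name(); }
    static class NoopValve implements Valve { public String name() { return "noop"; } }

    // Load an optional implementation by class name; if it is not on the
    // classpath (or fails to construct), degrade gracefully to a fallback.
    static Valve instantiate(String className, Supplier<Valve> fallback) {
        try {
            Class<?> clazz = Class.forName(className);
            return (Valve) clazz.getDeclaredConstructor().newInstance();
        } catch (Throwable e) {
            // catching Throwable, as the helper does, so missing optional
            // jars (NoClassDefFoundError etc.) also trigger the fallback
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        Valve v = instantiate("com.example.MissingValveImpl", NoopValve::new);
        System.out.println(v.name()); // noop
    }
}
```

The real helper additionally records each successfully loaded class via `addLoadedModule` so `getModulesLoaded` can report what is active.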


@@ -0,0 +1,109 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.List;
import java.util.Objects;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.SpecialPermission;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.repositories.RepositoriesService;
import org.elasticsearch.repositories.Repository;
import org.elasticsearch.snapshots.SnapshotId;
import org.elasticsearch.snapshots.SnapshotInfo;
import org.elasticsearch.snapshots.SnapshotUtils;
import org.elasticsearch.threadpool.ThreadPool;
import com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin;
public class SnapshotRestoreHelper {
protected static final Logger log = LogManager.getLogger(SnapshotRestoreHelper.class);
public static List<String> resolveOriginalIndices(RestoreSnapshotRequest restoreRequest) {
final SnapshotInfo snapshotInfo = getSnapshotInfo(restoreRequest);
if (snapshotInfo == null) {
log.warn("snapshot repository '" + restoreRequest.repository() + "', snapshot '" + restoreRequest.snapshot() + "' not found");
return null;
} else {
return SnapshotUtils.filterIndices(snapshotInfo.indices(), restoreRequest.indices(), restoreRequest.indicesOptions());
}
}
public static SnapshotInfo getSnapshotInfo(RestoreSnapshotRequest restoreRequest) {
final RepositoriesService repositoriesService = Objects.requireNonNull(OpenDistroSecurityPlugin.GuiceHolder.getRepositoriesService(), "RepositoriesService not initialized");
final Repository repository = repositoriesService.repository(restoreRequest.repository());
final String threadName = Thread.currentThread().getName();
SnapshotInfo snapshotInfo = null;
try {
setCurrentThreadName(ThreadPool.Names.GENERIC);
for (final SnapshotId snapshotId : repository.getRepositoryData().getSnapshotIds()) {
if (snapshotId.getName().equals(restoreRequest.snapshot())) {
if(log.isDebugEnabled()) {
log.debug("snapshot found: {} (UUID: {})", snapshotId.getName(), snapshotId.getUUID());
}
snapshotInfo = repository.getSnapshotInfo(snapshotId);
break;
}
}
} finally {
setCurrentThreadName(threadName);
}
return snapshotInfo;
}
private static void setCurrentThreadName(final String name) {
final SecurityManager sm = System.getSecurityManager();
if (sm != null) {
sm.checkPermission(new SpecialPermission());
}
AccessController.doPrivileged(new PrivilegedAction<Object>() {
@Override
public Object run() {
Thread.currentThread().setName(name);
return null;
}
});
}
}
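The temporary thread-rename trick in `getSnapshotInfo`/`setCurrentThreadName` (rename inside a privileged block, do the work, restore the original name in a `finally`) can be sketched in plain JDK code. This assumes no `SecurityManager` is installed, so the `SpecialPermission` check from the original is omitted; `ThreadRenameDemo` and `runAs` are invented names.

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

public class ThreadRenameDemo {
    // Temporarily rename the current thread (inside a privileged block,
    // as the helper does), run a task, then restore the original name.
    static String runAs(String tempName, Runnable task) {
        String original = Thread.currentThread().getName();
        try {
            AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
                Thread.currentThread().setName(tempName);
                return null;
            });
            task.run();
            return Thread.currentThread().getName(); // name seen while the task ran
        } finally {
            Thread.currentThread().setName(original);
        }
    }

    public static void main(String[] args) {
        String seen = runAs("generic", () -> {});
        System.out.println(seen); // generic
        System.out.println(Thread.currentThread().getName().equals("generic")); // false
    }
}
```

In the helper the temporary name is `ThreadPool.Names.GENERIC`, presumably so repository access looks like it runs on a generic-pool thread rather than a transport thread.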


@@ -0,0 +1,114 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.io.Serializable;
import java.util.Arrays;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.search.SearchRequest;
public class SourceFieldsContext implements Serializable {
private String[] includes;
private String[] excludes;
//private String[] storedFields;
private boolean fetchSource = true;
private static final long serialVersionUID = 1L;
public static boolean isNeeded(SearchRequest request) {
return (request.source() != null && request.source().fetchSource() != null && (request.source().fetchSource().includes() != null || request
.source().fetchSource().excludes() != null))
|| (request.source() != null && request.source().storedFields() != null
&& request.source().storedFields().fieldNames() != null && !request.source().storedFields().fieldNames().isEmpty());
}
public static boolean isNeeded(GetRequest request) {
return (request.fetchSourceContext() != null && (request.fetchSourceContext().includes() != null || request.fetchSourceContext()
.excludes() != null)) || (request.storedFields() != null && request.storedFields().length > 0);
}
public SourceFieldsContext() {
super();
}
public SourceFieldsContext(SearchRequest request) {
if (request.source() != null && request.source().fetchSource() != null) {
includes = request.source().fetchSource().includes();
excludes = request.source().fetchSource().excludes();
fetchSource = request.source().fetchSource().fetchSource();
}
//if (request.source() != null && request.source().storedFields() != null && request.source().storedFields().fieldNames() != null) {
// storedFields = request.source().storedFields().fieldNames().toArray(new String[0]);
//}
}
public SourceFieldsContext(GetRequest request) {
if (request.fetchSourceContext() != null) {
includes = request.fetchSourceContext().includes();
excludes = request.fetchSourceContext().excludes();
fetchSource = request.fetchSourceContext().fetchSource();
}
//storedFields = request.storedFields();
}
public String[] getIncludes() {
return includes;
}
public String[] getExcludes() {
return excludes;
}
//public String[] getStoredFields() {
// return storedFields;
//}
public boolean hasIncludesOrExcludes() {
return (includes != null && includes.length > 0) || (excludes != null && excludes.length > 0);
}
public boolean isFetchSource() {
return fetchSource;
}
@Override
public String toString() {
return "SourceFieldsContext [includes=" + Arrays.toString(includes) + ", excludes=" + Arrays.toString(excludes) + ", fetchSource="
+ fetchSource + "]";
}
}


@@ -0,0 +1,588 @@
/*
* Copyright 2015-2018 _floragunn_ GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.support;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.Stack;
import java.util.regex.Pattern;
public class WildcardMatcher {
private static final int NOT_FOUND = -1;
/**
* Returns true if at least one candidate matches at least one pattern (case-sensitive).
* @param pattern
* @param candidate
* @return
*/
public static boolean matchAny(final String[] pattern, final String[] candidate) {
return matchAny(pattern, candidate, false);
}
public static boolean matchAny(final Collection<String> pattern, final Collection<String> candidate) {
return matchAny(pattern, candidate, false);
}
/**
* Returns true if at least one candidate matches at least one pattern.
*
* @param pattern
* @param candidate
* @param ignoreCase
* @return
*/
public static boolean matchAny(final String[] pattern, final String[] candidate, boolean ignoreCase) {
for (int i = 0; i < pattern.length; i++) {
final String string = pattern[i];
if (matchAny(string, candidate, ignoreCase)) {
return true;
}
}
return false;
}
/**
* Returns true if at least one candidate matches at least one pattern.
*
* @param pattern
* @param candidate
* @param ignoreCase
* @return
*/
public static boolean matchAny(final Collection<String> pattern, final String[] candidate, boolean ignoreCase) {
for (String string: pattern) {
if (matchAny(string, candidate, ignoreCase)) {
return true;
}
}
return false;
}
public static boolean matchAny(final Collection<String> pattern, final Collection<String> candidate, boolean ignoreCase) {
for (String string: pattern) {
if (matchAny(string, candidate, ignoreCase)) {
return true;
}
}
return false;
}
/**
* Returns true if every candidate matches at least one pattern.
*
* @param pattern
* @param candidate
* @return
*/
public static boolean matchAll(final String[] pattern, final String[] candidate) {
for (int i = 0; i < candidate.length; i++) {
final String string = candidate[i];
if (!matchAny(pattern, string)) {
return false;
}
}
return true;
}
/**
* Returns true if there is at least one pattern and every pattern matches at least one candidate.
* @param pattern
* @param candidate
* @return
*/
public static boolean allPatternsMatched(final String[] pattern, final String[] candidate) {
int matchedPatternNum = 0;
for (int i = 0; i < pattern.length; i++) {
final String string = pattern[i];
if (matchAny(string, candidate)) {
matchedPatternNum++;
}
}
return matchedPatternNum == pattern.length && pattern.length > 0;
}
public static boolean matchAny(final String pattern, final String[] candidate) {
return matchAny(pattern, candidate, false);
}
public static boolean matchAny(final String pattern, final Collection<String> candidate) {
return matchAny(pattern, candidate, false);
}
/**
* Returns true if at least one candidate matches the given pattern.
*
* @param pattern
* @param candidate
* @param ignoreCase
* @return
*/
public static boolean matchAny(final String pattern, final String[] candidate, boolean ignoreCase) {
for (int i = 0; i < candidate.length; i++) {
final String string = candidate[i];
if (match(pattern, string, ignoreCase)) {
return true;
}
}
return false;
}
public static boolean matchAny(final String pattern, final Collection<String> candidates, boolean ignoreCase) {
for (String candidate: candidates) {
if (match(pattern, candidate, ignoreCase)) {
return true;
}
}
return false;
}
public static String[] matches(final String pattern, final String[] candidate, boolean ignoreCase) {
final List<String> ret = new ArrayList<String>(candidate.length);
for (int i = 0; i < candidate.length; i++) {
final String string = candidate[i];
if (match(pattern, string, ignoreCase)) {
ret.add(string);
}
}
return ret.toArray(new String[0]);
}
public static List<String> getMatchAny(final String pattern, final String[] candidate) {
final List<String> matches = new ArrayList<String>(candidate.length);
for (int i = 0; i < candidate.length; i++) {
final String string = candidate[i];
if (match(pattern, string)) {
matches.add(string);
}
}
return matches;
}
public static List<String> getMatchAny(final String[] patterns, final String[] candidate) {
final List<String> matches = new ArrayList<String>(candidate.length);
for (int i = 0; i < candidate.length; i++) {
final String string = candidate[i];
if (matchAny(patterns, string)) {
matches.add(string);
}
}
return matches;
}
public static List<String> getMatchAny(final String pattern, final Collection<String> candidate) {
final List<String> matches = new ArrayList<String>(candidate.size());
for (final String string: candidate) {
if (match(pattern, string)) {
matches.add(string);
}
}
return matches;
}
public static List<String> getMatchAny(final String[] patterns, final Collection<String> candidate) {
final List<String> matches = new ArrayList<String>(candidate.size());
for (final String string: candidate) {
if (matchAny(patterns, string)) {
matches.add(string);
}
}
return matches;
}
public static Optional<String> getFirstMatchingPattern(final Collection<String> pattern, final String candidate) {
for (String p : pattern) {
if (match(p, candidate)) {
return Optional.of(p);
}
}
return Optional.empty();
}
    /**
     * Returns true if the candidate matches at least one pattern.
     *
     * @param pattern the patterns to match against
     * @param candidate the candidate string to test
     * @return true if the candidate matches any pattern
     */
public static boolean matchAny(final String pattern[], final String candidate) {
for (int i = 0; i < pattern.length; i++) {
final String string = pattern[i];
if (match(string, candidate)) {
return true;
}
}
return false;
}
    /**
     * Returns true if the candidate matches at least one pattern.
     *
     * @param pattern the patterns to match against
     * @param candidate the candidate string to test
     * @return true if the candidate matches any pattern
     */
public static boolean matchAny(final Collection<String> pattern, final String candidate) {
for (String string: pattern) {
if (match(string, candidate)) {
return true;
}
}
return false;
}
public static boolean match(final String pattern, final String candidate) {
return match(pattern, candidate, false);
}
public static boolean match(String pattern, String candidate, boolean ignoreCase) {
if (pattern == null || candidate == null) {
return false;
}
if(ignoreCase) {
pattern = pattern.toLowerCase();
candidate = candidate.toLowerCase();
}
if (pattern.startsWith("/") && pattern.endsWith("/")) {
// regex
return Pattern.matches("^"+pattern.substring(1, pattern.length() - 1)+"$", candidate);
} else if (pattern.length() == 1 && pattern.charAt(0) == '*') {
return true;
} else if (pattern.indexOf('?') == NOT_FOUND && pattern.indexOf('*') == NOT_FOUND) {
return pattern.equals(candidate);
} else {
return simpleWildcardMatch(pattern, candidate);
}
}
public static boolean containsWildcard(final String pattern) {
if (pattern != null
&& (pattern.indexOf("*") > NOT_FOUND || pattern.indexOf("?") > NOT_FOUND || (pattern.startsWith("/") && pattern
.endsWith("/")))) {
return true;
}
return false;
}
    /**
     * Removes from the set every element that matches the given pattern.
     *
     * @param set the set to modify in place; will be modified
     * @param stringContainingWc a pattern that may contain wildcards
     * @return true if the set was modified
     */
public static boolean wildcardRemoveFromSet(Set<String> set, String stringContainingWc) {
if(set == null || set.isEmpty()) {
return false;
}
if(!containsWildcard(stringContainingWc) && set.contains(stringContainingWc)) {
return set.remove(stringContainingWc);
} else {
boolean modified = false;
Set<String> copy = new HashSet<>(set);
for(String it: copy) {
if(WildcardMatcher.match(stringContainingWc, it)) {
modified = set.remove(it) || modified;
}
}
return modified;
}
}
    /**
     * Retains in the set only the elements that match at least one of the given patterns.
     *
     * @param set the set to modify in place; will be modified
     * @param setContainingWc patterns that may contain wildcards
     * @return true if the set was modified
     */
public static boolean wildcardRetainInSet(Set<String> set, String[] setContainingWc) {
if(set == null || set.isEmpty()) {
return false;
}
boolean modified = false;
Set<String> copy = new HashSet<>(set);
for(String it: copy) {
if(!WildcardMatcher.matchAny(setContainingWc, it)) {
modified = set.remove(it) || modified;
}
}
return modified;
}
//All code below is copied (and slightly modified) from Apache Commons IO
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
    /**
     * Checks a candidate string to see if it matches the specified wildcard
     * pattern, allowing control over case-sensitivity.
     * <p>
     * The wildcard matcher uses the characters '?' and '*' to represent a
     * single or multiple (zero or more) wildcard characters.
     * N.B. the sequence "*?" does not work properly at present in match strings.
     *
     * @param pattern the wildcard string to match against
     * @param candidate the string to match on
     * @return true if the candidate matches the wildcard pattern
     * @since 1.3
     */
private static boolean simpleWildcardMatch(final String pattern, final String candidate) {
if (candidate == null && pattern == null) {
return true;
}
if (candidate == null || pattern == null) {
return false;
}
final String[] wcs = splitOnTokens(pattern);
boolean anyChars = false;
int textIdx = 0;
int wcsIdx = 0;
final Stack<int[]> backtrack = new Stack<>();
// loop around a backtrack stack, to handle complex * matching
do {
if (backtrack.size() > 0) {
final int[] array = backtrack.pop();
wcsIdx = array[0];
textIdx = array[1];
anyChars = true;
}
// loop whilst tokens and text left to process
while (wcsIdx < wcs.length) {
if (wcs[wcsIdx].equals("?")) {
// ? so move to next text char
textIdx++;
if (textIdx > candidate.length()) {
break;
}
anyChars = false;
} else if (wcs[wcsIdx].equals("*")) {
// set any chars status
anyChars = true;
if (wcsIdx == wcs.length - 1) {
textIdx = candidate.length();
}
} else {
// matching text token
if (anyChars) {
// any chars then try to locate text token
textIdx = checkIndexOf(candidate, textIdx, wcs[wcsIdx]);
if (textIdx == NOT_FOUND) {
// token not found
break;
}
final int repeat = checkIndexOf(candidate, textIdx + 1, wcs[wcsIdx]);
if (repeat >= 0) {
backtrack.push(new int[] {wcsIdx, repeat});
}
} else {
// matching from current position
if (!checkRegionMatches(candidate, textIdx, wcs[wcsIdx])) {
// couldn't match token
break;
}
}
// matched text token, move text index to end of matched token
textIdx += wcs[wcsIdx].length();
anyChars = false;
}
wcsIdx++;
}
// full match
if (wcsIdx == wcs.length && textIdx == candidate.length()) {
return true;
}
} while (backtrack.size() > 0);
return false;
}
/**
* Splits a string into a number of tokens.
* The text is split by '?' and '*'.
* Where multiple '*' occur consecutively they are collapsed into a single '*'.
*
* @param text the text to split
* @return the array of tokens, never null
*/
private static String[] splitOnTokens(final String text) {
// used by wildcardMatch
// package level so a unit test may run on this
if (text.indexOf('?') == NOT_FOUND && text.indexOf('*') == NOT_FOUND) {
return new String[] { text };
}
final char[] array = text.toCharArray();
final ArrayList<String> list = new ArrayList<>();
final StringBuilder buffer = new StringBuilder();
char prevChar = 0;
for (final char ch : array) {
if (ch == '?' || ch == '*') {
if (buffer.length() != 0) {
list.add(buffer.toString());
buffer.setLength(0);
}
if (ch == '?') {
list.add("?");
} else if (prevChar != '*') {// ch == '*' here; check if previous char was '*'
list.add("*");
}
} else {
buffer.append(ch);
}
prevChar = ch;
}
if (buffer.length() != 0) {
list.add(buffer.toString());
}
return list.toArray( new String[ list.size() ] );
}
/**
* Checks if one string contains another starting at a specific index using the
* case-sensitivity rule.
* <p>
* This method mimics parts of {@link String#indexOf(String, int)}
* but takes case-sensitivity into account.
*
* @param str the string to check, not null
* @param strStartIndex the index to start at in str
* @param search the start to search for, not null
* @return the first index of the search String,
* -1 if no match or {@code null} string input
* @throws NullPointerException if either string is null
* @since 2.0
*/
private static int checkIndexOf(final String str, final int strStartIndex, final String search) {
final int endIndex = str.length() - search.length();
if (endIndex >= strStartIndex) {
for (int i = strStartIndex; i <= endIndex; i++) {
if (checkRegionMatches(str, i, search)) {
return i;
}
}
}
return -1;
}
/**
* Checks if one string contains another at a specific index using the case-sensitivity rule.
* <p>
* This method mimics parts of {@link String#regionMatches(boolean, int, String, int, int)}
* but takes case-sensitivity into account.
*
* @param str the string to check, not null
* @param strStartIndex the index to start at in str
* @param search the start to search for, not null
* @return true if equal using the case rules
* @throws NullPointerException if either string is null
*/
private static boolean checkRegionMatches(final String str, final int strStartIndex, final String search) {
return str.regionMatches(false, strStartIndex, search, 0, search.length());
}
}
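The `match` semantics above — `/…/` delimiters for a full-match regex, `*` for any run of characters, `?` for exactly one — can be sketched as a standalone regex translation. This is an illustrative reimplementation (the class name `WildcardSketch` is hypothetical), not the token/backtrack matcher the plugin actually uses, and it does not reproduce the documented "*?" quirk of that implementation:

```java
import java.util.regex.Pattern;

public class WildcardSketch {
    // Sketch of the pattern language: "/re/" is a full-match regex,
    // '*' matches zero or more characters, '?' matches exactly one.
    public static boolean match(String pattern, String candidate) {
        if (pattern == null || candidate == null) {
            return false;
        }
        if (pattern.length() > 1 && pattern.startsWith("/") && pattern.endsWith("/")) {
            // strip the slashes and anchor the regex, as match() above does
            return Pattern.matches("^" + pattern.substring(1, pattern.length() - 1) + "$", candidate);
        }
        // translate the wildcard into an equivalent anchored regex
        StringBuilder re = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '*') re.append(".*");
            else if (c == '?') re.append('.');
            else re.append(Pattern.quote(String.valueOf(c)));
        }
        return candidate.matches(re.toString());
    }

    public static void main(String[] args) {
        System.out.println(match("log-*", "log-2019-03-02")); // true
        System.out.println(match("h?st", "host"));            // true
        System.out.println(match("/ind.x/", "index"));        // true
        System.out.println(match("log-*", "audit"));          // false
    }
}
```

The regex translation trades the hand-rolled backtracking loop for `java.util.regex`, which is simpler but allocates a compiled pattern per call.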

/*
 * Copyright 2015-2018 floragunn GmbH
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Portions Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.opendistroforelasticsearch.security.tools;
import java.io.Console;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Objects;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.bouncycastle.crypto.generators.OpenBSDBCrypt;
public class Hasher {
public static void main(final String[] args) {
final Options options = new Options();
final HelpFormatter formatter = new HelpFormatter();
options.addOption(Option.builder("p").argName("password").hasArg().desc("Cleartext password to hash").build());
options.addOption(Option.builder("env").argName("name environment variable").hasArg().desc("Name of the environment variable to read the password from").build());
final CommandLineParser parser = new DefaultParser();
try {
final CommandLine line = parser.parse(options, args);
if(line.hasOption("p")) {
System.out.println(hash(line.getOptionValue("p").toCharArray()));
} else if(line.hasOption("env")) {
final String pwd = System.getenv(line.getOptionValue("env"));
if(pwd == null || pwd.isEmpty()) {
throw new Exception("No environment variable '"+line.getOptionValue("env")+"' set");
}
System.out.println(hash(pwd.toCharArray()));
} else {
final Console console = System.console();
if(console == null) {
throw new Exception("Cannot allocate a console");
}
final char[] passwd = console.readPassword("[%s]", "Password:");
System.out.println(hash(passwd));
}
} catch (final Exception exp) {
System.err.println("Parsing failed. Reason: " + exp.getMessage());
formatter.printHelp("hasher.sh", options, true);
System.exit(-1);
}
}
public static String hash(final char[] clearTextPassword) {
final byte[] salt = new byte[16];
new SecureRandom().nextBytes(salt);
final String hash = OpenBSDBCrypt.generate((Objects.requireNonNull(clearTextPassword)), salt, 12);
Arrays.fill(salt, (byte)0);
Arrays.fill(clearTextPassword, '\0');
return hash;
}
}
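The generate-salt, hash, wipe-cleartext pattern in `Hasher.hash` can be sketched with the JDK's built-in PBKDF2 in place of BouncyCastle's `OpenBSDBCrypt` (the class name `PasswordHashSketch` and the iteration/length parameters are illustrative choices, not what the plugin uses):

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHashSketch {
    public static String hash(char[] clearTextPassword) throws Exception {
        // fresh random salt per call, same as Hasher.hash
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        PBEKeySpec spec = new PBEKeySpec(clearTextPassword, salt, 10_000, 256);
        byte[] dk = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
        // wipe the spec's internal password copy, mirroring the Arrays.fill above
        spec.clearPassword();
        // encode salt alongside the hash so it can be verified later
        return Base64.getEncoder().encodeToString(salt) + "$"
                + Base64.getEncoder().encodeToString(dk);
    }

    public static void main(String[] args) throws Exception {
        String h1 = hash("secret".toCharArray());
        String h2 = hash("secret".toCharArray());
        // different random salts -> different hashes for the same password
        System.out.println(!h1.equals(h2)); // true
    }
}
```

Because the salt is random per call, the same password never produces the same stored string twice; a verifier must re-derive the hash from the salt embedded before the `$`.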
