Merge pull request #39576 from rallytime/merge-develop

[develop] Merge forward from 2016.11 to develop
This commit is contained in:
Mike Place 2017-02-22 19:09:56 -07:00 committed by GitHub
commit 4279c39f41
27 changed files with 702 additions and 152 deletions

View File

@ -89,12 +89,33 @@ A simpler returner, such as Slack or HipChat, requires:
Step 2: Configure the Returner
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After you understand the configuration and have the external system ready, the
configuration requirements must be declared.

External Job Cache
""""""""""""""""""

The returner configuration settings can be declared in the Salt Minion
configuration file, the Minion's pillar data, or the Minion's grains.

If ``external_job_cache`` configuration settings are specified in more than
one place, the options are retrieved in the following order. The first
configuration location that is found is the one that will be used.

- Minion configuration file
- Minion's grains
- Minion's pillar data

Master Job Cache
""""""""""""""""

The returner configuration settings for the Master Job Cache should be
declared in the Salt Master's configuration file.

Configuration File Examples
"""""""""""""""""""""""""""

MySQL requires:

.. code-block:: yaml
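    # A sketch of typical MySQL returner settings; the hostname and
    # credentials below are placeholders, adjust them for your environment.
    mysql.host: 'salt'
    mysql.user: 'salt'
    mysql.pass: 'salt'
    mysql.db: 'salt'
    mysql.port: 3306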

View File

@ -31,6 +31,41 @@ actual message that we are sending. With this flexible wire protocol we can
implement any message semantics that we'd like-- including multiplexed message
passing on a single socket.
TLS Support
===========
.. versionadded:: 2016.11.1
The TCP transport allows for the master/minion communication to be optionally
wrapped in a TLS connection. Enabling this is simple: configure both the master
and the minion to use the ``tcp`` transport, then enable the ``ssl`` option.
The ``ssl`` option is passed as a dict and corresponds to the options passed to
the Python `ssl.wrap_socket
<https://docs.python.org/2/library/ssl.html#ssl.wrap_socket>`_ function.
A simple setup looks like this. On the Salt Master, add the ``ssl`` option to
the master configuration file:
.. code-block:: yaml
ssl:
  keyfile: <path_to_keyfile>
  certfile: <path_to_certfile>
  ssl_version: PROTOCOL_TLSv1_2
The minimal ``ssl`` option in the minion configuration file looks like this:
.. code-block:: yaml
ssl: {}
.. note::
    While setting the ``ssl_version`` is not required, we recommend it. Some
    older versions of Python do not support the latest TLS protocols, and if
    that is the case for your version of Python, we strongly recommend
    upgrading.
Crypto
======

View File

@ -101,11 +101,13 @@ releases pygit2_ 0.20.3 and libgit2_ 0.20.0 is the recommended combination.
RedHat Pygit2 Issues
~~~~~~~~~~~~~~~~~~~~

The release of RedHat/CentOS 7.3 upgraded both ``python-cffi`` and
``http-parser``, both of which are dependencies for pygit2_/libgit2_. Both
pygit2_ and libgit2_ (which are from the EPEL repository and not managed
directly by RedHat) need to be rebuilt against these updated dependencies.

The below errors will show up in the master log if an incompatible
``python-pygit2`` package is installed:

.. code-block:: text

@ -114,34 +116,37 @@ the following errors in the master log file:
2017-02-10 09:07:34,907 [salt.utils.gitfs ][CRITICAL][11211] No suitable gitfs provider module is installed.
2017-02-10 09:07:34,912 [salt.master ][CRITICAL][11211] Master failed pre flight checks, exiting
The below errors will show up in the master log if an incompatible ``libgit2``
package is installed:

.. code-block:: text

2017-02-15 18:04:45,211 [salt.utils.gitfs ][ERROR   ][6211] Error occurred fetching gitfs remote 'https://foo.com/bar.git': No Content-Type header in response

As of 15 February 2017, ``python-pygit2`` has been rebuilt and is in the stable
EPEL repository. However, ``libgit2`` remains broken (a `bug report`_ has been
filed to get it rebuilt).

In the meantime, you can work around this by downgrading ``http-parser``. To do
this, go to `this page`_ and download the appropriate ``http-parser`` RPM for
the OS architecture you are using (x86_64, etc.). Then downgrade using the
``rpm`` command. For example:

.. code-block:: bash

[root@784e8a8c5028 /]# curl --silent -O https://kojipkgs.fedoraproject.org//packages/http-parser/2.0/5.20121128gitcd01361.el7/x86_64/http-parser-2.0-5.20121128gitcd01361.el7.x86_64.rpm
[root@784e8a8c5028 /]# rpm -Uvh --oldpackage http-parser-2.0-5.20121128gitcd01361.el7.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:http-parser-2.0-5.20121128gitcd01################################# [ 50%]
Cleaning up / removing...
   2:http-parser-2.7.1-3.el7          ################################# [100%]

A restart of the salt-master daemon may be required to allow http(s)
repositories to continue to be fetched.

.. _`this page`: https://koji.fedoraproject.org/koji/buildinfo?buildID=703753
.. _`bug report`: https://bugzilla.redhat.com/show_bug.cgi?id=1422583
GitPython

View File

@ -245,6 +245,8 @@ def create(vm_):
    )

    for private_ip in private:
        private_ip = preferred_ip(vm_, [private_ip])
        if private_ip is False:
            continue
        if salt.utils.cloud.is_public_ip(private_ip):
            log.warning('%s is a public IP', private_ip)
            data.public_ips.append(private_ip)
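The two added lines guard against ``preferred_ip`` returning ``False`` when no candidate address matches. A minimal sketch of why the guard is needed (``preferred_ip`` is stubbed here; the real helper lives in the cloud driver):

```python
def preferred_ip(vm_, ips):
    # Stub standing in for the cloud driver's helper: return the first
    # 10.x address, or False when nothing matches.
    for ip in ips:
        if ip.startswith('10.'):
            return ip
    return False


def collect_private_ips(vm_, candidates):
    collected = []
    for private_ip in candidates:
        private_ip = preferred_ip(vm_, [private_ip])
        if private_ip is False:
            # Without this guard, False itself would be appended and later
            # treated as if it were an address.
            continue
        collected.append(private_ip)
    return collected
```

Here ``collect_private_ips({}, ['10.0.0.5', '192.168.1.2'])`` keeps only the matching address and silently drops the rejected one.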

View File

@ -2221,14 +2221,21 @@ def query_instance(vm_=None, call=None):
    log.debug('Returned query data: {0}'.format(data))

    if ssh_interface(vm_) == 'public_ips':
        if 'ipAddress' in data[0]['instancesSet']['item']:
            return data
        else:
            log.error(
                'Public IP not detected.'
            )

    if ssh_interface(vm_) == 'private_ips':
        if 'privateIpAddress' in data[0]['instancesSet']['item']:
            return data
        else:
            log.error(
                'Private IP not detected.'
            )

    try:
        data = salt.utils.cloud.wait_for_ip(
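The reshaped branches make the public and private cases symmetric. The selection logic can be sketched as a standalone helper (illustrative only, not the driver's actual API; ``log.error`` is replaced by returning ``None`` so the caller falls through to waiting for an IP):

```python
def pick_interface_data(data, interface):
    # Map the requested ssh_interface value to the key used in the EC2
    # DescribeInstances response.
    key = {'public_ips': 'ipAddress',
           'private_ips': 'privateIpAddress'}[interface]
    item = data[0]['instancesSet']['item']
    if key in item:
        return data
    # Mirrors the log.error() branches: the address is not available yet.
    return None
```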

View File

@ -921,6 +921,8 @@ def create(vm_):
    )

    for private_ip in private:
        private_ip = preferred_ip(vm_, [private_ip])
        if private_ip is False:
            continue
        if salt.utils.cloud.is_public_ip(private_ip):
            log.warning('{0} is a public IP'.format(private_ip))
            data.public_ips.append(private_ip)

View File

@ -722,6 +722,8 @@ def create(vm_):
    )

    for private_ip in private:
        private_ip = preferred_ip(vm_, [private_ip])
        if private_ip is False:
            continue
        if salt.utils.cloud.is_public_ip(private_ip):
            log.warning('{0} is a public IP'.format(private_ip))
            data.public_ips.append(private_ip)

View File

@ -290,6 +290,8 @@ def create(vm_):
    )

    for private_ip in private:
        private_ip = preferred_ip(vm_, [private_ip])
        if private_ip is False:
            continue
        if salt.utils.cloud.is_public_ip(private_ip):
            log.warning('{0} is a public IP'.format(private_ip))
            data.public_ips.append(private_ip)

View File

@ -976,7 +976,7 @@ VALID_OPTS = {
    # http://docs.python.org/2/library/ssl.html#ssl.wrap_socket
    # Note: to set enum arguments values like `cert_reqs` and `ssl_version` use constant names
    # without ssl module prefix: `CERT_REQUIRED` or `PROTOCOL_SSLv23`.
    'ssl': (dict, bool, type(None)),

    # Controls how a multi-function job returns its data. If this is False,
    # it will return its data using a dictionary with the function name as
@ -3169,7 +3169,11 @@ def _update_ssl_config(opts):
    '''
    Resolves string names to integer constant in ssl configuration.
    '''
    if opts['ssl'] in (None, False):
        opts['ssl'] = None
        return
    if opts['ssl'] is True:
        opts['ssl'] = {}
        return
    import ssl
    for key, prefix in (('cert_reqs', 'CERT_'),
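The constant-name resolution that follows this guard can be illustrated in isolation. This sketch mirrors the idea (string names from the config are swapped for the integer constants in the ``ssl`` module) without reproducing Salt's exact code:

```python
import ssl


def resolve_ssl_names(ssl_opts):
    # Replace values like 'CERT_REQUIRED' or 'PROTOCOL_TLSv1_2' with the
    # corresponding constants from the ssl module, as the config comment
    # describes (constant names are given without the 'ssl.' prefix).
    for key, prefix in (('cert_reqs', 'CERT_'),
                        ('ssl_version', 'PROTOCOL_')):
        name = ssl_opts.get(key)
        if isinstance(name, str) and name.startswith(prefix):
            ssl_opts[key] = getattr(ssl, name)
    return ssl_opts
```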

View File

@ -4,6 +4,12 @@ Connection module for Amazon CloudTrail
.. versionadded:: 2016.3.0

:depends:
    - boto
    - boto3

The dependencies listed above can be installed via package or pip.

:configuration: This module accepts explicit CloudTrail credentials but can also
    utilize IAM roles assigned to the instance through Instance Profiles.
    Dynamic credentials are then automatically obtained from AWS API and no
@ -39,8 +45,6 @@ Connection module for Amazon CloudTrail
        key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
        region: us-east-1
'''
# keep lint from choking on _get_conn and _cache_id
#pylint: disable=E0602

View File

@ -4,6 +4,12 @@ Connection module for Amazon IoT
.. versionadded:: 2016.3.0

:depends:
    - boto
    - boto3

The dependencies listed above can be installed via package or pip.

:configuration: This module accepts explicit IoT credentials but can also
    utilize IAM roles assigned to the instance through Instance Profiles.
    Dynamic credentials are then automatically obtained from AWS API and no
@ -39,8 +45,6 @@ Connection module for Amazon IoT
        key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
        region: us-east-1
'''
# keep lint from choking on _get_conn and _cache_id
#pylint: disable=E0602

View File

@ -4,6 +4,12 @@ Connection module for Amazon Lambda
.. versionadded:: 2016.3.0

:depends:
    - boto
    - boto3

The dependencies listed above can be installed via package or pip.

:configuration: This module accepts explicit Lambda credentials but can also
    utilize IAM roles assigned to the instance through Instance Profiles.
    Dynamic credentials are then automatically obtained from AWS API and no
@ -69,8 +75,6 @@ Connection module for Amazon Lambda
        error:
          message: error message
'''
# keep lint from choking on _get_conn and _cache_id
# pylint: disable=E0602

View File

@ -4,6 +4,12 @@ Connection module for Amazon S3 Buckets
.. versionadded:: 2016.3.0

:depends:
    - boto
    - boto3

The dependencies listed above can be installed via package or pip.

:configuration: This module accepts explicit S3 credentials but can also
    utilize IAM roles assigned to the instance through Instance Profiles.
    Dynamic credentials are then automatically obtained from AWS API and no
@ -39,8 +45,6 @@ Connection module for Amazon S3 Buckets
        key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
        region: us-east-1
'''
# keep lint from choking on _get_conn and _cache_id
#pylint: disable=E0602

View File

@ -191,11 +191,12 @@ Executing Commands Within a Running Container
---------------------------------------------

.. note::
    With the release of Docker 1.13.1, the Execution Driver has been removed.
    Starting in versions 2016.3.6, 2016.11.4, and Nitrogen, Salt defaults to
    using ``docker exec`` to run commands in containers, however for older Salt
    releases it will be necessary to set the ``docker.exec_driver`` config
    option to either ``docker-exec`` or ``nsenter`` for Docker versions 1.13.1
    and newer.

Multiple methods exist for executing commands within Docker containers:
@ -258,7 +259,6 @@ import distutils.version # pylint: disable=import-error,no-name-in-module,unuse
import fnmatch
import functools
import gzip
import io
import json
import logging
@ -278,6 +278,7 @@ import subprocess
from salt.exceptions import CommandExecutionError, SaltInvocationError
import salt.ext.six as six
from salt.ext.six.moves import map  # pylint: disable=import-error,redefined-builtin
from salt.utils.args import get_function_argspec as _argspec
import salt.utils
import salt.utils.decorators
import salt.utils.files
@ -292,11 +293,22 @@ import salt.client.ssh.state
# pylint: disable=import-error
try:
    import docker
    HAS_DOCKER_PY = True
except ImportError:
    HAS_DOCKER_PY = False
# These next two imports are only necessary to have access to the needed
# functions so that we can get argspecs for the container config, host config,
# and networking config (see the get_client_args() function).
try:
    import docker.types
except ImportError:
    pass
try:
    import docker.utils
except ImportError:
    pass

try:
    if six.PY2:
        import backports.lzma as lzma
@ -3180,7 +3192,7 @@ def create(image,
    # https://docs.docker.com/engine/reference/api/docker_remote_api_v1.15/#create-a-container
    if salt.utils.version_cmp(version()['ApiVersion'], '1.15') > 0:
        client = __context__['docker.client']
        host_config_args = get_client_args()['host_config']
        create_kwargs['host_config'] = client.create_host_config(
            **dict((arg, create_kwargs.pop(arg, None)) for arg in host_config_args if arg != 'version')
        )
@ -5789,7 +5801,6 @@ def call(name, function, *args, **kwargs):
    .. code-block:: bash

        salt myminion docker.call test.ping
        salt myminion test.arg arg1 arg2 key1=val1

    The container does not need to have Salt installed, but Python
@ -5991,3 +6002,72 @@ def sls_build(name, base='opensuse/python', mods=None, saltenv='base',
        rm_(id_)
        return ret
    return commit(id_, name)


def get_client_args():
    '''
    .. versionadded:: 2016.3.6,2016.11.4,Nitrogen

    Returns the args for docker-py's `low-level API`_, organized by container
    config, host config, and networking config.

    .. _`low-level API`: http://docker-py.readthedocs.io/en/stable/api.html

    CLI Example:

    .. code-block:: bash

        salt myminion docker.get_client_args
    '''
    try:
        config_args = _argspec(docker.types.ContainerConfig.__init__).args
    except AttributeError:
        try:
            config_args = _argspec(docker.utils.create_container_config).args
        except AttributeError:
            raise CommandExecutionError(
                'Failed to get create_container_config argspec'
            )

    try:
        host_config_args = \
            _argspec(docker.types.HostConfig.__init__).args
    except AttributeError:
        try:
            host_config_args = _argspec(docker.utils.create_host_config).args
        except AttributeError:
            raise CommandExecutionError(
                'Failed to get create_host_config argspec'
            )

    try:
        endpoint_config_args = \
            _argspec(docker.types.EndpointConfig.__init__).args
    except AttributeError:
        try:
            endpoint_config_args = \
                _argspec(docker.utils.create_endpoint_config).args
        except AttributeError:
            raise CommandExecutionError(
                'Failed to get create_endpoint_config argspec'
            )

    for arglist in (config_args, host_config_args, endpoint_config_args):
        try:
            # The API version is passed automagically by the API code that
            # imports these classes/functions and is not an arg that we will be
            # passing, so remove it if present.
            arglist.remove('version')
        except ValueError:
            pass

    # Remove any args in host or networking config from the main config dict.
    # This keeps us from accidentally allowing args that have been moved from
    # the container config to the host config (but are still accepted by
    # create_container_config so warnings can be issued).
    for arglist in (host_config_args, endpoint_config_args):
        for item in arglist:
            try:
                config_args.remove(item)
            except ValueError:
                # Arg is not in config_args
                pass

    return {'config': config_args,
            'host_config': host_config_args,
            'networking_config': endpoint_config_args}
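``get_client_args()`` leans entirely on argspec introspection to discover which keyword arguments docker-py accepts. The underlying technique is plain standard-library reflection; here is a self-contained sketch using a made-up config class (not docker-py's real one):

```python
import inspect


class FakeHostConfig(object):
    # Hypothetical stand-in for a docker-py config class such as
    # docker.types.HostConfig.
    def __init__(self, version=None, binds=None, port_bindings=None):
        pass


def client_args(config_cls):
    args = inspect.getfullargspec(config_cls.__init__).args
    args.remove('self')
    # The API version is supplied by the client itself, so it is not an
    # argument callers should pass; drop it, as get_client_args() does.
    if 'version' in args:
        args.remove('version')
    return args
```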

View File

@ -201,7 +201,7 @@ def item(*args, **kwargs):
    return ret


def setvals(grains, destructive=False, refresh=True):
    '''
    Set new grains values in the grains config file

@ -209,6 +209,10 @@ def setvals(grains, destructive=False):
        If an operation results in a key being removed, delete the key, too.
        Defaults to False.

    refresh
        Refresh minion grains using saltutil.sync_grains.
        Defaults to True.

    CLI Example:

    .. code-block:: bash

@ -284,12 +288,12 @@ def setvals(grains, destructive=False):
            log.error(msg.format(fn_))
    if not __opts__.get('local', False):
        # Sync the grains
        __salt__['saltutil.sync_grains'](refresh=refresh)
    # Return the grains we just set to confirm everything was OK
    return new_grains


def setval(key, val, destructive=False, refresh=True):
    '''
    Set a grains value in the grains config file

@ -303,6 +307,10 @@ def setval(key, val, destructive=False):
        If an operation results in a key being removed, delete the key, too.
        Defaults to False.

    refresh
        Refresh minion grains using saltutil.sync_grains.
        Defaults to True.

    CLI Example:

    .. code-block:: bash

@ -310,7 +318,7 @@ def setval(key, val, destructive=False):
        salt '*' grains.setval key val
        salt '*' grains.setval key "{'sub-key': 'val', 'sub-key2': 'val2'}"
    '''
    return setvals({key: val}, destructive, refresh)


def append(key, val, convert=False, delimiter=DEFAULT_TARGET_DELIM):

salt/modules/openscap.py Normal file
View File

@ -0,0 +1,105 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import

import tempfile
import shlex
import shutil
from subprocess import Popen, PIPE

from salt.client import Caller


ArgumentParser = object

try:
    import argparse  # pylint: disable=minimum-python-version
    ArgumentParser = argparse.ArgumentParser
    HAS_ARGPARSE = True
except ImportError:  # python 2.6
    HAS_ARGPARSE = False


_XCCDF_MAP = {
    'eval': {
        'parser_arguments': [
            (('--profile',), {'required': True}),
        ],
        'cmd_pattern': (
            "oscap xccdf eval "
            "--oval-results --results results.xml --report report.html "
            "--profile {0} {1}"
        )
    }
}


def __virtual__():
    return HAS_ARGPARSE, 'argparse module is required.'


class _ArgumentParser(ArgumentParser):

    def __init__(self, action=None, *args, **kwargs):
        super(_ArgumentParser, self).__init__(*args, prog='oscap', **kwargs)
        self.add_argument('action', choices=['eval'])
        add_arg = None
        for params, kwparams in _XCCDF_MAP['eval']['parser_arguments']:
            self.add_argument(*params, **kwparams)

    def error(self, message, *args, **kwargs):
        raise Exception(message)


_OSCAP_EXIT_CODES_MAP = {
    0: True,   # all rules pass
    1: False,  # there is an error during evaluation
    2: True    # there is at least one rule with either fail or unknown result
}


def xccdf(params):
    '''
    Run ``oscap xccdf`` commands on minions.
    It uses cp.push_dir to upload the generated files to the salt master
    in the master's minion files cachedir
    (defaults to ``/var/cache/salt/master/minions/minion-id/files``)

    It needs ``file_recv`` set to ``True`` in the master configuration file.

    CLI Example:

    .. code-block:: bash

        salt '*' openscap.xccdf "eval --profile Default /usr/share/openscap/scap-yast2sec-xccdf.xml"
    '''
    params = shlex.split(params)
    policy = params[-1]

    success = True
    error = None
    upload_dir = None
    action = None

    try:
        parser = _ArgumentParser()
        action = parser.parse_known_args(params)[0].action
        args, argv = _ArgumentParser(action=action).parse_known_args(args=params)
    except Exception as err:
        success = False
        error = str(err)

    if success:
        cmd = _XCCDF_MAP[action]['cmd_pattern'].format(args.profile, policy)
        tempdir = tempfile.mkdtemp()
        proc = Popen(
            shlex.split(cmd), stdout=PIPE, stderr=PIPE, cwd=tempdir)
        (stdoutdata, stderrdata) = proc.communicate()
        success = _OSCAP_EXIT_CODES_MAP[proc.returncode]
        if success:
            caller = Caller()
            caller.cmd('cp.push_dir', tempdir)
            shutil.rmtree(tempdir, ignore_errors=True)
            upload_dir = tempdir
        else:
            error = stderrdata

    return dict(success=success, upload_dir=upload_dir, error=error)
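One subtlety in the module above is the exit-code mapping: ``oscap`` exits ``2`` when the scan ran but at least one rule failed, which still counts as a successful run. A stripped-down illustration (using the Python interpreter to simulate the exit codes):

```python
from subprocess import Popen, PIPE

# Same mapping as the module: only a hard evaluation error (exit 1) is a
# failure; failed rules (exit 2) still mean the scan itself succeeded.
_OSCAP_EXIT_CODES_MAP = {0: True, 1: False, 2: True}


def scan_succeeded(cmd):
    proc = Popen(cmd, stdout=PIPE, stderr=PIPE)
    proc.communicate()
    return _OSCAP_EXIT_CODES_MAP[proc.returncode]
```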

View File

@ -4,7 +4,6 @@ Module for returning various status data about a minion.
These data can be useful for compiling into stats later.
'''

# Import python libs
from __future__ import absolute_import
import datetime
@ -41,6 +40,16 @@ __func_alias__ = {
}


def __virtual__():
    '''
    Not all functions supported by Windows
    '''
    if salt.utils.is_windows():
        return False, 'Windows platform is not supported by this module'

    return __virtualname__


def _number(text):
    '''
    Convert a string to a number.
@ -67,8 +76,6 @@ def procs():
        salt '*' status.procs
    '''
    # Get the user, pid and cmd
    ret = {}
    uind = 0
    pind = 0
@ -117,8 +124,6 @@ def custom():
        salt '*' status.custom
    '''
    ret = {}
    conf = __salt__['config.dot_vals']('status')
    for key, val in six.iteritems(conf):
@ -587,10 +592,6 @@ def diskusage(*args):
        salt '*' status.diskusage ext? # usage for ext[234] filesystems
        salt '*' status.diskusage / ext? # usage for / and all ext filesystems
    '''
    selected = set()
    fstypes = set()
    if not args:
@ -929,8 +930,6 @@ def w(): # pylint: disable=C0103
        salt '*' status.w
    '''
    user_list = []
    users = __salt__['cmd.run']('w -h').splitlines()
    for row in users:

View File

@ -107,6 +107,8 @@ def _get_zone_etc_localtime():
                return get_zonecode()
            raise CommandExecutionError(tzfile + ' does not exist')
        elif exc.errno == errno.EINVAL:
            if 'FreeBSD' in __grains__['os_family']:
                return get_zonecode()
            log.warning(
                tzfile + ' is not a symbolic link, attempting to match ' +
                tzfile + ' to zoneinfo files'

View File

@ -12,47 +12,59 @@ or for problem solving if your minion is having problems.
# Import Python Libs
from __future__ import absolute_import
import os
import ctypes
import sys
import time
import datetime
import logging

log = logging.getLogger(__name__)

# Import Salt Libs
import salt.utils
import salt.ext.six as six
import salt.utils.event
from salt._compat import subprocess
from salt.utils.network import host_to_ips as _host_to_ips
# pylint: disable=W0611
from salt.modules.status import ping_master, time_
import copy
# pylint: enable=W0611
from salt.utils import namespaced_function as _namespaced_function

# Import 3rd Party Libs
if salt.utils.is_windows():
    import wmi
    import salt.utils.winapi
    HAS_WMI = True
else:
    HAS_WMI = False

__opts__ = {}

# Define the module's virtual name
__virtualname__ = 'status'


def __virtual__():
    '''
    Only works on Windows systems with WMI and WinAPI
    '''
    if not salt.utils.is_windows():
        return False, 'win_status.py: Requires Windows'

    if not HAS_WMI:
        return False, 'win_status.py: Requires WMI and WinAPI'

    # Namespace modules from `status.py`
    global ping_master, time_
    ping_master = _namespaced_function(ping_master, globals())
    time_ = _namespaced_function(time_, globals())

    return __virtualname__

__func_alias__ = {
    'time_': 'time'
}


def cpuload():
@ -69,17 +81,8 @@ def cpuload():
    '''
    # Pull in the information from WMIC
    cmd = ['wmic', 'cpu', 'get', 'loadpercentage', '/value']
    return int(__salt__['cmd.run'](cmd).split('=')[1])
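The rewrite works because ``wmic /value`` emits ``Name=value`` pairs instead of a fixed-width table, so the fragile column arithmetic on the left can be replaced by a single split. A sketch of the parsing (real wmic output includes surrounding blank lines and carriage returns, which ``int()`` and ``strip()`` tolerate):

```python
def parse_wmic_value(output):
    # '/value' output looks like '\r\n\r\nLoadPercentage=12\r\n\r\n';
    # everything after the '=' is the value.
    return int(output.strip().split('=')[1])
```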
def diskusage(human_readable=False, path=None):
@ -203,18 +206,9 @@ def uptime(human_readable=False):
''' '''
# Open up a subprocess to get information from WMIC # Open up a subprocess to get information from WMIC
cmd = list2cmdline(['wmic', 'os', 'get', 'lastbootuptime']) cmd = ['wmic', 'os', 'get', 'lastbootuptime', '/value']
outs = __salt__['cmd.run'](cmd) startup_time = __salt__['cmd.run'](cmd).split('=')[1][:14]
# Get the line that has when the computer started in it:
stats_line = ''
# use second line from output
stats_line = outs.split('\r\n')[1]
# Extract the time string from the line and parse
#
# Get string, just use the leading 14 characters
startup_time = stats_line[:14]
# Convert to time struct # Convert to time struct
startup_time = time.strptime(startup_time, '%Y%m%d%H%M%S') startup_time = time.strptime(startup_time, '%Y%m%d%H%M%S')
# Convert to datetime object # Convert to datetime object
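The same `/value` trick applies to the boot time: the first 14 characters of the value form the `YYYYmmddHHMMSS` string that `time.strptime` expects. A sketch with a made-up boot timestamp (the fractional seconds and UTC offset in the tail are discarded):

```python
import time

# Hypothetical `wmic os get lastbootuptime /value` output (WMI datetime)
sample = 'LastBootUpTime=20170222190956.500000+000'

# Keep only the leading 14 characters: YYYYmmddHHMMSS
stamp = sample.split('=')[1][:14]
boot = time.strptime(stamp, '%Y%m%d%H%M%S')
print(boot.tm_year, boot.tm_mon, boot.tm_mday)  # 2017 2 22
```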

View File

@@ -8,7 +8,11 @@ Manage CloudTrail Objects
Create and destroy CloudTrail objects. Be aware that this interacts with Amazon's services,
and so may incur charges.
:depends:
- boto
- boto3
The dependencies listed above can be installed via package or pip.
This module accepts explicit vpc credentials but can also utilize
IAM roles assigned to the instance through Instance Profiles. Dynamic

View File

@@ -8,7 +8,11 @@ Manage IoT Objects
Create and destroy IoT objects. Be aware that this interacts with Amazon's services,
and so may incur charges.
:depends:
- boto
- boto3
The dependencies listed above can be installed via package or pip.
This module accepts explicit vpc credentials but can also utilize
IAM roles assigned to the instance through Instance Profiles. Dynamic

View File

@@ -8,7 +8,11 @@ Manage Lambda Functions
Create and destroy Lambda Functions. Be aware that this interacts with Amazon's services,
and so may incur charges.
:depends:
- boto
- boto3
The dependencies listed above can be installed via package or pip.
This module accepts explicit vpc credentials but can also utilize
IAM roles assigned to the instance through Instance Profiles. Dynamic

View File

@@ -8,7 +8,11 @@ Manage S3 Buckets
Create and destroy S3 buckets. Be aware that this interacts with Amazon's services,
and so may incur charges.
:depends:
- boto
- boto3
The dependencies listed above can be installed via package or pip.
This module accepts explicit vpc credentials but can also utilize
IAM roles assigned to the instance through Instance Profiles. Dynamic

View File

@@ -433,6 +433,21 @@ def _compare(actual, create_kwargs, defaults_from_image):
if data != actual_data:
ret.update({item: {'old': actual_data, 'new': data}})
continue
elif item == 'security_opt':
if actual_data is None:
actual_data = []
if data is None:
data = []
actual_data = sorted(set(actual_data))
desired_data = sorted(set(data))
log.trace('dockerng.running ({0}): munged actual value: {1}'
.format(item, actual_data))
log.trace('dockerng.running ({0}): munged desired value: {1}'
.format(item, desired_data))
if actual_data != desired_data:
ret.update({item: {'old': actual_data,
'new': desired_data}})
continue
elif item in ('cmd', 'command', 'entrypoint'):
if (actual_data is None and item not in create_kwargs and
_image_get(config['image_path'])):

View File

@@ -17,6 +17,7 @@ from __future__ import absolute_import
# Import python libs
import logging
import os
# Import salt libs
import salt.utils
@@ -319,6 +320,15 @@ def package_installed(name,
'comment': '',
'changes': {}}
# Fail if using a non-existent package path
if '~' not in name and not os.path.exists(name):
if __opts__['test']:
ret['result'] = None
else:
ret['result'] = False
ret['comment'] = 'Package path {0} does not exist'.format(name)
return ret
old = __salt__['dism.installed_packages']()
# Get package info so we can see if it's already installed
@@ -387,6 +397,15 @@ def package_removed(name, image=None, restart=False):
'comment': '',
'changes': {}}
# Fail if using a non-existent package path
if '~' not in name and not os.path.exists(name):
if __opts__['test']:
ret['result'] = None
else:
ret['result'] = False
ret['comment'] = 'Package path {0} does not exist'.format(name)
return ret
old = __salt__['dism.installed_packages']()
# Get package info so we can see if it's already removed

View File

@@ -0,0 +1,207 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from subprocess import PIPE
from salt.modules import openscap
from salttesting import skipIf, TestCase
from salttesting.mock import (
Mock,
MagicMock,
patch,
NO_MOCK,
NO_MOCK_REASON
)
@skipIf(NO_MOCK, NO_MOCK_REASON)
class OpenscapTestCase(TestCase):
random_temp_dir = '/tmp/unique-name'
policy_file = '/usr/share/openscap/policy-file-xccdf.xml'
def setUp(self):
patchers = [
patch('salt.modules.openscap.Caller', MagicMock()),
patch('salt.modules.openscap.shutil.rmtree', Mock()),
patch(
'salt.modules.openscap.tempfile.mkdtemp',
Mock(return_value=self.random_temp_dir)
),
]
for patcher in patchers:
self.apply_patch(patcher)
def apply_patch(self, patcher):
patcher.start()
self.addCleanup(patcher.stop)
@patch(
'salt.modules.openscap.Popen',
MagicMock(
return_value=Mock(
**{'returncode': 0, 'communicate.return_value': ('', '')}
)
)
)
def test_openscap_xccdf_eval_success(self):
response = openscap.xccdf(
'eval --profile Default {0}'.format(self.policy_file))
self.assertEqual(openscap.tempfile.mkdtemp.call_count, 1)
expected_cmd = [
'oscap',
'xccdf',
'eval',
'--oval-results',
'--results', 'results.xml',
'--report', 'report.html',
'--profile', 'Default',
self.policy_file
]
openscap.Popen.assert_called_once_with(
expected_cmd,
cwd=openscap.tempfile.mkdtemp.return_value,
stderr=PIPE,
stdout=PIPE)
openscap.Caller().cmd.assert_called_once_with(
'cp.push_dir', self.random_temp_dir)
self.assertEqual(openscap.shutil.rmtree.call_count, 1)
self.assertEqual(
response,
{
'upload_dir': self.random_temp_dir,
'error': None, 'success': True
}
)
@patch(
'salt.modules.openscap.Popen',
MagicMock(
return_value=Mock(
**{'returncode': 2, 'communicate.return_value': ('', '')}
)
)
)
def test_openscap_xccdf_eval_success_with_failing_rules(self):
response = openscap.xccdf(
'eval --profile Default {0}'.format(self.policy_file))
self.assertEqual(openscap.tempfile.mkdtemp.call_count, 1)
expected_cmd = [
'oscap',
'xccdf',
'eval',
'--oval-results',
'--results', 'results.xml',
'--report', 'report.html',
'--profile', 'Default',
self.policy_file
]
openscap.Popen.assert_called_once_with(
expected_cmd,
cwd=openscap.tempfile.mkdtemp.return_value,
stderr=PIPE,
stdout=PIPE)
openscap.Caller().cmd.assert_called_once_with(
'cp.push_dir', self.random_temp_dir)
self.assertEqual(openscap.shutil.rmtree.call_count, 1)
self.assertEqual(
response,
{
'upload_dir': self.random_temp_dir,
'error': None,
'success': True
}
)
def test_openscap_xccdf_eval_fail_no_profile(self):
response = openscap.xccdf(
'eval --param Default /unknown/param')
self.assertEqual(
response,
{
'error': 'argument --profile is required',
'upload_dir': None,
'success': False
}
)
@patch(
'salt.modules.openscap.Popen',
MagicMock(
return_value=Mock(
**{'returncode': 2, 'communicate.return_value': ('', '')}
)
)
)
def test_openscap_xccdf_eval_success_ignore_unknown_params(self):
response = openscap.xccdf(
'eval --profile Default --param Default /policy/file')
self.assertEqual(
response,
{
'upload_dir': self.random_temp_dir,
'error': None,
'success': True
}
)
expected_cmd = [
'oscap',
'xccdf',
'eval',
'--oval-results',
'--results', 'results.xml',
'--report', 'report.html',
'--profile', 'Default',
'/policy/file'
]
openscap.Popen.assert_called_once_with(
expected_cmd,
cwd=openscap.tempfile.mkdtemp.return_value,
stderr=PIPE,
stdout=PIPE)
@patch(
'salt.modules.openscap.Popen',
MagicMock(
return_value=Mock(**{
'returncode': 1,
'communicate.return_value': ('', 'evaluation error')
})
)
)
def test_openscap_xccdf_eval_evaluation_error(self):
response = openscap.xccdf(
'eval --profile Default {0}'.format(self.policy_file))
self.assertEqual(
response,
{
'upload_dir': None,
'error': 'evaluation error',
'success': False
}
)
@patch(
'salt.modules.openscap.Popen',
MagicMock(
return_value=Mock(**{
'returncode': 1,
'communicate.return_value': ('', 'evaluation error')
})
)
)
def test_openscap_xccdf_eval_fail_not_implemented_action(self):
response = openscap.xccdf('info {0}'.format(self.policy_file))
self.assertEqual(
response,
{
'upload_dir': None,
'error': "argument action: invalid choice: 'info' (choose from 'eval')",
'success': False
}
)

View File

@@ -95,6 +95,7 @@ class WinDismTestCase(TestCase):
dism.__salt__, {'dism.installed_capabilities': mock_installed,
'dism.add_capability': mock_add}):
with patch.dict(dism.__opts__, {'test': False}):
out = dism.capability_installed('Capa2', 'somewhere', True)
mock_installed.assert_called_once_with()
@@ -360,6 +361,7 @@ class WinDismTestCase(TestCase):
'dism.add_package': mock_add,
'dism.package_info': mock_info}):
with patch.dict(dism.__opts__, {'test': False}):
with patch('os.path.exists'):
out = dism.package_installed('Pack2')
@@ -390,6 +392,7 @@ class WinDismTestCase(TestCase):
'dism.add_package': mock_add,
'dism.package_info': mock_info}):
with patch.dict(dism.__opts__, {'test': False}):
with patch('os.path.exists'):
out = dism.package_installed('Pack2')
@@ -418,6 +421,8 @@ class WinDismTestCase(TestCase):
dism.__salt__, {'dism.installed_packages': mock_installed,
'dism.add_package': mock_add,
'dism.package_info': mock_info}):
with patch.dict(dism.__opts__, {'test': False}):
with patch('os.path.exists'):
out = dism.package_installed('Pack2')
@@ -448,6 +453,7 @@ class WinDismTestCase(TestCase):
'dism.remove_package': mock_remove,
'dism.package_info': mock_info}):
with patch.dict(dism.__opts__, {'test': False}):
with patch('os.path.exists'):
out = dism.package_removed('Pack2')
@@ -478,6 +484,7 @@ class WinDismTestCase(TestCase):
'dism.remove_package': mock_remove,
'dism.package_info': mock_info}):
with patch.dict(dism.__opts__, {'test': False}):
with patch('os.path.exists'):
out = dism.package_removed('Pack2')
@@ -507,6 +514,8 @@ class WinDismTestCase(TestCase):
'dism.remove_package': mock_remove,
'dism.package_info': mock_info}):
with patch.dict(dism.__opts__, {'test': False}):
with patch('os.path.exists'):
out = dism.package_removed('Pack2')
mock_removed.assert_called_once_with()