Merge branch '2017.7' into 'develop'

Conflicts:
  - pkg/salt.bash
  - salt/client/mixins.py
  - salt/minion.py
  - salt/modules/aptpkg.py
  - salt/modules/boto_vpc.py
  - salt/modules/win_pkg.py
  - salt/utils/reactor.py
  - tests/unit/utils/test_reactor.py
This commit is contained in:
rallytime 2017-09-22 10:24:04 -04:00
commit e0ae50e489
GPG Key ID: E8F1A4B90D0DEA19
34 changed files with 1560 additions and 510 deletions

View File

@ -4175,7 +4175,9 @@ information.
.. code-block:: yaml
reactor: []
reactor:
- 'salt/minion/*/start':
- salt://reactor/startup_tasks.sls
.. conf_master:: reactor_refresh_interval


@ -1,5 +1,5 @@
salt.runners.digicertapi module
===============================
salt.runners.digicertapi
========================
.. automodule:: salt.runners.digicertapi
:members:


@ -1,5 +1,11 @@
salt.runners.mattermost module
==============================
salt.runners.mattermost
=======================
**Note for 2017.7 releases!**
Because the `salt.runners.config <https://github.com/saltstack/salt/blob/develop/salt/runners/config.py>`_ module is not available in this release series, you must import that module from the develop branch for this module to work.
Ref: `Mattermost runner failing to retrieve config values due to unavailable config runner #43479 <https://github.com/saltstack/salt/issues/43479>`_
.. automodule:: salt.runners.mattermost
:members:


@ -1,5 +1,5 @@
salt.runners.vault module
=========================
salt.runners.vault
==================
.. automodule:: salt.runners.vault
:members:


@ -1,5 +1,5 @@
salt.runners.venafiapi module
=============================
salt.runners.venafiapi
======================
.. automodule:: salt.runners.venafiapi
:members:


@ -253,9 +253,8 @@ in ``/etc/salt/master.d/reactor.conf``:
.. note::
You can have only one top level ``reactor`` section, so if one already
exists, add this code to the existing section. See :ref:`Understanding the
Structure of Reactor Formulas <reactor-structure>` to learn more about
reactor SLS syntax.
exists, add this code to the existing section. See :ref:`here
<reactor-sls>` to learn more about reactor SLS syntax.
Start the Salt Master in Debug Mode


@ -27,7 +27,12 @@ Salt engines are configured under an ``engines`` top-level section in your Salt
port: 5959
proto: tcp
Salt engines must be in the Salt path, or you can add the ``engines_dirs`` option in your Salt master configuration with a list of directories under which Salt attempts to find Salt engines.
Salt engines must be in the Salt path, or you can add the ``engines_dirs`` option in your Salt master configuration with a list of directories under which Salt attempts to find Salt engines. This option should be formatted as a list of directories to search, such as:
.. code-block:: yaml
engines_dirs:
- /home/bob/engines
Writing an Engine
=================


@ -27,9 +27,9 @@ event bus is an open system used for sending information notifying Salt and
other systems about operations.
The event system fires events with very specific criteria. Every event has a
:strong:`tag`. Event tags allow for fast top level filtering of events. In
addition to the tag, each event has a data structure. This data structure is a
dict, which contains information about the event.
**tag**. Event tags allow for fast top-level filtering of events. In addition
to the tag, each event has a data structure. This data structure is a
dictionary, which contains information about the event.
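Concretely, an event is just a tag plus a data dictionary. A minimal sketch with hypothetical values (a real payload carries more fields):

```python
# A Salt event: a tag for fast top-level filtering, plus a data dict
# carrying the event's details. The values here are hypothetical.
event = {
    "tag": "salt/minion/web1/start",
    "data": {
        "id": "web1",   # the minion that fired the event
        "cmd": "_minion_event",
    },
}

# Filtering looks only at the tag; reactions then consume the data dict.
print(event["tag"])   # → salt/minion/web1/start
```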
.. _reactor-mapping-events:
@ -65,15 +65,12 @@ and each event tag has a list of reactor SLS files to be run.
the :ref:`querystring syntax <querystring-syntax>` (e.g.
``salt://reactor/mycustom.sls?saltenv=reactor``).
Reactor sls files are similar to state and pillar sls files. They are
by default yaml + Jinja templates and are passed familiar context variables.
Reactor SLS files are similar to State and Pillar SLS files. They are by
default YAML + Jinja templates and are passed familiar context variables.
Click :ref:`here <reactor-jinja-context>` for more detailed information on the
variables available in Jinja templating.
They differ because of the addition of the ``tag`` and ``data`` variables.
- The ``tag`` variable is just the tag in the fired event.
- The ``data`` variable is the event's data dict.
Here is a simple reactor sls:
Here is the SLS for a simple reaction:
.. code-block:: jinja
@ -90,71 +87,278 @@ data structure and compiler used for the state system is used for the reactor
system. The only difference is that the data is matched up to the salt command
API and the runner system. In this example, a command is published to the
``mysql1`` minion with a function of :py:func:`state.apply
<salt.modules.state.apply_>`. Similarly, a runner can be called:
<salt.modules.state.apply_>`, which performs a :ref:`highstate
<running-highstate>`. Similarly, a runner can be called:
.. code-block:: jinja
{% if data['data']['custom_var'] == 'runit' %}
call_runit_orch:
runner.state.orchestrate:
- mods: _orch.runit
- args:
- mods: orchestrate.runit
{% endif %}
This example will execute the state.orchestrate runner and initiate an execution
of the runit orchestrator located at ``/srv/salt/_orch/runit.sls``. Using
``_orch/`` is any arbitrary path but it is recommended to avoid using "orchestrate"
as this is most likely to cause confusion.
of the ``runit`` orchestrator located at ``/srv/salt/orchestrate/runit.sls``.
Writing SLS Files
-----------------
Types of Reactions
==================
Reactor SLS files are stored in the same location as State SLS files. This means
that both ``file_roots`` and ``gitfs_remotes`` impact what SLS files are
available to the reactor and orchestrator.
============================== ==================================================================================
Name Description
============================== ==================================================================================
:ref:`local <reactor-local>` Runs a :ref:`remote-execution function <all-salt.modules>` on targeted minions
:ref:`runner <reactor-runner>` Executes a :ref:`runner function <all-salt.runners>`
:ref:`wheel <reactor-wheel>` Executes a :ref:`wheel function <all-salt.wheel>` on the master
:ref:`caller <reactor-caller>` Runs a :ref:`remote-execution function <all-salt.modules>` on a masterless minion
============================== ==================================================================================
It is recommended to keep reactor and orchestrator SLS files in their own uniquely
named subdirectories such as ``_orch/``, ``orch/``, ``_orchestrate/``, ``react/``,
``_reactor/``, etc. Keeping a unique name helps prevent confusion when trying to
read through this a few years down the road.
.. note::
The ``local`` and ``caller`` reaction types will be renamed for the Oxygen
release. These reaction types were named after Salt's internal client
interfaces, and are not intuitively named. Both ``local`` and ``caller``
will continue to work in Reactor SLS files, but for the Oxygen release the
documentation will be updated to reflect the new preferred naming.
The Goal of Writing Reactor SLS Files
=====================================
Where to Put Reactor SLS Files
==============================
Reactor SLS files share the familiar syntax from Salt States but there are
important differences. The goal of a Reactor file is to process a Salt event as
quickly as possible and then to optionally start a **new** process in response.
Reactor SLS files can come both from files local to the master, and from any of
the backends enabled via the :conf_master:`fileserver_backend` config option. Files
placed in the Salt fileserver can be referenced using a ``salt://`` URL, just
like they can in State SLS files.
1. The Salt Reactor watches Salt's event bus for new events.
2. The event tag is matched against the list of event tags under the
``reactor`` section in the Salt Master config.
3. The SLS files for any matches are Rendered into a data structure that
represents one or more function calls.
4. That data structure is given to a pool of worker threads for execution.
It is recommended to place reactor and orchestrator SLS files in their own
uniquely-named subdirectories such as ``orch/``, ``orchestrate/``, ``react/``,
``reactor/``, etc., to keep them organized.
.. _reactor-sls:
Writing Reactor SLS
===================
The different reaction types were developed separately and have historically
had different methods for passing arguments. For the 2017.7.2 release a new,
unified configuration schema has been introduced, which applies to all reaction
types.
The old config schema will continue to be supported, and there is no plan to
deprecate it at this time.
.. _reactor-local:
Local Reactions
---------------
A ``local`` reaction runs a :ref:`remote-execution function <all-salt.modules>`
on the targeted minions.
The old config schema required the positional and keyword arguments to be
manually separated by the user under ``arg`` and ``kwarg`` parameters. However,
this is not very user-friendly, as it forces the user to distinguish which type
of argument is which, and make sure that positional arguments are ordered
properly. Therefore, the new config schema is recommended if the master is
running a supported release.
The below two examples are equivalent:
+---------------------------------+-----------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+=================================+=============================+
| :: | :: |
| | |
| install_zsh: | install_zsh: |
| local.state.single: | local.state.single: |
| - tgt: 'kernel:Linux' | - tgt: 'kernel:Linux' |
| - tgt_type: grain | - tgt_type: grain |
| - args: | - arg: |
| - fun: pkg.installed | - pkg.installed |
| - name: zsh | - zsh |
| - fromrepo: updates | - kwarg: |
| | fromrepo: updates |
+---------------------------------+-----------------------------+
This reaction would be equivalent to running the following Salt command:
.. code-block:: bash
salt -G 'kernel:Linux' state.single pkg.installed name=zsh fromrepo=updates
.. note::
Any other parameters in the :py:meth:`LocalClient().cmd_async()
<salt.client.LocalClient.cmd_async>` method can be passed at the same
indentation level as ``tgt``.
.. note::
``tgt_type`` is only required when the target expression defined in ``tgt``
uses a :ref:`target type <targeting>` other than a minion ID glob.
The ``tgt_type`` argument was named ``expr_form`` in releases prior to
2017.7.0.
.. _reactor-runner:
Runner Reactions
----------------
Runner reactions execute :ref:`runner functions <all-salt.runners>` locally on
the master.
The old config schema called for passing arguments to the reaction directly
under the name of the runner function. However, this can cause unpredictable
interactions with the Reactor system's internal arguments. It is also possible
to pass positional and keyword arguments under ``arg`` and ``kwarg`` like above
in :ref:`local reactions <reactor-local>`, but as noted above this is not very
user-friendly. Therefore, the new config schema is recommended if the master
is running a supported release.
The below two examples are equivalent:
+-------------------------------------------------+-------------------------------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+=================================================+=================================================+
| :: | :: |
| | |
| deploy_app: | deploy_app: |
| runner.state.orchestrate: | runner.state.orchestrate: |
| - args: | - mods: orchestrate.deploy_app |
| - mods: orchestrate.deploy_app | - kwarg: |
| - pillar: | pillar: |
| event_tag: {{ tag }} | event_tag: {{ tag }} |
| event_data: {{ data['data']|json }} | event_data: {{ data['data']|json }} |
+-------------------------------------------------+-------------------------------------------------+
Assuming that the event tag is ``foo``, and the data passed to the event is
``{'bar': 'baz'}``, then this reaction is equivalent to running the following
Salt command:
.. code-block:: bash
salt-run state.orchestrate mods=orchestrate.deploy_app pillar='{"event_tag": "foo", "event_data": {"bar": "baz"}}'
.. _reactor-wheel:
Wheel Reactions
---------------
Wheel reactions run :ref:`wheel functions <all-salt.wheel>` locally on the
master.
Like :ref:`runner reactions <reactor-runner>`, the old config schema called for
wheel reactions to have arguments passed directly under the name of the
:ref:`wheel function <all-salt.wheel>` (or in ``arg`` or ``kwarg`` parameters).
The below two examples are equivalent:
+-----------------------------------+---------------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+===================================+=================================+
| :: | :: |
| | |
| remove_key: | remove_key: |
| wheel.key.delete: | wheel.key.delete: |
| - args: | - match: {{ data['id'] }} |
| - match: {{ data['id'] }} | |
+-----------------------------------+---------------------------------+
.. _reactor-caller:
Caller Reactions
----------------
Caller reactions run :ref:`remote-execution functions <all-salt.modules>` on a
minion daemon's Reactor system. To run a Reactor on the minion, it is necessary
to configure the :mod:`Reactor Engine <salt.engines.reactor>` in the minion
config file, and then set up your watched events in a ``reactor`` section in the
minion config file as well.
.. note:: Masterless Minions use this Reactor
This is the only way to run the Reactor if you use masterless minions.
Both the old and new config schemas involve passing arguments under an ``args``
parameter. However, the old config schema only supports positional arguments.
Therefore, the new config schema is recommended if the masterless minion is
running a supported release.
The below two examples are equivalent:
+---------------------------------+---------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+=================================+===========================+
| :: | :: |
| | |
| touch_file: | touch_file: |
| caller.file.touch: | caller.file.touch: |
| - args: | - args: |
| - name: /tmp/foo | - /tmp/foo |
+---------------------------------+---------------------------+
This reaction is equivalent to running the following Salt command:
.. code-block:: bash
salt-call file.touch name=/tmp/foo
Best Practices for Writing Reactor SLS Files
============================================
The Reactor works as follows:
1. The Salt Reactor watches Salt's event bus for new events.
2. Each event's tag is matched against the list of event tags configured under
the :conf_master:`reactor` section in the Salt Master config.
3. The SLS files for any matches are rendered into a data structure that
represents one or more function calls.
4. That data structure is given to a pool of worker threads for execution.
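The tag matching in step 2 is shell-style globbing, as in the ``'salt/minion/*/start'`` pattern shown earlier. A simplified sketch of that matching (not Salt's actual implementation) using Python's ``fnmatch``:

```python
import fnmatch

# Hypothetical reactor config: glob pattern -> list of reactor SLS files.
reactor_config = {
    "salt/minion/*/start": ["salt://reactor/startup_tasks.sls"],
}

def match_reactors(tag, config):
    """Return all SLS files whose glob pattern matches the event tag."""
    matches = []
    for pattern, sls_files in config.items():
        if fnmatch.fnmatch(tag, pattern):
            matches.extend(sls_files)
    return matches

print(match_reactors("salt/minion/web1/start", reactor_config))
# → ['salt://reactor/startup_tasks.sls']
```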
Matching and rendering Reactor SLS files is done sequentially in a single
process. Complex Jinja that calls out to slow Execution or Runner modules slows
down the rendering and causes other reactions to pile up behind the current
one. The worker pool is designed to handle complex and long-running processes
such as Salt Orchestrate.
process. For that reason, reactor SLS files should contain few individual
reactions (one, if at all possible). Also, keep in mind that reactions are
fired asynchronously (with the exception of :ref:`caller <reactor-caller>`) and
do *not* support :ref:`requisites <requisites>`.
tl;dr: Rendering Reactor SLS files MUST be simple and quick. The new process
started by the worker threads can be long-running. Using the reactor to fire
an orchestrate runner would be ideal.
Complex Jinja templating that calls out to slow :ref:`remote-execution
<all-salt.modules>` or :ref:`runner <all-salt.runners>` functions slows down
the rendering and causes other reactions to pile up behind the current one. The
worker pool is designed to handle complex and long-running processes like
:ref:`orchestration <orchestrate-runner>` jobs.
Therefore, when complex tasks are in order, :ref:`orchestration
<orchestrate-runner>` is a natural fit. Orchestration SLS files can be more
complex, and use requisites. Performing a complex task using orchestration lets
the Reactor system fire off the orchestration job and proceed with processing
other reactions.
.. _reactor-jinja-context:
Jinja Context
-------------
=============
Reactor files only have access to a minimal Jinja context. ``grains`` and
``pillar`` are not available. The ``salt`` object is available for calling
Runner and Execution modules but it should be used sparingly and only for quick
tasks for the reasons mentioned above.
Reactor SLS files only have access to a minimal Jinja context. ``grains`` and
``pillar`` are *not* available. The ``salt`` object is available for calling
:ref:`remote-execution <all-salt.modules>` or :ref:`runner <all-salt.runners>`
functions, but it should be used sparingly and only for quick tasks for the
reasons mentioned above.
In addition to the ``salt`` object, the following variables are available in
the Jinja context:
- ``tag`` - the tag from the event that triggered execution of the Reactor SLS
file
- ``data`` - the event's data dictionary
The ``data`` dict will contain an ``id`` key containing the minion ID, if the
event was fired from a minion, and a ``data`` key containing the data passed to
the event.
Advanced State System Capabilities
----------------------------------
==================================
Reactor SLS files, by design, do not support Requisites, ordering,
``onlyif``/``unless`` conditionals and most other powerful constructs from
Salt's State system.
Reactor SLS files, by design, do not support :ref:`requisites <requisites>`,
ordering, ``onlyif``/``unless`` conditionals and most other powerful constructs
from Salt's State system.
Complex Master-side operations are best performed by Salt's Orchestrate system
so using the Reactor to kick off an Orchestrate run is a very common pairing.
@ -166,7 +370,7 @@ For example:
# /etc/salt/master.d/reactor.conf
# A custom event containing: {"foo": "Foo!", "bar: "bar*", "baz": "Baz!"}
reactor:
- myco/custom/event:
- my/custom/event:
- /srv/reactor/some_event.sls
.. code-block:: jinja
@ -174,15 +378,15 @@ For example:
# /srv/reactor/some_event.sls
invoke_orchestrate_file:
runner.state.orchestrate:
- mods: _orch.do_complex_thing # /srv/salt/_orch/do_complex_thing.sls
- kwarg:
pillar:
event_tag: {{ tag }}
event_data: {{ data|json() }}
- args:
- mods: orchestrate.do_complex_thing
- pillar:
event_tag: {{ tag }}
event_data: {{ data|json }}
.. code-block:: jinja
# /srv/salt/_orch/do_complex_thing.sls
# /srv/salt/orchestrate/do_complex_thing.sls
{% set tag = salt.pillar.get('event_tag') %}
{% set data = salt.pillar.get('event_data') %}
@ -209,7 +413,7 @@ For example:
.. _beacons-and-reactors:
Beacons and Reactors
--------------------
====================
An event initiated by a beacon, when it arrives at the master, will be wrapped
inside a second event, such that the data object containing the beacon
@ -219,27 +423,52 @@ For example, to access the ``id`` field of the beacon event in a reactor file,
you will need to reference ``{{ data['data']['id'] }}`` rather than ``{{
data['id'] }}`` as for events initiated directly on the event bus.
Similarly, the data dictionary attached to the event would be located in
``{{ data['data']['data'] }}`` instead of ``{{ data['data'] }}``.
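The extra level of nesting can be sketched with plain dictionaries (tags and field values here are hypothetical):

```python
# An event fired directly on the event bus:
direct_event = {
    "tag": "my/custom/event",
    "data": {"id": "web1", "foo": "Foo!"},
}

# The same minion data arriving via a beacon is wrapped in a second
# event, pushing the original fields one level deeper:
beacon_event = {
    "tag": "salt/beacon/web1/some_beacon/",
    "data": {"data": {"id": "web1", "foo": "Foo!"}},
}

# In a reactor SLS file, ``data`` is the event's data dict, so:
print(direct_event["data"]["id"])           # {{ data['id'] }}
print(beacon_event["data"]["data"]["id"])   # {{ data['data']['id'] }}
```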
See the :ref:`beacon documentation <beacon-example>` for examples.
Fire an event
=============
Manually Firing an Event
========================
To fire an event from a minion call ``event.send``
From the Master
---------------
Use the :py:func:`event.send <salt.runners.event.send>` runner:
.. code-block:: bash
salt-call event.send 'foo' '{orchestrate: refresh}'
salt-run event.send foo '{orchestrate: refresh}'
After this is called, any reactor sls files matching event tag ``foo`` will
execute with ``{{ data['data']['orchestrate'] }}`` equal to ``'refresh'``.
From the Minion
---------------
See :py:mod:`salt.modules.event` for more information.
To fire an event to the master from a minion, call :py:func:`event.send
<salt.modules.event.send>`:
Knowing what event is being fired
=================================
.. code-block:: bash
The best way to see exactly what events are fired and what data is available in
each event is to use the :py:func:`state.event runner
salt-call event.send foo '{orchestrate: refresh}'
To fire an event to the minion's local event bus, call :py:func:`event.fire
<salt.modules.event.fire>`:
.. code-block:: bash
salt-call event.fire '{orchestrate: refresh}' foo
Referencing Data Passed in Events
---------------------------------
With any of the above examples, reactor SLS files triggered by watching
the event tag ``foo`` will execute with ``{{ data['data']['orchestrate'] }}``
equal to ``'refresh'``.
Getting Information About Events
================================
The best way to see exactly what events have been fired and what data is
available in each event is to use the :py:func:`state.event runner
<salt.runners.state.event>`.
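For example, to watch events scroll by in real time on the master:

```bash
salt-run state.event pretty=True
```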
.. seealso:: :ref:`Common Salt Events <event-master_events>`
@ -308,156 +537,10 @@ rendered SLS file (or any errors generated while rendering the SLS file).
view the result of referencing Jinja variables. If the result is empty then
Jinja produced an empty result and the Reactor will ignore it.
.. _reactor-structure:
Passing Event Data to Minions or Orchestration as Pillar
--------------------------------------------------------
Understanding the Structure of Reactor Formulas
===============================================
**I.e., when to use `arg` and `kwarg` and when to specify the function
arguments directly.**
While the reactor system uses the same basic data structure as the state
system, the functions that will be called using that data structure are
different functions than are called via Salt's state system. The Reactor can
call Runner modules using the `runner` prefix, Wheel modules using the `wheel`
prefix, and can also cause minions to run Execution modules using the `local`
prefix.
.. versionchanged:: 2014.7.0
The ``cmd`` prefix was renamed to ``local`` for consistency with other
parts of Salt. A backward-compatible alias was added for ``cmd``.
The Reactor runs on the master and calls functions that exist on the master. In
the case of Runner and Wheel functions the Reactor can just call those
functions directly since they exist on the master and are run on the master.
In the case of functions that exist on minions and are run on minions, the
Reactor still needs to call a function on the master in order to send the
necessary data to the minion so the minion can execute that function.
The Reactor calls functions exposed in :ref:`Salt's Python API documentation
<client-apis>`. and thus the structure of Reactor files very transparently
reflects the function signatures of those functions.
Calling Execution modules on Minions
------------------------------------
The Reactor sends commands down to minions in the exact same way Salt's CLI
interface does. It calls a function locally on the master that sends the name
of the function as well as a list of any arguments and a dictionary of any
keyword arguments that the minion should use to execute that function.
Specifically, the Reactor calls the async version of :py:meth:`this function
<salt.client.LocalClient.cmd>`. You can see that function has 'arg' and 'kwarg'
parameters which are both values that are sent down to the minion.
Executing remote commands maps to the :strong:`LocalClient` interface which is
used by the :strong:`salt` command. This interface more specifically maps to
the :strong:`cmd_async` method inside of the :strong:`LocalClient` class. This
means that the arguments passed are being passed to the :strong:`cmd_async`
method, not the remote method. A field starts with :strong:`local` to use the
:strong:`LocalClient` subsystem. The result is, to execute a remote command,
a reactor formula would look like this:
.. code-block:: yaml
clean_tmp:
local.cmd.run:
- tgt: '*'
- arg:
- rm -rf /tmp/*
The ``arg`` option takes a list of arguments as they would be presented on the
command line, so the above declaration is the same as running this salt
command:
.. code-block:: bash
salt '*' cmd.run 'rm -rf /tmp/*'
Use the ``tgt_type`` argument to specify a matcher:
.. code-block:: yaml
clean_tmp:
local.cmd.run:
- tgt: 'os:Ubuntu'
- tgt_type: grain
- arg:
- rm -rf /tmp/*
clean_tmp:
local.cmd.run:
- tgt: 'G@roles:hbase_master'
- tgt_type: compound
- arg:
- rm -rf /tmp/*
.. note::
The ``tgt_type`` argument was named ``expr_form`` in releases prior to
2017.7.0 (2016.11.x and earlier).
Any other parameters in the :py:meth:`LocalClient().cmd()
<salt.client.LocalClient.cmd>` method can be specified as well.
Executing Reactors from the Minion
----------------------------------
The minion can be setup to use the Reactor via a reactor engine. This just
sets up and listens to the minions event bus, instead of to the masters.
The biggest difference is that you have to use the caller method on the
Reactor, which is the equivalent of salt-call, to run your commands.
:mod:`Reactor Engine setup <salt.engines.reactor>`
.. code-block:: yaml
clean_tmp:
caller.cmd.run:
- arg:
- rm -rf /tmp/*
.. note:: Masterless Minions use this Reactor
This is the only way to run the Reactor if you use masterless minions.
Calling Runner modules and Wheel modules
----------------------------------------
Calling Runner modules and Wheel modules from the Reactor uses a more direct
syntax since the function is being executed locally instead of sending a
command to a remote system to be executed there. There are no 'arg' or 'kwarg'
parameters (unless the Runner function or Wheel function accepts a parameter
with either of those names.)
For example:
.. code-block:: yaml
clear_the_grains_cache_for_all_minions:
runner.cache.clear_grains
If the :py:func:`the runner takes arguments <salt.runners.cloud.profile>` then
they must be specified as keyword arguments.
.. code-block:: yaml
spin_up_more_web_machines:
runner.cloud.profile:
- prof: centos_6
- instances:
- web11 # These VM names would be generated via Jinja in a
- web12 # real-world example.
To determine the proper names for the arguments, check the documentation
or source code for the runner function you wish to call.
Passing event data to Minions or Orchestrate as Pillar
------------------------------------------------------
An interesting trick to pass data from the Reactor script to
An interesting trick to pass data from the Reactor SLS file to
:py:func:`state.apply <salt.modules.state.apply_>` is to pass it as inline
Pillar data since both functions take a keyword argument named ``pillar``.
@ -484,10 +567,9 @@ from the event to the state file via inline Pillar.
add_new_minion_to_pool:
local.state.apply:
- tgt: 'haproxy*'
- arg:
- haproxy.refresh_pool
- kwarg:
pillar:
- args:
- mods: haproxy.refresh_pool
- pillar:
new_minion: {{ data['id'] }}
{% endif %}
@ -503,17 +585,16 @@ This works with Orchestrate files as well:
call_some_orchestrate_file:
runner.state.orchestrate:
- mods: _orch.some_orchestrate_file
- pillar:
stuff: things
- args:
- mods: orchestrate.some_orchestrate_file
- pillar:
stuff: things
Which is equivalent to the following command at the CLI:
.. code-block:: bash
salt-run state.orchestrate _orch.some_orchestrate_file pillar='{stuff: things}'
This expects to find a file at /srv/salt/_orch/some_orchestrate_file.sls.
salt-run state.orchestrate orchestrate.some_orchestrate_file pillar='{stuff: things}'
Finally, that data is available in the state file using the normal Pillar
lookup syntax. The following example is grabbing web server names and IP
@ -564,7 +645,7 @@ includes the minion id, which we can use for matching.
- 'salt/minion/ink*/start':
- /srv/reactor/auth-complete.sls
In this sls file, we say that if the key was rejected we will delete the key on
In this SLS file, we say that if the key was rejected we will delete the key on
the master and then also tell the master to ssh in to the minion and tell it to
restart the minion, since a minion process will die if the key is rejected.
@ -580,19 +661,21 @@ authentication every ten seconds by default.
{% if not data['result'] and data['id'].startswith('ink') %}
minion_remove:
wheel.key.delete:
- match: {{ data['id'] }}
- args:
- match: {{ data['id'] }}
minion_rejoin:
local.cmd.run:
- tgt: salt-master.domain.tld
- arg:
- ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
- args:
- cmd: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
{% endif %}
{# Ink server is sending new key -- accept this key #}
{% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ink') %}
minion_add:
wheel.key.accept:
- match: {{ data['id'] }}
- args:
- match: {{ data['id'] }}
{% endif %}
No if statements are needed here because we already limited this action to just


@ -132,7 +132,7 @@ fi
###############################################################################
# Remove the salt from the paths.d
###############################################################################
if [ ! -f "/etc/paths.d/salt" ]; then
if [ -f "/etc/paths.d/salt" ]; then
echo "Path: Removing salt from the path..." >> "$TEMP_DIR/preinstall.txt"
rm "/etc/paths.d/salt"
echo "Path: Removed Successfully" >> "$TEMP_DIR/preinstall.txt"


@ -35,8 +35,9 @@ _salt_get_keys(){
}
_salt(){
local _salt_cache_functions=${SALT_COMP_CACHE_FUNCTIONS:-"$HOME/.cache/salt-comp-cache_functions"}
local _salt_cache_timeout=${SALT_COMP_CACHE_TIMEOUT:-"last hour"}
CACHE_DIR="$HOME/.cache/salt-comp-cache_functions"
local _salt_cache_functions=${SALT_COMP_CACHE_FUNCTIONS:=$CACHE_DIR}
local _salt_cache_timeout=${SALT_COMP_CACHE_TIMEOUT:='last hour'}
if [ ! -d "$(dirname ${_salt_cache_functions})" ]; then
mkdir -p "$(dirname ${_salt_cache_functions})"


@ -73,7 +73,7 @@ class Cache(object):
self.cachedir = opts.get('cachedir', salt.syspaths.CACHE_DIR)
else:
self.cachedir = cachedir
self.driver = opts.get('cache', salt.config.DEFAULT_MASTER_OPTS)
self.driver = opts.get('cache', salt.config.DEFAULT_MASTER_OPTS['cache'])
self.serial = Serial(opts)
self._modules = None
self._kwargs = kwargs
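The one-line fix in this hunk matters because ``dict.get`` returns its fallback wholesale: the old code fell back to the entire ``DEFAULT_MASTER_OPTS`` dict rather than the default cache driver name. A standalone sketch (the trimmed defaults dict below is illustrative, though ``localfs`` is Salt's default driver):

```python
DEFAULT_MASTER_OPTS = {"cache": "localfs", "timeout": 5}  # trimmed stand-in

opts = {}  # no explicit 'cache' option configured

# Old (buggy): the fallback is the whole defaults dict, not a driver name.
buggy_driver = opts.get("cache", DEFAULT_MASTER_OPTS)
print(type(buggy_driver).__name__)  # → dict

# Fixed: fall back to the default driver *value*.
driver = opts.get("cache", DEFAULT_MASTER_OPTS["cache"])
print(driver)  # → localfs
```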


@ -364,29 +364,19 @@ class SyncClientMixin(object):
# packed into the top level object. The plan is to move away from
# that since the caller knows what is an arg vs a kwarg, but while
# we make the transition we will load "kwargs" using format_call if
# there are no kwargs in the low object passed in
f_call = None
if u'arg' not in low:
# there are no kwargs in the low object passed in.
if u'arg' in low and u'kwarg' in low:
args = low[u'arg']
kwargs = low[u'kwarg']
else:
f_call = salt.utils.format_call(
self.functions[fun],
low,
expected_extra_kws=CLIENT_INTERNAL_KEYWORDS
)
args = f_call.get(u'args', ())
else:
args = low[u'arg']
if u'kwarg' not in low:
log.critical(
u'kwargs must be passed inside the low data within the '
u'\'kwarg\' key. See usage of '
u'salt.utils.args.parse_input() and '
u'salt.minion.load_args_and_kwargs() elsewhere in the '
u'codebase.'
)
kwargs = {}
else:
kwargs = low[u'kwarg']
kwargs = f_call.get(u'kwargs', {})
# Update the event data with loaded args and kwargs
data[u'fun_args'] = list(args) + ([kwargs] if kwargs else [])
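The final line of this hunk packs the loaded arguments into the event data; its behavior can be checked in isolation:

```python
def pack_fun_args(args, kwargs):
    # Mirrors: data[u'fun_args'] = list(args) + ([kwargs] if kwargs else [])
    return list(args) + ([kwargs] if kwargs else [])

print(pack_fun_args(("web1",), {"timeout": 30}))  # → ['web1', {'timeout': 30}]
print(pack_fun_args(("web1",), {}))               # → ['web1'] (empty kwargs omitted)
```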


@ -266,6 +266,12 @@ class SaltCacheError(SaltException):
'''
class TimeoutError(SaltException):
'''
Thrown when an operation cannot be completed within a given time limit.
'''
class SaltReqTimeoutError(SaltException):
'''
Thrown when a salt master request call fails to return within the timeout


@ -1205,6 +1205,10 @@ _OS_FAMILY_MAP = {
'Raspbian': 'Debian',
'Devuan': 'Debian',
'antiX': 'Debian',
'Kali': 'Debian',
'neon': 'Debian',
'Cumulus': 'Debian',
'Deepin': 'Debian',
'NILinuxRT': 'NILinuxRT',
'NILinuxRT-XFCE': 'NILinuxRT',
'KDE neon': 'Debian',


@ -1643,13 +1643,24 @@ class Minion(MinionBase):
minion side execution.
'''
salt.utils.appendproctitle(u'{0}._thread_multi_return {1}'.format(cls.__name__, data[u'jid']))
ret = {
u'return': {},
u'retcode': {},
u'success': {}
}
for ind in range(0, len(data[u'fun'])):
ret[u'success'][data[u'fun'][ind]] = False
multifunc_ordered = opts.get(u'multifunc_ordered', False)
num_funcs = len(data[u'fun'])
if multifunc_ordered:
ret = {
u'return': [None] * num_funcs,
u'retcode': [None] * num_funcs,
u'success': [False] * num_funcs
}
else:
ret = {
u'return': {},
u'retcode': {},
u'success': {}
}
for ind in range(0, num_funcs):
if not multifunc_ordered:
ret[u'success'][data[u'fun'][ind]] = False
try:
minion_blackout_violation = False
if minion_instance.connected and minion_instance.opts[u'pillar'].get(u'minion_blackout', False):
@ -1673,16 +1684,27 @@ class Minion(MinionBase):
data[u'arg'][ind],
data)
minion_instance.functions.pack[u'__context__'][u'retcode'] = 0
ret[u'return'][data[u'fun'][ind]] = func(*args, **kwargs)
ret[u'retcode'][data[u'fun'][ind]] = minion_instance.functions.pack[u'__context__'].get(
u'retcode',
0
)
ret[u'success'][data[u'fun'][ind]] = True
if multifunc_ordered:
ret[u'return'][ind] = func(*args, **kwargs)
ret[u'retcode'][ind] = minion_instance.functions.pack[u'__context__'].get(
u'retcode',
0
)
ret[u'success'][ind] = True
else:
ret[u'return'][data[u'fun'][ind]] = func(*args, **kwargs)
ret[u'retcode'][data[u'fun'][ind]] = minion_instance.functions.pack[u'__context__'].get(
u'retcode',
0
)
ret[u'success'][data[u'fun'][ind]] = True
except Exception as exc:
trb = traceback.format_exc()
log.warning(u'The minion function caused an exception: %s', exc)
ret[u'return'][data[u'fun'][ind]] = trb
if multifunc_ordered:
ret[u'return'][ind] = trb
else:
ret[u'return'][data[u'fun'][ind]] = trb
ret[u'jid'] = data[u'jid']
ret[u'fun'] = data[u'fun']
ret[u'fun_args'] = data[u'arg']
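The `multifunc_ordered` branching above exists because the legacy dict-keyed return collapses duplicate function names in a multi-function job, while the ordered form keeps one positional slot per call. A small sketch of the two container shapes (not Salt's actual return plumbing):

```python
def init_results(funcs, ordered):
    # Ordered mode: one slot per requested call, results stay positional.
    if ordered:
        n = len(funcs)
        return {'return': [None] * n, 'retcode': [None] * n, 'success': [False] * n}
    # Legacy mode: keyed by function name, so duplicate names overwrite each other.
    return {'return': {}, 'retcode': {}, 'success': {f: False for f in funcs}}

ordered = init_results(['test.ping', 'test.ping'], ordered=True)
legacy = init_results(['test.ping', 'test.ping'], ordered=False)
```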
@ -2674,6 +2696,8 @@ class SyndicManager(MinionBase):
'''
if kwargs is None:
kwargs = {}
successful = False
# Call for each master
for master, syndic_future in self.iter_master_options(master_id):
if not syndic_future.done() or syndic_future.exception():
log.error(
@ -2684,15 +2708,15 @@ class SyndicManager(MinionBase):
try:
getattr(syndic_future.result(), func)(*args, **kwargs)
return
successful = True
except SaltClientError:
log.error(
u'Unable to call %s on %s, trying another...',
func, master
)
self._mark_master_dead(master)
continue
log.critical(u'Unable to call %s on any masters!', func)
if not successful:
log.critical(u'Unable to call %s on any masters!', func)
def _return_pub_syndic(self, values, master_id=None):
'''

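The `successful` flag introduced above changes the syndic call loop from "return after the first master that works" to "attempt every master, and only log critically if all of them failed". In isolation the pattern might look like this (with a hypothetical `flaky` callable standing in for the syndic call):

```python
def call_on_masters(masters, func):
    # Try func on every master; report failure only if none succeeded.
    successful = False
    failed = []
    for master in masters:
        try:
            func(master)
            successful = True
        except ConnectionError:
            failed.append(master)
            continue
    if not successful:
        print('Unable to call on any masters!')
    return successful, failed

def flaky(master):
    # Hypothetical callable: the first master is unreachable.
    if master == 'master1':
        raise ConnectionError('unreachable')

ok, failed = call_on_masters(['master1', 'master2'], flaky)
```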
View File

@ -97,11 +97,15 @@ __virtualname__ = 'pkg'
def __virtual__():
'''
Confirm this module is on a Debian based system
Confirm this module is on a Debian-based system
'''
if __grains__.get('os_family') in ('Kali', 'Debian', 'neon', 'Deepin'):
return __virtualname__
elif __grains__.get('os_family', False) == 'Cumulus':
# If your minion is running an OS which is Debian-based but does not have
# an "os_family" grain of Debian, then the proper fix is NOT to check for
# the minion's "os_family" grain here in the __virtual__. The correct fix
# is to add the value from the minion's "os" grain to the _OS_FAMILY_MAP
# dict in salt/grains/core.py, so that we assign the correct "os_family"
# grain to the minion.
if __grains__.get('os_family') == 'Debian':
return __virtualname__
return (False, 'The pkg module could not be loaded: unsupported OS family')


@ -2456,11 +2456,10 @@ def describe_route_table(route_table_id=None, route_table_name=None,
salt myminion boto_vpc.describe_route_table route_table_id='rtb-1f382e7d'
'''
salt.utils.versions.warn_until(
'Oxygen',
'The \'describe_route_table\' method has been deprecated and '
'replaced by \'describe_route_tables\'.'
'Neon',
'The \'describe_route_table\' method has been deprecated and '
'replaced by \'describe_route_tables\'.'
)
if not any((route_table_id, route_table_name, tags)):
raise SaltInvocationError('At least one of the following must be specified: '


@ -40,11 +40,16 @@ import base64
import logging
import yaml
import tempfile
import signal
from time import sleep
from contextlib import contextmanager
from salt.exceptions import CommandExecutionError
from salt.ext.six import iteritems
import salt.utils.files
import salt.utils.templates
from salt.exceptions import TimeoutError
from salt.ext.six.moves import range # pylint: disable=import-error
try:
import kubernetes # pylint: disable=import-self
@ -78,6 +83,21 @@ def __virtual__():
return False, 'python kubernetes library not found'
if not salt.utils.is_windows():
@contextmanager
def _time_limit(seconds):
def signal_handler(signum, frame):
raise TimeoutError
signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(seconds)
try:
yield
finally:
signal.alarm(0)
POLLING_TIME_LIMIT = 30
# pylint: disable=no-member
def _setup_conn(**kwargs):
'''
@ -692,7 +712,30 @@ def delete_deployment(name, namespace='default', **kwargs):
name=name,
namespace=namespace,
body=body)
return api_response.to_dict()
mutable_api_response = api_response.to_dict()
if not salt.utils.is_windows():
try:
with _time_limit(POLLING_TIME_LIMIT):
while show_deployment(name, namespace) is not None:
sleep(1)
else: # pylint: disable=useless-else-on-loop
mutable_api_response['code'] = 200
except TimeoutError:
pass
else:
# Windows has no signal.alarm implementation, so we are just falling
# back to loop-counting.
for i in range(60):
if show_deployment(name, namespace) is None:
mutable_api_response['code'] = 200
break
else:
sleep(1)
if mutable_api_response['code'] != 200:
log.warning('Reached polling time limit. Deployment is not yet '
'deleted, but we are backing off. Sorry, but you\'ll '
'have to check manually.')
return mutable_api_response
except (ApiException, HTTPError) as exc:
if isinstance(exc, ApiException) and exc.status == 404:
return None
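The `_time_limit` context manager added to the kubernetes module is a reusable pattern: arm `SIGALRM`, raise from the handler, and always disarm in `finally`. A standalone sketch (POSIX-only and main-thread-only, which is exactly why the diff needs the Windows loop-counting fallback); this version also restores the previous handler, which the in-tree helper skips:

```python
import signal
from contextlib import contextmanager

class TimeLimitError(Exception):
    pass

@contextmanager
def time_limit(seconds):
    # SIGALRM fires after `seconds`; the handler turns it into an exception.
    def handler(signum, frame):
        raise TimeLimitError()
    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)                             # disarm any pending alarm
        signal.signal(signal.SIGALRM, old_handler)  # restore previous handler

completed = False
try:
    with time_limit(2):
        completed = True  # fast path: finishes long before the 2s alarm
except TimeLimitError:
    pass
```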


@ -1,6 +1,9 @@
# -*- coding: utf-8 -*-
'''
Support for Linux File Access Control Lists
The Linux ACL module requires the `getfacl` and `setfacl` binaries.
'''
from __future__ import absolute_import


@ -688,11 +688,20 @@ def file_query(database, file_name, **connection_args):
.. versionadded:: 2017.7.0
database
database to run script inside
file_name
File name of the script. This can be on the minion, or a file that is reachable by the fileserver
CLI Example:
.. code-block:: bash
salt '*' mysql.file_query mydb file_name=/tmp/sqlfile.sql
salt '*' mysql.file_query mydb file_name=salt://sqlfile.sql
Return data:
@ -701,6 +710,9 @@ def file_query(database, file_name, **connection_args):
{'query time': {'human': '39.0ms', 'raw': '0.03899'}, 'rows affected': 1L}
'''
if any(file_name.startswith(proto) for proto in ('salt://', 'http://', 'https://', 'swift://', 's3://')):
file_name = __salt__['cp.cache_file'](file_name)
if os.path.exists(file_name):
with salt.utils.files.fopen(file_name, 'r') as ifile:
contents = ifile.read()
@ -709,7 +721,7 @@ def file_query(database, file_name, **connection_args):
return False
query_string = ""
ret = {'rows returned': 0, 'columns': 0, 'results': 0, 'rows affected': 0, 'query time': {'raw': 0}}
ret = {'rows returned': 0, 'columns': [], 'results': [], 'rows affected': 0, 'query time': {'raw': 0}}
for line in contents.splitlines():
if re.match(r'--', line): # ignore sql comments
continue
@ -729,16 +741,16 @@ def file_query(database, file_name, **connection_args):
if 'rows returned' in query_result:
ret['rows returned'] += query_result['rows returned']
if 'columns' in query_result:
ret['columns'] += query_result['columns']
ret['columns'].append(query_result['columns'])
if 'results' in query_result:
ret['results'] += query_result['results']
ret['results'].append(query_result['results'])
if 'rows affected' in query_result:
ret['rows affected'] += query_result['rows affected']
ret['query time']['human'] = str(round(float(ret['query time']['raw']), 2)) + 's'
ret['query time']['raw'] = round(float(ret['query time']['raw']), 5)
# Remove empty keys in ret
ret = dict((k, v) for k, v in six.iteritems(ret) if v)
ret = {k: v for k, v in six.iteritems(ret) if v}
return ret
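The `+=` to `append` change above matters because, with lists, `+=` is an extend: the per-statement column lists were being flattened into one long list instead of accumulating as one entry per executed statement. In miniature:

```python
per_statement_columns = [['id', 'name'], ['id', 'email']]

flattened, grouped = [], []
for cols in per_statement_columns:
    flattened += cols     # extend: per-statement grouping is lost
    grouped.append(cols)  # append: one entry per executed statement
```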


@ -375,8 +375,10 @@ def list_semod():
def _validate_filetype(filetype):
'''
Checks if the given filetype is a valid SELinux filetype specification.
Throws an SaltInvocationError if it isn't.
.. versionadded:: 2017.7.0
Checks if the given filetype is a valid SELinux filetype
specification. Throws a SaltInvocationError if it isn't.
'''
if filetype not in _SELINUX_FILETYPES.keys():
raise SaltInvocationError('Invalid filetype given: {0}'.format(filetype))
@ -385,6 +387,8 @@ def _validate_filetype(filetype):
def _context_dict_to_string(context):
'''
.. versionadded:: 2017.7.0
Converts an SELinux file context from a dict to a string.
'''
return '{sel_user}:{sel_role}:{sel_type}:{sel_level}'.format(**context)
@ -392,6 +396,8 @@ def _context_dict_to_string(context):
def _context_string_to_dict(context):
'''
.. versionadded:: 2017.7.0
Converts an SELinux file context from string to dict.
'''
if not re.match('[^:]+:[^:]+:[^:]+:[^:]+$', context):
@ -406,8 +412,11 @@ def _context_string_to_dict(context):
def filetype_id_to_string(filetype='a'):
'''
Translates SELinux filetype single-letter representation
to a more human-readable version (which is also used in `semanage fcontext -l`).
.. versionadded:: 2017.7.0
Translates SELinux filetype single-letter representation to a more
human-readable version (which is also used in `semanage fcontext
-l`).
'''
_validate_filetype(filetype)
return _SELINUX_FILETYPES.get(filetype, 'error')
@ -415,20 +424,27 @@ def filetype_id_to_string(filetype='a'):
def fcontext_get_policy(name, filetype=None, sel_type=None, sel_user=None, sel_level=None):
'''
Returns the current entry in the SELinux policy list as a dictionary.
Returns None if no exact match was found
.. versionadded:: 2017.7.0
Returns the current entry in the SELinux policy list as a
dictionary. Returns None if no exact match was found.
Returned keys are:
- filespec (the name supplied and matched)
- filetype (the descriptive name of the filetype supplied)
- sel_user, sel_role, sel_type, sel_level (the selinux context)
* filespec (the name supplied and matched)
* filetype (the descriptive name of the filetype supplied)
* sel_user, sel_role, sel_type, sel_level (the selinux context)
For a more in-depth explanation of the selinux context, go to
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/chap-Security-Enhanced_Linux-SELinux_Contexts.html
name: filespec of the file or directory. Regex syntax is allowed.
filetype: The SELinux filetype specification.
Use one of [a, f, d, c, b, s, l, p].
See also `man semanage-fcontext`.
Defaults to 'a' (all files)
name
filespec of the file or directory. Regex syntax is allowed.
filetype
The SELinux filetype specification. Use one of [a, f, d, c, b,
s, l, p]. See also `man semanage-fcontext`. Defaults to 'a'
(all files).
CLI Example:
@ -461,20 +477,34 @@ def fcontext_get_policy(name, filetype=None, sel_type=None, sel_user=None, sel_l
def fcontext_add_or_delete_policy(action, name, filetype=None, sel_type=None, sel_user=None, sel_level=None):
'''
Sets or deletes the SELinux policy for a given filespec and other optional parameters.
Returns the result of the call to semanage.
Note that you don't have to remove an entry before setting a new one for a given
filespec and filetype, as adding one with semanage automatically overwrites a
previously configured SELinux context.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
file_type: The SELinux filetype specification.
Use one of [a, f, d, c, b, s, l, p].
See also ``man semanage-fcontext``.
Defaults to 'a' (all files)
sel_type: SELinux context type. There are many.
sel_user: SELinux user. Use ``semanage login -l`` to determine which ones are available to you
sel_level: The MLS range of the SELinux context.
Sets or deletes the SELinux policy for a given filespec and other
optional parameters.
Returns the result of the call to semanage.
Note that you don't have to remove an entry before setting a new
one for a given filespec and filetype, as adding one with semanage
automatically overwrites a previously configured SELinux context.
name
filespec of the file or directory. Regex syntax is allowed.
file_type
The SELinux filetype specification. Use one of [a, f, d, c, b,
s, l, p]. See also ``man semanage-fcontext``. Defaults to 'a'
(all files).
sel_type
SELinux context type. There are many.
sel_user
SELinux user. Use ``semanage login -l`` to determine which ones
are available to you.
sel_level
The MLS range of the SELinux context.
CLI Example:
@ -500,10 +530,14 @@ def fcontext_add_or_delete_policy(action, name, filetype=None, sel_type=None, se
def fcontext_policy_is_applied(name, recursive=False):
'''
Returns an empty string if the SELinux policy for a given filespec is applied,
returns string with differences in policy and actual situation otherwise.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
Returns an empty string if the SELinux policy for a given filespec
is applied, returns string with differences in policy and actual
situation otherwise.
name
filespec of the file or directory. Regex syntax is allowed.
CLI Example:
@ -520,11 +554,17 @@ def fcontext_policy_is_applied(name, recursive=False):
def fcontext_apply_policy(name, recursive=False):
'''
Applies SElinux policies to filespec using `restorecon [-R] filespec`.
Returns dict with changes if succesful, the output of the restorecon command otherwise.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
recursive: Recursively apply SELinux policies.
Applies SELinux policies to filespec using `restorecon [-R]
filespec`. Returns dict with changes if successful, the output of
the restorecon command otherwise.
name
filespec of the file or directory. Regex syntax is allowed.
recursive
Recursively apply SELinux policies.
CLI Example:


@ -1280,10 +1280,10 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
arguments = ['/i', cached_pkg]
if pkginfo[version_num].get('allusers', True):
arguments.append('ALLUSERS="1"')
arguments.extend(salt.utils.shlex_split(install_flags))
arguments.extend(salt.utils.shlex_split(install_flags, posix=False))
else:
cmd = cached_pkg
arguments = salt.utils.shlex_split(install_flags)
arguments = salt.utils.shlex_split(install_flags, posix=False)
# Install the software
# Check Use Scheduler Option
@ -1356,7 +1356,6 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
# Launch the command
result = __salt__['cmd.run_all'](cmd,
cache_path,
output_loglevel='quiet',
python_shell=False,
redirect_stderr=True)
if not result['retcode']:
@ -1615,19 +1614,19 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
#Compute msiexec string
use_msiexec, msiexec = _get_msiexec(pkginfo[target].get('msiexec', False))
# Build cmd and arguments
# cmd and arguments must be separated for use with the task scheduler
if use_msiexec:
cmd = msiexec
arguments = ['/x']
arguments.extend(salt.utils.shlex_split(uninstall_flags, posix=False))
else:
cmd = expanded_cached_pkg
arguments = salt.utils.shlex_split(uninstall_flags, posix=False)
# Uninstall the software
# Check Use Scheduler Option
if pkginfo[target].get('use_scheduler', False):
# Build Scheduled Task Parameters
if use_msiexec:
cmd = msiexec
arguments = ['/x']
arguments.extend(salt.utils.args.shlex_split(uninstall_flags))
else:
cmd = expanded_cached_pkg
arguments = salt.utils.args.shlex_split(uninstall_flags)
# Create Scheduled Task
__salt__['task.create_task'](name='update-salt-software',
user_name='System',
@ -1648,16 +1647,12 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
ret[pkgname] = {'uninstall status': 'failed'}
else:
# Build the install command
cmd = []
if use_msiexec:
cmd.extend([msiexec, '/x', expanded_cached_pkg])
else:
cmd.append(expanded_cached_pkg)
cmd.extend(salt.utils.args.shlex_split(uninstall_flags))
cmd = [cmd]
cmd.extend(arguments)
# Launch the command
result = __salt__['cmd.run_all'](
cmd,
output_loglevel='trace',
python_shell=False,
redirect_stderr=True)
if not result['retcode']:
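The `posix=False` additions above avoid a classic Windows pitfall: in POSIX mode, `shlex.split` treats backslashes as escape characters and silently eats them out of unquoted install paths. For example:

```python
import shlex

install_flags = r'/S /D=C:\Temp\App'

mangled = shlex.split(install_flags, posix=True)     # backslashes consumed
preserved = shlex.split(install_flags, posix=False)  # Windows path survives
```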


@ -6,9 +6,10 @@ Module for sending messages to Mattermost
:configuration: This module can be used by either passing an api_url and hook
directly or by specifying both in a configuration profile in the salt
master/minion config.
For example:
master/minion config. For example:
.. code-block:: yaml
mattermost:
hook: peWcBiMOS9HrZG15peWcBiMOS9HrZG15
api_url: https://example.com


@ -97,29 +97,61 @@ def installed(name, version=None, source=None, force=False, pre_versions=False,
ret['changes'] = {name: 'Version {0} will be installed'
''.format(version)}
else:
ret['changes'] = {name: 'Will be installed'}
ret['changes'] = {name: 'Latest version will be installed'}
# Package installed
else:
version_info = __salt__['chocolatey.version'](name, check_remote=True)
full_name = name
lower_name = name.lower()
for pkg in version_info:
if lower_name == pkg.lower():
if name.lower() == pkg.lower():
full_name = pkg
available_version = version_info[full_name]['available'][0]
version = version if version else available_version
installed_version = version_info[full_name]['installed'][0]
if force:
ret['changes'] = {name: 'Version {0} will be forcibly installed'
''.format(version)}
elif allow_multiple:
ret['changes'] = {name: 'Version {0} will be installed side by side'
''.format(version)}
if version:
if salt.utils.compare_versions(
ver1=installed_version, oper="==", ver2=version):
if force:
ret['changes'] = {
name: 'Version {0} will be reinstalled'.format(version)}
ret['comment'] = 'Reinstall {0} {1}' \
''.format(full_name, version)
else:
ret['comment'] = '{0} {1} is already installed' \
''.format(name, version)
if __opts__['test']:
ret['result'] = None
return ret
else:
if allow_multiple:
ret['changes'] = {
name: 'Version {0} will be installed side by side with '
'Version {1} if supported'
''.format(version, installed_version)}
ret['comment'] = 'Install {0} {1} side-by-side with {0} {2}' \
''.format(full_name, version, installed_version)
else:
ret['changes'] = {
name: 'Version {0} will be installed over Version {1} '
''.format(version, installed_version)}
ret['comment'] = 'Install {0} {1} over {0} {2}' \
''.format(full_name, version, installed_version)
force = True
else:
ret['comment'] = 'The Package {0} is already installed'.format(name)
return ret
version = installed_version
if force:
ret['changes'] = {
name: 'Version {0} will be reinstalled'.format(version)}
ret['comment'] = 'Reinstall {0} {1}' \
''.format(full_name, version)
else:
ret['comment'] = '{0} {1} is already installed' \
''.format(name, version)
if __opts__['test']:
ret['result'] = None
return ret
if __opts__['test']:
ret['result'] = None


@ -2,6 +2,8 @@
'''
Linux File Access Control Lists
The Linux ACL state module requires the `getfacl` and `setfacl` binaries.
Ensure a Linux ACL is present
.. code-block:: yaml
@ -50,7 +52,7 @@ def __virtual__():
if salt.utils.path.which('getfacl') and salt.utils.path.which('setfacl'):
return __virtualname__
return False
return False, 'The linux_acl state cannot be loaded: the getfacl or setfacl binary is not in the path.'
def present(name, acl_type, acl_name='', perms='', recurse=False):
@ -85,11 +87,12 @@ def present(name, acl_type, acl_name='', perms='', recurse=False):
# applied to the user/group that owns the file, e.g.,
# default:group::rwx would be listed as default:group:root:rwx
# In this case, if acl_name is empty, we really want to search for root
# but still uses '' for other
# We search through the dictionary getfacl returns for the owner of the
# file if acl_name is empty.
if acl_name == '':
_search_name = __current_perms[name].get('comment').get(_acl_type)
_search_name = __current_perms[name].get('comment').get(_acl_type, '')
else:
_search_name = acl_name
@ -187,11 +190,12 @@ def absent(name, acl_type, acl_name='', perms='', recurse=False):
# applied to the user/group that owns the file, e.g.,
# default:group::rwx would be listed as default:group:root:rwx
# In this case, if acl_name is empty, we really want to search for root
# but still uses '' for other
# We search through the dictionary getfacl returns for the owner of the
# file if acl_name is empty.
if acl_name == '':
_search_name = __current_perms[name].get('comment').get(_acl_type)
_search_name = __current_perms[name].get('comment').get(_acl_type, '')
else:
_search_name = acl_name


@ -310,17 +310,27 @@ def module_remove(name):
def fcontext_policy_present(name, sel_type, filetype='a', sel_user=None, sel_level=None):
'''
Makes sure a SELinux policy for a given filespec (name),
filetype and SELinux context type is present.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
sel_type: SELinux context type. There are many.
filetype: The SELinux filetype specification.
Use one of [a, f, d, c, b, s, l, p].
See also `man semanage-fcontext`.
Defaults to 'a' (all files)
sel_user: The SELinux user.
sel_level: The SELinux MLS range
Makes sure a SELinux policy for a given filespec (name), filetype
and SELinux context type is present.
name
filespec of the file or directory. Regex syntax is allowed.
sel_type
SELinux context type. There are many.
filetype
The SELinux filetype specification. Use one of [a, f, d, c, b,
s, l, p]. See also `man semanage-fcontext`. Defaults to 'a'
(all files).
sel_user
The SELinux user.
sel_level
The SELinux MLS range.
'''
ret = {'name': name, 'result': False, 'changes': {}, 'comment': ''}
new_state = {}
@ -383,17 +393,27 @@ def fcontext_policy_present(name, sel_type, filetype='a', sel_user=None, sel_lev
def fcontext_policy_absent(name, filetype='a', sel_type=None, sel_user=None, sel_level=None):
'''
Makes sure an SELinux file context policy for a given filespec (name),
filetype and SELinux context type is absent.
.. versionadded:: 2017.7.0
name: filespec of the file or directory. Regex syntax is allowed.
filetype: The SELinux filetype specification.
Use one of [a, f, d, c, b, s, l, p].
See also `man semanage-fcontext`.
Defaults to 'a' (all files).
sel_type: The SELinux context type. There are many.
sel_user: The SELinux user.
sel_level: The SELinux MLS range
Makes sure an SELinux file context policy for a given filespec
(name), filetype and SELinux context type is absent.
name
filespec of the file or directory. Regex syntax is allowed.
filetype
The SELinux filetype specification. Use one of [a, f, d, c, b,
s, l, p]. See also `man semanage-fcontext`. Defaults to 'a'
(all files).
sel_type
The SELinux context type. There are many.
sel_user
The SELinux user.
sel_level
The SELinux MLS range.
'''
ret = {'name': name, 'result': False, 'changes': {}, 'comment': ''}
new_state = {}
@ -433,7 +453,10 @@ def fcontext_policy_absent(name, filetype='a', sel_type=None, sel_user=None, sel
def fcontext_policy_applied(name, recursive=False):
'''
Checks and makes sure the SELinux policies for a given filespec are applied.
.. versionadded:: 2017.7.0
Checks and makes sure the SELinux policies for a given filespec are
applied.
'''
ret = {'name': name, 'result': False, 'changes': {}, 'comment': ''}


@ -968,7 +968,14 @@ class DaemonMixIn(six.with_metaclass(MixInMeta, object)):
# We've loaded and merged options into the configuration, it's safe
# to query about the pidfile
if self.check_pidfile():
os.unlink(self.config['pidfile'])
try:
os.unlink(self.config['pidfile'])
except OSError as err:
self.info(
'PIDfile could not be deleted: {0}'.format(
self.config['pidfile']
)
)
def set_pidfile(self):
from salt.utils.process import set_pidfile


@ -7,6 +7,7 @@ import glob
import logging
# Import salt libs
import salt.client
import salt.runner
import salt.state
import salt.utils
@ -14,6 +15,7 @@ import salt.utils.cache
import salt.utils.event
import salt.utils.files
import salt.utils.process
import salt.wheel
import salt.defaults.exitcodes
# Import 3rd-party libs
@ -22,6 +24,15 @@ from salt.ext import six
log = logging.getLogger(__name__)
REACTOR_INTERNAL_KEYWORDS = frozenset([
'__id__',
'__sls__',
'name',
'order',
'fun',
'state',
])
class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.state.Compiler):
'''
@ -30,6 +41,10 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
The reactor has the capability to execute pre-programmed executions
as reactions to events
'''
aliases = {
'cmd': 'local',
}
def __init__(self, opts, log_queue=None):
super(Reactor, self).__init__(log_queue=log_queue)
local_minion_opts = opts.copy()
@ -172,6 +187,16 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
return {'status': False, 'comment': 'Reactor does not exist.'}
def resolve_aliases(self, chunks):
'''
Preserve backward compatibility by rewriting the 'state' key in the low
chunks if it is using a legacy type.
'''
for idx, _ in enumerate(chunks):
new_state = self.aliases.get(chunks[idx]['state'])
if new_state is not None:
chunks[idx]['state'] = new_state
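The `resolve_aliases` method above keeps old reactor SLS files working: the legacy `cmd` reaction type is rewritten in place to `local` before the chunks are executed. Sketched standalone:

```python
ALIASES = {'cmd': 'local'}  # legacy reaction type -> current client name

def resolve_aliases(chunks):
    # Rewrite legacy 'state' values in the low chunks, in place.
    for chunk in chunks:
        new_state = ALIASES.get(chunk['state'])
        if new_state is not None:
            chunk['state'] = new_state

chunks = [{'state': 'cmd', 'fun': 'cmd.run'},
          {'state': 'runner', 'fun': 'jobs.list_jobs'}]
resolve_aliases(chunks)
```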
def reactions(self, tag, data, reactors):
'''
Render a list of reactor files and returns a reaction struct
@ -192,6 +217,7 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
except Exception as exc:
log.error('Exception trying to compile reactions: {0}'.format(exc), exc_info=True)
self.resolve_aliases(chunks)
return chunks
def call_reactions(self, chunks):
@ -249,12 +275,19 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
class ReactWrap(object):
'''
Create a wrapper that executes low data for the reaction system
Wrapper that executes low data for the Reactor System
'''
# class-wide cache of clients
client_cache = None
event_user = 'Reactor'
reaction_class = {
'local': salt.client.LocalClient,
'runner': salt.runner.RunnerClient,
'wheel': salt.wheel.Wheel,
'caller': salt.client.Caller,
}
def __init__(self, opts):
self.opts = opts
if ReactWrap.client_cache is None:
@ -265,21 +298,49 @@ class ReactWrap(object):
queue_size=self.opts['reactor_worker_hwm'] # queue size for those workers
)
def populate_client_cache(self, low):
'''
Populate the client cache with an instance of the specified type
'''
reaction_type = low['state']
if reaction_type not in self.client_cache:
log.debug('Reactor is populating %s client cache', reaction_type)
if reaction_type in ('runner', 'wheel'):
# Reaction types that run locally on the master want the full
# opts passed.
self.client_cache[reaction_type] = \
self.reaction_class[reaction_type](self.opts)
# The len() function will cause the module functions to load if
# they aren't already loaded. We want to load them so that the
# spawned threads don't need to load them. Loading in the
# spawned threads creates race conditions such as sometimes not
# finding the required function because another thread is in
# the middle of loading the functions.
len(self.client_cache[reaction_type].functions)
else:
# Reactions which use remote pubs only need the conf file when
# instantiating a client instance.
self.client_cache[reaction_type] = \
self.reaction_class[reaction_type](self.opts['conf_file'])
def run(self, low):
'''
Execute the specified function in the specified state by passing the
low data
Execute a reaction by invoking the proper wrapper func
'''
l_fun = getattr(self, low['state'])
self.populate_client_cache(low)
try:
f_call = salt.utils.format_call(l_fun, low)
kwargs = f_call.get('kwargs', {})
if 'arg' not in kwargs:
kwargs['arg'] = []
if 'kwarg' not in kwargs:
kwargs['kwarg'] = {}
l_fun = getattr(self, low['state'])
except AttributeError:
log.error(
'ReactWrap is missing a wrapper function for \'%s\'',
low['state']
)
# TODO: Setting the user doesn't seem to work for actual remote publishes
try:
wrap_call = salt.utils.format_call(l_fun, low)
args = wrap_call.get('args', ())
kwargs = wrap_call.get('kwargs', {})
# TODO: Setting user doesn't seem to work for actual remote pubs
if low['state'] in ('runner', 'wheel'):
# Update called function's low data with event user to
# segregate events fired by reactor and avoid reaction loops
@ -287,81 +348,106 @@ class ReactWrap(object):
# Replace ``state`` kwarg which comes from high data compiler.
# It breaks some runner functions and seems unnecessary.
kwargs['__state__'] = kwargs.pop('state')
# NOTE: if any additional keys are added here, they will also
# need to be added to filter_kwargs()
l_fun(*f_call.get('args', ()), **kwargs)
if 'args' in kwargs:
# New configuration
reactor_args = kwargs.pop('args')
for item in ('arg', 'kwarg'):
if item in low:
log.warning(
'Reactor \'%s\' is ignoring \'%s\' param %s due to '
'presence of \'args\' param. Check the Reactor System '
'documentation for the correct argument format.',
low['__id__'], item, low[item]
)
if low['state'] == 'caller' \
and isinstance(reactor_args, list) \
and not salt.utils.is_dictlist(reactor_args):
# Legacy 'caller' reactors were already using the 'args'
# param, but only supported a list of positional arguments.
# If low['args'] is a list but is *not* a dictlist, then
# this is actually using the legacy configuration. So, put
# the reactor args into kwarg['arg'] so that the wrapper
# interprets them as positional args.
kwargs['arg'] = reactor_args
kwargs['kwarg'] = {}
else:
kwargs['arg'] = ()
kwargs['kwarg'] = reactor_args
if not isinstance(kwargs['kwarg'], dict):
kwargs['kwarg'] = salt.utils.repack_dictlist(kwargs['kwarg'])
if not kwargs['kwarg']:
log.error(
'Reactor \'%s\' failed to execute %s \'%s\': '
'Incorrect argument format, check the Reactor System '
'documentation for the correct format.',
low['__id__'], low['state'], low['fun']
)
return
else:
# Legacy configuration
react_call = {}
if low['state'] in ('runner', 'wheel'):
if 'arg' not in kwargs or 'kwarg' not in kwargs:
# Runner/wheel execute on the master, so we can use
# format_call to get the functions args/kwargs
react_fun = self.client_cache[low['state']].functions.get(low['fun'])
if react_fun is None:
log.error(
'Reactor \'%s\' failed to execute %s \'%s\': '
'function not available',
low['__id__'], low['state'], low['fun']
)
return
react_call = salt.utils.format_call(
react_fun,
low,
expected_extra_kws=REACTOR_INTERNAL_KEYWORDS
)
if 'arg' not in kwargs:
kwargs['arg'] = react_call.get('args', ())
if 'kwarg' not in kwargs:
kwargs['kwarg'] = react_call.get('kwargs', {})
# Execute the wrapper with the proper args/kwargs. kwargs['arg']
# and kwargs['kwarg'] contain the positional and keyword arguments
# that will be passed to the client interface to execute the
# desired runner/wheel/remote-exec/etc. function.
l_fun(*args, **kwargs)
except SystemExit:
log.warning(
'Reactor \'%s\' attempted to exit. Ignored.', low['__id__']
)
except Exception:
log.error(
'Failed to execute {0}: {1}\n'.format(low['state'], l_fun),
exc_info=True
)
def local(self, *args, **kwargs):
'''
Wrap LocalClient for running :ref:`execution modules <all-salt.modules>`
'''
if 'local' not in self.client_cache:
self.client_cache['local'] = salt.client.LocalClient(self.opts['conf_file'])
try:
self.client_cache['local'].cmd_async(*args, **kwargs)
except SystemExit:
log.warning('Attempt to exit reactor. Ignored.')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
cmd = local
'Reactor \'%s\' failed to execute %s \'%s\'',
low['__id__'], low['state'], low['fun'], exc_info=True
)
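The `caller` special case above hinges on what a "dictlist" is: YAML renders a list of `key: value` pairs as a list of single-key dicts, which is the new-style `args` format, whereas legacy `caller` reactors passed a plain list of positionals. Below are simplified stand-ins for `salt.utils.is_dictlist` and `salt.utils.repack_dictlist` (the real ones accept more input shapes):

```python
def is_dictlist(data):
    # True for the YAML shape [{'k1': v1}, {'k2': v2}]
    return (isinstance(data, list) and bool(data) and
            all(isinstance(i, dict) and len(i) == 1 for i in data))

def repack_dictlist(data):
    # Collapse [{'k1': v1}, {'k2': v2}] into a single dict
    out = {}
    for item in data:
        out.update(item)
    return out

new_style = [{'tgt': 'web*'}, {'fun': 'test.ping'}]  # kwargs for the client
legacy_caller = ['test.ping', 'arg1']                # positional args
```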
def runner(self, fun, **kwargs):
'''
Wrap RunnerClient for executing :ref:`runner modules <all-salt.runners>`
'''
if 'runner' not in self.client_cache:
self.client_cache['runner'] = salt.runner.RunnerClient(self.opts)
# The len() function will cause the module functions to load if
# they aren't already loaded. We want to load them so that the
# spawned threads don't need to load them. Loading in the spawned
# threads creates race conditions such as sometimes not finding
# the required function because another thread is in the middle
# of loading the functions.
len(self.client_cache['runner'].functions)
try:
self.pool.fire_async(self.client_cache['runner'].low, args=(fun, kwargs))
except SystemExit:
log.warning('Attempt to exit in reactor by runner. Ignored')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
self.pool.fire_async(self.client_cache['runner'].low, args=(fun, kwargs))
def wheel(self, fun, **kwargs):
'''
Wrap Wheel to enable executing :ref:`wheel modules <all-salt.wheel>`
'''
if 'wheel' not in self.client_cache:
self.client_cache['wheel'] = salt.wheel.Wheel(self.opts)
# The len() function will cause the module functions to load if
# they aren't already loaded. We want to load them so that the
# spawned threads don't need to load them. Loading in the spawned
# threads creates race conditions such as sometimes not finding
# the required function because another thread is in the middle
# of loading the functions.
len(self.client_cache['wheel'].functions)
try:
self.pool.fire_async(self.client_cache['wheel'].low, args=(fun, kwargs))
except SystemExit:
log.warning('Attempt to in reactor by whell. Ignored.')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
self.pool.fire_async(self.client_cache['wheel'].low, args=(fun, kwargs))
def caller(self, fun, *args, **kwargs):
def local(self, fun, tgt, **kwargs):
'''
Wrap Caller to enable executing :ref:`caller modules <all-salt.caller>`
Wrap LocalClient for running :ref:`execution modules <all-salt.modules>`
'''
log.debug("in caller with fun {0} args {1} kwargs {2}".format(fun, args, kwargs))
args = kwargs.get('args', [])
kwargs = kwargs.get('kwargs', {})
if 'caller' not in self.client_cache:
self.client_cache['caller'] = salt.client.Caller(self.opts['conf_file'])
try:
self.client_cache['caller'].cmd(fun, *args, **kwargs)
except SystemExit:
log.warning('Attempt to exit reactor. Ignored.')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
self.client_cache['local'].cmd_async(tgt, fun, **kwargs)
def caller(self, fun, **kwargs):
'''
Wrap LocalCaller to execute remote exec functions locally on the Minion
'''
self.client_cache['caller'].cmd(fun, *kwargs['arg'], **kwargs['kwarg'])
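The dispatch that `run()` performs — resolving the wrapper method from the low chunk's `state` key, with `cmd` aliased to `local` — can be sketched with a hypothetical toy class (`MiniWrap` is a stand-in; the real `ReactWrap` forwards to Salt client interfaces instead of returning tuples):

```python
class MiniWrap(object):
    # Hypothetical stand-in for ReactWrap: wrappers return tuples instead of
    # calling Salt client interfaces.
    def runner(self, fun, **kwargs):
        return ('runner', fun, kwargs)

    def local(self, fun, tgt, **kwargs):
        return ('local', tgt, fun, kwargs)

    cmd = local  # 'cmd' reactions are an alias for 'local', as in ReactWrap

    def run(self, low):
        # Resolve the wrapper from the chunk's 'state' key, as run() does
        l_fun = getattr(self, low['state'])
        return l_fun(*low.get('args', ()), **low.get('kwargs', {}))


wrap = MiniWrap()
result = wrap.run({
    'state': 'cmd',  # resolves to the 'local' wrapper via the alias
    'args': ('state.single', 'test'),
    'kwargs': {'arg': ['pkg.installed', 'zsh']},
})
```

`result` is `('local', 'test', 'state.single', {'arg': ['pkg.installed', 'zsh']})`: target and function swap positions on the way in, just as `local()` above passes `(tgt, fun)` to `cmd_async`.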

View File

@@ -0,0 +1,7 @@
CREATE TABLE test_select (a INT);
insert into test_select values (1);
insert into test_select values (3);
insert into test_select values (4);
insert into test_select values (5);
update test_select set a=2 where a=1;
select * from test_select;
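The result set this file is expected to produce (asserted later in `MysqlModuleFileQueryTest.test_select_file_query`) can be reproduced with the same statements against an in-memory SQLite database — a stand-in here, since the real test requires a running MySQL server; `ORDER BY` is added only to make the row order deterministic:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE test_select (a INT)')
for value in (1, 3, 4, 5):
    cur.execute('INSERT INTO test_select VALUES (?)', (value,))
cur.execute('UPDATE test_select SET a=2 WHERE a=1')  # the 1 becomes 2
rows = cur.execute('SELECT * FROM test_select ORDER BY a').fetchall()
print(rows)  # [(2,), (3,), (4,), (5,)]
```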

View File

@@ -0,0 +1,3 @@
CREATE TABLE test_update (a INT);
insert into test_update values (1);
update test_update set a=2 where a=1;

View File

@@ -1280,6 +1280,7 @@ class MysqlModuleUserGrantTest(ModuleCase, SaltReturnAssertsMixin):
testdb1 = 'tes.t\'"saltdb'
testdb2 = 't_st `(:=salt%b)'
testdb3 = 'test `(:=salteeb)'
test_file_query_db = 'test_query'
table1 = 'foo'
table2 = "foo `\'%_bar"
users = {
@@ -1391,13 +1392,19 @@ class MysqlModuleUserGrantTest(ModuleCase, SaltReturnAssertsMixin):
name=self.testdb1,
connection_user=self.user,
connection_pass=self.password,
        )
self.run_function(
'mysql.db_remove',
name=self.testdb2,
connection_user=self.user,
connection_pass=self.password,
        )
self.run_function(
'mysql.db_remove',
name=self.test_file_query_db,
connection_user=self.user,
connection_pass=self.password,
)
def _userCreation(self,
uname,
@@ -1627,3 +1634,123 @@ class MysqlModuleUserGrantTest(ModuleCase, SaltReturnAssertsMixin):
"GRANT USAGE ON *.* TO ''@'localhost'",
"GRANT DELETE ON `test ``(:=salteeb)`.* TO ''@'localhost'"
])
@skipIf(
NO_MYSQL,
    'Please install MySQL bindings and a MySQL Server before running '
    'MySQL integration tests.'
)
class MysqlModuleFileQueryTest(ModuleCase, SaltReturnAssertsMixin):
'''
Test file query module
'''
user = 'root'
password = 'poney'
testdb = 'test_file_query'
@destructiveTest
def setUp(self):
'''
Test presence of MySQL server, enforce a root password, create users
'''
super(MysqlModuleFileQueryTest, self).setUp()
NO_MYSQL_SERVER = True
# now ensure we know the mysql root password
        # at least one of these two commands should work
ret1 = self.run_state(
'cmd.run',
name='mysqladmin --host="localhost" -u '
+ self.user
+ ' flush-privileges password "'
+ self.password
+ '"'
)
ret2 = self.run_state(
'cmd.run',
name='mysqladmin --host="localhost" -u '
+ self.user
+ ' --password="'
+ self.password
+ '" flush-privileges password "'
+ self.password
+ '"'
)
key, value = ret2.popitem()
if value['result']:
NO_MYSQL_SERVER = False
else:
self.skipTest('No MySQL Server running, or no root access on it.')
# Create some users and a test db
self.run_function(
'mysql.db_create',
name=self.testdb,
connection_user=self.user,
connection_pass=self.password,
connection_db='mysql',
)
@destructiveTest
def tearDown(self):
'''
Removes created users and db
'''
self.run_function(
'mysql.db_remove',
name=self.testdb,
connection_user=self.user,
connection_pass=self.password,
connection_db='mysql',
)
@destructiveTest
def test_update_file_query(self):
'''
Test query without any output
'''
ret = self.run_function(
'mysql.file_query',
database=self.testdb,
file_name='salt://mysql/update_query.sql',
character_set='utf8',
collate='utf8_general_ci',
connection_user=self.user,
connection_pass=self.password
)
self.assertTrue('query time' in ret)
ret.pop('query time')
self.assertEqual(ret, {'rows affected': 2})
@destructiveTest
def test_select_file_query(self):
'''
Test query with table output
'''
ret = self.run_function(
'mysql.file_query',
database=self.testdb,
file_name='salt://mysql/select_query.sql',
character_set='utf8',
collate='utf8_general_ci',
connection_user=self.user,
connection_pass=self.password
)
expected = {
'rows affected': 5,
'rows returned': 4,
'results': [
[
['2'],
['3'],
['4'],
['5']
]
],
'columns': [
['a']
],
}
self.assertTrue('query time' in ret)
ret.pop('query time')
self.assertEqual(ret, expected)

View File

@@ -106,9 +106,9 @@ class KubernetesTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(kubernetes.__salt__, {'config.option': Mock(return_value="")}):
mock_kubernetes_lib.client.V1DeleteOptions = Mock(return_value="")
mock_kubernetes_lib.client.ExtensionsV1beta1Api.return_value = Mock(
**{"delete_namespaced_deployment.return_value.to_dict.return_value": {}}
**{"delete_namespaced_deployment.return_value.to_dict.return_value": {'code': 200}}
)
self.assertEqual(kubernetes.delete_deployment("test"), {})
self.assertEqual(kubernetes.delete_deployment("test"), {'code': 200})
self.assertTrue(
kubernetes.kubernetes.client.ExtensionsV1beta1Api().
delete_namespaced_deployment().to_dict.called)

View File

@@ -1,46 +1,387 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import time
import shutil
import tempfile
import codecs
import glob
import logging
import os
import textwrap
import yaml
from contextlib import contextmanager
import salt.loader
import salt.utils
import salt.utils.files
from salt.utils.process import clean_proc
import salt.utils.reactor as reactor
from tests.support.paths import TMP
from tests.support.unit import TestCase, skipIf
from tests.support.mixins import AdaptedConfigurationTestCaseMixin
from tests.support.mock import (
    NO_MOCK,
    NO_MOCK_REASON,
    patch,
    MagicMock,
    Mock,
    mock_open,
)
REACTOR_CONFIG = '''\
reactor:
- old_runner:
- /srv/reactor/old_runner.sls
- old_wheel:
- /srv/reactor/old_wheel.sls
- old_local:
- /srv/reactor/old_local.sls
- old_cmd:
- /srv/reactor/old_cmd.sls
- old_caller:
- /srv/reactor/old_caller.sls
- new_runner:
- /srv/reactor/new_runner.sls
- new_wheel:
- /srv/reactor/new_wheel.sls
- new_local:
- /srv/reactor/new_local.sls
- new_cmd:
- /srv/reactor/new_cmd.sls
- new_caller:
- /srv/reactor/new_caller.sls
'''
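`TestReactor.setUpClass` below repacks this mapping with `salt.utils.repack_dictlist`; its effect on the `reactor` config list can be sketched with a minimal stand-in (`repack` here is illustrative for well-formed input, not Salt's implementation):

```python
def repack(dictlist):
    # Flatten a list of single-key dicts into one mapping, as
    # salt.utils.repack_dictlist does for well-formed input.
    packed = {}
    for item in dictlist:
        for key, value in item.items():
            packed[key] = value
    return packed

reaction_map = repack([
    {'old_runner': ['/srv/reactor/old_runner.sls']},
    {'new_runner': ['/srv/reactor/new_runner.sls']},
])
```

Looking up a tag then yields its list of SLS files, which is exactly what `list_reactors()` is tested against below.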
REACTOR_DATA = {
'runner': {'data': {'message': 'This is an error'}},
'wheel': {'data': {'id': 'foo'}},
'local': {'data': {'pkg': 'zsh', 'repo': 'updates'}},
'cmd': {'data': {'pkg': 'zsh', 'repo': 'updates'}},
'caller': {'data': {'path': '/tmp/foo'}},
}
SLS = {
'/srv/reactor/old_runner.sls': textwrap.dedent('''\
raise_error:
runner.error.error:
- name: Exception
- message: {{ data['data']['message'] }}
'''),
'/srv/reactor/old_wheel.sls': textwrap.dedent('''\
remove_key:
wheel.key.delete:
- match: {{ data['data']['id'] }}
'''),
'/srv/reactor/old_local.sls': textwrap.dedent('''\
install_zsh:
local.state.single:
- tgt: test
- arg:
- pkg.installed
- {{ data['data']['pkg'] }}
- kwarg:
fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/old_cmd.sls': textwrap.dedent('''\
install_zsh:
cmd.state.single:
- tgt: test
- arg:
- pkg.installed
- {{ data['data']['pkg'] }}
- kwarg:
fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/old_caller.sls': textwrap.dedent('''\
touch_file:
caller.file.touch:
- args:
- {{ data['data']['path'] }}
'''),
'/srv/reactor/new_runner.sls': textwrap.dedent('''\
raise_error:
runner.error.error:
- args:
- name: Exception
- message: {{ data['data']['message'] }}
'''),
'/srv/reactor/new_wheel.sls': textwrap.dedent('''\
remove_key:
wheel.key.delete:
- args:
- match: {{ data['data']['id'] }}
'''),
'/srv/reactor/new_local.sls': textwrap.dedent('''\
install_zsh:
local.state.single:
- tgt: test
- args:
- fun: pkg.installed
- name: {{ data['data']['pkg'] }}
- fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/new_cmd.sls': textwrap.dedent('''\
install_zsh:
cmd.state.single:
- tgt: test
- args:
- fun: pkg.installed
- name: {{ data['data']['pkg'] }}
- fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/new_caller.sls': textwrap.dedent('''\
touch_file:
caller.file.touch:
- args:
- name: {{ data['data']['path'] }}
'''),
}
LOW_CHUNKS = {
# Note that the "name" value in the chunk has been overwritten by the
# "name" argument in the SLS. This is one reason why the new schema was
# needed.
'old_runner': [{
'state': 'runner',
'__id__': 'raise_error',
'__sls__': '/srv/reactor/old_runner.sls',
'order': 1,
'fun': 'error.error',
'name': 'Exception',
'message': 'This is an error',
}],
'old_wheel': [{
'state': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/old_wheel.sls',
'order': 1,
'fun': 'key.delete',
'match': 'foo',
}],
'old_local': [{
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_local.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
}],
'old_cmd': [{
'state': 'local', # 'cmd' should be aliased to 'local'
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_cmd.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
}],
'old_caller': [{
'state': 'caller',
'__id__': 'touch_file',
'name': 'touch_file',
'__sls__': '/srv/reactor/old_caller.sls',
'order': 1,
'fun': 'file.touch',
'args': ['/tmp/foo'],
}],
'new_runner': [{
'state': 'runner',
'__id__': 'raise_error',
'name': 'raise_error',
'__sls__': '/srv/reactor/new_runner.sls',
'order': 1,
'fun': 'error.error',
'args': [
{'name': 'Exception'},
{'message': 'This is an error'},
],
}],
'new_wheel': [{
'state': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/new_wheel.sls',
'order': 1,
'fun': 'key.delete',
'args': [
{'match': 'foo'},
],
}],
'new_local': [{
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_local.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'args': [
{'fun': 'pkg.installed'},
{'name': 'zsh'},
{'fromrepo': 'updates'},
],
}],
'new_cmd': [{
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_cmd.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'args': [
{'fun': 'pkg.installed'},
{'name': 'zsh'},
{'fromrepo': 'updates'},
],
}],
'new_caller': [{
'state': 'caller',
'__id__': 'touch_file',
'name': 'touch_file',
'__sls__': '/srv/reactor/new_caller.sls',
'order': 1,
'fun': 'file.touch',
'args': [
{'name': '/tmp/foo'},
],
}],
}
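In the new schema, `args` is a list of single-key dicts; before the wrapper call they are flattened into the `kwarg` dict (compare `new_runner` here with its entry in `WRAPPER_CALLS` below). A minimal sketch of that flattening:

```python
def flatten_args(args):
    # Merge a list of single-key dicts into one kwargs dict
    kwargs = {}
    for item in args:
        kwargs.update(item)
    return kwargs

kwarg = flatten_args([
    {'name': 'Exception'},
    {'message': 'This is an error'},
])
```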
WRAPPER_CALLS = {
'old_runner': (
'error.error',
{
'__state__': 'runner',
'__id__': 'raise_error',
'__sls__': '/srv/reactor/old_runner.sls',
'__user__': 'Reactor',
'order': 1,
'arg': [],
'kwarg': {
'name': 'Exception',
'message': 'This is an error',
},
'name': 'Exception',
'message': 'This is an error',
},
),
'old_wheel': (
'key.delete',
{
'__state__': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/old_wheel.sls',
'order': 1,
'__user__': 'Reactor',
'arg': ['foo'],
'kwarg': {},
'match': 'foo',
},
),
'old_local': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_local.sls',
'order': 1,
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
},
},
'old_cmd': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_cmd.sls',
'order': 1,
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
},
},
'old_caller': {
'args': ('file.touch', '/tmp/foo'),
'kwargs': {},
},
'new_runner': (
'error.error',
{
'__state__': 'runner',
'__id__': 'raise_error',
'name': 'raise_error',
'__sls__': '/srv/reactor/new_runner.sls',
'__user__': 'Reactor',
'order': 1,
'arg': (),
'kwarg': {
'name': 'Exception',
'message': 'This is an error',
},
},
),
'new_wheel': (
'key.delete',
{
'__state__': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/new_wheel.sls',
'order': 1,
'__user__': 'Reactor',
'arg': (),
'kwarg': {'match': 'foo'},
},
),
'new_local': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_local.sls',
'order': 1,
'arg': (),
'kwarg': {
'fun': 'pkg.installed',
'name': 'zsh',
'fromrepo': 'updates',
},
},
},
'new_cmd': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_cmd.sls',
'order': 1,
'arg': (),
'kwarg': {
'fun': 'pkg.installed',
'name': 'zsh',
'fromrepo': 'updates',
},
},
},
'new_caller': {
'args': ('file.touch',),
'kwargs': {'name': '/tmp/foo'},
},
}
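The `TestReactWrap` assertions below all follow the same pattern: patch the thread pool (or client cache) with a `Mock`, run the chunk, then pin the exact call down with `assert_called_with`. In miniature (using `unittest.mock` directly; the suite goes through `tests.support.mock`):

```python
from unittest import mock

thread_pool = mock.Mock()
# What the code under test would do:
thread_pool.fire_async('low', args=('error.error', {'kwarg': {'name': 'Exception'}}))
# What the test then verifies; this raises AssertionError on any mismatch:
thread_pool.fire_async.assert_called_with(
    'low', args=('error.error', {'kwarg': {'name': 'Exception'}})
)
```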
log = logging.getLogger(__name__)
@contextmanager
def reactor_process(opts, reactor_config):
    # Note: the parameter must not be named 'reactor', or it would shadow the
    # salt.utils.reactor module imported above and break reactor.Reactor(opts)
    opts = dict(opts)
    opts['reactor'] = reactor_config
    proc = reactor.Reactor(opts)
proc.start()
try:
if os.environ.get('TRAVIS_PYTHON_VERSION', None) is not None:
# Travis is slow
time.sleep(10)
else:
time.sleep(2)
yield
finally:
clean_proc(proc)
def _args_side_effect(*args, **kwargs):
    return args, kwargs
@skipIf(True, 'Skipping until it is clear what and how this is supposed to be testing')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class TestReactorBasic(TestCase, AdaptedConfigurationTestCaseMixin):
def setUp(self):
self.opts = self.get_temp_config('master')
self.tempdir = tempfile.mkdtemp(dir=TMP)
@@ -72,3 +413,180 @@ update_fileserver:
'name': 'foo_action',
'__id__': 'foo_action'})
raise Exception(ret)
@skipIf(NO_MOCK, NO_MOCK_REASON)
class TestReactor(TestCase, AdaptedConfigurationTestCaseMixin):
'''
Tests for constructing the low chunks to be executed via the Reactor
'''
@classmethod
def setUpClass(cls):
'''
Load the reactor config for mocking
'''
cls.opts = cls.get_temp_config('master')
reactor_config = yaml.safe_load(REACTOR_CONFIG)
cls.opts.update(reactor_config)
cls.reactor = reactor.Reactor(cls.opts)
cls.reaction_map = salt.utils.repack_dictlist(reactor_config['reactor'])
renderers = salt.loader.render(cls.opts, {})
cls.render_pipe = [(renderers[x], '') for x in ('jinja', 'yaml')]
@classmethod
def tearDownClass(cls):
del cls.opts
del cls.reactor
del cls.render_pipe
def test_list_reactors(self):
'''
Ensure that list_reactors() returns the correct list of reactor SLS
files for each tag.
'''
for schema in ('old', 'new'):
for rtype in REACTOR_DATA:
tag = '_'.join((schema, rtype))
self.assertEqual(
self.reactor.list_reactors(tag),
self.reaction_map[tag]
)
def test_reactions(self):
'''
Ensure that the correct reactions are built from the configured SLS
files and tag data.
'''
for schema in ('old', 'new'):
for rtype in REACTOR_DATA:
tag = '_'.join((schema, rtype))
log.debug('test_reactions: processing %s', tag)
reactors = self.reactor.list_reactors(tag)
log.debug('test_reactions: %s reactors: %s', tag, reactors)
# No globbing in our example SLS, and the files don't actually
# exist, so mock glob.glob to just return back the path passed
# to it.
with patch.object(
glob,
'glob',
MagicMock(side_effect=lambda x: [x])):
# The below four mocks are all so that
# salt.template.compile_template() will read the templates
# we've mocked up in the SLS global variable above.
with patch.object(
os.path, 'isfile',
MagicMock(return_value=True)):
with patch.object(
salt.utils, 'is_empty',
MagicMock(return_value=False)):
with patch.object(
codecs, 'open',
mock_open(read_data=SLS[reactors[0]])):
with patch.object(
salt.template, 'template_shebang',
MagicMock(return_value=self.render_pipe)):
reactions = self.reactor.reactions(
tag,
REACTOR_DATA[rtype],
reactors,
)
log.debug(
'test_reactions: %s reactions: %s',
tag, reactions
)
self.assertEqual(reactions, LOW_CHUNKS[tag])
@skipIf(NO_MOCK, NO_MOCK_REASON)
class TestReactWrap(TestCase, AdaptedConfigurationTestCaseMixin):
'''
Tests that we are formulating the wrapper calls properly
'''
@classmethod
def setUpClass(cls):
cls.wrap = reactor.ReactWrap(cls.get_temp_config('master'))
@classmethod
def tearDownClass(cls):
del cls.wrap
def test_runner(self):
'''
Test runner reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'runner'))
chunk = LOW_CHUNKS[tag][0]
thread_pool = Mock()
thread_pool.fire_async = Mock()
with patch.object(self.wrap, 'pool', thread_pool):
self.wrap.run(chunk)
thread_pool.fire_async.assert_called_with(
self.wrap.client_cache['runner'].low,
args=WRAPPER_CALLS[tag]
)
def test_wheel(self):
'''
Test wheel reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'wheel'))
chunk = LOW_CHUNKS[tag][0]
thread_pool = Mock()
thread_pool.fire_async = Mock()
with patch.object(self.wrap, 'pool', thread_pool):
self.wrap.run(chunk)
thread_pool.fire_async.assert_called_with(
self.wrap.client_cache['wheel'].low,
args=WRAPPER_CALLS[tag]
)
def test_local(self):
'''
Test local reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'local'))
chunk = LOW_CHUNKS[tag][0]
client_cache = {'local': Mock()}
client_cache['local'].cmd_async = Mock()
with patch.object(self.wrap, 'client_cache', client_cache):
self.wrap.run(chunk)
client_cache['local'].cmd_async.assert_called_with(
*WRAPPER_CALLS[tag]['args'],
**WRAPPER_CALLS[tag]['kwargs']
)
def test_cmd(self):
'''
Test cmd reactions (alias for 'local') using both the old and new
config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'cmd'))
chunk = LOW_CHUNKS[tag][0]
client_cache = {'local': Mock()}
client_cache['local'].cmd_async = Mock()
with patch.object(self.wrap, 'client_cache', client_cache):
self.wrap.run(chunk)
client_cache['local'].cmd_async.assert_called_with(
*WRAPPER_CALLS[tag]['args'],
**WRAPPER_CALLS[tag]['kwargs']
)
def test_caller(self):
'''
Test caller reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'caller'))
chunk = LOW_CHUNKS[tag][0]
client_cache = {'caller': Mock()}
client_cache['caller'].cmd = Mock()
with patch.object(self.wrap, 'client_cache', client_cache):
self.wrap.run(chunk)
client_cache['caller'].cmd.assert_called_with(
*WRAPPER_CALLS[tag]['args'],
**WRAPPER_CALLS[tag]['kwargs']
)
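The `mock_open` trick used in `test_reactions()` above — feeding in-memory SLS sources to the template reader by patching `codecs.open` — works the same way in miniature (patching the Python 3 builtin `open` here for brevity):

```python
from unittest import mock

sls_text = "touch_file:\n  caller.file.touch:\n    - args:\n      - name: /tmp/foo\n"
fake_open = mock.mock_open(read_data=sls_text)
with mock.patch('builtins.open', fake_open):
    # The path never touches disk; reads are served from read_data
    with open('/srv/reactor/new_caller.sls') as handle:
        content = handle.read()
```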

View File

@@ -10,10 +10,15 @@ import os
import sys
import stat
import shutil
import resource
import tempfile
import socket
# Import third party libs
if sys.platform.startswith('win'):
import win32file
else:
import resource
# Import Salt Testing libs
from tests.support.unit import skipIf, TestCase
from tests.support.paths import TMP
@@ -82,7 +87,10 @@ class TestVerify(TestCase):
writer = FakeWriter()
sys.stderr = writer
# Now run the test
self.assertFalse(check_user('nouser'))
if sys.platform.startswith('win'):
self.assertTrue(check_user('nouser'))
else:
self.assertFalse(check_user('nouser'))
# Restore sys.stderr
sys.stderr = stderr
if writer.output != 'CRITICAL: User not found: "nouser"\n':
@@ -118,7 +126,6 @@ class TestVerify(TestCase):
# not support IPv6.
pass
@skipIf(True, 'Skipping until we can find why Jenkins is bailing out')
def test_max_open_files(self):
with TestsLoggingHandler() as handler:
logmsg_dbg = (
@@ -139,15 +146,31 @@
'raise the salt\'s max_open_files setting. Please consider '
'raising this value.'
)
if sys.platform.startswith('win'):
logmsg_crash = (
'{0}:The number of accepted minion keys({1}) should be lower '
'than 1/4 of the max open files soft setting({2}). '
'salt-master will crash pretty soon! Please consider '
'raising this value.'
)
mof_s, mof_h = resource.getrlimit(resource.RLIMIT_NOFILE)
if sys.platform.startswith('win'):
# Check the Windows API for more detail on this
# http://msdn.microsoft.com/en-us/library/xt874334(v=vs.71).aspx
# and the python binding http://timgolden.me.uk/pywin32-docs/win32file.html
mof_s = mof_h = win32file._getmaxstdio()
else:
mof_s, mof_h = resource.getrlimit(resource.RLIMIT_NOFILE)
tempdir = tempfile.mkdtemp(prefix='fake-keys')
keys_dir = os.path.join(tempdir, 'minions')
os.makedirs(keys_dir)
mof_test = 256
resource.setrlimit(resource.RLIMIT_NOFILE, (mof_test, mof_h))
if sys.platform.startswith('win'):
win32file._setmaxstdio(mof_test)
else:
resource.setrlimit(resource.RLIMIT_NOFILE, (mof_test, mof_h))
try:
prev = 0
@@ -181,7 +204,7 @@
level,
newmax,
mof_test,
mof_h - newmax,
mof_test - newmax if sys.platform.startswith('win') else mof_h - newmax,
),
handler.messages
)
@@ -206,7 +229,7 @@
'CRITICAL',
newmax,
mof_test,
mof_h - newmax,
mof_test - newmax if sys.platform.startswith('win') else mof_h - newmax,
),
handler.messages
)
@@ -218,7 +241,10 @@
raise
finally:
shutil.rmtree(tempdir)
resource.setrlimit(resource.RLIMIT_NOFILE, (mof_s, mof_h))
if sys.platform.startswith('win'):
win32file._setmaxstdio(mof_h)
else:
resource.setrlimit(resource.RLIMIT_NOFILE, (mof_s, mof_h))
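The platform split above can be factored into a single helper; a hedged sketch (assuming pywin32 provides `win32file._getmaxstdio` on Windows, as the imports at the top of this file do):

```python
import sys

def max_open_files():
    # Windows: the C runtime's maxstdio; POSIX: the RLIMIT_NOFILE soft/hard pair
    if sys.platform.startswith('win'):
        import win32file  # assumption: pywin32 is installed
        soft = hard = win32file._getmaxstdio()
    else:
        import resource
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return soft, hard

soft, hard = max_open_files()
```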
@skipIf(NO_MOCK, NO_MOCK_REASON)
def test_verify_log(self):