Merge branch '2017.7' into 43560_update_linux_acl_documentation

This commit is contained in:
garethgreenaway 2017-09-18 11:12:37 -07:00 committed by GitHub
commit e63fae4c91
7 changed files with 1072 additions and 405 deletions

View File

@ -4091,7 +4091,9 @@ information.
.. code-block:: yaml
reactor: []
reactor:
- 'salt/minion/*/start':
- salt://reactor/startup_tasks.sls
.. conf_master:: reactor_refresh_interval

View File

@ -253,9 +253,8 @@ in ``/etc/salt/master.d/reactor.conf``:
.. note::
You can have only one top level ``reactor`` section, so if one already
exists, add this code to the existing section. See :ref:`Understanding the
Structure of Reactor Formulas <reactor-structure>` to learn more about
reactor SLS syntax.
exists, add this code to the existing section. See :ref:`here
<reactor-sls>` to learn more about reactor SLS syntax.
Start the Salt Master in Debug Mode

View File

@ -27,9 +27,9 @@ event bus is an open system used for sending information notifying Salt and
other systems about operations.
The event system fires events with very specific criteria. Every event has a
:strong:`tag`. Event tags allow for fast top level filtering of events. In
addition to the tag, each event has a data structure. This data structure is a
dict, which contains information about the event.
**tag**. Event tags allow for fast top-level filtering of events. In addition
to the tag, each event has a data structure. This data structure is a
dictionary, which contains information about the event.
.. _reactor-mapping-events:
@ -65,15 +65,12 @@ and each event tag has a list of reactor SLS files to be run.
the :ref:`querystring syntax <querystring-syntax>` (e.g.
``salt://reactor/mycustom.sls?saltenv=reactor``).
Reactor sls files are similar to state and pillar sls files. They are
by default yaml + Jinja templates and are passed familiar context variables.
Reactor SLS files are similar to State and Pillar SLS files. They are by
default YAML + Jinja templates and are passed familiar context variables.
Click :ref:`here <reactor-jinja-context>` for more detailed information on the
variables available in Jinja templating.
They differ because of the addition of the ``tag`` and ``data`` variables.
- The ``tag`` variable is just the tag in the fired event.
- The ``data`` variable is the event's data dict.
Here is a simple reactor sls:
Here is the SLS for a simple reaction:
.. code-block:: jinja
@ -90,71 +87,278 @@ data structure and compiler used for the state system is used for the reactor
system. The only difference is that the data is matched up to the salt command
API and the runner system. In this example, a command is published to the
``mysql1`` minion with a function of :py:func:`state.apply
<salt.modules.state.apply_>`. Similarly, a runner can be called:
<salt.modules.state.apply_>`, which performs a :ref:`highstate
<running-highstate>`. Similarly, a runner can be called:
.. code-block:: jinja
{% if data['data']['custom_var'] == 'runit' %}
call_runit_orch:
runner.state.orchestrate:
- mods: _orch.runit
- args:
- mods: orchestrate.runit
{% endif %}
This example will execute the state.orchestrate runner and initiate an execution
of the runit orchestrator located at ``/srv/salt/_orch/runit.sls``. The
``_orch/`` directory is an arbitrary path, but it is recommended to avoid using "orchestrate"
as this is most likely to cause confusion.
of the ``runit`` orchestrator located at ``/srv/salt/orchestrate/runit.sls``.
Writing SLS Files
-----------------
Types of Reactions
==================
Reactor SLS files are stored in the same location as State SLS files. This means
that both ``file_roots`` and ``gitfs_remotes`` impact what SLS files are
available to the reactor and orchestrator.
============================== ==================================================================================
Name Description
============================== ==================================================================================
:ref:`local <reactor-local>` Runs a :ref:`remote-execution function <all-salt.modules>` on targeted minions
:ref:`runner <reactor-runner>` Executes a :ref:`runner function <all-salt.runners>`
:ref:`wheel <reactor-wheel>` Executes a :ref:`wheel function <all-salt.wheel>` on the master
:ref:`caller <reactor-caller>` Runs a :ref:`remote-execution function <all-salt.modules>` on a masterless minion
============================== ==================================================================================
It is recommended to keep reactor and orchestrator SLS files in their own uniquely
named subdirectories such as ``_orch/``, ``orch/``, ``_orchestrate/``, ``react/``,
``_reactor/``, etc. Keeping a unique name helps prevent confusion when trying to
read through the configuration a few years down the road.
.. note::
The ``local`` and ``caller`` reaction types will be renamed for the Oxygen
release. These reaction types were named after Salt's internal client
interfaces, and are not intuitively named. Both ``local`` and ``caller``
will continue to work in Reactor SLS files, but for the Oxygen release the
documentation will be updated to reflect the new preferred naming.
The Goal of Writing Reactor SLS Files
=====================================
Where to Put Reactor SLS Files
==============================
Reactor SLS files share the familiar syntax from Salt States but there are
important differences. The goal of a Reactor file is to process a Salt event as
quickly as possible and then to optionally start a **new** process in response.
Reactor SLS files can come both from files local to the master and from any of
the backends enabled via the :conf_master:`fileserver_backend` config option. Files
placed in the Salt fileserver can be referenced using a ``salt://`` URL, just
like they can in State SLS files.
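For example, a single reactor mapping can mix files kept locally on the master
with files served from the Salt fileserver; the event tag and file names below
are only placeholders:

.. code-block:: yaml

    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/start.sls        # local to the master
        - salt://reactor/start.sls      # served from the Salt fileserver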
1. The Salt Reactor watches Salt's event bus for new events.
2. The event tag is matched against the list of event tags under the
``reactor`` section in the Salt Master config.
3. The SLS files for any matches are Rendered into a data structure that
represents one or more function calls.
4. That data structure is given to a pool of worker threads for execution.
It is recommended to place reactor and orchestrator SLS files in their own
uniquely-named subdirectories such as ``orch/``, ``orchestrate/``, ``react/``,
``reactor/``, etc., to keep them organized.
.. _reactor-sls:
Writing Reactor SLS
===================
The different reaction types were developed separately and have historically
had different methods for passing arguments. For the 2017.7.2 release a new,
unified configuration schema has been introduced, which applies to all reaction
types.
The old config schema will continue to be supported, and there is no plan to
deprecate it at this time.
.. _reactor-local:
Local Reactions
---------------
A ``local`` reaction runs a :ref:`remote-execution function <all-salt.modules>`
on the targeted minions.
The old config schema required the positional and keyword arguments to be
manually separated by the user under ``arg`` and ``kwarg`` parameters. However,
this is not very user-friendly, as it forces the user to distinguish which type
of argument is which, and make sure that positional arguments are ordered
properly. Therefore, the new config schema is recommended if the master is
running a supported release.
The below two examples are equivalent:
+---------------------------------+-----------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+=================================+=============================+
| :: | :: |
| | |
| install_zsh: | install_zsh: |
| local.state.single: | local.state.single: |
| - tgt: 'kernel:Linux' | - tgt: 'kernel:Linux' |
| - tgt_type: grain | - tgt_type: grain |
| - args: | - arg: |
| - fun: pkg.installed | - pkg.installed |
| - name: zsh | - zsh |
| - fromrepo: updates | - kwarg: |
| | fromrepo: updates |
+---------------------------------+-----------------------------+
This reaction would be equivalent to running the following Salt command:
.. code-block:: bash
salt -G 'kernel:Linux' state.single pkg.installed name=zsh fromrepo=updates
.. note::
Any other parameters in the :py:meth:`LocalClient().cmd_async()
<salt.client.LocalClient.cmd_async>` method can be passed at the same
indentation level as ``tgt``.
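For instance, a returner could be requested for the published job by adding a
``ret`` parameter alongside ``tgt``. This sketch assumes a configured ``redis``
returner and is purely illustrative:

.. code-block:: yaml

    install_zsh:
      local.state.single:
        - tgt: 'kernel:Linux'
        - tgt_type: grain
        - ret: redis
        - args:
          - fun: pkg.installed
          - name: zsh
          - fromrepo: updates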
.. note::
``tgt_type`` is only required when the target expression defined in ``tgt``
uses a :ref:`target type <targeting>` other than a minion ID glob.
The ``tgt_type`` argument was named ``expr_form`` in releases prior to
2017.7.0.
.. _reactor-runner:
Runner Reactions
----------------
Runner reactions execute :ref:`runner functions <all-salt.runners>` locally on
the master.
The old config schema called for passing arguments to the reaction directly
under the name of the runner function. However, this can cause unpredictable
interactions with the Reactor system's internal arguments. It is also possible
to pass positional and keyword arguments under ``arg`` and ``kwarg`` like above
in :ref:`local reactions <reactor-local>`, but as noted above this is not very
user-friendly. Therefore, the new config schema is recommended if the master
is running a supported release.
The below two examples are equivalent:
+-------------------------------------------------+-------------------------------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+=================================================+=================================================+
| :: | :: |
| | |
| deploy_app: | deploy_app: |
| runner.state.orchestrate: | runner.state.orchestrate: |
| - args: | - mods: orchestrate.deploy_app |
| - mods: orchestrate.deploy_app | - kwarg: |
| - pillar: | pillar: |
| event_tag: {{ tag }} | event_tag: {{ tag }} |
| event_data: {{ data['data']|json }} | event_data: {{ data['data']|json }} |
+-------------------------------------------------+-------------------------------------------------+
Assuming that the event tag is ``foo``, and the data passed to the event is
``{'bar': 'baz'}``, then this reaction is equivalent to running the following
Salt command:
.. code-block:: bash
salt-run state.orchestrate mods=orchestrate.deploy_app pillar='{"event_tag": "foo", "event_data": {"bar": "baz"}}'
.. _reactor-wheel:
Wheel Reactions
---------------
Wheel reactions run :ref:`wheel functions <all-salt.wheel>` locally on the
master.
Like :ref:`runner reactions <reactor-runner>`, the old config schema called for
wheel reactions to have arguments passed directly under the name of the
:ref:`wheel function <all-salt.wheel>` (or in ``arg`` or ``kwarg`` parameters).
The below two examples are equivalent:
+-----------------------------------+---------------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+===================================+=================================+
| :: | :: |
| | |
| remove_key: | remove_key: |
| wheel.key.delete: | wheel.key.delete: |
| - args: | - match: {{ data['id'] }} |
| - match: {{ data['id'] }} | |
+-----------------------------------+---------------------------------+
.. _reactor-caller:
Caller Reactions
----------------
Caller reactions run :ref:`remote-execution functions <all-salt.modules>` on a
minion daemon's Reactor system. To run a Reactor on the minion, it is necessary
to configure the :mod:`Reactor Engine <salt.engines.reactor>` in the minion
config file, and then set up your watched events in a ``reactor`` section in the
minion config file as well.
.. note:: Masterless Minions use this Reactor
This is the only way to run the Reactor if you use masterless minions.
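A minimal sketch of such a minion configuration follows; the engine is left
with its default options, and the event tag and SLS path are placeholders:

.. code-block:: yaml

    # /etc/salt/minion.d/reactor.conf
    engines:
      - reactor: {}

    reactor:
      - 'my/custom/event':
        - /srv/reactor/mycustom.sls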
Both the old and new config schemas involve passing arguments under an ``args``
parameter. However, the old config schema only supports positional arguments.
Therefore, the new config schema is recommended if the masterless minion is
running a supported release.
The below two examples are equivalent:
+---------------------------------+---------------------------+
| Supported in 2017.7.2 and later | Supported in all releases |
+=================================+===========================+
| :: | :: |
| | |
| touch_file: | touch_file: |
| caller.file.touch: | caller.file.touch: |
| - args: | - args: |
| - name: /tmp/foo | - /tmp/foo |
+---------------------------------+---------------------------+
This reaction is equivalent to running the following Salt command:
.. code-block:: bash
salt-call file.touch name=/tmp/foo
Best Practices for Writing Reactor SLS Files
============================================
The Reactor works as follows:
1. The Salt Reactor watches Salt's event bus for new events.
2. Each event's tag is matched against the list of event tags configured under
the :conf_master:`reactor` section in the Salt Master config.
3. The SLS files for any matches are rendered into a data structure that
represents one or more function calls.
4. That data structure is given to a pool of worker threads for execution.
Matching and rendering Reactor SLS files is done sequentially in a single
process. Complex Jinja that calls out to slow Execution or Runner modules slows
down the rendering and causes other reactions to pile up behind the current
one. The worker pool is designed to handle complex and long-running processes
such as Salt Orchestrate.
process. For that reason, reactor SLS files should contain few individual
reactions (one, if at all possible). Also, keep in mind that reactions are
fired asynchronously (with the exception of :ref:`caller <reactor-caller>`) and
do *not* support :ref:`requisites <requisites>`.
tl;dr: Rendering Reactor SLS files MUST be simple and quick. The new process
started by the worker threads can be long-running. Using the reactor to fire
an orchestrate runner would be ideal.
Complex Jinja templating that calls out to slow :ref:`remote-execution
<all-salt.modules>` or :ref:`runner <all-salt.runners>` functions slows down
the rendering and causes other reactions to pile up behind the current one. The
worker pool is designed to handle complex and long-running processes like
:ref:`orchestration <orchestrate-runner>` jobs.
Therefore, when complex tasks are in order, :ref:`orchestration
<orchestrate-runner>` is a natural fit. Orchestration SLS files can be more
complex, and use requisites. Performing a complex task using orchestration lets
the Reactor system fire off the orchestration job and proceed with processing
other reactions.
.. _reactor-jinja-context:
Jinja Context
-------------
=============
Reactor files only have access to a minimal Jinja context. ``grains`` and
``pillar`` are not available. The ``salt`` object is available for calling
Runner and Execution modules but it should be used sparingly and only for quick
tasks for the reasons mentioned above.
Reactor SLS files only have access to a minimal Jinja context. ``grains`` and
``pillar`` are *not* available. The ``salt`` object is available for calling
:ref:`remote-execution <all-salt.modules>` or :ref:`runner <all-salt.runners>`
functions, but it should be used sparingly and only for quick tasks for the
reasons mentioned above.
In addition to the ``salt`` object, the following variables are available in
the Jinja context:
- ``tag`` - the tag from the event that triggered execution of the Reactor SLS
file
- ``data`` - the event's data dictionary
The ``data`` dict will contain an ``id`` key containing the minion ID, if the
event was fired from a minion, and a ``data`` key containing the data passed to
the event.
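For illustration, here is a minimal hypothetical sketch using both variables;
the ``web`` minion-ID prefix and the reaction itself are placeholders:

.. code-block:: jinja

    {# Hypothetical /srv/reactor/start.sls #}
    {# React only to start events fired by minions whose ID begins with "web" #}
    {% if tag.endswith('/start') and data.get('id', '').startswith('web') %}
    highstate_new_web_minion:
      local.state.apply:
        - tgt: {{ data['id'] }}
    {% endif %}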
Advanced State System Capabilities
----------------------------------
==================================
Reactor SLS files, by design, do not support Requisites, ordering,
``onlyif``/``unless`` conditionals and most other powerful constructs from
Salt's State system.
Reactor SLS files, by design, do not support :ref:`requisites <requisites>`,
ordering, ``onlyif``/``unless`` conditionals and most other powerful constructs
from Salt's State system.
Complex Master-side operations are best performed by Salt's Orchestrate system,
so using the Reactor to kick off an Orchestrate run is a very common pairing.
@ -166,7 +370,7 @@ For example:
# /etc/salt/master.d/reactor.conf
# A custom event containing: {"foo": "Foo!", "bar": "bar*", "baz": "Baz!"}
reactor:
- myco/custom/event:
- my/custom/event:
- /srv/reactor/some_event.sls
.. code-block:: jinja
@ -174,15 +378,15 @@ For example:
# /srv/reactor/some_event.sls
invoke_orchestrate_file:
runner.state.orchestrate:
- mods: _orch.do_complex_thing # /srv/salt/_orch/do_complex_thing.sls
- kwarg:
pillar:
event_tag: {{ tag }}
event_data: {{ data|json() }}
- args:
- mods: orchestrate.do_complex_thing
- pillar:
event_tag: {{ tag }}
event_data: {{ data|json }}
.. code-block:: jinja
# /srv/salt/_orch/do_complex_thing.sls
# /srv/salt/orchestrate/do_complex_thing.sls
{% set tag = salt.pillar.get('event_tag') %}
{% set data = salt.pillar.get('event_data') %}
@ -209,7 +413,7 @@ For example:
.. _beacons-and-reactors:
Beacons and Reactors
--------------------
====================
An event initiated by a beacon, when it arrives at the master, will be wrapped
inside a second event, such that the data object containing the beacon
@ -219,27 +423,52 @@ For example, to access the ``id`` field of the beacon event in a reactor file,
you will need to reference ``{{ data['data']['id'] }}`` rather than ``{{
data['id'] }}`` as for events initiated directly on the event bus.
Similarly, the data dictionary attached to the event would be located in
``{{ data['data']['data'] }}`` instead of ``{{ data['data'] }}``.
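For instance, a reactor SLS responding to a beacon-initiated event might look
like the following minimal sketch (the minion-ID prefix is a placeholder):

.. code-block:: jinja

    {# Hypothetical /srv/reactor/beacon_event.sls #}
    {# Beacon events carry an extra level of nesting, as described above #}
    {% if data['data']['id'].startswith('web') %}
    apply_state_on_beacon_minion:
      local.state.apply:
        - tgt: {{ data['data']['id'] }}
    {% endif %}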
See the :ref:`beacon documentation <beacon-example>` for examples.
Fire an event
=============
Manually Firing an Event
========================
To fire an event from a minion call ``event.send``
From the Master
---------------
Use the :py:func:`event.send <salt.runners.event.send>` runner:
.. code-block:: bash
salt-call event.send 'foo' '{orchestrate: refresh}'
salt-run event.send foo '{orchestrate: refresh}'
After this is called, any reactor sls files matching event tag ``foo`` will
execute with ``{{ data['data']['orchestrate'] }}`` equal to ``'refresh'``.
From the Minion
---------------
See :py:mod:`salt.modules.event` for more information.
To fire an event to the master from a minion, call :py:func:`event.send
<salt.modules.event.send>`:
Knowing what event is being fired
=================================
.. code-block:: bash
The best way to see exactly what events are fired and what data is available in
each event is to use the :py:func:`state.event runner
salt-call event.send foo '{orchestrate: refresh}'
To fire an event to the minion's local event bus, call :py:func:`event.fire
<salt.modules.event.fire>`:
.. code-block:: bash
salt-call event.fire '{orchestrate: refresh}' foo
Referencing Data Passed in Events
---------------------------------
Given any of the above examples, any reactor SLS files triggered by watching
the event tag ``foo`` will execute with ``{{ data['data']['orchestrate'] }}``
equal to ``'refresh'``.
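A reactor SLS mapped to the ``foo`` tag could then branch on that value, as in
this hypothetical sketch:

.. code-block:: jinja

    {# Hypothetical /srv/reactor/foo.sls #}
    {% if data['data'].get('orchestrate') == 'refresh' %}
    refresh_pillar_everywhere:
      local.saltutil.refresh_pillar:
        - tgt: '*'
    {% endif %}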
Getting Information About Events
================================
The best way to see exactly what events have been fired and what data is
available in each event is to use the :py:func:`state.event runner
<salt.runners.state.event>`.
.. seealso:: :ref:`Common Salt Events <event-master_events>`
@ -308,156 +537,10 @@ rendered SLS file (or any errors generated while rendering the SLS file).
view the result of referencing Jinja variables. If the result is empty then
Jinja produced an empty result and the Reactor will ignore it.
.. _reactor-structure:
Passing Event Data to Minions or Orchestration as Pillar
--------------------------------------------------------
Understanding the Structure of Reactor Formulas
===============================================
**I.e., when to use `arg` and `kwarg` and when to specify the function
arguments directly.**
While the reactor system uses the same basic data structure as the state
system, the functions that will be called using that data structure are
different functions than are called via Salt's state system. The Reactor can
call Runner modules using the `runner` prefix, Wheel modules using the `wheel`
prefix, and can also cause minions to run Execution modules using the `local`
prefix.
.. versionchanged:: 2014.7.0
The ``cmd`` prefix was renamed to ``local`` for consistency with other
parts of Salt. A backward-compatible alias was added for ``cmd``.
The Reactor runs on the master and calls functions that exist on the master. In
the case of Runner and Wheel functions the Reactor can just call those
functions directly since they exist on the master and are run on the master.
In the case of functions that exist on minions and are run on minions, the
Reactor still needs to call a function on the master in order to send the
necessary data to the minion so the minion can execute that function.
The Reactor calls functions exposed in :ref:`Salt's Python API documentation
<client-apis>`, and thus the structure of Reactor files very transparently
reflects the function signatures of those functions.
Calling Execution modules on Minions
------------------------------------
The Reactor sends commands down to minions in the exact same way Salt's CLI
interface does. It calls a function locally on the master that sends the name
of the function as well as a list of any arguments and a dictionary of any
keyword arguments that the minion should use to execute that function.
Specifically, the Reactor calls the async version of :py:meth:`this function
<salt.client.LocalClient.cmd>`. You can see that function has 'arg' and 'kwarg'
parameters which are both values that are sent down to the minion.
Executing remote commands maps to the :strong:`LocalClient` interface which is
used by the :strong:`salt` command. This interface more specifically maps to
the :strong:`cmd_async` method inside of the :strong:`LocalClient` class. This
means that the arguments passed are being passed to the :strong:`cmd_async`
method, not the remote method. A field starts with :strong:`local` to use the
:strong:`LocalClient` subsystem. The result is, to execute a remote command,
a reactor formula would look like this:
.. code-block:: yaml
clean_tmp:
local.cmd.run:
- tgt: '*'
- arg:
- rm -rf /tmp/*
The ``arg`` option takes a list of arguments as they would be presented on the
command line, so the above declaration is the same as running this salt
command:
.. code-block:: bash
salt '*' cmd.run 'rm -rf /tmp/*'
Use the ``tgt_type`` argument to specify a matcher:
.. code-block:: yaml
clean_tmp:
local.cmd.run:
- tgt: 'os:Ubuntu'
- tgt_type: grain
- arg:
- rm -rf /tmp/*
clean_tmp:
local.cmd.run:
- tgt: 'G@roles:hbase_master'
- tgt_type: compound
- arg:
- rm -rf /tmp/*
.. note::
The ``tgt_type`` argument was named ``expr_form`` in releases prior to
2017.7.0 (2016.11.x and earlier).
Any other parameters in the :py:meth:`LocalClient().cmd()
<salt.client.LocalClient.cmd>` method can be specified as well.
Executing Reactors from the Minion
----------------------------------
The minion can be set up to use the Reactor via a reactor engine. This just
sets up and listens to the minion's event bus, instead of to the master's.
The biggest difference is that you have to use the caller method on the
Reactor, which is the equivalent of salt-call, to run your commands.
:mod:`Reactor Engine setup <salt.engines.reactor>`
.. code-block:: yaml
clean_tmp:
caller.cmd.run:
- arg:
- rm -rf /tmp/*
.. note:: Masterless Minions use this Reactor
This is the only way to run the Reactor if you use masterless minions.
Calling Runner modules and Wheel modules
----------------------------------------
Calling Runner modules and Wheel modules from the Reactor uses a more direct
syntax since the function is being executed locally instead of sending a
command to a remote system to be executed there. There are no 'arg' or 'kwarg'
parameters (unless the Runner function or Wheel function accepts a parameter
with either of those names.)
For example:
.. code-block:: yaml
clear_the_grains_cache_for_all_minions:
runner.cache.clear_grains
If :py:func:`the runner takes arguments <salt.runners.cloud.profile>`, then
they must be specified as keyword arguments.
.. code-block:: yaml
spin_up_more_web_machines:
runner.cloud.profile:
- prof: centos_6
- instances:
- web11 # These VM names would be generated via Jinja in a
- web12 # real-world example.
To determine the proper names for the arguments, check the documentation
or source code for the runner function you wish to call.
Passing event data to Minions or Orchestrate as Pillar
------------------------------------------------------
An interesting trick to pass data from the Reactor script to
An interesting trick to pass data from the Reactor SLS file to
:py:func:`state.apply <salt.modules.state.apply_>` is to pass it as inline
Pillar data since both functions take a keyword argument named ``pillar``.
@ -484,10 +567,9 @@ from the event to the state file via inline Pillar.
add_new_minion_to_pool:
local.state.apply:
- tgt: 'haproxy*'
- arg:
- haproxy.refresh_pool
- kwarg:
pillar:
- args:
- mods: haproxy.refresh_pool
- pillar:
new_minion: {{ data['id'] }}
{% endif %}
@ -503,17 +585,16 @@ This works with Orchestrate files as well:
call_some_orchestrate_file:
runner.state.orchestrate:
- mods: _orch.some_orchestrate_file
- pillar:
stuff: things
- args:
- mods: orchestrate.some_orchestrate_file
- pillar:
stuff: things
Which is equivalent to the following command at the CLI:
.. code-block:: bash
salt-run state.orchestrate _orch.some_orchestrate_file pillar='{stuff: things}'
This expects to find a file at /srv/salt/_orch/some_orchestrate_file.sls.
salt-run state.orchestrate orchestrate.some_orchestrate_file pillar='{stuff: things}'
Finally, that data is available in the state file using the normal Pillar
lookup syntax. The following example is grabbing web server names and IP
@ -564,7 +645,7 @@ includes the minion id, which we can use for matching.
- 'salt/minion/ink*/start':
- /srv/reactor/auth-complete.sls
In this sls file, we say that if the key was rejected we will delete the key on
In this SLS file, we say that if the key was rejected we will delete the key on
the master and then also tell the master to ssh in to the minion and tell it to
restart the minion, since a minion process will die if the key is rejected.
@ -580,19 +661,21 @@ authentication every ten seconds by default.
{% if not data['result'] and data['id'].startswith('ink') %}
minion_remove:
wheel.key.delete:
- match: {{ data['id'] }}
- args:
- match: {{ data['id'] }}
minion_rejoin:
local.cmd.run:
- tgt: salt-master.domain.tld
- arg:
- ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
- args:
- cmd: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no "{{ data['id'] }}" 'sleep 10 && /etc/init.d/salt-minion restart'
{% endif %}
{# Ink server is sending new key -- accept this key #}
{% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ink') %}
minion_add:
wheel.key.accept:
- match: {{ data['id'] }}
- args:
- match: {{ data['id'] }}
{% endif %}
No if statements are needed here because we already limited this action to just

View File

@ -359,29 +359,19 @@ class SyncClientMixin(object):
# packed into the top level object. The plan is to move away from
# that since the caller knows what is an arg vs a kwarg, but while
# we make the transition we will load "kwargs" using format_call if
# there are no kwargs in the low object passed in
f_call = None
if 'arg' not in low:
# there are no kwargs in the low object passed in.
if 'arg' in low and 'kwarg' in low:
args = low['arg']
kwargs = low['kwarg']
else:
f_call = salt.utils.format_call(
self.functions[fun],
low,
expected_extra_kws=CLIENT_INTERNAL_KEYWORDS
)
args = f_call.get('args', ())
else:
args = low['arg']
if 'kwarg' not in low:
log.critical(
'kwargs must be passed inside the low data within the '
'\'kwarg\' key. See usage of '
'salt.utils.args.parse_input() and '
'salt.minion.load_args_and_kwargs() elsewhere in the '
'codebase.'
)
kwargs = {}
else:
kwargs = low['kwarg']
kwargs = f_call.get('kwargs', {})
# Update the event data with loaded args and kwargs
data['fun_args'] = list(args) + ([kwargs] if kwargs else [])

View File

@ -1600,13 +1600,24 @@ class Minion(MinionBase):
minion side execution.
'''
salt.utils.appendproctitle('{0}._thread_multi_return {1}'.format(cls.__name__, data['jid']))
ret = {
'return': {},
'retcode': {},
'success': {}
}
for ind in range(0, len(data['fun'])):
ret['success'][data['fun'][ind]] = False
multifunc_ordered = opts.get('multifunc_ordered', False)
num_funcs = len(data['fun'])
if multifunc_ordered:
ret = {
'return': [None] * num_funcs,
'retcode': [None] * num_funcs,
'success': [False] * num_funcs
}
else:
ret = {
'return': {},
'retcode': {},
'success': {}
}
for ind in range(0, num_funcs):
if not multifunc_ordered:
ret['success'][data['fun'][ind]] = False
try:
if minion_instance.connected and minion_instance.opts['pillar'].get('minion_blackout', False):
# this minion is blacked out. Only allow saltutil.refresh_pillar
@ -1621,12 +1632,20 @@ class Minion(MinionBase):
data['arg'][ind],
data)
minion_instance.functions.pack['__context__']['retcode'] = 0
ret['return'][data['fun'][ind]] = func(*args, **kwargs)
ret['retcode'][data['fun'][ind]] = minion_instance.functions.pack['__context__'].get(
'retcode',
0
)
ret['success'][data['fun'][ind]] = True
if multifunc_ordered:
ret['return'][ind] = func(*args, **kwargs)
ret['retcode'][ind] = minion_instance.functions.pack['__context__'].get(
'retcode',
0
)
ret['success'][ind] = True
else:
ret['return'][data['fun'][ind]] = func(*args, **kwargs)
ret['retcode'][data['fun'][ind]] = minion_instance.functions.pack['__context__'].get(
'retcode',
0
)
ret['success'][data['fun'][ind]] = True
except Exception as exc:
trb = traceback.format_exc()
log.warning(
@ -1634,7 +1653,10 @@ class Minion(MinionBase):
exc
)
)
ret['return'][data['fun'][ind]] = trb
if multifunc_ordered:
ret['return'][ind] = trb
else:
ret['return'][data['fun'][ind]] = trb
ret['jid'] = data['jid']
ret['fun'] = data['fun']
ret['fun_args'] = data['arg']
@ -2589,6 +2611,8 @@ class SyndicManager(MinionBase):
'''
if kwargs is None:
kwargs = {}
successful = False
# Call for each master
for master, syndic_future in self.iter_master_options(master_id):
if not syndic_future.done() or syndic_future.exception():
log.error('Unable to call {0} on {1}, that syndic is not connected'.format(func, master))
@ -2596,12 +2620,12 @@ class SyndicManager(MinionBase):
try:
getattr(syndic_future.result(), func)(*args, **kwargs)
return
successful = True
except SaltClientError:
log.error('Unable to call {0} on {1}, trying another...'.format(func, master))
self._mark_master_dead(master)
continue
log.critical('Unable to call {0} on any masters!'.format(func))
if not successful:
log.critical('Unable to call {0} on any masters!'.format(func))
def _return_pub_syndic(self, values, master_id=None):
'''

View File

@ -7,12 +7,14 @@ import glob
import logging
# Import salt libs
import salt.client
import salt.runner
import salt.state
import salt.utils
import salt.utils.cache
import salt.utils.event
import salt.utils.process
import salt.wheel
import salt.defaults.exitcodes
# Import 3rd-party libs
@ -21,6 +23,15 @@ import salt.ext.six as six
log = logging.getLogger(__name__)
REACTOR_INTERNAL_KEYWORDS = frozenset([
'__id__',
'__sls__',
'name',
'order',
'fun',
'state',
])
class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.state.Compiler):
'''
@ -29,6 +40,10 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
The reactor has the capability to execute pre-programmed executions
as reactions to events
'''
aliases = {
'cmd': 'local',
}
def __init__(self, opts, log_queue=None):
super(Reactor, self).__init__(log_queue=log_queue)
local_minion_opts = opts.copy()
@ -171,6 +186,16 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
return {'status': False, 'comment': 'Reactor does not exists.'}
def resolve_aliases(self, chunks):
'''
Preserve backward compatibility by rewriting the 'state' key in the low
chunks if it is using a legacy type.
'''
for idx, _ in enumerate(chunks):
new_state = self.aliases.get(chunks[idx]['state'])
if new_state is not None:
chunks[idx]['state'] = new_state
def reactions(self, tag, data, reactors):
'''
Render a list of reactor files and returns a reaction struct
@ -191,6 +216,7 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
except Exception as exc:
log.error('Exception trying to compile reactions: {0}'.format(exc), exc_info=True)
self.resolve_aliases(chunks)
return chunks
def call_reactions(self, chunks):
@ -248,12 +274,19 @@ class Reactor(salt.utils.process.SignalHandlingMultiprocessingProcess, salt.stat
class ReactWrap(object):
'''
Create a wrapper that executes low data for the reaction system
Wrapper that executes low data for the Reactor System
'''
# class-wide cache of clients
client_cache = None
event_user = 'Reactor'
reaction_class = {
'local': salt.client.LocalClient,
'runner': salt.runner.RunnerClient,
'wheel': salt.wheel.Wheel,
'caller': salt.client.Caller,
}
def __init__(self, opts):
self.opts = opts
if ReactWrap.client_cache is None:
@ -264,21 +297,49 @@ class ReactWrap(object):
queue_size=self.opts['reactor_worker_hwm'] # queue size for those workers
)
def populate_client_cache(self, low):
'''
Populate the client cache with an instance of the specified type
'''
reaction_type = low['state']
if reaction_type not in self.client_cache:
log.debug('Reactor is populating %s client cache', reaction_type)
if reaction_type in ('runner', 'wheel'):
# Reaction types that run locally on the master want the full
# opts passed.
self.client_cache[reaction_type] = \
self.reaction_class[reaction_type](self.opts)
# The len() function will cause the module functions to load if
# they aren't already loaded. We want to load them so that the
# spawned threads don't need to load them. Loading in the
# spawned threads creates race conditions such as sometimes not
# finding the required function because another thread is in
# the middle of loading the functions.
len(self.client_cache[reaction_type].functions)
else:
# Reactions which use remote pubs only need the conf file when
# instantiating a client instance.
self.client_cache[reaction_type] = \
self.reaction_class[reaction_type](self.opts['conf_file'])
def run(self, low):
'''
Execute the specified function in the specified state by passing the
low data
Execute a reaction by invoking the proper wrapper func
'''
l_fun = getattr(self, low['state'])
self.populate_client_cache(low)
try:
f_call = salt.utils.format_call(l_fun, low)
kwargs = f_call.get('kwargs', {})
if 'arg' not in kwargs:
kwargs['arg'] = []
if 'kwarg' not in kwargs:
kwargs['kwarg'] = {}
l_fun = getattr(self, low['state'])
except AttributeError:
log.error(
'ReactWrap is missing a wrapper function for \'%s\'',
low['state']
)
# TODO: Setting the user doesn't seem to work for actual remote publishes
try:
wrap_call = salt.utils.format_call(l_fun, low)
args = wrap_call.get('args', ())
kwargs = wrap_call.get('kwargs', {})
# TODO: Setting user doesn't seem to work for actual remote pubs
if low['state'] in ('runner', 'wheel'):
# Update called function's low data with event user to
# segregate events fired by reactor and avoid reaction loops
@ -286,80 +347,106 @@ class ReactWrap(object):
# Replace ``state`` kwarg which comes from high data compiler.
# It breaks some runner functions and seems unnecessary.
kwargs['__state__'] = kwargs.pop('state')
# NOTE: if any additional keys are added here, they will also
# need to be added to filter_kwargs()
l_fun(*f_call.get('args', ()), **kwargs)
if 'args' in kwargs:
# New configuration
reactor_args = kwargs.pop('args')
for item in ('arg', 'kwarg'):
if item in low:
log.warning(
'Reactor \'%s\' is ignoring \'%s\' param %s due to '
'presence of \'args\' param. Check the Reactor System '
'documentation for the correct argument format.',
low['__id__'], item, low[item]
)
if low['state'] == 'caller' \
and isinstance(reactor_args, list) \
and not salt.utils.is_dictlist(reactor_args):
# Legacy 'caller' reactors were already using the 'args'
# param, but only supported a list of positional arguments.
# If low['args'] is a list but is *not* a dictlist, then
# this is actually using the legacy configuration. So, put
# the reactor args into kwarg['arg'] so that the wrapper
# interprets them as positional args.
kwargs['arg'] = reactor_args
kwargs['kwarg'] = {}
else:
kwargs['arg'] = ()
kwargs['kwarg'] = reactor_args
if not isinstance(kwargs['kwarg'], dict):
kwargs['kwarg'] = salt.utils.repack_dictlist(kwargs['kwarg'])
if not kwargs['kwarg']:
log.error(
'Reactor \'%s\' failed to execute %s \'%s\': '
'Incorrect argument format, check the Reactor System '
'documentation for the correct format.',
low['__id__'], low['state'], low['fun']
)
return
else:
# Legacy configuration
react_call = {}
if low['state'] in ('runner', 'wheel'):
if 'arg' not in kwargs or 'kwarg' not in kwargs:
# Runner/wheel execute on the master, so we can use
# format_call to get the functions args/kwargs
react_fun = self.client_cache[low['state']].functions.get(low['fun'])
if react_fun is None:
log.error(
'Reactor \'%s\' failed to execute %s \'%s\': '
'function not available',
low['__id__'], low['state'], low['fun']
)
return
react_call = salt.utils.format_call(
react_fun,
low,
expected_extra_kws=REACTOR_INTERNAL_KEYWORDS
)
if 'arg' not in kwargs:
kwargs['arg'] = react_call.get('args', ())
if 'kwarg' not in kwargs:
kwargs['kwarg'] = react_call.get('kwargs', {})
# Execute the wrapper with the proper args/kwargs. kwargs['arg']
# and kwargs['kwarg'] contain the positional and keyword arguments
# that will be passed to the client interface to execute the
# desired runner/wheel/remote-exec/etc. function.
l_fun(*args, **kwargs)
except SystemExit:
log.warning(
'Reactor \'%s\' attempted to exit. Ignored.', low['__id__']
)
except Exception:
log.error(
'Failed to execute {0}: {1}\n'.format(low['state'], l_fun),
exc_info=True
)
def local(self, *args, **kwargs):
'''
Wrap LocalClient for running :ref:`execution modules <all-salt.modules>`
'''
if 'local' not in self.client_cache:
self.client_cache['local'] = salt.client.LocalClient(self.opts['conf_file'])
try:
self.client_cache['local'].cmd_async(*args, **kwargs)
except SystemExit:
log.warning('Attempt to exit reactor. Ignored.')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
cmd = local
'Reactor \'%s\' failed to execute %s \'%s\'',
low['__id__'], low['state'], low['fun'], exc_info=True
)
def runner(self, fun, **kwargs):
'''
Wrap RunnerClient for executing :ref:`runner modules <all-salt.runners>`
'''
if 'runner' not in self.client_cache:
self.client_cache['runner'] = salt.runner.RunnerClient(self.opts)
# The len() function will cause the module functions to load if
# they aren't already loaded. We want to load them so that the
# spawned threads don't need to load them. Loading in the spawned
# threads creates race conditions such as sometimes not finding
# the required function because another thread is in the middle
# of loading the functions.
len(self.client_cache['runner'].functions)
try:
self.pool.fire_async(self.client_cache['runner'].low, args=(fun, kwargs))
except SystemExit:
log.warning('Attempt to exit in reactor by runner. Ignored')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
self.pool.fire_async(self.client_cache['runner'].low, args=(fun, kwargs))
def wheel(self, fun, **kwargs):
'''
Wrap Wheel to enable executing :ref:`wheel modules <all-salt.wheel>`
'''
if 'wheel' not in self.client_cache:
self.client_cache['wheel'] = salt.wheel.Wheel(self.opts)
# The len() function will cause the module functions to load if
# they aren't already loaded. We want to load them so that the
# spawned threads don't need to load them. Loading in the spawned
# threads creates race conditions such as sometimes not finding
# the required function because another thread is in the middle
# of loading the functions.
len(self.client_cache['wheel'].functions)
try:
self.pool.fire_async(self.client_cache['wheel'].low, args=(fun, kwargs))
except SystemExit:
log.warning('Attempt to in reactor by whell. Ignored.')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
self.pool.fire_async(self.client_cache['wheel'].low, args=(fun, kwargs))
def caller(self, fun, *args, **kwargs):
def local(self, fun, tgt, **kwargs):
'''
Wrap Caller to enable executing :ref:`caller modules <all-salt.caller>`
Wrap LocalClient for running :ref:`execution modules <all-salt.modules>`
'''
log.debug("in caller with fun {0} args {1} kwargs {2}".format(fun, args, kwargs))
args = kwargs.get('args', [])
if 'caller' not in self.client_cache:
self.client_cache['caller'] = salt.client.Caller(self.opts['conf_file'])
try:
self.client_cache['caller'].function(fun, *args)
except SystemExit:
log.warning('Attempt to exit reactor. Ignored.')
except Exception as exc:
log.warning('Exception caught by reactor: {0}'.format(exc))
self.client_cache['local'].cmd_async(tgt, fun, **kwargs)
def caller(self, fun, **kwargs):
'''
Wrap LocalCaller to execute remote exec functions locally on the Minion
'''
self.client_cache['caller'].cmd(fun, *kwargs['arg'], **kwargs['kwarg'])

View File

@ -1,74 +1,556 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import time
import shutil
import tempfile
import codecs
import glob
import logging
import os
from contextlib import contextmanager
import textwrap
import yaml
import salt.utils
from salt.utils.process import clean_proc
import salt.loader
import salt.utils.reactor as reactor
from tests.integration import AdaptedConfigurationTestCaseMixin
from tests.support.paths import TMP
from tests.support.unit import TestCase, skipIf
from tests.support.mock import patch, MagicMock
from tests.support.mixins import AdaptedConfigurationTestCaseMixin
from tests.support.mock import (
NO_MOCK,
NO_MOCK_REASON,
patch,
MagicMock,
Mock,
mock_open,
)
REACTOR_CONFIG = '''\
reactor:
- old_runner:
- /srv/reactor/old_runner.sls
- old_wheel:
- /srv/reactor/old_wheel.sls
- old_local:
- /srv/reactor/old_local.sls
- old_cmd:
- /srv/reactor/old_cmd.sls
- old_caller:
- /srv/reactor/old_caller.sls
- new_runner:
- /srv/reactor/new_runner.sls
- new_wheel:
- /srv/reactor/new_wheel.sls
- new_local:
- /srv/reactor/new_local.sls
- new_cmd:
- /srv/reactor/new_cmd.sls
- new_caller:
- /srv/reactor/new_caller.sls
'''
REACTOR_DATA = {
'runner': {'data': {'message': 'This is an error'}},
'wheel': {'data': {'id': 'foo'}},
'local': {'data': {'pkg': 'zsh', 'repo': 'updates'}},
'cmd': {'data': {'pkg': 'zsh', 'repo': 'updates'}},
'caller': {'data': {'path': '/tmp/foo'}},
}
SLS = {
'/srv/reactor/old_runner.sls': textwrap.dedent('''\
raise_error:
runner.error.error:
- name: Exception
- message: {{ data['data']['message'] }}
'''),
'/srv/reactor/old_wheel.sls': textwrap.dedent('''\
remove_key:
wheel.key.delete:
- match: {{ data['data']['id'] }}
'''),
'/srv/reactor/old_local.sls': textwrap.dedent('''\
install_zsh:
local.state.single:
- tgt: test
- arg:
- pkg.installed
- {{ data['data']['pkg'] }}
- kwarg:
fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/old_cmd.sls': textwrap.dedent('''\
install_zsh:
cmd.state.single:
- tgt: test
- arg:
- pkg.installed
- {{ data['data']['pkg'] }}
- kwarg:
fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/old_caller.sls': textwrap.dedent('''\
touch_file:
caller.file.touch:
- args:
- {{ data['data']['path'] }}
'''),
'/srv/reactor/new_runner.sls': textwrap.dedent('''\
raise_error:
runner.error.error:
- args:
- name: Exception
- message: {{ data['data']['message'] }}
'''),
'/srv/reactor/new_wheel.sls': textwrap.dedent('''\
remove_key:
wheel.key.delete:
- args:
- match: {{ data['data']['id'] }}
'''),
'/srv/reactor/new_local.sls': textwrap.dedent('''\
install_zsh:
local.state.single:
- tgt: test
- args:
- fun: pkg.installed
- name: {{ data['data']['pkg'] }}
- fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/new_cmd.sls': textwrap.dedent('''\
install_zsh:
cmd.state.single:
- tgt: test
- args:
- fun: pkg.installed
- name: {{ data['data']['pkg'] }}
- fromrepo: {{ data['data']['repo'] }}
'''),
'/srv/reactor/new_caller.sls': textwrap.dedent('''\
touch_file:
caller.file.touch:
- args:
- name: {{ data['data']['path'] }}
'''),
}
LOW_CHUNKS = {
# Note that the "name" value in the chunk has been overwritten by the
# "name" argument in the SLS. This is one reason why the new schema was
# needed.
'old_runner': [{
'state': 'runner',
'__id__': 'raise_error',
'__sls__': '/srv/reactor/old_runner.sls',
'order': 1,
'fun': 'error.error',
'name': 'Exception',
'message': 'This is an error',
}],
'old_wheel': [{
'state': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/old_wheel.sls',
'order': 1,
'fun': 'key.delete',
'match': 'foo',
}],
'old_local': [{
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_local.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
}],
'old_cmd': [{
'state': 'local', # 'cmd' should be aliased to 'local'
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_cmd.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
}],
'old_caller': [{
'state': 'caller',
'__id__': 'touch_file',
'name': 'touch_file',
'__sls__': '/srv/reactor/old_caller.sls',
'order': 1,
'fun': 'file.touch',
'args': ['/tmp/foo'],
}],
'new_runner': [{
'state': 'runner',
'__id__': 'raise_error',
'name': 'raise_error',
'__sls__': '/srv/reactor/new_runner.sls',
'order': 1,
'fun': 'error.error',
'args': [
{'name': 'Exception'},
{'message': 'This is an error'},
],
}],
'new_wheel': [{
'state': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/new_wheel.sls',
'order': 1,
'fun': 'key.delete',
'args': [
{'match': 'foo'},
],
}],
'new_local': [{
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_local.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'args': [
{'fun': 'pkg.installed'},
{'name': 'zsh'},
{'fromrepo': 'updates'},
],
}],
'new_cmd': [{
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_cmd.sls',
'order': 1,
'tgt': 'test',
'fun': 'state.single',
'args': [
{'fun': 'pkg.installed'},
{'name': 'zsh'},
{'fromrepo': 'updates'},
],
}],
'new_caller': [{
'state': 'caller',
'__id__': 'touch_file',
'name': 'touch_file',
'__sls__': '/srv/reactor/new_caller.sls',
'order': 1,
'fun': 'file.touch',
'args': [
{'name': '/tmp/foo'},
],
}],
}
WRAPPER_CALLS = {
'old_runner': (
'error.error',
{
'__state__': 'runner',
'__id__': 'raise_error',
'__sls__': '/srv/reactor/old_runner.sls',
'__user__': 'Reactor',
'order': 1,
'arg': [],
'kwarg': {
'name': 'Exception',
'message': 'This is an error',
},
'name': 'Exception',
'message': 'This is an error',
},
),
'old_wheel': (
'key.delete',
{
'__state__': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/old_wheel.sls',
'order': 1,
'__user__': 'Reactor',
'arg': ['foo'],
'kwarg': {},
'match': 'foo',
},
),
'old_local': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_local.sls',
'order': 1,
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
},
},
'old_cmd': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/old_cmd.sls',
'order': 1,
'arg': ['pkg.installed', 'zsh'],
'kwarg': {'fromrepo': 'updates'},
},
},
'old_caller': {
'args': ('file.touch', '/tmp/foo'),
'kwargs': {},
},
'new_runner': (
'error.error',
{
'__state__': 'runner',
'__id__': 'raise_error',
'name': 'raise_error',
'__sls__': '/srv/reactor/new_runner.sls',
'__user__': 'Reactor',
'order': 1,
'arg': (),
'kwarg': {
'name': 'Exception',
'message': 'This is an error',
},
},
),
'new_wheel': (
'key.delete',
{
'__state__': 'wheel',
'__id__': 'remove_key',
'name': 'remove_key',
'__sls__': '/srv/reactor/new_wheel.sls',
'order': 1,
'__user__': 'Reactor',
'arg': (),
'kwarg': {'match': 'foo'},
},
),
'new_local': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_local.sls',
'order': 1,
'arg': (),
'kwarg': {
'fun': 'pkg.installed',
'name': 'zsh',
'fromrepo': 'updates',
},
},
},
'new_cmd': {
'args': ('test', 'state.single'),
'kwargs': {
'state': 'local',
'__id__': 'install_zsh',
'name': 'install_zsh',
'__sls__': '/srv/reactor/new_cmd.sls',
'order': 1,
'arg': (),
'kwarg': {
'fun': 'pkg.installed',
'name': 'zsh',
'fromrepo': 'updates',
},
},
},
'new_caller': {
'args': ('file.touch',),
'kwargs': {'name': '/tmp/foo'},
},
}
log = logging.getLogger(__name__)
@contextmanager
def reactor_process(opts, reactor):
opts = dict(opts)
opts['reactor'] = reactor
proc = reactor.Reactor(opts)
proc.start()
try:
if os.environ.get('TRAVIS_PYTHON_VERSION', None) is not None:
# Travis is slow
time.sleep(10)
else:
time.sleep(2)
yield
finally:
clean_proc(proc)
def _args_sideffect(*args, **kwargs):
return args, kwargs
@skipIf(True, 'Skipping until its clear what and how is this supposed to be testing')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class TestReactor(TestCase, AdaptedConfigurationTestCaseMixin):
def setUp(self):
self.opts = self.get_temp_config('master')
self.tempdir = tempfile.mkdtemp(dir=TMP)
self.sls_name = os.path.join(self.tempdir, 'test.sls')
with salt.utils.fopen(self.sls_name, 'w') as fh:
fh.write('''
update_fileserver:
runner.fileserver.update
''')
'''
Tests for constructing the low chunks to be executed via the Reactor
'''
@classmethod
def setUpClass(cls):
'''
Load the reactor config for mocking
'''
cls.opts = cls.get_temp_config('master')
reactor_config = yaml.safe_load(REACTOR_CONFIG)
cls.opts.update(reactor_config)
cls.reactor = reactor.Reactor(cls.opts)
cls.reaction_map = salt.utils.repack_dictlist(reactor_config['reactor'])
renderers = salt.loader.render(cls.opts, {})
cls.render_pipe = [(renderers[x], '') for x in ('jinja', 'yaml')]
def tearDown(self):
if os.path.isdir(self.tempdir):
shutil.rmtree(self.tempdir)
del self.opts
del self.tempdir
del self.sls_name
@classmethod
def tearDownClass(cls):
del cls.opts
del cls.reactor
del cls.render_pipe
def test_basic(self):
reactor_config = [
{'salt/tagA': ['/srv/reactor/A.sls']},
{'salt/tagB': ['/srv/reactor/B.sls']},
{'*': ['/srv/reactor/all.sls']},
]
wrap = reactor.ReactWrap(self.opts)
with patch.object(reactor.ReactWrap, 'local', MagicMock(side_effect=_args_sideffect)):
ret = wrap.run({'fun': 'test.ping',
'state': 'local',
'order': 1,
'name': 'foo_action',
'__id__': 'foo_action'})
raise Exception(ret)
def test_list_reactors(self):
'''
Ensure that list_reactors() returns the correct list of reactor SLS
files for each tag.
'''
for schema in ('old', 'new'):
for rtype in REACTOR_DATA:
tag = '_'.join((schema, rtype))
self.assertEqual(
self.reactor.list_reactors(tag),
self.reaction_map[tag]
)
def test_reactions(self):
'''
Ensure that the correct reactions are built from the configured SLS
files and tag data.
'''
for schema in ('old', 'new'):
for rtype in REACTOR_DATA:
tag = '_'.join((schema, rtype))
log.debug('test_reactions: processing %s', tag)
reactors = self.reactor.list_reactors(tag)
log.debug('test_reactions: %s reactors: %s', tag, reactors)
# No globbing in our example SLS, and the files don't actually
# exist, so mock glob.glob to just return back the path passed
# to it.
with patch.object(
glob,
'glob',
MagicMock(side_effect=lambda x: [x])):
# The below four mocks are all so that
# salt.template.compile_template() will read the templates
# we've mocked up in the SLS global variable above.
with patch.object(
os.path, 'isfile',
MagicMock(return_value=True)):
with patch.object(
salt.utils, 'is_empty',
MagicMock(return_value=False)):
with patch.object(
codecs, 'open',
mock_open(read_data=SLS[reactors[0]])):
with patch.object(
salt.template, 'template_shebang',
MagicMock(return_value=self.render_pipe)):
reactions = self.reactor.reactions(
tag,
REACTOR_DATA[rtype],
reactors,
)
log.debug(
'test_reactions: %s reactions: %s',
tag, reactions
)
self.assertEqual(reactions, LOW_CHUNKS[tag])
@skipIf(NO_MOCK, NO_MOCK_REASON)
class TestReactWrap(TestCase, AdaptedConfigurationTestCaseMixin):
'''
Tests that we are formulating the wrapper calls properly
'''
@classmethod
def setUpClass(cls):
cls.wrap = reactor.ReactWrap(cls.get_temp_config('master'))
@classmethod
def tearDownClass(cls):
del cls.wrap
def test_runner(self):
'''
Test runner reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'runner'))
chunk = LOW_CHUNKS[tag][0]
thread_pool = Mock()
thread_pool.fire_async = Mock()
with patch.object(self.wrap, 'pool', thread_pool):
self.wrap.run(chunk)
thread_pool.fire_async.assert_called_with(
self.wrap.client_cache['runner'].low,
args=WRAPPER_CALLS[tag]
)
def test_wheel(self):
'''
Test wheel reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'wheel'))
chunk = LOW_CHUNKS[tag][0]
thread_pool = Mock()
thread_pool.fire_async = Mock()
with patch.object(self.wrap, 'pool', thread_pool):
self.wrap.run(chunk)
thread_pool.fire_async.assert_called_with(
self.wrap.client_cache['wheel'].low,
args=WRAPPER_CALLS[tag]
)
def test_local(self):
'''
Test local reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'local'))
chunk = LOW_CHUNKS[tag][0]
client_cache = {'local': Mock()}
client_cache['local'].cmd_async = Mock()
with patch.object(self.wrap, 'client_cache', client_cache):
self.wrap.run(chunk)
client_cache['local'].cmd_async.assert_called_with(
*WRAPPER_CALLS[tag]['args'],
**WRAPPER_CALLS[tag]['kwargs']
)
def test_cmd(self):
'''
Test cmd reactions (alias for 'local') using both the old and new
config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'cmd'))
chunk = LOW_CHUNKS[tag][0]
client_cache = {'local': Mock()}
client_cache['local'].cmd_async = Mock()
with patch.object(self.wrap, 'client_cache', client_cache):
self.wrap.run(chunk)
client_cache['local'].cmd_async.assert_called_with(
*WRAPPER_CALLS[tag]['args'],
**WRAPPER_CALLS[tag]['kwargs']
)
def test_caller(self):
'''
Test caller reactions using both the old and new config schema
'''
for schema in ('old', 'new'):
tag = '_'.join((schema, 'caller'))
chunk = LOW_CHUNKS[tag][0]
client_cache = {'caller': Mock()}
client_cache['caller'].cmd = Mock()
with patch.object(self.wrap, 'client_cache', client_cache):
self.wrap.run(chunk)
client_cache['caller'].cmd.assert_called_with(
*WRAPPER_CALLS[tag]['args'],
**WRAPPER_CALLS[tag]['kwargs']
)