Merge remote-tracking branch 'upstream/develop' into etcd_mods

Khris Richardson 2014-06-05 21:47:08 -05:00
commit 0d7366ee70
38 changed files with 847 additions and 390 deletions


@@ -69,7 +69,8 @@ disable=R,
 E8125,
 E8126,
 E8127,
-E8128
+E8128,
+E8265
 # Disabled:
 # R* [refactoring suggestions &amp; reports]
@@ -103,6 +104,7 @@ disable=R,
 # F0401 (import-error)
 #
 # E812* All PEP8 E12*
+# E8265 PEP8 E265 - block comment should start with "# "
 # E8501 PEP8 line too long
 [REPORTS]


@@ -358,6 +358,20 @@ only the cache for the mine system.
     enforce_mine_cache: False

+``max_minions``
+---------------
+
+Default: 0
+
+The number of minions the master should allow to connect. Use this to
+accommodate the number of minions per master if you have different types of
+hardware serving your minions. The default of ``0`` means unlimited
+connections. Please note that this can slow down the authentication process
+a bit in large setups.
+
+.. code-block:: yaml
+
+    max_minions: 100
+
 ``presence_events``
 -------------------
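The cap described by ``max_minions`` can be sketched in a few lines. This is a hypothetical illustration, not Salt's actual implementation; the helper name `accept_minion` and its arguments are invented for the example. It shows the two behaviors the docs describe: ``0`` means unlimited, and a reached cap rejects new minions while already-connected ones stay.

```python
# Hypothetical sketch of a max_minions-style connection cap (names invented,
# not Salt's code). A value of 0 means unlimited connections.
def accept_minion(connected_ids, minion_id, max_minions=0):
    """Return True if the minion may connect under the cap."""
    if minion_id in connected_ids:
        return True  # known minions may always reconnect
    if max_minions == 0 or len(connected_ids) < max_minions:
        connected_ids.add(minion_id)
        return True
    return False  # master is "full"

connected = {'web1', 'web2'}
print(accept_minion(connected, 'web3', max_minions=2))  # cap reached: False
print(accept_minion(connected, 'web3', max_minions=0))  # 0 = unlimited: True
```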


@ -37,7 +37,7 @@ Default: ``salt``
The master is, by default, staticaly configured by the `master` setting, but The master is, by default, staticaly configured by the `master` setting, but
if desired, the master can be dynamically configured. The `master` value can if desired, the master can be dynamically configured. The `master` value can
be set to a module function will will be executed and will assume that the be set to a module function which will be executed and will assume that the
returning value is the ip or hostname of the desired master. In addition to returning value is the ip or hostname of the desired master. In addition to
specifying the function to execute to detect the master the specifying the function to execute to detect the master the
:conf_minion:`master_type`, option must be set to 'func'. :conf_minion:`master_type`, option must be set to 'func'.
@@ -46,6 +46,19 @@ specifying the function to execute to detect the master the
     master: module.function

+The `master` can also be a list of masters the minion should try to connect
+to. If the first master fails or rejects the minion's connection (for example
+when too many minions are already connected), the minion will try the next
+master in the given list. For this to work, :conf_minion:`master_type` must
+be set to 'failover'. If `master_type` is not set, the minion will be in
+multimaster mode: :ref:`multi master <topics-tutorials-multimaster>`
+
+.. code-block:: yaml
+
+    master:
+        - address1
+        - address2
+
 .. conf_minion:: master_type
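The failover behavior added above can be sketched as a simple loop over the configured list. This is an illustrative reduction, not Salt's code; `find_master` and `try_connect` are invented names, and the lambda stands in for a real connection/authentication attempt.

```python
# Illustrative sketch of failover master selection (names invented):
# try each configured master in order until one accepts the connection.
def find_master(masters, try_connect):
    """Return the first master that accepts the connection, else None."""
    for master in masters:
        if try_connect(master):
            return master
    return None

# Fake connector: only 'address2' is reachable in this example.
up = {'address2'}
print(find_master(['address1', 'address2'], lambda m: m in up))
```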
@@ -54,16 +67,40 @@ specifying the function to execute to detect the master the
 Default: ``str``

-The type of the :conf_minion:`master` variable. If the master needs to be
-dynamically assigned by executing a function instead of reading in the static
-master value, set this to 'func'. This can be used to manage the minion's
-master setting from an execution module. By simply changeing the algorithm
-in the module to return a new master ip/fqdn, restart the minion and it will
-connect to the new master.
+The type of the :conf_minion:`master` variable. Can be either 'func' or
+'failover'.
+
+If the master needs to be dynamically assigned by executing a function
+instead of reading in the static master value, set this to 'func'.
+This can be used to manage the minion's master setting from an execution
+module. By simply changing the algorithm in the module to return a new
+master ip/fqdn, restart the minion and it will connect to the new master.

 .. code-block:: yaml

-    master_type: str
+    master_type: 'func'
+
+If it is set to 'failover', :conf_minion:`master` has to be a list of master
+addresses. The minion will then try one master after the other, until it
+successfully connects.
+
+.. code-block:: yaml
+
+    master_type: 'failover'
+
+``master_shuffle``
+------------------
+
+Default: ``False``
+
+If :conf_minion:`master` is a list of addresses, shuffle them before trying
+to connect to distribute the minions over all available masters. This uses
+Python's random.shuffle method.
+
+.. code-block:: yaml
+
+    master_shuffle: True

 .. conf_minion:: master_port
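Since the ``master_shuffle`` option is documented as using Python's `random.shuffle`, its effect can be shown directly. The helper `ordered_masters` and the `seed` parameter are assumptions made for a reproducible illustration; they are not Salt API.

```python
# Sketch of what master_shuffle implies: shuffle the configured master list
# with random.shuffle before connection attempts. The seed parameter is only
# here to make the illustration deterministic.
import random

def ordered_masters(masters, shuffle=False, seed=None):
    """Return the connection order for the given master list."""
    order = list(masters)  # copy; never mutate the configured value
    if shuffle:
        random.Random(seed).shuffle(order)
    return order

masters = ['m1', 'm2', 'm3']
print(ordered_masters(masters))                        # unchanged order
print(ordered_masters(masters, shuffle=True, seed=42)) # some permutation
```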


@@ -144,151 +144,13 @@ more requisites. Both requisite types can also be separately declared:
 In this example, the httpd service is only going to be started if the package,
 user, group and file are executed successfully.

-The Require Requisite
----------------------
-
-The foundation of the requisite system is the ``require`` requisite. The
-require requisite ensures that the required state(s) are executed before the
-requiring state. So, if a state is declared that sets down a vimrc, then it
-would be pertinent to make sure that the vimrc file would only be set down if
-the vim package has been installed:
-
-.. code-block:: yaml
-
-    vim:
-      pkg:
-        - installed
-      file.managed:
-        - source: salt://vim/vimrc
-        - require:
-          - pkg: vim
-
-In this case, the vimrc file will only be applied by Salt if and after the vim
-package is installed.
-
-The Watch Requisite
--------------------
-
-The ``watch`` requisite is more advanced than the ``require`` requisite. The
-watch requisite executes the same logic as require (therefore if something is
-watched it does not need to also be required) with the addition of executing
-logic if the required states have changed in some way.
-
-The watch requisite checks to see if the watched states have returned any
-changes. If the watched state returns changes, and the watched states execute
-successfully, then the watching state will execute a function that reacts to
-the changes in the watched states.
-
-Perhaps an example can better explain the behavior:
-
-.. code-block:: yaml
-
-    redis:
-      pkg:
-        - latest
-      file.managed:
-        - source: salt://redis/redis.conf
-        - name: /etc/redis.conf
-        - require:
-          - pkg: redis
-      service.running:
-        - enable: True
-        - watch:
-          - file: /etc/redis.conf
-          - pkg: redis
-
-In this example, the redis service will only be started if the file
-/etc/redis.conf is applied, and the file is only applied if the package is
-installed. This is normal require behavior, but if the watched file changes,
-or the watched package is installed or upgraded, then the redis service is
-restarted.
-
-.. note::
-
-    To reiterate: watch does not alter the original behavior of a function in
-    any way. The original behavior stays, but additional behavior (defined by
-    mod_watch as explored below) will be run if there are changes in the
-    watched state. This is why, for example, we have to have a ``cmd.wait``
-    state for watching purposes. If you examine the source code, you'll see
-    that ``cmd.wait`` is an empty function. However, you'll notice that
-    ``mod_watch`` is actually just an alias of ``cmd.run``. So if there are
-    changes, we run the command, otherwise, we do nothing.
-
-Watch and the mod_watch Function
---------------------------------
-
-The watch requisite is based on the ``mod_watch`` function. Python state
-modules can include a function called ``mod_watch`` which is then called
-if the watch call is invoked. When ``mod_watch`` is called depends on the
-execution of the watched state, which:
-
-- If no changes then just run the watching state itself as usual.
-  ``mod_watch`` is not called. This behavior is same as using a ``require``.
-
-- If changes then run the watching state *AND* if that changes nothing then
-  react by calling ``mod_watch``.
-
-When reacting, in the case of the service module the underlying service is
-restarted. In the case of the cmd state the command is executed.
-
-The ``mod_watch`` function for the service state looks like this:
-
-.. code-block:: python
-
-    def mod_watch(name, sig=None, reload=False, full_restart=False):
-        '''
-        The service watcher, called to invoke the watch command.
-
-        name
-            The name of the init or rc script used to manage the service
-        sig
-            The string to search for when looking for the service process with ps
-        '''
-        if __salt__['service.status'](name, sig):
-            if 'service.reload' in __salt__ and reload:
-                restart_func = __salt__['service.reload']
-            elif 'service.full_restart' in __salt__ and full_restart:
-                restart_func = __salt__['service.full_restart']
-            else:
-                restart_func = __salt__['service.restart']
-        else:
-            restart_func = __salt__['service.start']
-        result = restart_func(name)
-        return {'name': name,
-                'changes': {name: result},
-                'result': result,
-                'comment': 'Service restarted' if result else \
-                           'Failed to restart the service'
-               }
-
-The watch requisite only works if the state that is watching has a
-``mod_watch`` function written. If watch is set on a state that does not have
-a ``mod_watch`` function (like pkg), then the listed states will behave only
-as if they were under a ``require`` statement.
-
-Also notice that a ``mod_watch`` may accept additional keyword arguments,
-which, in the sls file, will be taken from the same set of arguments specified
-for the state that includes the ``watch`` requisite. This means, for the
-earlier ``service.running`` example above, the service can be set to
-``reload`` instead of restart like this:
-
-.. code-block:: yaml
-
-    redis:
-      # ... other state declarations omitted ...
-      service.running:
-        - enable: True
-        - reload: True
-        - watch:
-          - file: /etc/redis.conf
-          - pkg: redis
+Requisite Documentation
+-----------------------
+
+For detailed information on each of the individual requisites, :ref:`please
+look here. <requisites>`
+
+.. _ordering_order:

 The Order Option
 ================


@@ -8,11 +8,11 @@ The Salt requisite system is used to create relationships between states. The
 core idea being that, when one state is dependent somehow on another, that
 inter-dependency can be easily defined.

-Requisites come in two types: Direct requisites (such as ``require`` and ``watch``),
-and requisite_ins (such as ``require_in`` and ``watch_in``). The relationships are
-directional: a direct requisite requires something from another state, while
-requisite_ins operate in the other direction. A requisite_in contains something that
-is required by another state. The following example demonstrates a direct requisite:
+Requisites come in two types: Direct requisites (such as ``require``),
+and requisite_ins (such as ``require_in``). The relationships are
+directional: a direct requisite requires something from another state.
+However, a requisite_in inserts a requisite into the targeted state pointing to
+the targeting state. The following example demonstrates a direct requisite:

 .. code-block:: yaml

@@ -43,7 +43,8 @@ something", requisite_ins say "Someone depends on me":

 So here, with a requisite_in, the same thing is accomplished as in the first
 example, but the other way around. The vim package is saying "/etc/vimrc depends
-on me".
+on me". This will result in a ``require`` being inserted into the
+``/etc/vimrc`` state which targets the ``vim`` state.

 In the end, a single dependency map is created and everything is executed in a
 finite and predictable order.

@@ -65,11 +66,12 @@ finite and predictable order.
 Direct Requisite and Requisite_in types
 =======================================

-There are four direct requisite statements that can be used in Salt: ``require``,
-``watch``, ``prereq``, and ``use``. Each direct requisite also has a corresponding
-requisite_in: ``require_in``, ``watch_in``, ``prereq_in`` and ``use_in``. All of the
-requisites define specific relationships and always work with the dependency
-logic defined above.
+There are six direct requisite statements that can be used in Salt:
+``require``, ``watch``, ``prereq``, ``use``, ``onchanges``, and ``onfail``.
+Each direct requisite also has a corresponding requisite_in: ``require_in``,
+``watch_in``, ``prereq_in``, ``use_in``, ``onchanges_in``, and ``onfail_in``.
+All of the requisites define specific relationships and always work with the
+dependency logic defined above.

 require
 -------

@@ -268,8 +270,8 @@ onchanges
 .. versionadded:: Helium

 The ``onchanges`` requisite makes a state only apply if the required states
-generate changes. This can be a useful way to execute a post hook after
-changing aspects of a system.
+generate changes, and if the watched state's "result" is ``True``. This can be
+a useful way to execute a post hook after changing aspects of a system.

 use
 ---

@@ -304,13 +306,22 @@ targeted state. This means also a chain of ``use`` requisites would not
 inherit inherited options.

 .. _requisites-require-in:
+.. _requisites-watch-in:

-require_in
-----------
+The _in versions of requisites
+------------------------------

-The ``require_in`` requisite is the literal reverse of ``require``. If
-a state declaration needs to be required by another state declaration then
-require_in can accommodate it. Therefore, these two sls files would be the
+All of the requisites also have corresponding requisite_in versions, which do
+the reverse of their normal counterparts. The examples below all use
+``require_in`` as the example, but note that all of the ``_in`` requisites work
+the same way: They result in a normal requisite in the targeted state, which
+targets the state which defines the requisite_in. Thus, a ``require_in``
+causes the target state to ``require`` the targeting state. Similarly, a
+``watch_in`` causes the target state to ``watch`` the targeting state. This
+pattern continues for the rest of the requisites.
+
+If a state declaration needs to be required by another state declaration then
+``require_in`` can accommodate it. Therefore, these two sls files would be the
 same in the end:

 Using ``require``

@@ -383,73 +394,6 @@ mod_python.sls
 Now the httpd server will only start if php or mod_python are first verified to
 be installed. Thus allowing for a requisite to be defined "after the fact".

-.. _requisites-watch-in:
-
-watch_in
---------
-
-``watch_in`` functions the same way as ``require_in``, but applies
-a ``watch`` statement rather than a ``require`` statement to the external state
-declaration.
-
-A good example of when to use ``watch_in`` versus ``watch`` is in regards to writing
-an Apache state in conjunction with a git state for a Django application. On the most
-basic level, using either the ``watch`` or the ``watch_in`` requisites, the resulting
-behavior will be the same: Apache restarts each time the Django git state changes.
-
-.. code-block:: yaml
-
-    apache:
-      pkg:
-        - installed
-        - name: httpd
-      service:
-        - watch:
-          - git: django_git
-
-    django_git:
-      git:
-        - latest
-        - name: git@github.com/example/mydjangoproject.git
-
-However, by using ``watch_in``, the approach is improved. By writing ``watch_in`` in
-the depending states (such as the Django state and any other states that require Apache
-to restart), the dependent state (Apache state) is de-coupled from the depending states:
-
-.. code-block:: yaml
-
-    apache:
-      pkg:
-        - installed
-        - name: httpd
-
-    django_git:
-      git:
-        - latest
-        - name: git@github.com/example/mydjangoproject.git
-        - watch_in:
-          - service: apache
-
-prereq_in
----------
-
-The ``prereq_in`` requisite_in follows the same assignment logic as the
-``require_in`` requisite_in. The ``prereq_in`` call simply assigns
-``prereq`` to the state referenced. The above example for ``prereq`` can
-be modified to function in the same way using ``prereq_in``:
-
-.. code-block:: yaml
-
-    graceful-down:
-      cmd.run:
-        - name: service apache graceful
-
-    site-code:
-      file.recurse:
-        - name: /opt/site_code
-        - source: salt://site/code
-        - prereq_in:
-          - cmd: graceful-down
-
 Altering Statefulness
 =====================


@@ -106,8 +106,8 @@ Bootstrapping a new master in the map is as simple as:

 .. code-block:: yaml

     fedora_small:
-      - web1
+      - web1:
           make_master: True
       - web2
       - web3

@@ -120,11 +120,11 @@ as opposed to the newly created salt-master, as an example:

 .. code-block:: yaml

     fedora_small:
-      - web1
+      - web1:
           make_master: True
           minion:
             master: <the local master ip address>
             local_master: True
       - web2
       - web3

@@ -137,13 +137,13 @@ Another example:

 .. code-block:: yaml

     fedora_small:
-      - web1
+      - web1:
           make_master: True
       - web2
-      - web3
+      - web3:
           minion:
             master: <the local master ip address>
             local_master: True

 The above example makes the ``web3`` minion answer to the local master, not the
 newly created master.


@@ -31,6 +31,14 @@ For wheezy, the following line is needed in either

     deb http://debian.saltstack.com/debian wheezy-saltstack main

+Jessie (Testing)
+~~~~~~~~~~~~~~~~
+
+For jessie, the following line is needed in either
+``/etc/apt/sources.list`` or a file in ``/etc/apt/sources.list.d``::
+
+    deb http://debian.saltstack.com/debian jessie-saltstack main
+
 Sid (Unstable)
 ~~~~~~~~~~~~~~


@@ -36,12 +36,10 @@ fact that the data is uniform and not deeply nested.
 Nested Dicts (key=value)
 ------------------------

-When :ref:`dicts <python2:typesmapping>` are more deeply nested, they no longer
-follow the same indentation logic. This is rarely something that comes up in
-Salt, since deeply nested options like these are discouraged when making State
-modules, but some do exist. A good example of this can be found in the
-``context`` and ``default`` options from the :doc:`file.managed
-</ref/states/all/salt.states.file>` state:
+When :ref:`dicts <python2:typesmapping>` are nested within other data
+structures (particularly lists), the indentation logic sometimes changes.
+Examples of where this might happen include ``context`` and ``default`` options
+from the :doc:`file.managed </ref/states/all/salt.states.file>` state:

 .. code-block:: yaml

@@ -61,8 +59,9 @@ modules, but some do exist. A good example of this can be found in the
 Notice that while the indentation is two spaces per level, for the values under
 the ``context`` and ``defaults`` options there is a four-space indent. If only
-two spaces are used to indent, then the information will not be loaded
-correctly. If using a double indent is not desirable, then a deeply-nested dict
+two spaces are used to indent, then those keys will be considered part of the
+same dictionary that contains the ``context`` key, and so the data will not be
+loaded correctly. If using a double indent is not desirable, then a deeply-nested dict
 can be declared with curly braces:

 .. code-block:: yaml

@@ -81,6 +80,28 @@ can be declared with curly braces:
             custom_var: "default value",
             other_var: 123 }

+Here is a more concrete example of how YAML actually handles these
+indentations, using the Python interpreter on the command line:
+
+.. code-block:: python
+
+    >>> import yaml
+    >>> yaml.safe_load('''mystate:
+    ...   file.managed:
+    ...     - context:
+    ...         some: var''')
+    {'mystate': {'file.managed': [{'context': {'some': 'var'}}]}}
+    >>> yaml.safe_load('''mystate:
+    ...   file.managed:
+    ...     - context:
+    ...       some: var''')
+    {'mystate': {'file.managed': [{'some': 'var', 'context': None}]}}
+
+Note that in the second example, ``some`` is added as another key in the same
+dictionary, whereas in the first example, it's the start of a new dictionary.
+That's the distinction. ``context`` is a common example because it is a keyword
+arg for many functions, and should contain a dictionary.
+
 True/False, Yes/No, On/Off
 ==========================


@@ -229,7 +229,10 @@ class Minion(parsers.MinionOptionParser):
             self.daemonize_if_required()
             self.set_pidfile()
             if isinstance(self.config.get('master'), list):
-                self.minion = salt.minion.MultiMinion(self.config)
+                if self.config.get('master_type') == 'failover':
+                    self.minion = salt.minion.Minion(self.config)
+                else:
+                    self.minion = salt.minion.MultiMinion(self.config)
             else:
                 self.minion = salt.minion.Minion(self.config)
         else:
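The dispatch logic in this hunk can be reduced to a small pure function for clarity. This is a simplified sketch; `pick_minion_class` is an invented name and the returned strings stand in for the real `salt.minion.Minion` / `salt.minion.MultiMinion` classes.

```python
# Simplified sketch of the class-selection logic above (names are stand-ins):
# a list-valued 'master' normally means multi-master, unless master_type
# 'failover' asks for a single Minion that fails over between masters.
def pick_minion_class(config):
    if isinstance(config.get('master'), list):
        if config.get('master_type') == 'failover':
            return 'Minion'
        return 'MultiMinion'
    return 'Minion'

print(pick_minion_class({'master': ['a', 'b'], 'master_type': 'failover'}))
print(pick_minion_class({'master': ['a', 'b']}))
print(pick_minion_class({'master': 'a'}))
```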


@@ -25,6 +25,7 @@ import time
 import copy
 import logging
 from datetime import datetime
+from salt._compat import string_types

 # Import salt libs
 import salt.config

@@ -174,7 +175,7 @@ class LocalClient(object):
             return self.opts['timeout']
         if isinstance(timeout, int):
             return timeout
-        if isinstance(timeout, str):
+        if isinstance(timeout, string_types):
             try:
                 return int(timeout)
             except ValueError:
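The reason for this change: on Python 2, `isinstance(u'10', str)` is `False`, so a unicode timeout fell through the check. A compatibility tuple like `salt._compat.string_types` matches both string types. The sketch below uses a local stand-in for that tuple and an invented `get_timeout` helper to show the coercion pattern.

```python
# Stand-in for salt._compat.string_types: matches both str and unicode on
# Python 2, plain str on Python 3 (the unused branch is never evaluated).
import sys

string_types = (str,) if sys.version_info[0] >= 3 else (str, unicode)  # noqa: F821

def get_timeout(timeout, default=5):
    """Coerce a timeout that may arrive as an int or a (unicode) string."""
    if timeout is None:
        return default
    if isinstance(timeout, int):
        return timeout
    if isinstance(timeout, string_types):
        try:
            return int(timeout)
        except ValueError:
            pass
    return default

print(get_timeout('10'))   # string coerced to int
print(get_timeout('abc'))  # unparseable: falls back to the default
```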


@@ -55,6 +55,7 @@ VALID_OPTS = {
     'master_port': int,
     'master_type': str,
     'master_finger': str,
+    'master_shuffle': bool,
     'syndic_finger': str,
     'user': str,
     'root_dir': str,

@@ -237,6 +238,7 @@ VALID_OPTS = {
     'restart_on_error': bool,
     'ping_interval': int,
     'cli_summary': bool,
+    'max_minions': int,
 }

 # default configurations

@@ -246,6 +248,7 @@ DEFAULT_MINION_OPTS = {
     'master_type': 'str',
     'master_port': '4506',
     'master_finger': '',
+    'master_shuffle': False,
     'syndic_finger': '',
     'user': 'root',
     'root_dir': salt.syspaths.ROOT_DIR,

@@ -505,6 +508,7 @@ DEFAULT_MASTER_OPTS = {
     'sqlite_queue_dir': os.path.join(salt.syspaths.CACHE_DIR, 'master', 'queues'),
     'queue_dirs': [],
     'cli_summary': False,
+    'max_minions': 0,
 }

 # ----- Salt Cloud Configuration Defaults ----------------------------------->
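A `VALID_OPTS`-style name-to-type map lends itself to a simple sanity check over loaded configuration. The `invalid_opts` helper below is hypothetical (Salt's config loader works differently); it only illustrates how such a map can flag mistyped options like the two added in this hunk.

```python
# Hypothetical validation sketch against a VALID_OPTS-style type map.
VALID_OPTS = {
    'master_shuffle': bool,
    'max_minions': int,
    'master_finger': str,
}

def invalid_opts(opts):
    """Return the option names whose values don't match the declared type."""
    return [key for key, typ in VALID_OPTS.items()
            if key in opts and not isinstance(opts[key], typ)]

# 'yes' is a string where a bool is declared, so it is flagged.
print(invalid_opts({'master_shuffle': 'yes', 'max_minions': 0}))
```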


@@ -298,12 +298,13 @@ class Auth(object):
     def verify_master(self, payload):
         '''
-        Verify that the master is the same one that was previously accepted
+        Verify that the master is the same one that was previously accepted.
         '''
         m_pub_fn = os.path.join(self.opts['pki_dir'], self.mpub)
         if os.path.isfile(m_pub_fn) and not self.opts['open_mode']:
             local_master_pub = salt.utils.fopen(m_pub_fn).read()
             if payload['pub_key'] != local_master_pub:
+                # This is not the last master we connected to
                 log.error('The master key has changed, the salt master could '
                           'have been subverted, verify salt master\'s public '

@@ -372,6 +373,9 @@ class Auth(object):
                     'clean out the keys. The Salt Minion will now exit.'
                 )
                 sys.exit(os.EX_OK)
+            # has the master returned that it's maxed out with minions?
+            elif payload['load']['ret'] == 'full':
+                return 'full'
             else:
                 log.error(
                     'The Salt Master has cached the public key for this '


@@ -345,9 +345,10 @@ class Schedule(ioflo.base.deeding.Deed):
         self.schedule.eval()


-class Setup(ioflo.base.deeding.Deed):
+class SaltManorLaneSetup(ioflo.base.deeding.Deed):
     '''
-    Only intended to be called once at the top of the house
+    Only intended to be called once at the top of the manor house
+    Sets up the LaneStack for the main yard

     FloScript:

     do setup at enter

@@ -362,19 +363,33 @@ class Setup(ioflo.base.deeding.Deed):
                'event': '.salt.event.events',
                'event_req': '.salt.event.event_req',
                'workers': '.salt.track.workers',
-               'uxd_stack': '.salt.uxd.stack.stack'}
+               'inode': '.salt.uxd.stack.',
+               'stack': 'stack',
+               'local': {'ipath': 'local',
+                         'ival': {'name': 'master',
+                                  'localname': 'master',
+                                  'yid': 0,
+                                  'lanename': 'master'}}
+               }

     def postinitio(self):
         '''
         Set up required objects and queues
         '''
-        self.uxd_stack.value = LaneStack(
-            name='yard',
-            lanename=self.opts.value.get('id', 'master'),
-            yid=0,
-            sockdirpath=self.opts.value['sock_dir'],
-            dirpath=self.opts.value['cachedir'])
-        self.uxd_stack.value.Pk = raeting.packKinds.pack
+        name = self.opts.value.get('id', self.local.data.name)
+        localname = self.opts.value.get('id', self.local.data.localname)
+        lanename = self.opts.value.get('id', self.local.data.lanename)
+        yid = self.local.data.yid
+        basedirpath = os.path.abspath(
+            os.path.join(self.opts.value['cachedir'], 'raet'))
+        self.stack.value = LaneStack(
+            name=name,
+            #localname=localname,
+            lanename=lanename,
+            yid=0,
+            sockdirpath=self.opts.value['sock_dir'],
+            basedirpath=basedirpath)
+        self.stack.value.Pk = raeting.packKinds.pack
         self.event_yards.value = set()
         self.local_cmd.value = deque()
         self.remote_cmd.value = deque()

@@ -389,6 +404,26 @@ class Setup(ioflo.base.deeding.Deed):
         self.workers.value = itertools.cycle(worker_seed)


+class SaltRaetLaneStackCloser(ioflo.base.deeding.Deed):  # pylint: disable=W0232
+    '''
+    Closes lane stack server socket connection
+
+    FloScript:
+
+    do raet lane stack closer at exit
+    '''
+    Ioinits = odict(
+        inode=".salt.uxd.stack",
+        stack='stack',)
+
+    def action(self, **kwa):
+        '''
+        Close uxd socket
+        '''
+        if self.stack.value and isinstance(self.stack.value, LaneStack):
+            self.stack.value.server.close()
+
+
 class SaltRoadService(ioflo.base.deeding.Deed):
     '''
     Process the udp traffic

@@ -425,8 +460,8 @@ class Rx(ioflo.base.deeding.Deed):
         '''
         Process inbound queues
         '''
-        self.udp_stack.value.serviceAll()
-        self.uxd_stack.value.serviceAll()
+        self.udp_stack.value.serviceAllRx()
+        self.uxd_stack.value.serviceAllRx()


 class Tx(ioflo.base.deeding.Deed):

@@ -448,8 +483,8 @@ class Tx(ioflo.base.deeding.Deed):
         '''
         Process inbound queues
         '''
-        self.uxd_stack.value.serviceAll()
-        self.udp_stack.value.serviceAll()
+        self.uxd_stack.value.serviceAllTx()
+        self.udp_stack.value.serviceAllTx()


 class Router(ioflo.base.deeding.Deed):
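The last two hunks above replace one combined `serviceAll()` pass with receive-only and transmit-only passes, so the Rx behavior never sends and the Tx behavior never receives. The toy class below is an invented illustration of that split (`ToyStack` and its queue names are not raet API).

```python
# Toy illustration of splitting a combined service loop into Rx/Tx phases.
from collections import deque

class ToyStack:
    def __init__(self):
        self.rxes = deque()    # inbound messages
        self.txes = deque()    # outbound messages
        self.handled = []
        self.sent = []

    def serviceAllRx(self):
        # Drain only the inbound queue; outbound traffic is untouched.
        while self.rxes:
            self.handled.append(self.rxes.popleft())

    def serviceAllTx(self):
        # Drain only the outbound queue.
        while self.txes:
            self.sent.append(self.txes.popleft())

stack = ToyStack()
stack.rxes.append('ping')
stack.txes.append('pong')
stack.serviceAllRx()   # an Rx behavior now touches only inbound traffic
print(stack.handled, list(stack.txes))
```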


@@ -4,10 +4,14 @@ house master

 init .raet.udp.stack.local to eid 1 main true name "master" localname "master"

+init .salt.uxd.stack.local to yid 0 name "master" localname "master" lanename "master"
+
 framer masterudpstack be active first setup
     frame setup
         enter
-            do setup
+            do salt manor lane setup
            # go spawnmaint
            # frame spawnmaint
            # enter

@@ -21,6 +25,7 @@ framer masterudpstack be active first setup
         do salt raet road stack per inode ".raet.udp.stack"
         exit
             do salt raet road stack closer per inode ".raet.udp.stack."
+            do salt raet lane stack closer per inode ".salt.uxd.stack."

 framer inbound be active first start
     frame start


@@ -5,16 +5,18 @@ house minion

 init name in .raet.udp.stack.local from value in .salt.etc.id
 init localname in .raet.udp.stack.local from value in .salt.etc.id

+init .salt.uxd.stack.local to yid 0 name "minion" localname "minion" lanename "minion"
+
 framer minionudpstack be active first setup
     frame setup
         enter
-            do setup
+            do salt manor lane setup
         go start
     frame start
         do salt raet road stack per inode ".raet.udp.stack"
         exit
             do salt raet road stack closer per inode ".raet.udp.stack."
+            do salt raet lane stack closer per inode ".salt.uxd.stack."

 framer inbound be active first start
     frame start


@@ -10,3 +10,5 @@ framer uxdrouter be active first setup
         go start
     frame start
         do worker router
+        exit
+            do salt raet lane stack closer per inode ".salt.uxd.stack."


@@ -5,6 +5,7 @@ The core behaviors used by minion and master
 # pylint: disable=W0232

 # Import python libs
+import os
 import multiprocessing

 # Import salt libs

@@ -80,40 +81,57 @@ class WorkerSetup(ioflo.base.deeding.Deed):
     '''
     Ioinits = {
-        'uxd_stack': '.salt.uxd.stack.stack',
         'opts': '.salt.opts',
         'yid': '.salt.yid',
         'access_keys': '.salt.access_keys',
         'remote': '.salt.loader.remote',
         'local': '.salt.loader.local',
+        'inode': '.salt.uxd.stack.',
+        'stack': 'stack',
+        'main': {'ipath': 'main',
+                 'ival': {'name': 'master',
+                          'localname': 'master',
+                          'yid': 0,
+                          'lanename': 'master'}}
     }

     def action(self):
         '''
         Set up the uxd stack and behaviors
         '''
-        self.uxd_stack.value = LaneStack(
-            lanename=self.opts.value.get('id', 'master'),
-            yid=self.yid.value,
-            sockdirpath=self.opts.value['sock_dir'])
-        self.uxd_stack.value.Pk = raeting.packKinds.pack
+        name = "{0}{1}{2}".format(self.opts.value.get('id', self.main.data.name),
+                                  'worker',
+                                  self.yid.value)
+        localname = name
+        lanename = self.opts.value.get('id', self.main.data.lanename)
+        basedirpath = os.path.abspath(
+            os.path.join(self.opts.value['cachedir'], 'raet'))
+        self.stack.value = LaneStack(
+            name=name,
+            #localname=localname,
+            basedirpath=basedirpath,
+            lanename=lanename,
+            yid=self.yid.value,
+            sockdirpath=self.opts.value['sock_dir'])
+        self.stack.value.Pk = raeting.packKinds.pack
         manor_yard = RemoteYard(
-            stack=self.uxd_stack.value,
+            stack=self.stack.value,
             yid=0,
-            lanename=self.opts.value.get('id', 'master'),
+            lanename=lanename,
             dirpath=self.opts.value['sock_dir'])
-        self.uxd_stack.value.addRemote(manor_yard)
+        self.stack.value.addRemote(manor_yard)
         self.remote.value = salt.daemons.masterapi.RemoteFuncs(self.opts.value)
         self.local.value = salt.daemons.masterapi.LocalFuncs(
             self.opts.value,
             self.access_keys.value)
         init = {}
         init['route'] = {
-            'src': (None, self.uxd_stack.value.local.name, None),
-            'dst': (None, 'yard0', 'worker_req')
+            'src': (None, self.stack.value.local.name, None),
+            'dst': (None, manor_yard.name, 'worker_req')
         }
-        self.uxd_stack.value.transmit(init, self.uxd_stack.value.uids.get('yard0'))
-        self.uxd_stack.value.serviceAll()
+        self.stack.value.transmit(init, self.stack.value.uids.get(manor_yard.name))
+        self.stack.value.serviceAll()


 class WorkerRouter(ioflo.base.deeding.Deed):
class WorkerRouter(ioflo.base.deeding.Deed): class WorkerRouter(ioflo.base.deeding.Deed):

View File

@ -5,9 +5,10 @@ involves preparing the three listeners and the workers needed by the master.
''' '''
# Import python libs # Import python libs
import fnmatch
import logging
import os import os
import re import re
import logging
import time import time
try: try:
import pwd import pwd
@ -76,7 +77,7 @@ def clean_fsbackend(opts):
''' '''
Clean out the old fileserver backends Clean out the old fileserver backends
''' '''
# Clear remote fileserver backend env cache so it gets recreated # Clear remote fileserver backend caches so they get recreated
for backend in ('git', 'hg', 'svn'): for backend in ('git', 'hg', 'svn'):
if backend in opts['fileserver_backend']: if backend in opts['fileserver_backend']:
env_cache = os.path.join( env_cache = os.path.join(
@ -94,6 +95,25 @@ def clean_fsbackend(opts):
.format(env_cache, exc) .format(env_cache, exc)
) )
file_lists_dir = os.path.join(
opts['cachedir'],
'file_lists',
'{0}fs'.format(backend)
)
try:
file_lists_caches = os.listdir(file_lists_dir)
except OSError:
continue
for file_lists_cache in fnmatch.filter(file_lists_caches, '*.p'):
cache_file = os.path.join(file_lists_dir, file_lists_cache)
try:
os.remove(cache_file)
except (IOError, OSError) as exc:
log.critical(
'Unable to remove file_lists cache file {0}: {1}'
.format(cache_file, exc)
)
def clean_expired_tokens(opts): def clean_expired_tokens(opts):
''' '''
@ -678,7 +698,7 @@ class RemoteFuncs(object):
pub_load['timeout'] = int(load['timeout']) pub_load['timeout'] = int(load['timeout'])
except ValueError: except ValueError:
msg = 'Failed to parse timeout value: {0}'.format( msg = 'Failed to parse timeout value: {0}'.format(
load['tmo']) load['timeout'])
log.warn(msg) log.warn(msg)
return {} return {}
if 'tgt_type' in load: if 'tgt_type' in load:
@ -704,7 +724,7 @@ class RemoteFuncs(object):
if 'jid' in minion: if 'jid' in minion:
ret['__jid__'] = minion['jid'] ret['__jid__'] = minion['jid']
for key, val in self.local.get_cache_returns(ret['__jid__']).items(): for key, val in self.local.get_cache_returns(ret['__jid__']).items():
if not key in ret: if key not in ret:
ret[key] = val ret[key] = val
if load.get('form', '') != 'full': if load.get('form', '') != 'full':
ret.pop('__jid__') ret.pop('__jid__')
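The cache cleanup added to `clean_fsbackend` above walks each remote backend's `file_lists` directory and removes the serialized `*.p` caches. A minimal standalone sketch of that loop (the function name `clear_file_lists_cache` is an assumption; the real code logs failures via `log.critical` instead of ignoring them):

```python
import fnmatch
import os


def clear_file_lists_cache(file_lists_dir):
    """Remove serialized file-list caches (*.p) from one backend's dir."""
    removed = []
    try:
        entries = os.listdir(file_lists_dir)
    except OSError:
        return removed  # backend has no file_lists dir yet; nothing to do
    for name in fnmatch.filter(entries, '*.p'):
        try:
            os.remove(os.path.join(file_lists_dir, name))
            removed.append(name)
        except (IOError, OSError):
            pass  # real code logs the failure here
    return removed
```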

View File

@ -590,6 +590,9 @@ def init():
'hash': repo_hash, 'hash': repo_hash,
'cachedir': rp_ 'cachedir': rp_
}) })
# Strip trailing slashes from the gitfs root as these cause
# path searches to fail.
repo_conf['root'] = repo_conf['root'].rstrip(os.path.sep)
repos.append(repo_conf) repos.append(repo_conf)
except Exception as exc: except Exception as exc:

View File

@ -778,11 +778,13 @@ class Loader(object):
for mod in self.modules: for mod in self.modules:
if not hasattr(mod, '__salt__') or ( if not hasattr(mod, '__salt__') or (
not in_pack(pack, '__salt__') and not in_pack(pack, '__salt__') and
not str(mod.__name__).startswith('salt.loaded.int.grain') (not str(mod.__name__).startswith('salt.loaded.int.grain') and
not str(mod.__name__).startswith('salt.loaded.ext.grain'))
): ):
mod.__salt__ = funcs mod.__salt__ = funcs
elif not in_pack(pack, '__salt__') and \ elif not in_pack(pack, '__salt__') and \
str(mod.__name__).startswith('salt.loaded.int.grain'): (str(mod.__name__).startswith('salt.loaded.int.grain') or
str(mod.__name__).startswith('salt.loaded.ext.grain')):
mod.__salt__.update(funcs) mod.__salt__.update(funcs)
return funcs return funcs
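The loader hunk above widens the special-casing of grain modules so that both internal and external grain modules keep their existing `__salt__` dict (it is updated in place rather than replaced). A sketch of the condition, under the assumption that the helper name `is_grain_module` is ours, not the loader's:

```python
def is_grain_module(mod_name):
    """True for modules loaded from either grains tree."""
    name = str(mod_name)
    return (name.startswith('salt.loaded.int.grain') or
            name.startswith('salt.loaded.ext.grain'))
```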

View File

@ -1748,6 +1748,29 @@ class ClearFuncs(object):
'load': {'ret': False}} 'load': {'ret': False}}
log.info('Authentication request from {id}'.format(**load)) log.info('Authentication request from {id}'.format(**load))
minions = salt.utils.minions.CkMinions(self.opts).connected_ids()
# 0 is default which should be 'unlimited'
if self.opts['max_minions'] > 0:
if not len(minions) < self.opts['max_minions']:
# we reject new minions, minions that are already
# connected must be allowed for the mine, highstate, etc.
if load['id'] not in minions:
msg = ('Too many minions connected (max_minions={0}). '
'Rejecting connection from id '
'{1}'.format(self.opts['max_minions'],
load['id'])
)
log.info(msg)
eload = {'result': False,
'act': 'full',
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, tagify(prefix='auth'))
return {'enc': 'clear',
'load': {'ret': 'full'}}
# Check if key is configured to be auto-rejected/signed # Check if key is configured to be auto-rejected/signed
auto_reject = self.__check_autoreject(load['id']) auto_reject = self.__check_autoreject(load['id'])
auto_sign = self.__check_autosign(load['id']) auto_sign = self.__check_autosign(load['id'])
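The `max_minions` gate added above can be summarized as a pure decision function (a sketch, not the exact `ClearFuncs` code; `allow_auth` is an assumed name): 0 means unlimited, and minions that are already connected are always let back in so mine, highstate, and similar traffic keeps working.

```python
def allow_auth(minion_id, connected_ids, max_minions):
    """Decide whether the master should accept this auth request."""
    if max_minions <= 0:
        return True  # default of 0: unlimited connections
    if minion_id in connected_ids:
        return True  # re-auth from a minion we already know
    return len(connected_ids) < max_minions
```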

View File

@ -575,8 +575,7 @@ class Minion(MinionBase):
# module # module
opts['grains'] = salt.loader.grains(opts) opts['grains'] = salt.loader.grains(opts)
# if master_type was changed, we might want to load our # check if master_type was altered from its default
# master-variable from a user defined modules function
if opts['master_type'] != 'str': if opts['master_type'] != 'str':
# check for a valid keyword # check for a valid keyword
if opts['master_type'] == 'func': if opts['master_type'] == 'func':
@ -593,16 +592,64 @@ class Minion(MinionBase):
'module \'{0}\''.format(opts['master'])) 'module \'{0}\''.format(opts['master']))
log.error(msg) log.error(msg)
sys.exit(1) sys.exit(1)
log.info('Evaluated master from module: {0}'.format(opts['master'])) log.info('Evaluated master from module: {0}'.format(master_mod))
# if failover is set, master has to be of type list
elif opts['master_type'] == 'failover':
if type(opts['master']) is list:
log.info('Got list of available master addresses:'
' {0}'.format(opts['master']))
else:
msg = ('master_type set to \'failover\' but \'master\' '
'is not of type list but of type '
'{0}'.format(type(opts['master'])))
log.error(msg)
sys.exit(1)
else: else:
msg = ('Invalid keyword \'{0}\' for variable ' msg = ('Invalid keyword \'{0}\' for variable '
'\'master_type\''.format(opts['master_type'])) '\'master_type\''.format(opts['master_type']))
log.error(msg) log.error(msg)
sys.exit(1) sys.exit(1)
opts.update(resolve_dns(opts)) # if we have a list of masters, loop through them and be
super(Minion, self).__init__(opts) # happy with the first one that allows us to connect
self.authenticate(timeout, safe) if type(opts['master']) is list:
conn = False
# shuffle the masters and then loop through them
local_masters = copy.copy(opts['master'])
if opts['master_shuffle']:
from random import shuffle
shuffle(local_masters)
for master in local_masters:
opts['master'] = master
opts.update(resolve_dns(opts))
super(Minion, self).__init__(opts)
try:
if self.authenticate(timeout, safe) != 'full':
conn = True
break
except SaltClientError:
msg = ('Master {0} could not be reached, trying '
'next master (if any)'.format(opts['master']))
log.info(msg)
continue
if not conn:
msg = ('No master could be reached or all masters denied '
'the minion\'s connection attempt.')
log.error(msg)
# single master sign in
else:
opts.update(resolve_dns(opts))
super(Minion, self).__init__(opts)
if self.authenticate(timeout, safe) == 'full':
msg = ('master {0} rejected the minion\'s connection because too '
'many minions are already connected.'.format(opts['master']))
log.error(msg)
sys.exit(1)
self.opts['pillar'] = salt.pillar.get_pillar( self.opts['pillar'] = salt.pillar.get_pillar(
opts, opts,
opts['grains'], opts['grains'],
@ -1216,7 +1263,9 @@ class Minion(MinionBase):
safe = self.opts.get('auth_safemode', safe) safe = self.opts.get('auth_safemode', safe)
while True: while True:
creds = auth.sign_in(timeout, safe, tries) creds = auth.sign_in(timeout, safe, tries)
if creds != 'retry': if creds == 'full':
return creds
elif creds != 'retry':
log.info('Authentication with master successful!') log.info('Authentication with master successful!')
break break
log.info('Waiting for minion key to be accepted by the master.') log.info('Waiting for minion key to be accepted by the master.')
@ -1254,6 +1303,25 @@ class Minion(MinionBase):
).compile_pillar() ).compile_pillar()
self.module_refresh() self.module_refresh()
def manage_schedule(self, package):
'''
Manage the minion's schedule: add, modify, or delete jobs.
'''
tag, data = salt.utils.event.MinionEvent.unpack(package)
func = data.get('func', None)
if func == 'delete':
job = data.get('job', None)
self.schedule.delete_job(job)
elif func == 'add':
name = data.get('name', None)
schedule = data.get('schedule', None)
self.schedule.add_job(name, schedule)
elif func == 'modify':
name = data.get('name', None)
schedule = data.get('schedule', None)
self.schedule.modify_job(name, schedule)
def environ_setenv(self, package): def environ_setenv(self, package):
''' '''
Set the salt-minion main process environment according to Set the salt-minion main process environment according to
@ -1400,6 +1468,8 @@ class Minion(MinionBase):
self.module_refresh() self.module_refresh()
elif package.startswith('pillar_refresh'): elif package.startswith('pillar_refresh'):
self.pillar_refresh() self.pillar_refresh()
elif package.startswith('manage_schedule'):
self.manage_schedule(package)
elif package.startswith('grains_refresh'): elif package.startswith('grains_refresh'):
if self.grains_cache != self.opts['grains']: if self.grains_cache != self.opts['grains']:
self.pillar_refresh() self.pillar_refresh()
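The new `failover` handling in `Minion.__init__` above boils down to: optionally shuffle the configured masters, then take the first one that authenticates. A condensed sketch, where `connect` stands in for the `resolve_dns()`/`authenticate()` sequence and `pick_failover_master` is an assumed name:

```python
import random


def pick_failover_master(masters, shuffle, connect):
    """Return the first reachable master, or None if all fail/reject."""
    candidates = list(masters)
    if shuffle:
        random.shuffle(candidates)  # master_shuffle: True
    for master in candidates:
        try:
            if connect(master):
                return master
        except Exception:  # real code catches SaltClientError
            continue
    return None  # no master reachable, or all of them rejected us
```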

View File

@ -129,7 +129,7 @@ def available():
for root, dirs, files in os.walk(mod_dir): for root, dirs, files in os.walk(mod_dir):
for fn_ in files: for fn_ in files:
if '.ko' in fn_: if '.ko' in fn_:
ret.append(fn_[:fn_.index('.ko')]) ret.append(fn_[:fn_.index('.ko')].replace('-', '_'))
return sorted(list(ret)) return sorted(list(ret))
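The one-line fix in `kmod.available()` above normalizes module names because the kernel reports loaded modules with underscores while `.ko` filenames on disk may use hyphens. A sketch of just that transformation (`ko_to_module_name` is an assumed helper name):

```python
def ko_to_module_name(filename):
    """Map a .ko filename to the kernel's module-name spelling."""
    if '.ko' not in filename:
        return None
    return filename[:filename.index('.ko')].replace('-', '_')
```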

289
salt/modules/schedule.py Normal file
View File

@ -0,0 +1,289 @@
# -*- coding: utf-8 -*-
'''
Module for managing the Salt schedule on a minion
.. versionadded:: Helium
'''
# Import Python libs
import os
import yaml
import salt.utils
__proxyenabled__ = ['*']
import logging
log = logging.getLogger(__name__)
SCHEDULE_CONF = [
'function',
'splay',
'range',
'when',
'returner',
'jid_include',
'args',
'kwargs',
'_seconds',
'seconds',
'minutes',
'hours',
'days'
]
def list():
'''
List the jobs currently scheduled on the minion
CLI Example:
.. code-block:: bash
salt '*' schedule.list
'''
schedule = __opts__['schedule']
for job in schedule.keys():
if job.startswith('_'):
del schedule[job]
continue
for item in schedule[job].keys():
if item not in SCHEDULE_CONF:
del schedule[job][item]
continue
if schedule[job][item] == 'true':
schedule[job][item] = True
if schedule[job][item] == 'false':
schedule[job][item] = False
if '_seconds' in schedule[job].keys():
schedule[job]['seconds'] = schedule[job]['_seconds']
del schedule[job]['_seconds']
if schedule:
tmp = {'schedule': schedule}
yaml_out = yaml.safe_dump(tmp, default_flow_style=False)
return yaml_out
else:
return None
def purge():
'''
Purge all the jobs currently scheduled on the minion
CLI Example:
.. code-block:: bash
salt '*' schedule.purge
'''
ret = {'comment': [],
'result': True}
schedule = __opts__['schedule']
for job in schedule.keys():
if job.startswith('_'):
continue
out = __salt__['event.fire']({'job': job, 'func': 'delete'}, 'manage_schedule')
if out:
ret['comment'].append('Deleted job: {0} from schedule.'.format(job))
else:
ret['comment'].append('Failed to delete job {0} from schedule.'.format(job))
ret['result'] = False
return ret
def delete(name):
'''
Delete a job from the minion's schedule
CLI Example:
.. code-block:: bash
salt '*' schedule.delete job1
'''
ret = {'comment': [],
'result': True}
if not name:
ret['comment'] = 'Job name is required.'
ret['result'] = False
if name in __opts__['schedule']:
out = __salt__['event.fire']({'job': name, 'func': 'delete'}, 'manage_schedule')
if out:
ret['comment'] = 'Deleted Job {0} from schedule.'.format(name)
else:
ret['comment'] = 'Failed to delete job {0} from schedule.'.format(name)
ret['result'] = False
else:
ret['comment'] = 'Job {0} does not exist.'.format(name)
ret['result'] = False
return ret
def add(name, **kwargs):
'''
Add a job to the schedule
CLI Example:
.. code-block:: bash
salt '*' schedule.add job1 function='test.ping' seconds=3600
'''
ret = {'comment': [],
'result': True}
if name in __opts__['schedule']:
ret['comment'] = 'Job {0} already exists in schedule.'.format(name)
ret['result'] = True
return ret
if not name:
ret['comment'] = 'Job name is required.'
ret['result'] = False
schedule = {'function': kwargs['function']}
time_conflict = False
for item in ['seconds', 'minutes', 'hours', 'days']:
if item in kwargs and 'when' in kwargs:
time_conflict = True
if time_conflict:
return 'Error: Unable to use "seconds", "minutes", "hours", or "days" with "when" option.'
for item in ['seconds', 'minutes', 'hours', 'days']:
if item in kwargs:
schedule[item] = kwargs[item]
if 'job_args' in kwargs:
schedule['args'] = kwargs['job_args']
if 'job_kwargs' in kwargs:
schedule['kwargs'] = kwargs['job_kwargs']
for item in ['splay', 'range', 'when', 'returner', 'jid_include']:
if item in kwargs:
schedule[item] = kwargs[item]
out = __salt__['event.fire']({'name': name, 'schedule': schedule, 'func': 'add'}, 'manage_schedule')
if out:
ret['comment'] = 'Added job: {0} to schedule.'.format(name)
else:
ret['comment'] = 'Failed to add job {0} to schedule.'.format(name)
ret['result'] = False
return ret
def modify(name, **kwargs):
'''
Modify an existing job in the schedule
CLI Example:
.. code-block:: bash
salt '*' schedule.modify job1 function='test.ping' seconds=3600
'''
ret = {'comment': [],
'result': True}
if name not in __opts__['schedule']:
ret['comment'] = 'Job {0} does not exist in schedule.'.format(name)
ret['result'] = False
return ret
schedule = {'function': kwargs['function']}
time_conflict = False
for item in ['seconds', 'minutes', 'hours', 'days']:
if item in kwargs and 'when' in kwargs:
time_conflict = True
if time_conflict:
return 'Error: Unable to use "seconds", "minutes", "hours", or "days" with "when" option.'
for item in ['seconds', 'minutes', 'hours', 'days']:
if item in kwargs:
schedule[item] = kwargs[item]
if 'job_args' in kwargs:
schedule['args'] = kwargs['job_args']
if 'job_kwargs' in kwargs:
schedule['kwargs'] = kwargs['job_kwargs']
for item in ['splay', 'range', 'when', 'returner', 'jid_include']:
if item in kwargs:
schedule[item] = kwargs[item]
out = __salt__['event.fire']({'name': name, 'schedule': schedule, 'func': 'modify'}, 'manage_schedule')
if out:
ret['comment'] = 'Modified job: {0} in schedule.'.format(name)
else:
ret['comment'] = 'Failed to modify job {0} in schedule.'.format(name)
ret['result'] = False
return ret
def save():
'''
CLI Example:
.. code-block:: bash
salt '*' schedule.save
'''
ret = {'comment': [],
'result': True}
schedule = __opts__['schedule']
for job in schedule.keys():
if job.startswith('_'):
del schedule[job]
continue
for item in schedule[job].keys():
if item not in SCHEDULE_CONF:
del schedule[job][item]
continue
if schedule[job][item] == 'true':
schedule[job][item] = True
if schedule[job][item] == 'false':
schedule[job][item] = False
if '_seconds' in schedule[job].keys():
schedule[job]['seconds'] = schedule[job]['_seconds']
del schedule[job]['_seconds']
# move this file into a configurable opt
sfn = '{0}/{1}/schedule.conf'.format(__opts__['config_dir'], os.path.dirname(__opts__['default_include']))
if schedule:
tmp = {'schedule': schedule}
yaml_out = yaml.safe_dump(tmp, default_flow_style=False)
else:
yaml_out = ''
try:
with salt.utils.fopen(sfn, 'w+') as fp_:
fp_.write(yaml_out)
ret['comment'] = 'Schedule saved to {0}.'.format(sfn)
except (IOError, OSError):
ret['comment'] = 'Unable to write to schedule file at {0}. Check permissions.'.format(sfn)
ret['result'] = False
return ret
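Both `schedule.list` and `schedule.save` above share the same normalization: drop private (`_`-prefixed) jobs and unknown keys, coerce `'true'`/`'false'` strings to booleans, and restore the original interval from `_seconds`. A sketch of that shared logic (`normalize_schedule` is an assumed name; the real functions mutate `__opts__['schedule']` in place):

```python
SCHEDULE_KEYS = ['function', 'splay', 'range', 'when', 'returner',
                 'jid_include', 'args', 'kwargs', 'seconds',
                 'minutes', 'hours', 'days']


def normalize_schedule(schedule):
    """Return a cleaned copy of a minion's schedule dict."""
    out = {}
    for job, conf in schedule.items():
        if job.startswith('_'):
            continue  # private jobs (e.g. __mine_interval) are hidden
        clean = {k: v for k, v in conf.items()
                 if k in SCHEDULE_KEYS or k == '_seconds'}
        for key, val in list(clean.items()):
            if val == 'true':
                clean[key] = True
            elif val == 'false':
                clean[key] = False
        if '_seconds' in clean:
            clean['seconds'] = clean.pop('_seconds')
        out[job] = clean
    return out
```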

View File

@ -169,6 +169,22 @@ def gen_password(password, crypt_salt=None, algorithm='sha512'):
return salt.utils.pycrypto.gen_hash(crypt_salt, password, algorithm) return salt.utils.pycrypto.gen_hash(crypt_salt, password, algorithm)
def del_password(name):
'''
Delete the password for the named user
CLI Example:
.. code-block:: bash
salt '*' shadow.del_password username
'''
cmd = 'passwd -d {0}'.format(name)
__salt__['cmd.run'](cmd, output_loglevel='quiet')
uinfo = info(name)
return not uinfo['passwd']
def set_password(name, password, use_usermod=False): def set_password(name, password, use_usermod=False):
''' '''
Set the password for a named user. The password must be a properly defined Set the password for a named user. The password must be a properly defined
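The control flow of the new `shadow.del_password` above: clear the password with `passwd -d`, then report success only if the shadow entry's `passwd` field is now empty. In this sketch `run_cmd` and `get_info` are injected stand-ins for `__salt__['cmd.run']` and `info()` so it stays testable without touching `/etc/shadow`:

```python
def del_password(name, run_cmd, get_info):
    """Clear a user's password and verify the shadow entry is empty."""
    run_cmd('passwd -d {0}'.format(name))
    return not get_info(name)['passwd']
```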

View File

@ -130,7 +130,7 @@ def _get_repo_options(**kwargs):
repo_arg = '' repo_arg = ''
if fromrepo: if fromrepo:
log.info('Restricting to repo {0!r}'.format(fromrepo)) log.info('Restricting to repo {0!r}'.format(fromrepo))
repo_arg = ('--disablerepo={0!r} --enablerepo={1!r}' repo_arg = ('--disablerepo={0!r} --enablerepo={1!r} '
.format('*', fromrepo)) .format('*', fromrepo))
else: else:
repo_arg = '' repo_arg = ''
@ -726,6 +726,10 @@ def install(name=None,
Disable exclude from main, for a repo or for everything. Disable exclude from main, for a repo or for everything.
(e.g., ``yum --disableexcludes='main'``) (e.g., ``yum --disableexcludes='main'``)
branch
Specifies the branch on the YUM server.
(e.g., ``yum --branch='test'``)
.. versionadded:: Helium .. versionadded:: Helium
@ -785,6 +789,10 @@ def install(name=None,
'package targets') 'package targets')
repo_arg = _get_repo_options(fromrepo=fromrepo, **kwargs) repo_arg = _get_repo_options(fromrepo=fromrepo, **kwargs)
# Support branch parameter for yum
branch = kwargs.get('branch', '')
if branch:
repo_arg += '--branch={0!r}'.format(branch)
exclude_arg = _get_excludes_option(**kwargs) exclude_arg = _get_excludes_option(**kwargs)
old = list_pkgs() old = list_pkgs()

View File

@ -55,6 +55,7 @@ def _changes(name,
createhome=True, createhome=True,
password=None, password=None,
enforce_password=True, enforce_password=True,
empty_password=False,
shell=None, shell=None,
fullname='', fullname='',
roomnumber='', roomnumber='',
@ -160,6 +161,7 @@ def present(name,
createhome=True, createhome=True,
password=None, password=None,
enforce_password=True, enforce_password=True,
empty_password=False,
shell=None, shell=None,
unique=True, unique=True,
system=False, system=False,
@ -229,6 +231,9 @@ def present(name,
"password" field. This option will be ignored if "password" is not "password" field. This option will be ignored if "password" is not
specified. specified.
empty_password
Set to True to enable password-less login for the user
shell shell
The login shell, defaults to the system default shell The login shell, defaults to the system default shell
@ -325,6 +330,9 @@ def present(name,
if gid_from_name: if gid_from_name:
gid = __salt__['file.group_to_gid'](name) gid = __salt__['file.group_to_gid'](name)
if empty_password:
__salt__['shadow.del_password'](name)
changes = _changes(name, changes = _changes(name,
uid, uid,
gid, gid,
@ -335,6 +343,7 @@ def present(name,
createhome, createhome,
password, password,
enforce_password, enforce_password,
empty_password,
shell, shell,
fullname, fullname,
roomnumber, roomnumber,
@ -360,7 +369,7 @@ def present(name,
lshad = __salt__['shadow.info'](name) lshad = __salt__['shadow.info'](name)
pre = __salt__['user.info'](name) pre = __salt__['user.info'](name)
for key, val in changes.items(): for key, val in changes.items():
if key == 'passwd': if key == 'passwd' and not empty_password:
__salt__['shadow.set_password'](name, password) __salt__['shadow.set_password'](name, password)
continue continue
if key == 'date': if key == 'date':
@ -419,6 +428,7 @@ def present(name,
createhome, createhome,
password, password,
enforce_password, enforce_password,
empty_password,
shell, shell,
fullname, fullname,
roomnumber, roomnumber,
@ -464,7 +474,7 @@ def present(name,
ret['comment'] = 'New user {0} created'.format(name) ret['comment'] = 'New user {0} created'.format(name)
ret['changes'] = __salt__['user.info'](name) ret['changes'] = __salt__['user.info'](name)
if 'shadow.info' in __salt__ and not salt.utils.is_windows(): if 'shadow.info' in __salt__ and not salt.utils.is_windows():
if password: if password and not empty_password:
__salt__['shadow.set_password'](name, password) __salt__['shadow.set_password'](name, password)
spost = __salt__['shadow.info'](name) spost = __salt__['shadow.info'](name)
if spost['passwd'] != password: if spost['passwd'] != password:

View File

@ -112,7 +112,10 @@ def get_event(node, sock_dir=None, transport='zeromq', opts=None, listen=True):
return SaltEvent(node, sock_dir, opts) return SaltEvent(node, sock_dir, opts)
elif transport == 'raet': elif transport == 'raet':
import salt.utils.raetevent import salt.utils.raetevent
return salt.utils.raetevent.SaltEvent(node, sock_dir, listen) return salt.utils.raetevent.SaltEvent(node,
sock_dir=sock_dir,
listen=listen,
opts=opts)
def tagify(suffix='', prefix='', base=SALT): def tagify(suffix='', prefix='', base=SALT):

View File

@ -6,6 +6,7 @@ This module is used to manage events via RAET
''' '''
# Import python libs # Import python libs
import os
import logging import logging
import time import time
from collections import MutableMapping from collections import MutableMapping
@ -15,6 +16,7 @@ import salt.payload
import salt.loader import salt.loader
import salt.state import salt.state
import salt.utils.event import salt.utils.event
from salt import syspaths
from raet import raeting from raet import raeting
from raet.lane.stacking import LaneStack from raet.lane.stacking import LaneStack
from raet.lane.yarding import RemoteYard from raet.lane.yarding import RemoteYard
@ -26,21 +28,30 @@ class SaltEvent(object):
''' '''
The base class used to manage salt events The base class used to manage salt events
''' '''
def __init__(self, node, sock_dir=None, listen=True): def __init__(self, node, sock_dir=None, listen=True, opts=None):
''' '''
Set up the stack and remote yard Set up the stack and remote yard
''' '''
self.node = node self.node = node
self.sock_dir = sock_dir self.sock_dir = sock_dir
self.listen = listen self.listen = listen
if opts is None:
opts = {}
self.opts = opts
self.__prep_stack() self.__prep_stack()
def __prep_stack(self): def __prep_stack(self):
self.yid = salt.utils.gen_jid() self.yid = salt.utils.gen_jid()
name = 'event' + self.yid
cachedir = self.opts.get('cachedir', os.path.join(syspaths.CACHE_DIR, self.node))
basedirpath = os.path.abspath(
os.path.join(cachedir, 'raet'))
self.connected = False self.connected = False
self.stack = LaneStack( self.stack = LaneStack(
name=name,
yid=self.yid, yid=self.yid,
lanename=self.node, lanename=self.node,
basedirpath=basedirpath,
sockdirpath=self.sock_dir) sockdirpath=self.sock_dir)
self.stack.Pk = raeting.packKinds.pack self.stack.Pk = raeting.packKinds.pack
self.router_yard = RemoteYard( self.router_yard = RemoteYard(

View File

@ -189,6 +189,23 @@ class Schedule(object):
return self.functions['config.merge'](opt, {}, omit_master=True) return self.functions['config.merge'](opt, {}, omit_master=True)
return self.opts.get(opt, {}) return self.opts.get(opt, {})
def delete_job(self, name):
# ensure job exists, then delete it
if name in self.opts['schedule']:
del self.opts['schedule'][name]
# remove from self.intervals
if name in self.intervals:
del self.intervals[name]
def add_job(self, name, schedule):
self.opts['schedule'][name] = schedule
def modify_job(self, name, schedule):
if name in self.opts['schedule']:
self.delete_job(name)
self.opts['schedule'][name] = schedule
def handle_func(self, func, data): def handle_func(self, func, data):
''' '''
Execute this method in a multiprocess or thread Execute this method in a multiprocess or thread
@ -311,6 +328,7 @@ class Schedule(object):
Evaluate and execute the schedule Evaluate and execute the schedule
''' '''
schedule = self.option('schedule') schedule = self.option('schedule')
#log.debug('calling eval {0}'.format(schedule))
if not isinstance(schedule, dict): if not isinstance(schedule, dict):
return return
for job, data in schedule.items(): for job, data in schedule.items():
@ -335,8 +353,12 @@ class Schedule(object):
when = 0 when = 0
seconds = 0 seconds = 0
# clean this up time_conflict = False
if ('seconds' in data or 'hours' in data or 'minutes' in data or 'days' in data) and 'when' in data: for item in ['seconds', 'minutes', 'hours', 'days']:
if item in data and 'when' in data:
time_conflict = True
if time_conflict:
log.info('Unable to use "seconds", "minutes", "hours", or "days" with "when" option. Ignoring.') log.info('Unable to use "seconds", "minutes", "hours", or "days" with "when" option. Ignoring.')
continue continue
@ -440,6 +462,7 @@ class Schedule(object):
else: else:
if now - self.intervals[job] >= seconds: if now - self.intervals[job] >= seconds:
run = True run = True
else: else:
if 'splay' in data: if 'splay' in data:
if 'when' in data: if 'when' in data:
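The `delete_job`/`add_job`/`modify_job` methods added to `salt.utils.schedule.Schedule` above manage two dicts in tandem: the job definitions and the per-job last-run timestamps. A sketch with `jobs` standing in for `self.opts['schedule']` and `intervals` for `self.intervals` (the class name `JobTable` is an assumption):

```python
class JobTable(object):
    """Minimal model of Schedule's job bookkeeping."""

    def __init__(self):
        self.jobs = {}
        self.intervals = {}  # job name -> last-run timestamp

    def delete_job(self, name):
        self.jobs.pop(name, None)
        self.intervals.pop(name, None)  # forget the last-run time too

    def add_job(self, name, schedule):
        self.jobs[name] = schedule

    def modify_job(self, name, schedule):
        self.delete_job(name)  # drops the old interval, so timing restarts
        self.jobs[name] = schedule
```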

View File

@ -282,7 +282,7 @@ class TestDaemon(object):
self.pre_setup_minions() self.pre_setup_minions()
self.setup_minions() self.setup_minions()
if self.parser.options.ssh: if getattr(self.parser.options, 'ssh', False):
self.prep_ssh() self.prep_ssh()
if self.parser.options.sysinfo: if self.parser.options.sysinfo:

View File

@ -7,7 +7,7 @@
from salttesting.unit import skipIf from salttesting.unit import skipIf
from salttesting.helpers import ensure_in_syspath from salttesting.helpers import ensure_in_syspath
from salttesting.mock import MagicMock, patch, NO_MOCK, NO_MOCK_REASON from salttesting.mock import MagicMock, patch, NO_MOCK, NO_MOCK_REASON
ensure_in_syspath('../') ensure_in_syspath('../..')
# Import salt libs # Import salt libs
import integration import integration

View File

@ -8,7 +8,7 @@ from salttesting import skipIf
from salttesting.helpers import ensure_in_syspath from salttesting.helpers import ensure_in_syspath
from salttesting.mock import patch, NO_MOCK, NO_MOCK_REASON from salttesting.mock import patch, NO_MOCK, NO_MOCK_REASON
ensure_in_syspath('../') ensure_in_syspath('../..')
# Import Python libs # Import Python libs
import os import os
@ -23,6 +23,10 @@ gitfs.__opts__ = {'gitfs_remotes': [''],
'fileserver_backend': 'gitfs', 'fileserver_backend': 'gitfs',
'gitfs_base': 'master', 'gitfs_base': 'master',
'fileserver_events': True, 'fileserver_events': True,
'transport': 'zeromq',
'gitfs_mountpoint': '',
'gitfs_env_whitelist': [],
'gitfs_env_blacklist': []
} }
load = {'saltenv': 'base'} load = {'saltenv': 'base'}
@ -88,7 +92,11 @@ class GitFSTest(integration.ModuleCase):
'gitfs_remotes': ['file://' + self.tmp_repo_git], 'gitfs_remotes': ['file://' + self.tmp_repo_git],
'sock_dir': self.master_opts['sock_dir']}): 'sock_dir': self.master_opts['sock_dir']}):
ret = gitfs.find_file('testfile') ret = gitfs.find_file('testfile')
expected_ret = {'path': '/tmp/salttest/cache/gitfs/refs/master/testfile', expected_ret = {'path': os.path.join(self.master_opts['cachedir'],
'gitfs',
'refs',
'base',
'testfile'),
'rel': 'testfile'} 'rel': 'testfile'}
self.assertDictEqual(ret, expected_ret) self.assertDictEqual(ret, expected_ret)
@ -140,8 +148,18 @@ class GitFSTest(integration.ModuleCase):
ret = gitfs.serve_file(load, fnd) ret = gitfs.serve_file(load, fnd)
self.assertDictEqual({ self.assertDictEqual({
'data': 'Scene 24\n\n \n OLD MAN: Ah, hee he he ha!\n ARTHUR: And this enchanter of whom you speak, he has seen the grail?\n OLD MAN: Ha ha he he he he!\n ARTHUR: Where does he live? Old man, where does he live?\n OLD MAN: He knows of a cave, a cave which no man has entered.\n ARTHUR: And the Grail... The Grail is there?\n OLD MAN: Very much danger, for beyond the cave lies the Gorge\n of Eternal Peril, which no man has ever crossed.\n ARTHUR: But the Grail! Where is the Grail!?\n OLD MAN: Seek you the Bridge of Death.\n ARTHUR: The Bridge of Death, which leads to the Grail?\n OLD MAN: Hee hee ha ha!\n\n', 'data': 'Scene 24\n\n \n OLD MAN: Ah, hee he he ha!\n ARTHUR: '
'dest': 'testfile'}, ret) 'And this enchanter of whom you speak, he has seen the grail?\n '
'OLD MAN: Ha ha he he he he!\n ARTHUR: Where does he live? '
'Old man, where does he live?\n OLD MAN: He knows of a cave, '
'a cave which no man has entered.\n ARTHUR: And the Grail... '
'The Grail is there?\n OLD MAN: Very much danger, for beyond '
'the cave lies the Gorge\n of Eternal Peril, which no man '
'has ever crossed.\n ARTHUR: But the Grail! Where is the Grail!?\n '
'OLD MAN: Seek you the Bridge of Death.\n ARTHUR: The Bridge of '
'Death, which leads to the Grail?\n OLD MAN: Hee hee ha ha!\n\n',
'dest': 'testfile'},
ret)
if __name__ == '__main__': if __name__ == '__main__':

View File

@ -7,7 +7,7 @@
from salttesting import skipIf from salttesting import skipIf
from salttesting.helpers import ensure_in_syspath from salttesting.helpers import ensure_in_syspath
 from salttesting.mock import patch, NO_MOCK, NO_MOCK_REASON
-ensure_in_syspath('../')
+ensure_in_syspath('../..')

 # Import salt libs
 import integration
@@ -22,11 +22,15 @@ import os
 @skipIf(NO_MOCK, NO_MOCK_REASON)
 class RootsTest(integration.ModuleCase):
     def setUp(self):
-        self.master_opts['file_roots']['base'] = [os.path.join(integration.FILES, 'file', 'base')]
+        if integration.TMP_STATE_TREE not in self.master_opts['file_roots']['base']:
+            # We need to setup the file roots
+            self.master_opts['file_roots']['base'] = [os.path.join(integration.FILES, 'file', 'base')]

     def test_file_list(self):
-        with patch.dict(roots.__opts__, {'file_roots': self.master_opts['file_roots'],
+        with patch.dict(roots.__opts__, {'cachedir': self.master_opts['cachedir'],
+                                         'file_roots': self.master_opts['file_roots'],
                                          'fileserver_ignoresymlinks': False,
                                          'fileserver_followsymlinks': False,
                                          'file_ignore_regex': False,
@@ -102,7 +106,10 @@ class RootsTest(integration.ModuleCase):
         self.assertDictEqual(ret, {'hsum': '98aa509006628302ce38ce521a7f805f', 'hash_type': 'md5'})

     def test_file_list_emptydirs(self):
-        with patch.dict(roots.__opts__, {'file_roots': self.master_opts['file_roots'],
+        if integration.TMP_STATE_TREE not in self.master_opts['file_roots']['base']:
+            self.skipTest('This test fails when using tests/runtests.py. salt-runtests will be available soon.')
+        with patch.dict(roots.__opts__, {'cachedir': self.master_opts['cachedir'],
+                                         'file_roots': self.master_opts['file_roots'],
                                          'fileserver_ignoresymlinks': False,
                                          'fileserver_followsymlinks': False,
                                          'file_ignore_regex': False,
@@ -111,11 +118,14 @@ class RootsTest(integration.ModuleCase):
         self.assertIn('empty_dir', ret)

     def test_dir_list(self):
-        with patch.dict(roots.__opts__, {'file_roots': self.master_opts['file_roots'],
-                                         'fileserver_ignoresymlinks': False,
-                                         'fileserver_followsymlinks': False,
-                                         'file_ignore_regex': False,
-                                         'file_ignore_glob': False}):
+        if integration.TMP_STATE_TREE not in self.master_opts['file_roots']['base']:
+            self.skipTest('This test fails when using tests/runtests.py. salt-runtests will be available soon.')
+        with patch.dict(roots.__opts__, {'cachedir': self.master_opts['cachedir'],
+                                         'file_roots': self.master_opts['file_roots'],
+                                         'fileserver_ignoresymlinks': False,
+                                         'fileserver_followsymlinks': False,
+                                         'file_ignore_regex': False,
+                                         'file_ignore_glob': False}):
             ret = roots.dir_list({'saltenv': 'base'})
             self.assertIn('empty_dir', ret)
View File
@@ -85,34 +85,8 @@ class CallTest(integration.ShellCase, integration.ShellCaseCommonTestsMixIn):
     @skipIf(sys.platform.startswith('win'), 'This test does not apply on Win')
     def test_return(self):
-        config_dir = '/tmp/salttest'
-        minion_config_file = os.path.join(config_dir, 'minion')
-        minion_config = {
-            'id': 'minion_test_issue_2731',
-            'master': 'localhost',
-            'master_port': 64506,
-            'root_dir': '/tmp/salttest',
-            'pki_dir': 'pki',
-            'cachedir': 'cachedir',
-            'sock_dir': 'minion_sock',
-            'open_mode': True,
-            'log_file': '/tmp/salttest/minion_test_issue_2731',
-            'log_level': 'quiet',
-            'log_level_logfile': 'info'
-        }
-        # Remove existing logfile
-        if os.path.isfile('/tmp/salttest/minion_test_issue_2731'):
-            os.unlink('/tmp/salttest/minion_test_issue_2731')
-        # Let's first test with a master running
-        open(minion_config_file, 'w').write(
-            yaml.dump(minion_config, default_flow_style=False)
-        )
-        out = self.run_call('-c {0} cmd.run "echo returnTOmaster"'.format(
-            os.path.join(integration.INTEGRATION_TEST_DIR, 'files', 'conf')))
-        jobs = [a for a in self.run_run('-c {0} jobs.list_jobs'.format(
-            os.path.join(integration.INTEGRATION_TEST_DIR, 'files', 'conf')))]
+        self.run_call('-c {0} cmd.run "echo returnTOmaster"'.format(self.get_config_dir()))
+        jobs = [a for a in self.run_run('-c {0} jobs.list_jobs'.format(self.get_config_dir()))]

         self.assertTrue(True in ['returnTOmaster' in j for j in jobs])
         # lookback jid
@@ -129,38 +103,43 @@ class CallTest(integration.ShellCase, integration.ShellCaseCommonTestsMixIn):
         assert idx > 0
         assert jid
         master_out = [
-            a for a in self.run_run('-c {0} jobs.lookup_jid {1}'.format(
-                os.path.join(integration.INTEGRATION_TEST_DIR,
-                             'files',
-                             'conf'),
-                jid))]
+            a for a in self.run_run('-c {0} jobs.lookup_jid {1}'.format(self.get_config_dir(), jid))
+        ]
         self.assertTrue(True in ['returnTOmaster' in a for a in master_out])

     @skipIf(sys.platform.startswith('win'), 'This test does not apply on Win')
     def test_issue_2731_masterless(self):
-        config_dir = '/tmp/salttest'
+        root_dir = os.path.join(integration.TMP, 'issue-2731')
+        config_dir = os.path.join(root_dir, 'conf')
         minion_config_file = os.path.join(config_dir, 'minion')
+        logfile = os.path.join(root_dir, 'minion_test_issue_2731')
+
+        if not os.path.isdir(config_dir):
+            os.makedirs(config_dir)
+
+        master_config = yaml.load(open(self.get_config_file_path('master')).read())
+        master_root_dir = master_config['root_dir']
         this_minion_key = os.path.join(
-            config_dir, 'pki', 'minions', 'minion_test_issue_2731'
+            master_root_dir, 'pki', 'minions', 'minion_test_issue_2731'
         )
         minion_config = {
             'id': 'minion_test_issue_2731',
             'master': 'localhost',
             'master_port': 64506,
-            'root_dir': '/tmp/salttest',
+            'root_dir': master_root_dir,
             'pki_dir': 'pki',
             'cachedir': 'cachedir',
             'sock_dir': 'minion_sock',
             'open_mode': True,
-            'log_file': '/tmp/salttest/minion_test_issue_2731',
+            'log_file': logfile,
             'log_level': 'quiet',
             'log_level_logfile': 'info'
         }
         # Remove existing logfile
-        if os.path.isfile('/tmp/salttest/minion_test_issue_2731'):
-            os.unlink('/tmp/salttest/minion_test_issue_2731')
+        if os.path.isfile(logfile):
+            os.unlink(logfile)
         start = datetime.now()
         # Let's first test with a master running
View File
@@ -109,6 +109,13 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
             action='store_true',
             help='Run unit tests'
         )
+        self.test_selection_group.add_option(
+            '--fileserver-tests',
+            dest='fileserver',
+            default=False,
+            action='store_true',
+            help='Run Fileserver tests'
+        )
         self.test_selection_group.add_option(
             '-o',
             '--outputter',
@@ -137,7 +144,8 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
                 self.options.module, self.options.client, self.options.shell,
                 self.options.unit, self.options.state, self.options.runner,
                 self.options.loader, self.options.name, self.options.outputter,
-                os.geteuid() != 0, not self.options.run_destructive)):
+                self.options.fileserver, os.geteuid() != 0,
+                not self.options.run_destructive)):
             self.error(
                 'No sense in generating the tests coverage report when '
                 'not running the full test suite, including the '
@@ -149,7 +157,8 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
         if not any((self.options.module, self.options.client,
                     self.options.shell, self.options.unit, self.options.state,
                     self.options.runner, self.options.loader,
-                    self.options.name, self.options.outputter)):
+                    self.options.name, self.options.outputter,
+                    self.options.fileserver)):
             self.options.module = True
             self.options.client = True
             self.options.shell = True
@@ -158,6 +167,7 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
             self.options.state = True
             self.options.loader = True
             self.options.outputter = True
+            self.options.fileserver = True

         self.start_coverage(
             branch=True,
@@ -192,6 +202,7 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
                 self.options.client or
                 self.options.loader or
                 self.options.outputter or
+                self.options.fileserver or
                 named_tests):
             # We're either not running any of runner, state, module and client
             # tests, or, we're only running unittests by passing --unit or by
@@ -240,7 +251,8 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
         if not any([self.options.client, self.options.module,
                     self.options.runner, self.options.shell,
                     self.options.state, self.options.loader,
-                    self.options.outputter, self.options.name]):
+                    self.options.outputter, self.options.name,
+                    self.options.fileserver]):
             return status

         with TestDaemon(self):
@@ -264,6 +276,8 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
             status.append(self.run_integration_suite('shell', 'Shell'))
         if self.options.outputter:
             status.append(self.run_integration_suite('output', 'Outputter'))
+        if self.options.fileserver:
+            status.append(self.run_integration_suite('fileserver', 'Fileserver'))
         return status

     def run_unit_tests(self):
View File
@@ -14,7 +14,7 @@ ensure_in_syspath('../')

 # Import Salt libs
 import integration
-from salt import client
+from salt import client, config
 from salt.exceptions import EauthAuthenticationError, SaltInvocationError
@@ -22,15 +22,14 @@ from salt.exceptions import EauthAuthenticationError, SaltInvocationError
 class LocalClientTestCase(TestCase,
                           integration.AdaptedConfigurationTestCaseMixIn):
     def setUp(self):
-        if not os.path.exists('/tmp/salttest'):
-            # This path is hardcoded in the configuration file
-            os.makedirs('/tmp/salttest/cache')
+        master_config_path = self.get_config_file_path('master')
+        master_config = config.master_config(master_config_path)
+        if not os.path.exists(master_config['cachedir']):
+            os.makedirs(master_config['cachedir'])
         if not os.path.exists(integration.TMP_CONF_DIR):
             os.makedirs(integration.TMP_CONF_DIR)
-        self.local_client = client.LocalClient(
-            self.get_config_file_path('master')
-        )
+        self.local_client = client.LocalClient(mopts=master_config)

     def test_create_local_client(self):
         local_client = client.LocalClient(self.get_config_file_path('master'))
View File
@@ -93,7 +93,7 @@ def _fopen_side_effect_etc_hosts(filename):
     _unhandled_mock_read(filename)

-class ConfigTestCase(TestCase):
+class ConfigTestCase(TestCase, integration.AdaptedConfigurationTestCaseMixIn):
     def test_proper_path_joining(self):
         fpath = tempfile.mktemp()
         try:
@@ -335,31 +335,28 @@ class ConfigTestCase(TestCase):
         shutil.rmtree(tempdir)

     def test_syndic_config(self):
-        syndic_conf_path = os.path.join(
-            integration.INTEGRATION_TEST_DIR, 'files', 'conf', 'syndic'
-        )
-        minion_config_path = os.path.join(
-            integration.INTEGRATION_TEST_DIR, 'files', 'conf', 'minion'
-        )
+        syndic_conf_path = self.get_config_file_path('syndic')
+        minion_conf_path = self.get_config_file_path('minion')
         syndic_opts = sconfig.syndic_config(
-            syndic_conf_path, minion_config_path
+            syndic_conf_path, minion_conf_path
         )
         syndic_opts.update(salt.minion.resolve_dns(syndic_opts))
+        root_dir = syndic_opts['root_dir']
         # id & pki dir are shared & so configured on the minion side
         self.assertEqual(syndic_opts['id'], 'minion')
-        self.assertEqual(syndic_opts['pki_dir'], '/tmp/salttest/pki')
+        self.assertEqual(syndic_opts['pki_dir'], os.path.join(root_dir, 'pki'))
         # the rest is configured master side
         self.assertEqual(syndic_opts['master_uri'], 'tcp://127.0.0.1:54506')
         self.assertEqual(syndic_opts['master_port'], 54506)
         self.assertEqual(syndic_opts['master_ip'], '127.0.0.1')
         self.assertEqual(syndic_opts['master'], 'localhost')
-        self.assertEqual(syndic_opts['sock_dir'], '/tmp/salttest/minion_sock')
-        self.assertEqual(syndic_opts['cachedir'], '/tmp/salttest/cachedir')
-        self.assertEqual(syndic_opts['log_file'], '/tmp/salttest/osyndic.log')
-        self.assertEqual(syndic_opts['pidfile'], '/tmp/salttest/osyndic.pid')
+        self.assertEqual(syndic_opts['sock_dir'], os.path.join(root_dir, 'minion_sock'))
+        self.assertEqual(syndic_opts['cachedir'], os.path.join(root_dir, 'cachedir'))
+        self.assertEqual(syndic_opts['log_file'], os.path.join(root_dir, 'osyndic.log'))
+        self.assertEqual(syndic_opts['pidfile'], os.path.join(root_dir, 'osyndic.pid'))
         # Show that the options of localclient that repub to local master
         # are not merged with syndic ones
-        self.assertEqual(syndic_opts['_master_conf_file'], minion_config_path)
+        self.assertEqual(syndic_opts['_master_conf_file'], minion_conf_path)
         self.assertEqual(syndic_opts['_minion_conf_file'], syndic_conf_path)

     def test_check_dns_deprecation_warning(self):