mirror of https://github.com/valitydev/salt.git
synced 2024-11-07 08:58:59 +00:00
commit 500afb29fe
@@ -450,7 +450,7 @@
 #state_output: full
 
 # The state_output_diff setting changes whether or not the output from
-# sucessful states is returned. Useful when even the terse output of these
+# successful states is returned. Useful when even the terse output of these
 # states is cluttering the logs. Set it to True to ignore them.
 #state_output_diff: False
 
@@ -4186,7 +4186,7 @@ setDocument = Sizzle.setDocument = function( node ) {
	// Regex strategy adopted from Diego Perini
	assert(function( div ) {
		// Select is set to empty string on purpose
-		// This is to test IE's treatment of not explictly
+		// This is to test IE's treatment of not explicitly
		// setting a boolean content attribute,
		// since its presence should be enough
		// http://bugs.jquery.com/ticket/12359
@@ -90,7 +90,7 @@ ReqServer
 ---------
 
 The Salt request server takes requests and distributes them to available MWorker
-processes for processing. It also recieves replies back from minions.
+processes for processing. It also receives replies back from minions.
 
 The ReqServer is bound to the following:
 * TCP: 4506
@@ -199,7 +199,7 @@ minion.
 Job Flow
 --------
 
-When a salt minion starts up, it attempts to connect to the Pubisher and the
+When a salt minion starts up, it attempts to connect to the Publisher and the
 ReqServer on the salt master. It then attempts to authenticate and once the
 minion has successfully authenticated, it simply listens for jobs.
 
@@ -73,13 +73,13 @@ and other syndics that are bound to them further down in the hierarchy. When
 events and job return data are generated by minions, they aggregated back,
 through the same syndic(s), to the master which issued the command.
 
-The master sitting at the top of the hierachy (the Master of Masters) will *not*
+The master sitting at the top of the hierarchy (the Master of Masters) will *not*
 be running the ``salt-syndic`` daemon. It will have the ``salt-master``
 daemon running, and optionally, the ``salt-minion`` daemon. Each syndic
 connected to an upper-level master will have both the ``salt-master`` and the
 ``salt-syndic`` daemon running, and optionally, the ``salt-minion`` daemon.
 
-Nodes on the lowest points of the hierarchy (minions which do not propogate
+Nodes on the lowest points of the hierarchy (minions which do not propagate
 data to another level) will only have the ``salt-minion`` daemon running. There
 is no need for either ``salt-master`` or ``salt-syndic`` to be running on a
 standard minion.
@@ -66,13 +66,13 @@ Limitations
 ===========
 
 The 2014.7 release of RAET is not complete! The Syndic and Multi Master have
-not been completed yet and these are slated for completetion in the Lithium
+not been completed yet and these are slated for completion in the Lithium
 release.
 
 Also, Salt-Raet allows for more control over the client but these hooks have
-not been implimented yet, thereforre the client still uses the same system
+not been implemented yet, thereforre the client still uses the same system
 as the ZeroMQ client. This means that the extra reliability that RAET exposes
-has not yet been implimented in the CLI client.
+has not yet been implemented in the CLI client.
 
 Why?
 ====
@@ -95,7 +95,7 @@ which out ZeroMQ topologies can't match.
 
 Many of the proposed features are still under development and will be
 announced as they enter proff of concept phases, but these features include
-`salt-fuse` - a filesystem over salt, `salt-vt` - a paralell api driven shell
+`salt-fuse` - a filesystem over salt, `salt-vt` - a parallel api driven shell
 over the salt transport and many others.
 
 RAET Reliability
@@ -11,7 +11,7 @@ presents and queueing api, all messages in RAET are made available to via
 queues. This is the single most differentiating factor with RAET vs other
 networking libraries, instead of making a socket, a stack is created.
 Instead of calling send() or recv(), messages are placed on the stack to be
-sent and messages that are recived appear on the stack.
+sent and messages that are received appear on the stack.
 
 Different kinds of stacks are also available, currently two stacks exist,
 the UDP stack, and the UXD stack. The UDP stack is used to communicate over
@@ -166,7 +166,7 @@ once with
 $ salt * test.ping
 
 it may cause thousands of minions trying to return their data to the salt-master
-open port 4506. Also causing a flood of syn-flood if the master cant handle that many
+open port 4506. Also causing a flood of syn-flood if the master can't handle that many
 returns at once.
 
 This can be easily avoided with salts batch mode:
@@ -369,7 +369,7 @@ another key-pair has to be added to the setup. Its default name is:
 The combination of the master.* and master_sign.* key-pairs give the
 possibility of generating signatures. The signature of a given message
 is unique and can be verified, if the public-key of the signing-key-pair
-is available to the recepient (the minion).
+is available to the recipient (the minion).
 
 The signature of the masters public-key in master.pub is computed with
 
@@ -194,7 +194,7 @@ To execute a function, use :mod:`salt.function <salt.states.saltmod.function>`:
 Triggering a Highstate
 ~~~~~~~~~~~~~~~~~~~~~~
 
-Wheras with the OverState, a Highstate is run by simply omitting an ``sls`` or
+Whereas with the OverState, a Highstate is run by simply omitting an ``sls`` or
 ``function`` argument, with the Orchestrate Runner the Highstate must
 explicitly be requested by using ``highstate: True``:
 
@@ -220,7 +220,7 @@ class LocalClient(object):
             raise EauthAuthenticationError(
                 'Failed to authenticate! This is most likely because this '
                 'user is not permitted to execute commands, but there is a '
-                'small possibility that a disk error ocurred (check '
+                'small possibility that a disk error occurred (check '
                 'disk/inode usage).'
             )
 
@@ -834,14 +834,13 @@ class LocalClient(object):
         if timeout is None:
             timeout = self.opts['timeout']
         start = int(time.time())
-        timeout_at = start + timeout
 
         # timeouts per minion, id_ -> timeout time
         minion_timeouts = {}
 
         found = set()
         # Check to see if the jid is real, if not return the empty dict
-        if not self.returners['{0}.get_load'.format(self.opts['master_job_cache'])](jid) != {}:
+        if self.returners['{0}.get_load'.format(self.opts['master_job_cache'])](jid) == {}:
             log.warning('jid does not exist')
             yield {}
             # stop the iteration, since the jid is invalid
@@ -849,19 +848,19 @@ class LocalClient(object):
         # Wait for the hosts to check in
         syndic_wait = 0
         last_time = False
+        # iterator for this job's return
+        ret_iter = self.get_returns_no_block(jid)
+        # iterator for the info of this job
+        jinfo_iter = []
+        timeout_at = time.time() + timeout
+        # are there still minions running the job out there
+        # start as True so that we ping at least once
+        minions_running = True
         log.debug(
             'get_iter_returns for jid {0} sent to {1} will timeout at {2}'.format(
                 jid, minions, datetime.fromtimestamp(timeout_at).time()
             )
         )
-        # iterator for this job's return
-        ret_iter = self.get_returns_no_block(jid)
-        # iterator for the info of this job
-        jinfo_iter = []
-        jinfo_timeout = time.time() + timeout
-        # are there still minions running the job out there
-        # start as True so that we ping at least once
-        minions_running = True
         while True:
             # Process events until timeout is reached or all minions have returned
             for raw in ret_iter:
@@ -888,17 +887,10 @@ class LocalClient(object):
                     log.debug('jid {0} return from {1}'.format(jid, raw['data']['id']))
                     yield ret
 
-                # if we have all of the returns, no need for anything fancy
-                if len(found.intersection(minions)) >= len(minions):
+                # if we have all of the returns (and we aren't a syndic), no need for anything fancy
+                if len(found.intersection(minions)) >= len(minions) and not self.opts['order_masters']:
                     # All minions have returned, break out of the loop
                     log.debug('jid {0} found all minions {1}'.format(jid, found))
-                    if self.opts['order_masters']:
-                        if syndic_wait < self.opts.get('syndic_wait', 1):
-                            syndic_wait += 1
-                            timeout_at = int(time.time()) + 1
-                            log.debug('jid {0} syndic_wait {1} will now timeout at {2}'.format(
-                                jid, syndic_wait, datetime.fromtimestamp(timeout_at).time()))
-                            continue
                     break
 
             # let start the timeouts for all remaining minions
@@ -909,7 +901,7 @@ class LocalClient(object):
 
             # if the jinfo has timed out and some minions are still running the job
             # re-do the ping
-            if time.time() > jinfo_timeout and minions_running:
+            if time.time() > timeout_at and minions_running:
                 # need our own event listener, so we don't clobber the class one
                 event = salt.utils.event.get_event(
                     'master',
@@ -928,7 +920,10 @@ class LocalClient(object):
                 jinfo_iter = []
             else:
                 jinfo_iter = self.get_returns_no_block(jinfo['jid'], event=event)
-            jinfo_timeout = time.time() + self.opts['gather_job_timeout']
+            timeout_at = time.time() + self.opts['gather_job_timeout']
+            # if you are a syndic, wait a little longer
+            if self.opts['order_masters']:
+                timeout_at += self.opts.get('syndic_wait', 1)
 
             # check for minions that are running the job still
             for raw in jinfo_iter:
@@ -963,7 +958,7 @@ class LocalClient(object):
             now = time.time()
             # if we have finished waiting, and no minions are running the job
             # then we need to see if each minion has timedout
-            done = (now > jinfo_timeout) and not minions_running
+            done = (now > timeout_at) and not minions_running
             if done:
                 # if all minions have timeod out
                 for id_ in minions - found:
@@ -1002,7 +997,7 @@ class LocalClient(object):
         found = set()
         ret = {}
         # Check to see if the jid is real, if not return the empty dict
-        if not self.returners['{0}.get_load'.format(self.opts['master_job_cache'])](jid) != {}:
+        if self.returners['{0}.get_load'.format(self.opts['master_job_cache'])](jid) == {}:
             log.warning('jid does not exist')
             return ret
 
@@ -1135,7 +1130,7 @@ class LocalClient(object):
         found = set()
         ret = {}
         # Check to see if the jid is real, if not return the empty dict
-        if not self.returners['{0}.get_load'.format(self.opts['master_job_cache'])](jid) != {}:
+        if self.returners['{0}.get_load'.format(self.opts['master_job_cache'])](jid) == {}:
             log.warning('jid does not exist')
             return ret
         # Wait for the hosts to check in
@@ -1241,7 +1236,7 @@ class LocalClient(object):
 
         found = set()
         # Check to see if the jid is real, if not return the empty dict
-        if not self.returners['{0}.get_load'.format(self.opts['master_job_cache'])](jid) != {}:
+        if self.returners['{0}.get_load'.format(self.opts['master_job_cache'])](jid) == {}:
             log.warning('jid does not exist')
             yield {}
             # stop the iteration, since the jid is invalid
@@ -72,7 +72,7 @@ RSTR_RE = r'(?:^|\r?\n)' + RSTR + '(?:\r?\n|$)'
 # 1) Make the _thinnest_ /bin/sh shim (SSH_SH_SHIM) to find the python
 # interpreter and get it invoked
 # 2) Once a qualified python is found start it with the SSH_PY_SHIM
-# 3) The shim is converted to a single semicolon seperated line, so
+# 3) The shim is converted to a single semicolon separated line, so
 # some constructs are needed to keep it clean.
 
 # NOTE:
@@ -734,7 +734,7 @@ class Single(object):
         except TypeError as exc:
             result = 'TypeError encountered executing {0}: {1}'.format(self.fun, exc)
         except Exception as exc:
-            result = 'An Exception occured while executing {0}: {1}'.format(self.fun, exc)
+            result = 'An Exception occurred while executing {0}: {1}'.format(self.fun, exc)
         # Mimic the json data-structure that "salt-call --local" will
         # emit (as seen in ssh_py_shim.py)
         if isinstance(result, dict) and 'local' in result:
@@ -5,7 +5,7 @@ then invoking thin.
 
 This is not intended to be instantiated as a module, rather it is a
 helper script used by salt.client.ssh.Single. It is here, in a
-seperate file, for convenience of development.
+separate file, for convenience of development.
 '''
 
 import hashlib
@@ -49,7 +49,7 @@ def get_url(path, dest, saltenv='base'):
 
 def list_states(saltenv='base'):
     '''
-    List all the avilable state modules in an environment
+    List all the available state modules in an environment
     '''
     return __context__['fileclient'].list_states(saltenv)
 
@@ -2207,7 +2207,7 @@ def run_parallel_map_providers_query(data, queue=None):
     return (data['alias'], data['driver'], ())
 
 
-# for pickle and multiprocessing, we cant use directly decorators
+# for pickle and multiprocessing, we can't use directly decorators
 def _run_parallel_map_providers_query(*args, **kw):
     return communicator(run_parallel_map_providers_query)(*args[0], **kw)
 
@@ -623,7 +623,7 @@ def create(vm_):
     log.debug('VM {0} is now running'.format(public_ip))
     vm_['ssh_host'] = public_ip
 
-    # The instance is booted and accessable, let's Salt it!
+    # The instance is booted and accessible, let's Salt it!
     ret = salt.utils.cloud.bootstrap(vm_, __opts__)
     ret.update(data.__dict__)
 
|
@ -1793,7 +1793,7 @@ def wait_for_instance(
|
||||
gateway=ssh_gateway_config
|
||||
):
|
||||
# If a known_hosts_file is configured, this instance will not be
|
||||
# accessable until it has a host key. Since this is provided on
|
||||
# accessible until it has a host key. Since this is provided on
|
||||
# supported instances by cloud-init, and viewable to us only from the
|
||||
# console output (which may take several minutes to become available,
|
||||
# we have some more waiting to do here.
|
||||
@@ -2019,7 +2019,7 @@ def create(vm_=None, call=None):
             vm_, data, ip_address, display_ssh_output
         )
 
-    # The instance is booted and accessable, let's Salt it!
+    # The instance is booted and accessible, let's Salt it!
     ret = salt.utils.cloud.bootstrap(vm_, __opts__)
 
     log.info('Created Cloud VM {0[name]!r}'.format(vm_))
@@ -553,7 +553,7 @@ class Auth(object):
         'pub_key': The RSA public key of the sender.
 
         :rtype: str
-        :return: An empty string on verfication failure. On success, the decrypted AES message in the payload.
+        :return: An empty string on verification failure. On success, the decrypted AES message in the payload.
         '''
         m_pub_fn = os.path.join(self.opts['pki_dir'], self.mpub)
         if os.path.isfile(m_pub_fn) and not self.opts['open_mode']:
@@ -1,4 +1,4 @@
-# Salt Master Worker Floscript, this controlls a single worker proc,
+# Salt Master Worker Floscript, this controls a single worker proc,
 # many are started based on the value in worker_threads
 
 house worker
@@ -223,7 +223,7 @@ def fileserver_update(fileserver):
 
 class AutoKey(object):
     '''
-    Impliment the methods to run auto key acceptance and rejection
+    Implement the methods to run auto key acceptance and rejection
     '''
     def __init__(self, opts):
         self.opts = opts
@@ -1104,20 +1104,21 @@ def locale_info():
         defaultencoding
     '''
     grains = {}
+    grains['locale_info'] = {}
 
     if 'proxyminion' in __opts__:
         return grains
 
     try:
         (
-            grains['defaultlanguage'],
-            grains['defaultencoding']
+            grains['locale_info']['defaultlanguage'],
+            grains['locale_info']['defaultencoding']
         ) = locale.getdefaultlocale()
     except Exception:
         # locale.getdefaultlocale can ValueError!! Catch anything else it
         # might do, per #2205
-        grains['defaultlanguage'] = 'unknown'
-        grains['defaultencoding'] = 'unknown'
+        grains['locale_info']['defaultlanguage'] = 'unknown'
+        grains['locale_info']['defaultencoding'] = 'unknown'
     return grains
 
 
@@ -895,7 +895,7 @@ class RaetKey(Key):
     def check_master(self):
         '''
         Log if the master is not running
-        NOT YET IMPLIMENTED
+        NOT YET IMPLEMENTED
         '''
         return True
 
@@ -120,7 +120,7 @@ class Master(SMaster):
         controller for the Salt master. This is where any data that needs to
         be cleanly maintained from the master is maintained.
         '''
-        # TODO: move to a seperate class, with a better name
+        # TODO: move to a separate class, with a better name
         salt.utils.appendproctitle('_clear_old_jobs')
 
         # Set up search object
@@ -1620,7 +1620,7 @@ class Minion(MinionBase):
                 log.info('Trying to tune in to next master from master-list')
 
                 # if eval_master finds a new master for us, self.connected
-                # will be True again on successfull master authentication
+                # will be True again on successful master authentication
                 self.opts['master'] = self.eval_master(opts=self.opts,
                                                        failed=True)
                 if self.connected:
|
||||
schedule=schedule)
|
||||
|
||||
elif package.startswith('__master_connected'):
|
||||
# handle this event only once. otherwise it will polute the log
|
||||
# handle this event only once. otherwise it will pollute the log
|
||||
if not self.connected:
|
||||
log.info('Connection to master {0} re-established'.format(self.opts['master']))
|
||||
self.connected = True
|
||||
@@ -2246,7 +2246,7 @@ class MultiSyndic(MinionBase):
                     self.event_forward_timeout < time.time()):
                 self._forward_events()
             # We don't handle ZMQErrors like the other minions
-            # I've put explicit handling around the recieve calls
+            # I've put explicit handling around the receive calls
             # in the process_*_socket methods. If we see any other
             # errors they may need some kind of handling so log them
             # for now.
@@ -3322,7 +3322,7 @@ def makedirs_(path,
     .. note::
 
         The path must end with a trailing slash otherwise the directory/directories
-        will be created upto the parent directory. For example if path is
+        will be created up to the parent directory. For example if path is
         ``/opt/code``, then it would be treated as ``/opt/`` but if the path
         ends with a trailing slash like ``/opt/code/``, then it would be
         treated as ``/opt/code/``.
@@ -602,7 +602,7 @@ class _LXCConfig(object):
         if self.path:
             content = self.as_string()
             # 2 step rendering to be sure not to open/wipe the config
-            # before as_string suceeds.
+            # before as_string succeeds.
             with open(self.path, 'w') as fic:
                 fic.write(content)
                 fic.flush()
@@ -2071,7 +2071,7 @@ def bootstrap(name, config=None, approve_key=True,
             __salt__['lxc.stop'](name)
         elif prior_state == 'frozen':
             __salt__['lxc.freeze'](name)
-    # mark seeded upon sucessful install
+    # mark seeded upon successful install
     if res:
         __salt__['lxc.run_cmd'](
             name, 'sh -c \'touch "{0}";\''.format(SEED_MARKER))
@@ -213,7 +213,7 @@ def _rpm_pkginfo(name):
     Parses RPM metadata and returns a pkginfo namedtuple
     '''
     # REPOID is not a valid tag for the rpm command. Remove it and replace it
-    # witn "none"
+    # with "none"
     queryformat = __QUERYFORMAT.replace('%{REPOID}', 'none')
     output = __salt__['cmd.run_stdout'](
         'rpm -qp --queryformat {0!r} {1}'.format(queryformat, name),
@@ -59,7 +59,7 @@ def display_output(data, out=None, opts=None):
                 fdata = fdata.encode('utf-8')
             except (UnicodeDecodeError, UnicodeEncodeError):
                 # try to let the stream write
-                # even if we didnt encode it
+                # even if we didn't encode it
                 pass
         ofh.write(fdata)
         ofh.write('\n')
@@ -118,7 +118,7 @@ def _format_host(host, data):
             schanged, ctext = _format_changes(ret['changes'])
             nchanges += 1 if schanged else 0
 
-            # Skip this state if it was successfull & diff output was requested
+            # Skip this state if it was successful & diff output was requested
             if __opts__.get('state_output_diff', False) and \
                     ret['result'] and not schanged:
                 continue
@@ -275,7 +275,7 @@ def _format_job_instance(job):
 
 def _format_jid_instance(jid, job):
     '''
-    Return a properly formated jid dict
+    Return a properly formatted jid dict
     '''
     ret = _format_job_instance(job)
     ret.update({'StartTime': salt.utils.jid_to_time(jid)})
@@ -35,7 +35,7 @@ def prep_jid(nocache=False, passed_jid=None):
     Call both with prep_jid on all returners in multi_returner
 
     TODO: finish this, what do do when you get different jids from 2 returners...
-    since our jids are time based, this make this problem hard, beacuse they
+    since our jids are time based, this make this problem hard, because they
     aren't unique, meaning that we have to make sure that no one else got the jid
     and if they did we spin to get a new one, which means "locking" the jid in 2
     returners is non-trivial
@@ -664,7 +664,7 @@ class State(object):
 
     def _run_check_cmd(self, low_data):
         '''
-        Alter the way a successfull state run is determined
+        Alter the way a successful state run is determined
         '''
         ret = {'result': False}
         cmd_opts = {}
@@ -52,7 +52,7 @@ def _changes(name,
         change['gid'] = gid
 
     if members:
-        #-- if new memeber list if different than the current
+        #-- if new member list if different than the current
         if set(lgrp['members']) ^ set(members):
             change['members'] = members
 
@@ -931,7 +931,7 @@ def installed(
                 changes[change_name]['old'] += '\n'
                 changes[change_name]['old'] += '{0}'.format(i['changes']['old'])
 
-    # Any requested packages that were not targetted for install or reinstall
+    # Any requested packages that were not targeted for install or reinstall
     if not_modified:
         if sources:
             summary = ', '.join(not_modified)
@@ -165,7 +165,7 @@ def configurable_test_state(name, changes=True, result=True, comment=''):
         Accepts True, False, and 'Random'
         Default is True
     result:
-        Do we return sucessfuly or not?
+        Do we return successfully or not?
         Accepts True, False, and 'Random'
         Default is True
     comment:
@@ -391,7 +391,7 @@ def installed(name, categories=None, includes=None, retries=10):
 
     name
         If ``categories`` is left empty, it will be assumed that you are
-        passing the category option through the name. These are seperate
+        passing the category option through the name. These are separate
         because you can only have one name, but can have multiple categories.
 
     categories
@@ -406,7 +406,7 @@ def installed(name, categories=None, includes=None, retries=10):
         * Update Rollups
 
     includes
-        A list of features of the updates to cull by. Availble features
+        A list of features of the updates to cull by. Available features
         include:
 
         * **UI** - User interaction required, skipped by default
@@ -467,7 +467,7 @@ def downloaded(name, categories=None, includes=None, retries=10):
 
     name
         If ``categories`` is left empty, it will be assumed that you are
-        passing the category option through the name. These are seperate
+        passing the category option through the name. These are separate
        because you can only have one name, but can have multiple categories.
 
     categories
@@ -482,7 +482,7 @@ def downloaded(name, categories=None, includes=None, retries=10):
         * Update Rollups
 
     includes
-        A list of features of the updates to cull by. Availble features
+        A list of features of the updates to cull by. Available features
         include:
 
         * **UI** - User interaction required, skipped by default
@@ -162,7 +162,7 @@ def lock(zk_hosts,
 
     if __opts__['test']:
         ret['result'] = None
-        ret['comment'] = 'attempt to aqcuire lock'
+        ret['comment'] = 'attempt to acquire lock'
         return ret
 
     zk = _get_zk_conn(zk_hosts)
@@ -312,7 +312,7 @@ class SaltEvent(object):
     def get_event(self, wait=5, tag='', full=False, use_pending=False, pending_tags=None):
         '''
         Get a single publication.
-        IF no publication available THEN block for upto wait seconds
+        IF no publication available THEN block for up to wait seconds
         AND either return publication OR None IF no publication available.
 
         IF wait is 0 then block forever.
@@ -345,7 +345,7 @@ class SaltfileMixIn(object):
                 # one from Saltfile, if any
                 continue
 
-            # We reched this far! Set the Saltfile value on the option
+            # We reached this far! Set the Saltfile value on the option
             setattr(self.options, option.dest, cli_config[option.dest])
 
             # Let's also search for options referred in any option groups
@@ -37,7 +37,7 @@ class StdTest(integration.ModuleCase):
             self.assertTrue(ret['minion'])
         assert num_ret > 0
 
-        # ping a minion that doesnt exist, to make sure that it doesnt hang forever
+        # ping a minion that doesn't exist, to make sure that it doesn't hang forever
         # create fake mininion
         key_file = os.path.join(self.master_opts['pki_dir'], 'minions', 'footest')
         # touch the file
@@ -68,7 +68,7 @@ class BaseCherryPyTestCase(TestCase):
 
         * Responses are dispatched to a mounted application's
           page handler, if found. This is the reason why you
-          must indicate which app you are targetting with
+          must indicate which app you are targeting with
           this request by specifying its mount point.
 
         You can simulate various request settings by setting