Merge pull request #39459 from rallytime/merge-develop

[develop] Merge forward from 2016.11 to develop
Nicole Thomas 2017-02-16 14:30:31 -07:00 committed by GitHub
commit 1577bb68af
32 changed files with 1961 additions and 270 deletions


@ -217,7 +217,6 @@ pseudoxml: translations
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
translations:
@if [ "$(SPHINXLANG)" = "en" ] || [ "x$(SPHINXLANG)" = "x" ]; then \
echo "No need to update translations. Skipping..."; \


@ -101,11 +101,15 @@ During development it is easiest to be able to run the Salt master and minion
that are installed in the virtualenv you created above, and also to have all
the configuration, log, and cache files contained in the virtualenv as well.
The ``/path/to/your/virtualenv`` referenced multiple times below is also
available in the variable ``$VIRTUAL_ENV`` once the virtual environment is
activated.
Copy the master and minion config files into your virtualenv:
.. code-block:: bash
mkdir -p /path/to/your/virtualenv/etc/salt
mkdir -p /path/to/your/virtualenv/etc/salt/pki/{master,minion}
cp ./salt/conf/master ./salt/conf/minion /path/to/your/virtualenv/etc/salt/
Edit the master config file:
@ -113,24 +117,28 @@ Edit the master config file:
1. Uncomment and change the ``user: root`` value to your own user.
2. Uncomment and change the ``root_dir: /`` value to point to
``/path/to/your/virtualenv``.
-3. If you are running version 0.11.1 or older, uncomment, and change the
+3. Uncomment and change the ``pki: /etc/salt/pki/master`` value to point to
+   ``/path/to/your/virtualenv/etc/salt/pki/master``
+4. If you are running version 0.11.1 or older, uncomment, and change the
   ``pidfile: /var/run/salt-master.pid`` value to point to
   ``/path/to/your/virtualenv/salt-master.pid``.
-4. If you are also running a non-development version of Salt you will have to
+5. If you are also running a non-development version of Salt you will have to
change the ``publish_port`` and ``ret_port`` values as well.
Edit the minion config file:
1. Repeat the edits you made in the master config for the ``user`` and
``root_dir`` values as well as any port changes.
-2. If you are running version 0.11.1 or older, uncomment, and change the
+2. Uncomment and change the ``pki: /etc/salt/pki/minion`` value to point to
+   ``/path/to/your/virtualenv/etc/salt/pki/minion``
+3. If you are running version 0.11.1 or older, uncomment, and change the
   ``pidfile: /var/run/salt-minion.pid`` value to point to
   ``/path/to/your/virtualenv/salt-minion.pid``.
-3. Uncomment and change the ``master: salt`` value to point at ``localhost``.
-4. Uncomment and change the ``id:`` value to something descriptive like
+4. Uncomment and change the ``master: salt`` value to point at ``localhost``.
+5. Uncomment and change the ``id:`` value to something descriptive like
   "saltdev". This isn't strictly necessary but it will serve as a reminder of
   which Salt installation you are working with.
-5. If you changed the ``ret_port`` value in the master config because you are
+6. If you changed the ``ret_port`` value in the master config because you are
also running a non-development version of Salt, then you will have to
change the ``master_port`` value in the minion config to match.
@ -217,10 +225,10 @@ You can now call all of Salt's CLI tools without explicitly passing the configur
Additional Options
..................
-In case you want to distribute your virtualenv, you probably don't want to
-include Salt's clone ``.git/`` directory, and, without it, Salt won't report
-the accurate version. You can tell ``setup.py`` to generate the hardcoded
-version information which is distributable:
+If you want to distribute your virtualenv, you probably don't want to include
+Salt's clone ``.git/`` directory, and, without it, Salt won't report the
+accurate version. You can tell ``setup.py`` to generate the hardcoded version
+information which is distributable:
.. code-block:: bash


@ -150,6 +150,24 @@ And the actual pillar file at '/srv/pillar/common_pillar.sls':
foo: bar
boo: baz
.. note::
When working with multiple pillar environments, assuming that each pillar
environment has its own top file, the jinja placeholder ``{{ saltenv }}``
can be used in place of the environment name:
.. code-block:: yaml
{{ saltenv }}:
'*':
- common_pillar
Yes, this is ``{{ saltenv }}``, and not ``{{ pillarenv }}``. The reason for
this is because the Pillar top files are parsed using some of the same code
which parses top files when :ref:`running states <running-highstate>`, so
the pillar environment takes the place of ``{{ saltenv }}`` in the jinja
context.
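The substitution described in the note can be sketched without a Salt install. This illustrative snippet mimics the Jinja pass over the top file with a plain regex; real Pillar top files are rendered by Jinja2, so this is only an approximation of the ``{{ saltenv }}`` replacement:

```python
import re

def render_top(top_sls, saltenv):
    # Illustrative stand-in for the Jinja pass: replace ``{{ saltenv }}``
    # with the active environment name. Real rendering is done by Jinja2.
    return re.sub(r'\{\{\s*saltenv\s*\}\}', saltenv, top_sls)

top = "{{ saltenv }}:\n  '*':\n    - common_pillar\n"
print(render_top(top, 'base'))
```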
Pillar Namespace Flattening
===========================


@ -368,7 +368,8 @@ Pre 2015.8 the proxymodule also must have an ``id()`` function. 2015.8 and foll
this function because the proxy's id is required on the command line.
Here is an example proxymodule used to interface to a *very* simple REST
-server. Code for the server is in the `salt-contrib GitHub repository <https://github.com/saltstack/salt-contrib/proxyminion_rest_example>`_
+server. Code for the server is in the `salt-contrib GitHub repository
+<https://github.com/saltstack/salt-contrib/tree/master/proxyminion_rest_example>`_
This proxymodule enables "service" enumeration, starting, stopping, restarting,
and status; "package" installation, and a ping.


@ -314,6 +314,26 @@ In the current release, the following modules were included:
- :mod:`SNMP configuration management state <salt.states.netsnmp>`
- :mod:`Users management state <salt.states.netusers>`
Cisco NXOS Proxy Minion
=======================
Beginning with 2016.11.0, there is a proxy minion that can be used to configure
nxos cisco devices over ssh.
- :mod:`Proxy Minion <salt.proxy.nxos>`
- :mod:`Execution Module <salt.modules.nxos>`
- :mod:`State Module <salt.states.nxos>`
Cisco Network Services Orchestrator Proxy Minion
================================================
Beginning with 2016.11.0, there is a proxy minion to use the Cisco Network
Services Orchestrator as a proxy minion.
- :mod:`Proxy Minion <salt.proxy.cisconso>`
- :mod:`Execution Module <salt.modules.cisconso>`
- :mod:`State Module <salt.states.cisconso>`
Junos Module Changes
====================

File diff suppressed because it is too large.


@ -79,9 +79,6 @@ Additionally, version 0.21.0 of pygit2 introduced a dependency on python-cffi_,
which in turn depends on newer releases of libffi_. Upgrading libffi_ is not
advisable as several other applications depend on it, so on older LTS linux
releases pygit2_ 0.20.3 and libgit2_ 0.20.0 is the recommended combination.
-While these are not packaged in the official repositories for Debian and
-Ubuntu, SaltStack is actively working on adding packages for these to our
-repositories_. The progress of this effort can be tracked `here <salt-pack-70>`_.
.. warning::
pygit2_ is actively developed and `frequently makes
@ -99,8 +96,53 @@ repositories_. The progress of this effort can be tracked `here <salt-pack-70>`_
.. _libssh2: http://www.libssh2.org/
.. _python-cffi: https://pypi.python.org/pypi/cffi
.. _libffi: http://sourceware.org/libffi/
-.. _repositories: https://repo.saltstack.com
-.. _salt-pack-70: https://github.com/saltstack/salt-pack/issues/70
RedHat Pygit2 Issues
~~~~~~~~~~~~~~~~~~~~
Around the time of the release of RedHat 7.3, RedHat effectively broke pygit2_
by upgrading python-cffi_ to a release incompatible with the version of pygit2_
available in their repositories. This prevents Python from importing the
pygit2_ module at all, leading to a master that refuses to start, and leaving
the following errors in the master log file:
.. code-block:: text
2017-02-10 09:07:34,892 [salt.utils.gitfs ][ERROR ][11211] Import pygit2 failed: CompileError: command 'gcc' failed with exit status 1
2017-02-10 09:07:34,907 [salt.utils.gitfs ][ERROR ][11211] gitfs is configured but could not be loaded, are pygit2 and libgit2 installed?
2017-02-10 09:07:34,907 [salt.utils.gitfs ][CRITICAL][11211] No suitable gitfs provider module is installed.
2017-02-10 09:07:34,912 [salt.master ][CRITICAL][11211] Master failed pre flight checks, exiting
This issue has been reported on the `RedHat Bugzilla`_. In the meantime, you
can work around it by downgrading python-cffi_. To do this, go to `this page`_
and download the appropriate python-cffi_ 0.8.6 RPM. Then copy that RPM to the
master and downgrade using the ``rpm`` command. For example:
.. code-block:: bash
# rpm -Uvh --oldpackage python-cffi-0.8.6-1.el7.x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:python-cffi-0.8.6-1.el7 ################################# [ 50%]
Cleaning up / removing...
2:python-cffi-1.6.0-5.el7 ################################# [100%]
# rpm -q python-cffi
python-cffi-0.8.6-1.el7.x86_64
To confirm that pygit2_ is now "fixed", you can test trying to import it like so:
.. code-block:: bash
# python -c 'import pygit2'
#
If the command produces no output, then your master should work when you start
it again.
.. _`this page`: https://koji.fedoraproject.org/koji/buildinfo?buildID=569520
.. _`RedHat Bugzilla`: https://bugzilla.redhat.com/show_bug.cgi?id=1400668
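The ``python -c 'import pygit2'`` check above can also be done from Python itself; a minimal sketch (the module name passed in is the only assumption):

```python
import importlib

def importable(mod_name):
    # True if the module imports cleanly, mirroring the CLI check above;
    # a broken pygit2/python-cffi pairing would make this return False
    try:
        importlib.import_module(mod_name)
        return True
    except ImportError:
        return False

# e.g. importable('pygit2') on the master before restarting it
```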
GitPython
---------


@ -338,7 +338,7 @@ call should return.
alias='fred')
self.assertEqual(tgt_ret, 'bob')
-Using multiple Salt commands in this manor provides two useful benefits. The first is
+Using multiple Salt commands in this manner provides two useful benefits. The first is
that it provides some additional coverage for the ``aliases.set_target`` function.
The second benefit is the call to ``aliases.get_target`` is not dependent on the
presence of any aliases set outside of this test. Tests should not be dependent on


@ -194,8 +194,8 @@ ShowUnInstDetails show
; See http://blogs.msdn.com/b/astebner/archive/2009/01/29/9384143.aspx for more info
Section -Prerequisites
-; VCRedist only needed on Server 2008/Vista and below
-${If} ${AtMostWin2008}
+; VCRedist only needed on Windows Server 2008R2/Windows 7 and below
+${If} ${AtMostWin2008R2}
!define VC_REDIST_X64_GUID "{5FCE6D76-F5DC-37AB-B2B8-22AB8CEDB1D4}"
!define VC_REDIST_X86_GUID "{9BE518E6-ECC6-35A9-88E4-87755C07200F}"


@ -321,40 +321,82 @@ def _file_lists(load, form):
return cache_match
if refresh_cache:
ret = {
-'files': [],
-'dirs': [],
-'empty_dirs': [],
-'links': []
+'files': set(),
+'dirs': set(),
+'empty_dirs': set(),
+'links': {}
}
def _add_to(tgt, fs_root, parent_dir, items):
'''
Add the files to the target set
'''
def _translate_sep(path):
'''
Translate path separators for Windows masterless minions
'''
return path.replace('\\', '/') if os.path.sep == '\\' else path
for item in items:
abs_path = os.path.join(parent_dir, item)
log.trace('roots: Processing %s', abs_path)
is_link = os.path.islink(abs_path)
log.trace(
'roots: %s is %sa link',
abs_path, 'not ' if not is_link else ''
)
if is_link and __opts__['fileserver_ignoresymlinks']:
continue
rel_path = _translate_sep(os.path.relpath(abs_path, fs_root))
log.trace('roots: %s relative path is %s', abs_path, rel_path)
if salt.fileserver.is_file_ignored(__opts__, rel_path):
continue
tgt.add(rel_path)
try:
if not os.listdir(abs_path):
ret['empty_dirs'].add(rel_path)
except Exception:
# Generic exception because running os.listdir() on a
# non-directory path raises an OSError on *NIX and a
# WindowsError on Windows.
pass
if is_link:
link_dest = os.readlink(abs_path)
log.trace(
'roots: %s symlink destination is %s',
abs_path, link_dest
)
if link_dest.startswith('..'):
joined = os.path.join(abs_path, link_dest)
else:
joined = os.path.join(
os.path.dirname(abs_path), link_dest
)
rel_dest = os.path.relpath(
os.path.realpath(os.path.normpath(joined)),
fs_root
)
log.trace(
'roots: %s relative path is %s',
abs_path, rel_dest
)
if not rel_dest.startswith('..'):
# Only count the link if it does not point
# outside of the root dir of the fileserver
# (i.e. the "path" variable)
ret['links'][rel_path] = rel_dest
for path in __opts__['file_roots'][load['saltenv']]:
for root, dirs, files in os.walk(
path,
followlinks=__opts__['fileserver_followsymlinks']):
# Don't walk any directories that match file_ignore_regex or glob
dirs[:] = [d for d in dirs if not salt.fileserver.is_file_ignored(__opts__, d)]
_add_to(ret['dirs'], path, root, dirs)
_add_to(ret['files'], path, root, files)
ret['files'] = sorted(ret['files'])
ret['dirs'] = sorted(ret['dirs'])
ret['empty_dirs'] = sorted(ret['empty_dirs'])
-dir_rel_fn = os.path.relpath(root, path)
-if __opts__.get('file_client', 'remote') == 'local' and os.path.sep == "\\":
-    dir_rel_fn = dir_rel_fn.replace('\\', '/')
-ret['dirs'].append(dir_rel_fn)
-if len(dirs) == 0 and len(files) == 0:
-    if dir_rel_fn not in ('.', '..') \
-            and not salt.fileserver.is_file_ignored(__opts__, dir_rel_fn):
-        ret['empty_dirs'].append(dir_rel_fn)
-for fname in files:
-    is_link = os.path.islink(os.path.join(root, fname))
-    if is_link:
-        ret['links'].append(fname)
-    if __opts__['fileserver_ignoresymlinks'] and is_link:
-        continue
-    rel_fn = os.path.relpath(
-        os.path.join(root, fname),
-        path
-    )
-    if not salt.fileserver.is_file_ignored(__opts__, rel_fn):
-        if __opts__.get('file_client', 'remote') == 'local' and os.path.sep == "\\":
-            rel_fn = rel_fn.replace('\\', '/')
-        ret['files'].append(rel_fn)
if save_cache:
try:
salt.fileserver.write_file_list_cache(
@ -406,28 +448,13 @@ def symlink_list(load):
ret = {}
if load['saltenv'] not in __opts__['file_roots']:
return ret
-for path in __opts__['file_roots'][load['saltenv']]:
-    try:
-        prefix = load['prefix'].strip('/')
-    except KeyError:
-        prefix = ''
-    # Adopting rsync functionality here and stopping at any encounter of a symlink
-    for root, dirs, files in os.walk(os.path.join(path, prefix), followlinks=False):
-        # Don't walk any directories that match file_ignore_regex or glob
-        dirs[:] = [d for d in dirs if not salt.fileserver.is_file_ignored(__opts__, d)]
-        for fname in files:
-            if not os.path.islink(os.path.join(root, fname)):
-                continue
-            rel_fn = os.path.relpath(
-                os.path.join(root, fname),
-                path
-            )
-            if not salt.fileserver.is_file_ignored(__opts__, rel_fn):
-                ret[rel_fn] = os.readlink(os.path.join(root, fname))
-        for dname in dirs:
-            if os.path.islink(os.path.join(root, dname)):
-                ret[os.path.relpath(os.path.join(root,
-                                                 dname),
-                                    path)] = os.readlink(os.path.join(root,
-                                                                      dname))
-return ret
+if 'prefix' in load:
+    prefix = load['prefix'].strip('/')
+else:
+    prefix = ''
+symlinks = _file_lists(load, 'links')
+return dict([(key, val)
+             for key, val in six.iteritems(symlinks)
+             if key.startswith(prefix)])
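The new ``symlink_list`` reduces to a prefix filter over the cached ``links`` mapping built by ``_file_lists``. A standalone sketch of that final step (helper name is illustrative, not a Salt API):

```python
def filter_by_prefix(symlinks, prefix=''):
    # Equivalent of the dict comprehension above, applied to a plain dict
    # mapping relative link paths to their destinations
    return dict((key, val) for key, val in symlinks.items()
                if key.startswith(prefix))

links = {'etc/motd': '/srv/motd', 'opt/app': '/srv/app'}
```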


@ -1322,26 +1322,26 @@ class LazyLoader(salt.utils.lazy.LazyDict):
else:
desc = self.suffix_map[suffix]
# if it is a directory, we don't open a file
try:
mod_namespace = '.'.join((
self.loaded_base_name,
self.mod_type_check(fpath),
self.tag,
name))
except TypeError:
mod_namespace = '{0}.{1}.{2}.{3}'.format(
self.loaded_base_name,
self.mod_type_check(fpath),
self.tag,
name)
if suffix == '':
-mod = imp.load_module(
-    '{0}.{1}.{2}.{3}'.format(
-        self.loaded_base_name,
-        self.mod_type_check(fpath),
-        self.tag,
-        name
-    ), None, fpath, desc)
+mod = imp.load_module(mod_namespace, None, fpath, desc)
# reload all submodules if necessary
if not self.initial_load:
self._reload_submodules(mod)
else:
with salt.utils.fopen(fpath, desc[1]) as fn_:
-mod = imp.load_module(
-    '{0}.{1}.{2}.{3}'.format(
-        self.loaded_base_name,
-        self.mod_type_check(fpath),
-        self.tag,
-        name
-    ), fn_, fpath, desc)
+mod = imp.load_module(mod_namespace, fn_, fpath, desc)
except IOError:
raise
@ -1401,11 +1401,9 @@ class LazyLoader(salt.utils.lazy.LazyDict):
except Exception:
err_string = '__init__ failed'
log.debug(
-    'Error loading {0}.{1}: {2}'.format(
-        self.tag,
-        module_name,
-        err_string),
-    exc_info=True)
+    'Error loading %s.%s: %s',
+    self.tag, module_name, err_string, exc_info=True
+)
self.missing_modules[module_name] = err_string
self.missing_modules[name] = err_string
return False
@ -1418,10 +1416,10 @@ class LazyLoader(salt.utils.lazy.LazyDict):
virtual_ret, module_name, virtual_err, virtual_aliases = \
self.process_virtual(mod, module_name)
if virtual_err is not None:
-log.trace('Error loading {0}.{1}: {2}'.format(self.tag,
-                                              module_name,
-                                              virtual_err,
-                                              ))
+log.trace(
+    'Error loading %s.%s: %s',
+    self.tag, module_name, virtual_err
+)
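The switch from ``str.format`` to printf-style logging arguments matters because the ``logging`` module only interpolates the message if the record is actually emitted. A small sketch demonstrating the deferral (the class and logger names are illustrative):

```python
import io
import logging

class Expensive(object):
    calls = 0
    def __str__(self):
        # count how many times the value is actually rendered
        Expensive.calls += 1
        return 'rendered'

log = logging.getLogger('lazy-demo')
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler(io.StringIO()))
log.propagate = False

obj = Expensive()
log.debug('value: %s', obj)  # below the level: never formatted
log.info('value: %s', obj)   # emitted: formatted exactly once
```

With ``'...'.format(obj)`` the string would have been built for the suppressed debug call as well.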
# if process_virtual returned a non-True value then we are
# supposed to not process this module


@ -540,7 +540,7 @@ class MinionBase(object):
if attempts != 0:
# Give up a little time between connection attempts
# to allow the IOLoop to run any other scheduled tasks.
-yield tornado.gen.sleep(1)
+yield tornado.gen.sleep(opts['acceptance_wait_time'])
attempts += 1
if tries > 0:
log.debug('Connecting to master. Attempt {0} '
@ -605,7 +605,7 @@ class MinionBase(object):
if attempts != 0:
# Give up a little time between connection attempts
# to allow the IOLoop to run any other scheduled tasks.
-yield tornado.gen.sleep(1)
+yield tornado.gen.sleep(opts['acceptance_wait_time'])
attempts += 1
if tries > 0:
log.debug('Connecting to master. Attempt {0} '


@ -551,7 +551,7 @@ def describe(name, tags=None, region=None, key=None, keyid=None,
'CopyTagsToSnapshot', 'MonitoringInterval',
'MonitoringRoleArn', 'PromotionTier',
'DomainMemberships')
-return {'rds': dict([(k, rds.get(k)) for k in keys])}
+return {'rds': dict([(k, rds.get('DBInstances', [{}])[0].get(k)) for k in keys])}
else:
return {'rds': None}
except ClientError as e:
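The corrected line indexes into the ``DBInstances`` list that the describe call returns. A hedged sketch of that extraction with dummy data and no boto calls (the helper name is illustrative):

```python
def first_instance_attrs(response, keys):
    # describe_db_instances returns {'DBInstances': [ {...}, ... ]};
    # pull the requested attributes from the first instance, as in the fix
    inst = response.get('DBInstances', [{}])[0]
    return dict((k, inst.get(k)) for k in keys)

sample = {'DBInstances': [{'Engine': 'mysql', 'AllocatedStorage': 20}]}
```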


@ -590,6 +590,9 @@ def make_repo(repodir,
# test if using older than gnupg 2.1, env file exists
older_gnupg = __salt__['file.file_exists'](gpg_info_file)
+# interval of 0.125 is really too fast on some systems
+interval = 0.5
if keyid is not None:
with salt.utils.fopen(repoconfdist, 'a') as fow:
fow.write('SignWith: {0}\n'.format(keyid))
@ -654,8 +657,6 @@ def make_repo(repodir,
break
## sign_it_here
-# interval of 0.125 is really too fast on some systems
-interval = 0.5
for file in os.listdir(repodir):
if file.endswith('.dsc'):
abs_file = os.path.join(repodir, file)
@ -722,7 +723,7 @@ def make_repo(repodir,
cmd = 'reprepro --ignore=wrongdistribution --component=main -Vb . includedsc {0} {1}'.format(codename, abs_file)
__salt__['cmd.run'](cmd, cwd=repodir, use_vt=True)
else:
-number_retries = 5
+number_retries = timeout / interval
times_looped = 0
error_msg = 'Failed to reprepro includedsc file {0}'.format(abs_file)
cmd = 'reprepro --ignore=wrongdistribution --component=main -Vb . includedsc {0} {1}'.format(codename, abs_file)
@ -746,10 +747,10 @@ def make_repo(repodir,
if times_looped > number_retries:
raise SaltInvocationError(
-'Attemping to reprepro includedsc for file {0} failed, timed out after {1} loops'.format(abs_file, times_looped)
+'Attemping to reprepro includedsc for file {0} failed, timed out after {1} loops'
+.format(abs_file, int(times_looped * interval))
)
-# 0.125 is really too fast on some systems
-time.sleep(0.5)
+time.sleep(interval)
proc_exitstatus = proc.exitstatus
if proc_exitstatus != 0:
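Deriving ``number_retries`` from ``timeout / interval`` keeps the total wait fixed when the polling interval changes. A generic sketch of the same poll-until-deadline pattern (names are illustrative, not Salt APIs):

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.5):
    # Poll `predicate` every `interval` seconds for at most `timeout`
    # seconds; the number of loops is timeout / interval, as in the diff
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```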


@ -1545,7 +1545,7 @@ def _get_line_indent(src, line, indent):
'''
Indent the line with the source line.
'''
-if not (indent or line):
+if not indent:
return line
idt = []
@ -1755,7 +1755,6 @@ def line(path, content=None, match=None, mode=None, location=None,
elif mode == 'ensure':
after = after and after.strip()
before = before and before.strip()
-content = content and content.strip()
if before and after:
_assert_occurrence(body, before, 'before')


@ -41,11 +41,18 @@ def __virtual__():
return (False, 'useradd execution module not loaded: either pwd python library not available or system not one of Linux, OpenBSD or NetBSD')
def _quote_username(name):
if isinstance(name, int):
name = "{0}".format(name)
return name
def _get_gecos(name):
'''
Retrieve GECOS field info and return it in dictionary form
'''
-gecos_field = pwd.getpwnam(name).pw_gecos.split(',', 3)
+gecos_field = pwd.getpwnam(_quote_username(name)).pw_gecos.split(',', 3)
if not gecos_field:
return {}
else:
@ -521,7 +528,7 @@ def info(name):
salt '*' user.info root
'''
try:
-data = pwd.getpwnam(name)
+data = pwd.getpwnam(_quote_username(name))
except KeyError:
return {}
else:


@ -171,6 +171,11 @@ def list_sites():
bindings = dict()
for binding in item['bindings']['Collection']:
# Ignore bindings which do not have host names
if binding['protocol'] not in ['http', 'https']:
continue
filtered_binding = dict()
for key in binding:


@ -1287,152 +1287,161 @@ def remove(name=None, pkgs=None, version=None, **kwargs):
pkg_params = __salt__['pkg_resource.parse_targets'](name, pkgs, **kwargs)[0]
# Get a list of currently installed software for comparison at the end
-old = list_pkgs(saltenv=saltenv, refresh=refresh)
+old = list_pkgs(saltenv=saltenv, refresh=refresh, versions_as_list=True)
# Loop through each package
changed = []
-for target in pkg_params:
+for pkgname, version_num in six.iteritems(pkg_params):
# Load package information for the package
-pkginfo = _get_package_info(target, saltenv=saltenv)
+pkginfo = _get_package_info(pkgname, saltenv=saltenv)
# Make sure pkginfo was found
if not pkginfo:
-log.error('Unable to locate package {0}'.format(name))
-ret[target] = 'Unable to locate package {0}'.format(target)
+msg = 'Unable to locate package {0}'.format(pkgname)
+log.error(msg)
+ret[pkgname] = msg
continue
# Get latest version if no version passed, else use passed version
if not version:
version_num = _get_latest_pkg_version(pkginfo)
else:
version_num = version
if 'latest' in pkginfo and version_num not in pkginfo:
if version_num is not None \
and version_num not in pkginfo \
and 'latest' in pkginfo:
version_num = 'latest'
# Check to see if package is installed on the system
if target not in old:
log.error('{0} {1} not installed'.format(target, version))
ret[target] = {'current': 'not installed'}
removal_targets = []
if pkgname not in old:
log.error('%s %s not installed', pkgname, version)
ret[pkgname] = {'current': 'not installed'}
continue
else:
if version_num not in old.get(target, '').split(',') \
and not old.get(target) == "Not Found" \
if version_num is None:
removal_targets.extend(old[pkgname])
elif version_num not in old[pkgname] \
and 'Not Found' not in old['pkgname'] \
and version_num != 'latest':
log.error('{0} {1} not installed'.format(target, version))
ret[target] = {
log.error('%s %s not installed', pkgname, version)
ret[pkgname] = {
'current': '{0} not installed'.format(version_num)
}
continue
else:
removal_targets.append(version_num)
# Get the uninstaller
uninstaller = pkginfo[version_num].get('uninstaller')
for target in removal_targets:
# Get the uninstaller
uninstaller = pkginfo[target].get('uninstaller')
# If no uninstaller found, use the installer
if not uninstaller:
uninstaller = pkginfo[version_num].get('installer')
# If no uninstaller found, use the installer
if not uninstaller:
uninstaller = pkginfo[target].get('installer')
# If still no uninstaller found, fail
if not uninstaller:
log.error('Error: No installer or uninstaller configured '
'for package {0}'.format(name))
ret[target] = {'no uninstaller': version_num}
continue
# If still no uninstaller found, fail
if not uninstaller:
log.error(
'No installer or uninstaller configured for package %s',
pkgname,
)
ret[pkgname] = {'no uninstaller': target}
continue
# Where is the uninstaller
if uninstaller.startswith(('salt:', 'http:', 'https:', 'ftp:')):
# Where is the uninstaller
if uninstaller.startswith(('salt:', 'http:', 'https:', 'ftp:')):
# Check to see if the uninstaller is cached
cached_pkg = __salt__['cp.is_cached'](uninstaller)
if not cached_pkg:
# It's not cached. Cache it, mate.
cached_pkg = __salt__['cp.cache_file'](uninstaller)
# Check if the uninstaller was cached successfully
# Check to see if the uninstaller is cached
cached_pkg = __salt__['cp.is_cached'](uninstaller)
if not cached_pkg:
log.error('Unable to cache {0}'.format(uninstaller))
ret[target] = {'unable to cache': uninstaller}
continue
else:
# Run the uninstaller directly (not hosted on salt:, https:, etc.)
cached_pkg = uninstaller
# It's not cached. Cache it, mate.
cached_pkg = __salt__['cp.cache_file'](uninstaller)
# Fix non-windows slashes
cached_pkg = cached_pkg.replace('/', '\\')
cache_path, _ = os.path.split(cached_pkg)
# Check if the uninstaller was cached successfully
if not cached_pkg:
log.error('Unable to cache %s', uninstaller)
ret[pkgname] = {'unable to cache': uninstaller}
continue
else:
# Run the uninstaller directly (not hosted on salt:, https:, etc.)
cached_pkg = uninstaller
# Get parameters for cmd
expanded_cached_pkg = str(os.path.expandvars(cached_pkg))
# Fix non-windows slashes
cached_pkg = cached_pkg.replace('/', '\\')
cache_path, _ = os.path.split(cached_pkg)
# Get uninstall flags
uninstall_flags = '{0}'.format(
pkginfo[version_num].get('uninstall_flags', '')
)
if kwargs.get('extra_uninstall_flags'):
uninstall_flags = '{0} {1}'.format(
uninstall_flags,
kwargs.get('extra_uninstall_flags', "")
# Get parameters for cmd
expanded_cached_pkg = str(os.path.expandvars(cached_pkg))
# Get uninstall flags
uninstall_flags = '{0}'.format(
pkginfo[target].get('uninstall_flags', '')
)
if kwargs.get('extra_uninstall_flags'):
uninstall_flags = '{0} {1}'.format(
uninstall_flags,
kwargs.get('extra_uninstall_flags', "")
)
# Uninstall the software
# Check Use Scheduler Option
if pkginfo[version_num].get('use_scheduler', False):
# Uninstall the software
# Check Use Scheduler Option
if pkginfo[target].get('use_scheduler', False):
# Build Scheduled Task Parameters
if pkginfo[version_num].get('msiexec'):
cmd = 'msiexec.exe'
arguments = ['/x']
arguments.extend(salt.utils.shlex_split(uninstall_flags))
else:
cmd = expanded_cached_pkg
arguments = salt.utils.shlex_split(uninstall_flags)
# Build Scheduled Task Parameters
if pkginfo[target].get('msiexec'):
cmd = 'msiexec.exe'
arguments = ['/x']
arguments.extend(salt.utils.shlex_split(uninstall_flags))
else:
cmd = expanded_cached_pkg
arguments = salt.utils.shlex_split(uninstall_flags)
# Create Scheduled Task
__salt__['task.create_task'](name='update-salt-software',
user_name='System',
force=True,
action_type='Execute',
cmd=cmd,
arguments=' '.join(arguments),
start_in=cache_path,
trigger_type='Once',
start_date='1975-01-01',
start_time='01:00',
ac_only=False,
stop_if_on_batteries=False)
# Run Scheduled Task
if not __salt__['task.run_wait'](name='update-salt-software'):
log.error('Failed to remove {0}'.format(target))
log.error('Scheduled Task failed to run')
ret[target] = {'uninstall status': 'failed'}
else:
# Build the install command
cmd = []
if pkginfo[version_num].get('msiexec'):
cmd.extend(['msiexec', '/x', expanded_cached_pkg])
# Create Scheduled Task
__salt__['task.create_task'](name='update-salt-software',
user_name='System',
force=True,
action_type='Execute',
cmd=cmd,
arguments=' '.join(arguments),
start_in=cache_path,
trigger_type='Once',
start_date='1975-01-01',
start_time='01:00',
ac_only=False,
stop_if_on_batteries=False)
# Run Scheduled Task
if not __salt__['task.run_wait'](name='update-salt-software'):
log.error('Failed to remove %s', pkgname)
log.error('Scheduled Task failed to run')
ret[pkgname] = {'uninstall status': 'failed'}
else:
cmd.append(expanded_cached_pkg)
cmd.extend(salt.utils.shlex_split(uninstall_flags))
# Launch the command
result = __salt__['cmd.run_all'](cmd,
output_loglevel='trace',
python_shell=False,
redirect_stderr=True)
if not result['retcode']:
ret[target] = {'uninstall status': 'success'}
changed.append(target)
else:
log.error('Failed to remove {0}'.format(target))
log.error('retcode {0}'.format(result['retcode']))
log.error('uninstaller output: {0}'.format(result['stdout']))
ret[target] = {'uninstall status': 'failed'}
# Build the install command
cmd = []
if pkginfo[target].get('msiexec'):
cmd.extend(['msiexec', '/x', expanded_cached_pkg])
else:
cmd.append(expanded_cached_pkg)
cmd.extend(salt.utils.shlex_split(uninstall_flags))
# Launch the command
result = __salt__['cmd.run_all'](cmd,
output_loglevel='trace',
python_shell=False,
redirect_stderr=True)
if not result['retcode']:
ret[pkgname] = {'uninstall status': 'success'}
changed.append(pkgname)
else:
log.error('Failed to remove %s', pkgname)
log.error('retcode %s', result['retcode'])
log.error('uninstaller output: %s', result['stdout'])
ret[pkgname] = {'uninstall status': 'failed'}
# Get a new list of installed software
new = list_pkgs(saltenv=saltenv)
tries = 0
difference = salt.utils.compare_dicts(old, new)
# Take the "old" package list and convert the values to strings in
# preparation for the comparison below.
__salt__['pkg_resource.stringify'](old)
difference = salt.utils.compare_dicts(old, new)
tries = 0
while not all(name in difference for name in changed) and tries <= 1000:
new = list_pkgs(saltenv=saltenv)
difference = salt.utils.compare_dicts(old, new)


@ -171,7 +171,7 @@ def reboot(timeout=5, in_seconds=False, wait_for_reboot=False, # pylint: disabl
reports a pending reboot. To optionally reboot in a highstate, consider
using the reboot state instead of this module.
-:return: True if successful
+:return: True if successful (a reboot will occur)
:rtype: bool
CLI Example:
@ -250,7 +250,7 @@ def shutdown(message=None, timeout=5, force_close=True, reboot=False, # pylint:
system reports a pending reboot. To optionally shutdown in a highstate,
consider using the shutdown state instead of this module.
-:return: True if successful
+:return: True if successful (a shutdown or reboot will occur)
:rtype: bool
CLI Example:
@ -262,7 +262,7 @@ def shutdown(message=None, timeout=5, force_close=True, reboot=False, # pylint:
timeout = _convert_minutes_seconds(timeout, in_seconds)
if only_on_pending_reboot and not get_pending_reboot():
-    return True
+    return False
if message and not isinstance(message, str):
message = message.decode('utf-8')


@ -25,6 +25,7 @@ Example output::
'''
from __future__ import absolute_import
# Import python libs
import collections
from numbers import Number
# Import salt libs
@ -127,7 +128,14 @@ class NestDisplay(object):
'----------'
)
)
-for key in sorted(ret):
+# respect key ordering of ordered dicts
+if isinstance(ret, collections.OrderedDict):
+    keys = ret.keys()
+else:
+    keys = sorted(ret)
+for key in keys:
val = ret[key]
out.append(
self.ustring(
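The effect of the outputter change is easy to see in isolation; a sketch of the key-selection branch added above (the helper name is illustrative):

```python
import collections

def display_keys(ret):
    # respect insertion order for OrderedDicts, sort everything else,
    # mirroring the branch added in the nested outputter
    if isinstance(ret, collections.OrderedDict):
        return list(ret.keys())
    return sorted(ret)
```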


@ -362,11 +362,14 @@ class Pillar(object):
opts['grains'] = {}
else:
opts['grains'] = grains
-if not opts.get('environment'):
-    opts['environment'] = saltenv
+# Allow minion/CLI saltenv/pillarenv to take precedence over master
+opts['environment'] = saltenv \
+    if saltenv is not None \
+    else opts.get('environment')
+opts['pillarenv'] = pillarenv \
+    if pillarenv is not None \
+    else opts.get('pillarenv')
opts['id'] = self.minion_id
-if not opts.get('pillarenv'):
-    opts['pillarenv'] = pillarenv
if opts['state_top'].startswith('salt://'):
opts['state_top'] = opts['state_top']
elif opts['state_top'].startswith('/'):


@ -1,6 +1,11 @@
# -*- coding: utf-8 -*-
'''
-Recursively iterate over directories and add all files as Pillar data
+``File_tree`` is an external pillar that allows
+values from all files in a directory tree to be imported as Pillar data.
Note this is an external pillar, and is subject to the rules and constraints
governing external pillars detailed here: :ref:`external-pillars`.
.. versionadded:: 2015.5.0


@ -60,6 +60,9 @@ def _walk_through(job_dir):
for top in os.listdir(job_dir):
t_path = os.path.join(job_dir, top)
if not os.path.exists(t_path):
continue
for final in os.listdir(t_path):
load_path = os.path.join(t_path, final, LOAD_P)
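The added ``os.path.exists`` check guards against a job directory vanishing between ``os.listdir`` and the nested listing. A standalone sketch of the same race guard (the helper name is illustrative):

```python
import os

def existing_entries(job_dir):
    # re-check each entry before descending; job dirs can be purged mid-walk
    found = []
    for top in os.listdir(job_dir):
        t_path = os.path.join(job_dir, top)
        if not os.path.exists(t_path):
            continue
        found.append(t_path)
    return sorted(found)
```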


@ -3,10 +3,21 @@
Return data to a PostgreSQL server with json data stored in Pg's jsonb data type
:maintainer: Dave Boucha <dave@saltstack.com>, Seth House <shouse@saltstack.com>, C. R. Oldham <cr@saltstack.com>
-:maturity: new
+:maturity: Stable
:depends: python-psycopg2
:platform: all
.. note::
There are three PostgreSQL returners. Any can function as an external
:ref:`master job cache <external-master-cache>`. but each has different
features. SaltStack recommends
:mod:`returners.pgjsonb <salt.returners.pgjsonb>` if you are working with
a version of PostgreSQL that has the appropriate native binary JSON types.
Otherwise, review
:mod:`returners.postgres <salt.returners.postgres>` and
:mod:`returners.postgres_local_cache <salt.returners.postgres_local_cache>`
to see which module best suits your particular needs.
To enable this returner, the minion will need the python client for PostgreSQL
installed and the following values configured in the minion or master
config. These are the defaults:

View File

@ -3,11 +3,15 @@
Return data to a PostgreSQL server
.. note::
There are three PostgreSQL returners. Any of them can function as an external
:ref:`master job cache <external-master-cache>`, but each has different
features. SaltStack recommends
:mod:`returners.pgjsonb <salt.returners.pgjsonb>` if you are working with
a version of PostgreSQL that has the appropriate native binary JSON types.
Otherwise, review
:mod:`returners.postgres <salt.returners.postgres>` and
:mod:`returners.postgres_local_cache <salt.returners.postgres_local_cache>`
is recommended instead of this module when using PostgreSQL as a
:ref:`master job cache <external-job-cache>`. These two modules
provide different functionality so you should compare each to see which
module best suits your particular needs.
to see which module best suits your particular needs.
:maintainer: None
:maturity: New

View File

@ -4,14 +4,18 @@ Use a postgresql server for the master job cache. This helps the job cache to
cope with scale.
.. note::
:mod:`returners.postgres <salt.returners.postgres>` is also available if
you are not using PostgreSQL as a :ref:`master job cache
<external-job-cache>`. These two modules provide different
functionality so you should compare each to see which module best suits
your particular needs.
There are three PostgreSQL returners. Any of them can function as an external
:ref:`master job cache <external-job-cache>`, but each has different
features. SaltStack recommends
:mod:`returners.pgjsonb <salt.returners.pgjsonb>` if you are working with
a version of PostgreSQL that has the appropriate native binary JSON types.
Otherwise, review
:mod:`returners.postgres <salt.returners.postgres>` and
:mod:`returners.postgres_local_cache <salt.returners.postgres_local_cache>`
to see which module best suits your particular needs.
:maintainer: gjredelinghuys@gmail.com
:maturity: New
:maturity: Stable
:depends: psycopg2
:platform: all

View File

@ -3199,9 +3199,9 @@ def filter_by(lookup_dict,
# lookup_dict keys
for each in val if isinstance(val, list) else [val]:
for key in sorted(lookup_dict):
if key not in six.string_types:
key = str(key)
if fnmatch.fnmatchcase(each, key):
test_key = key if isinstance(key, six.string_types) else str(key)
test_each = each if isinstance(each, six.string_types) else str(each)
if fnmatch.fnmatchcase(test_each, test_key):
ret = lookup_dict[key]
break
if ret is not None:
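The corrected logic above matters because ``fnmatch.fnmatchcase`` requires string arguments, and the original ``key not in six.string_types`` check was wrong: it tested membership in a tuple of type objects rather than the key's type. A self-contained sketch of the coercion, using plain ``str`` instead of ``six.string_types`` (names are illustrative):

```python
import fnmatch

def match_lookup(value, lookup_dict):
    """Return the first lookup_dict entry whose key glob-matches value,
    coercing both sides to str so non-string keys and values work."""
    for key in sorted(lookup_dict, key=str):
        test_key = key if isinstance(key, str) else str(key)
        test_val = value if isinstance(value, str) else str(value)
        if fnmatch.fnmatchcase(test_val, test_key):
            return lookup_dict[key]
    return None
```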

View File

@ -154,11 +154,12 @@ class RootsTest(integration.ModuleCase):
self.assertIn('empty_dir', ret)
def test_symlink_list(self):
with patch.dict(roots.__opts__, {'file_roots': self.master_opts['file_roots'],
'fileserver_ignoresymlinks': False,
'fileserver_followsymlinks': False,
'file_ignore_regex': False,
'file_ignore_glob': False}):
with patch.dict(roots.__opts__, {'cachedir': self.master_opts['cachedir'],
'file_roots': self.master_opts['file_roots'],
'fileserver_ignoresymlinks': False,
'fileserver_followsymlinks': False,
'file_ignore_regex': False,
'file_ignore_glob': False}):
ret = roots.symlink_list({'saltenv': 'base'})
self.assertDictEqual(ret, {'dest_sym': 'source_sym'})

View File

@ -21,10 +21,14 @@ class SaltRunnerTest(integration.ShellCase):
'''
def test_salt_cmd(self):
'''
salt.cmd
test return values of salt.cmd
'''
ret = self.run_run_plus('salt.cmd', 'test.ping')
self.assertTrue(ret.get('out')[0])
out_ret = ret.get('out')[0]
return_ret = ret.get('return')
self.assertEqual(out_ret, 'True')
self.assertTrue(return_ret)
if __name__ == '__main__':

View File

@ -56,6 +56,7 @@ class DocTestCase(TestCase):
or key.endswith('doc/conf.py') \
or key.endswith('/conventions/documentation.rst') \
or key.endswith('doc/topics/releases/2016.11.2.rst') \
or key.endswith('doc/topics/releases/2016.11.3.rst') \
or key.endswith('doc/topics/releases/2016.3.5.rst'):
continue

View File

@ -6,15 +6,11 @@
# Import Python libs
from __future__ import absolute_import
# Import Salt Libs
from salt.modules import cp
from salt.utils import templates
from salt.exceptions import CommandExecutionError
# Import Salt Testing Libs
from salttesting import skipIf, TestCase
from salttesting.helpers import ensure_in_syspath
from salttesting.mock import (
Mock,
MagicMock,
mock_open,
patch,
@ -24,6 +20,13 @@ from salttesting.mock import (
ensure_in_syspath('../../')
# Import Salt Libs
from salt.modules import cp
from salt.utils import templates
from salt.exceptions import CommandExecutionError
import salt.utils
import salt.transport
# Globals
cp.__salt__ = {}
cp.__opts__ = {}
@ -130,6 +133,33 @@ class CpTestCase(TestCase):
self.assertEqual(cp.push_dir(path), ret)
@patch(
'salt.modules.cp.os.path',
MagicMock(isfile=Mock(return_value=True), wraps=cp.os.path))
@patch.multiple(
'salt.modules.cp',
_auth=MagicMock(**{'return_value.gen_token.return_value': 'token'}),
__opts__={'id': 'abc', 'file_buffer_size': 10})
@patch('salt.utils.fopen', mock_open(read_data='content'))
@patch('salt.transport.Channel.factory', MagicMock())
def test_push(self):
'''
Test if push works with good posix path.
'''
response = cp.push('/saltines/test.file')
self.assertEqual(response, True)
self.assertEqual(salt.utils.fopen().read.call_count, 2)
salt.transport.Channel.factory({}).send.assert_called_once_with(
dict(
loc=salt.utils.fopen().tell(),
cmd='_file_recv',
tok='token',
path=['saltines', 'test.file'],
data='', # data is empty here because load['data'] is overwritten
id='abc'
)
)
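The ``fopen().read.call_count`` assertion above works because ``mock_open`` hands out the same handle mock for every call, so read counts accumulate across uses. A minimal stdlib illustration with ``unittest.mock`` (not Salt's wrapped mocks; the path is arbitrary since ``open`` is patched):

```python
from unittest.mock import mock_open, patch

m = mock_open(read_data='content')
with patch('builtins.open', m):
    with open('/any/path') as f:   # path is irrelevant: open() is mocked
        first = f.read()
    with open('/any/path') as f:
        f.read()

# mock_open returns one shared handle mock, so both read()
# calls are counted on the same object
assert m().read.call_count == 2
```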
if __name__ == '__main__':
from integration import run_tests

View File

@ -6,7 +6,6 @@ import os
# Import Salt Testing Libs
from salttesting.unit import skipIf
from salttesting.case import TestCase
from salttesting.helpers import ensure_in_syspath
ensure_in_syspath('../../..')
@ -37,17 +36,6 @@ except ImportError:
from unit.utils.event_test import eventpublisher_process, event, SOCK_DIR # pylint: disable=import-error
@skipIf(HAS_TORNADO is False, 'The tornado package needs to be installed')
class TestUtils(TestCase):
def test_batching(self):
self.assertEqual(1, saltnado.get_batch_size('1', 10))
self.assertEqual(2, saltnado.get_batch_size('2', 10))
self.assertEqual(1, saltnado.get_batch_size('10%', 10))
# TODO: exception in this case? The core doesn't, so we shouldn't
self.assertEqual(11, saltnado.get_batch_size('110%', 10))
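The removed assertions encode the expected semantics of ``get_batch_size``: an integer string is taken literally, a percentage is applied to the pool size, and values over 100% are deliberately not clamped (matching core behavior). A hedged reimplementation of those semantics — an illustrative sketch, not Salt's actual function:

```python
def batch_size(batch, num_minions):
    """Interpret a batch spec: '2' means 2 minions, '10%' means 10%
    of the pool (at least 1). Over-100% values are not clamped."""
    if batch.endswith('%'):
        return max(1, int(num_minions * float(batch[:-1]) / 100))
    return int(batch)
```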
@skipIf(HAS_TORNADO is False, 'The tornado package needs to be installed')
class TestSaltnadoUtils(AsyncTestCase):
def test_any_future(self):
@ -153,7 +141,3 @@ class TestEventListener(AsyncTestCase):
self.assertTrue(event_future.done())
with self.assertRaises(saltnado.TimeoutException):
event_future.result()
if __name__ == '__main__':
from integration import run_tests # pylint: disable=import-error
run_tests(TestUtils, needs_daemon=False)