Merge pull request #38759 from rallytime/merge-2016.11

[2016.11] Merge forward from 2016.3 to 2016.11
This commit is contained in:
Mike Place 2017-01-17 08:22:00 -07:00 committed by GitHub
commit 751e14c523
15 changed files with 369 additions and 364 deletions

View File

@ -404,6 +404,17 @@
# Pass in an alternative location for the salt-ssh roster file
#roster_file: /etc/salt/roster
# Define a location for roster files so they can be chosen when using Salt API.
# An administrator can place roster files into these locations. Then when
# calling Salt API, parameter 'roster_file' should contain a relative path to
# these locations. That is, "roster_file=/foo/roster" will be resolved as
# "/etc/salt/roster.d/foo/roster" etc. This feature prevents passing insecure
# custom rosters through the Salt API.
#
#rosters:
# - /etc/salt/roster.d
# - /opt/salt/some/more/rosters
# The log file of the salt-ssh command:
#ssh_log_file: /var/log/salt/ssh
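A minimal sketch of the lookup described in the comment above (illustrative only; the
real implementation is the ``get_roster_file()`` change later in this diff, and the
helper name ``resolve_roster`` is hypothetical):

.. code-block:: python

    import os

    def resolve_roster(roster_file, rosters):
        '''Return the first configured roster location containing roster_file.'''
        relative = roster_file.strip('/')            # "/foo/roster" -> "foo/roster"
        for location in rosters:
            candidate = os.path.join(location, relative)
            if os.path.isfile(candidate):
                return candidate
        raise IOError('No roster file found')

    # resolve_roster('/foo/roster', ['/etc/salt/roster.d', '/opt/salt/some/more/rosters'])
    # returns '/etc/salt/roster.d/foo/roster' if that file exists.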

View File

@ -608,7 +608,9 @@ The directory where Unix sockets will be kept.
Default: ``''``
Backup files replaced by file.managed and file.recurse under cachedir.
Make backups of files replaced by ``file.managed`` and ``file.recurse`` state modules under
:conf_minion:`cachedir` in the ``file_backup`` subdirectory, preserving the original paths.
Refer to :ref:`File State Backups documentation <file-state-backups>` for more details.
.. code-block:: yaml

View File

@ -139,13 +139,13 @@ Running Test Subsections
Instead of running the entire test suite all at once, which can take a long time,
there are several ways to run only specific groups of tests or individual tests:
* Run unit tests only: ``./tests/runtests.py --unit-tests``
* Run unit and integration tests for states: ``./tests/runtests.py --state``
* Run integration tests for an individual module: ``./tests/runtests.py -n integration.modules.virt``
* Run unit tests for an individual module: ``./tests/runtests.py -n unit.modules.virt_test``
* Run :ref:`unit tests only<running-unit-tests-no-daemons>`: ``python tests/runtests.py --unit-tests``
* Run unit and integration tests for states: ``python tests/runtests.py --state``
* Run integration tests for an individual module: ``python tests/runtests.py -n integration.modules.virt``
* Run unit tests for an individual module: ``python tests/runtests.py -n unit.modules.virt_test``
* Run an individual test by using the class and test name (this example is for the
``test_default_kvm_profile`` test in the ``integration.module.virt``):
``./tests/runtests.py -n integration.module.virt.VirtTest.test_default_kvm_profile``
``python tests/runtests.py -n integration.module.virt.VirtTest.test_default_kvm_profile``
For more specific examples of how to run various test subsections or individual
tests, please see the :ref:`Test Selection Options <test-selection-options>`
@ -163,14 +163,14 @@ Since the unit tests do not require a master or minion to execute, it is often u
run unit tests individually, or as a whole group, without having to start up the integration testing
daemons. Starting up the master, minion, and syndic daemons takes a lot of time before the tests can
even start running and is unnecessary to run unit tests. To run unit tests without invoking the
integration test daemons, simple remove the ``/tests`` portion of the ``runtests.py`` command:
integration test daemons, simply run the ``runtests.py`` script with the ``--unit`` argument:
.. code-block:: bash
./runtests.py --unit
python tests/runtests.py --unit
All of the other options to run individual tests, entire classes of tests, or entire test modules still
apply.
All of the other options to run individual tests, entire classes of tests, or
entire test modules still apply.
Running Destructive Integration Tests
@ -191,13 +191,14 @@ successfully. Therefore, running destructive tests should be done with caution.
.. note::
Running destructive tests will change the underlying system. Use caution when running destructive tests.
Running destructive tests will change the underlying system.
Use caution when running destructive tests.
To run tests marked as destructive, set the ``--run-destructive`` flag:
.. code-block:: bash
./tests/runtests.py --run-destructive
python tests/runtests.py --run-destructive
Running Cloud Provider Tests
@ -259,13 +260,13 @@ Here's a simple usage example:
.. code-block:: bash
tests/runtests.py --docked=ubuntu-12.04 -v
python tests/runtests.py --docked=ubuntu-12.04 -v
The full `docker`_ container repository can also be provided:
.. code-block:: bash
tests/runtests.py --docked=salttest/ubuntu-12.04 -v
python tests/runtests.py --docked=salttest/ubuntu-12.04 -v
The SaltStack team is creating some containers which will have the necessary

View File

@ -38,7 +38,7 @@ simply by creating a data structure. (And this is exactly how much of Salt's
own internals work!)
.. autoclass:: salt.netapi.NetapiClient
:members: local, local_async, local_batch, local_subset, ssh, ssh_async,
:members: local, local_async, local_subset, ssh, ssh_async,
runner, runner_async, wheel, wheel_async
.. toctree::

View File

@ -0,0 +1,31 @@
============================
Salt 2015.8.13 Release Notes
============================
Version 2015.8.13 is a bugfix release for :ref:`2015.8.0 <release-2015-8-0>`.
Changes for v2015.8.12..v2015.8.13
----------------------------------
Extended changelog courtesy of Todd Stansell (https://github.com/tjstansell/salt-changelogs):
*Generated at: 2017-01-09T21:17:06Z*
Statistics:
- Total Merges: **3**
- Total Issue references: **3**
- Total PR references: **5**
Changes:
* 3428232 Clean up tests and docs for batch execution
* 3d8f3d1 Remove batch execution from NetapiClient and Saltnado
* 97b0f64 Lintfix
* d151666 Add explanation comment
* 62f2c87 Add docstring
* 9b0a786 Explain what it is about and how to configure that
* 5ea3579 Pick up a specified roster file from the configured locations
* 3a8614c Disable custom rosters in API
* c0e5a11 Add roster disable flag

View File

@ -58,7 +58,7 @@ Unfortunately, it can lead to code that looks like the following.
{% endfor %}
This is an example from the author's salt formulae demonstrating misuse of jinja.
Aside from being difficult to read and maintian,
Aside from being difficult to read and maintain,
accessing the logic it contains from a non-Jinja renderer,
while probably possible, is a significant barrier!
@ -158,6 +158,6 @@ Conclusion
----------
That was... surprisingly straight-forward.
Now the logic is now available in every renderer, instead of just Jinja.
Now the logic is available in every renderer, instead of just Jinja.
Best of all, it can be maintained in Python,
which is a whole lot easier than Jinja.

View File

@ -11,7 +11,7 @@ idna==2.1
ioflo==1.5.5
ioloop==0.1a0
ipaddress==1.0.16
Jinja2==2.8
Jinja2==2.9.4
libnacl==1.4.5
lxml==3.6.0
Mako==1.0.4

View File

@ -22,7 +22,8 @@ class SSHClient(object):
'''
def __init__(self,
c_path=os.path.join(syspaths.CONFIG_DIR, 'master'),
mopts=None):
mopts=None,
disable_custom_roster=False):
if mopts:
self.opts = mopts
else:
@ -35,6 +36,9 @@ class SSHClient(object):
)
self.opts = salt.config.client_config(c_path)
# Salt API should never offer a custom roster!
self.opts['__disable_custom_roster'] = disable_custom_roster
def _prep_ssh(
self,
tgt,

View File

@ -9,7 +9,7 @@
#
# BUGS: https://github.com/saltstack/salt-bootstrap/issues
#
# COPYRIGHT: (c) 2012-2016 by the SaltStack Team, see AUTHORS.rst for more
# COPYRIGHT: (c) 2012-2017 by the SaltStack Team, see AUTHORS.rst for more
# details.
#
# LICENSE: Apache 2.0
@ -18,7 +18,7 @@
#======================================================================================================================
set -o nounset # Treat unset variables as an error
__ScriptVersion="2016.10.25"
__ScriptVersion="2017.01.10"
__ScriptName="bootstrap-salt.sh"
__ScriptFullName="$0"
@ -309,9 +309,10 @@ __usage() {
-F Allow copied files to overwrite existing (config, init.d, etc)
-K If set, keep the temporary files in the temporary directories specified
with -c and -k
-C Only run the configuration function. This option automatically bypasses
any installation. Implies -F (forced overwrite). To overwrite master or
syndic configs, -M or -S, respectively, must also be specified.
-C Only run the configuration function. Implies -F (forced overwrite).
To overwrite Master or Syndic configs, -M or -S, respectively, must
also be specified. Salt installation will be omitted, but some of the
dependencies could still be installed to write the configuration with -j or -J.
-A Pass the salt-master DNS name or IP. This will be stored under
\${BS_SALT_ETC_DIR}/minion.d/99-master-address.conf
-i Pass the salt-minion id. This will be stored under
@ -342,12 +343,12 @@ __usage() {
repo.saltstack.com. The option passed with -R replaces the
"repo.saltstack.com". If -R is passed, -r is also set. Currently only
works on CentOS/RHEL based distributions.
-J Replace the Master config file with data passed in as a json string. If
-J Replace the Master config file with data passed in as a JSON string. If
a Master config file is found, a reasonable effort will be made to save
the file with a ".bak" extension. If used in conjunction with -C or -F,
no ".bak" file will be created as either of those options will force
a complete overwrite of the file.
-j Replace the Minion config file with data passed in as a json string. If
-j Replace the Minion config file with data passed in as a JSON string. If
a Minion config file is found, a reasonable effort will be made to save
the file with a ".bak" extension. If used in conjunction with -C or -F,
no ".bak" file will be created as either of those options will force
@ -475,7 +476,7 @@ fi
# Check that we're installing or configuring a master if we're being passed a master config json dict
if [ "$_CUSTOM_MASTER_CONFIG" != "null" ]; then
if [ "$_INSTALL_MASTER" -eq $BS_FALSE ] && [ "$_CONFIG_ONLY" -eq $BS_FALSE ]; then
echoerror "Don't pass a master config json dict (-J) if no master is going to be bootstrapped or configured."
echoerror "Don't pass a master config JSON dict (-J) if no master is going to be bootstrapped or configured."
exit 1
fi
fi
@ -483,7 +484,7 @@ fi
# Check that we're installing or configuring a minion if we're being passed a minion config json dict
if [ "$_CUSTOM_MINION_CONFIG" != "null" ]; then
if [ "$_INSTALL_MINION" -eq $BS_FALSE ] && [ "$_CONFIG_ONLY" -eq $BS_FALSE ]; then
echoerror "Don't pass a minion config json dict (-j) if no minion is going to be bootstrapped or configured."
echoerror "Don't pass a minion config JSON dict (-j) if no minion is going to be bootstrapped or configured."
exit 1
fi
fi
@ -850,7 +851,7 @@ __derive_debian_numeric_version() {
# DESCRIPTION: Strip single or double quotes from the provided string.
#----------------------------------------------------------------------------------------------------------------------
__unquote_string() {
echo "$*" | sed -e "s/^\([\"']\)\(.*\)\1\$/\2/g"
echo "$*" | sed -e "s/^\([\"\']\)\(.*\)\1\$/\2/g"
}
#--- FUNCTION -------------------------------------------------------------------------------------------------------
@ -924,6 +925,8 @@ __gather_linux_system_info() {
DISTRO_NAME=$(lsb_release -si)
if [ "${DISTRO_NAME}" = "Scientific" ]; then
DISTRO_NAME="Scientific Linux"
elif [ "$(echo "$DISTRO_NAME" | grep ^CloudLinux)" != "" ]; then
DISTRO_NAME="Cloud Linux"
elif [ "$(echo "$DISTRO_NAME" | grep ^RedHat)" != "" ]; then
# Let's convert 'CamelCased' to 'Camel Cased'
n=$(__camelcase_split "$DISTRO_NAME")
@ -1037,6 +1040,9 @@ __gather_linux_system_info() {
n="Arch Linux"
v="" # Arch Linux does not provide a version.
;;
cloudlinux )
n="Cloud Linux"
;;
debian )
n="Debian"
v=$(__derive_debian_numeric_version "$v")
@ -1195,12 +1201,6 @@ __ubuntu_derivatives_translation() {
# Mappings
trisquel_6_ubuntu_base="12.04"
linuxmint_13_ubuntu_base="12.04"
linuxmint_14_ubuntu_base="12.10"
#linuxmint_15_ubuntu_base="13.04"
# Bug preventing add-apt-repository from working on Mint 15:
# https://bugs.launchpad.net/linuxmint/+bug/1198751
linuxmint_16_ubuntu_base="13.10"
linuxmint_17_ubuntu_base="14.04"
linuxmint_18_ubuntu_base="16.04"
linaro_12_ubuntu_base="12.04"
@ -1258,15 +1258,12 @@ __ubuntu_codename_translation() {
"14")
DISTRO_CODENAME="trusty"
;;
"15")
if [ -n "$_april" ]; then
DISTRO_CODENAME="vivid"
else
DISTRO_CODENAME="wily"
fi
;;
"16")
DISTRO_CODENAME="xenial"
if [ "$_april" ]; then
DISTRO_CODENAME="xenial"
else
DISTRO_CODENAME="yakkety"
fi
;;
*)
DISTRO_CODENAME="trusty"
@ -1453,6 +1450,14 @@ if ([ "${DISTRO_NAME_L}" != "ubuntu" ] && [ $_PIP_ALL -eq $BS_TRUE ]);then
exit 1
fi
# Starting from Ubuntu 16.10, gnupg-curl has been renamed to gnupg1-curl.
GNUPG_CURL="gnupg-curl"
if ([ "${DISTRO_NAME_L}" = "ubuntu" ] && [ "${DISTRO_VERSION}" = "16.10" ]); then
GNUPG_CURL="gnupg1-curl"
fi
#--- FUNCTION -------------------------------------------------------------------------------------------------------
# NAME: __function_defined
# DESCRIPTION: Checks if a function is defined within this script's scope
@ -1497,7 +1502,7 @@ __apt_get_upgrade_noinput() {
__apt_key_fetch() {
url=$1
__apt_get_install_noinput gnupg-curl || return 1
__apt_get_install_noinput ${GNUPG_CURL} || return 1
# shellcheck disable=SC2086
apt-key adv ${_GPG_ARGS} --fetch-keys "$url"; return $?
@ -1561,6 +1566,10 @@ __yum_install_noinput() {
__git_clone_and_checkout() {
echodebug "Installed git version: $(git --version | awk '{ print $3 }')"
# Turn off SSL verification if -I flag was set for insecure downloads
if [ "$_INSECURE_DL" -eq $BS_TRUE ]; then
export GIT_SSL_NO_VERIFY=1
fi
__SALT_GIT_CHECKOUT_PARENT_DIR=$(dirname "${_SALT_GIT_CHECKOUT_DIR}" 2>/dev/null)
__SALT_GIT_CHECKOUT_PARENT_DIR="${__SALT_GIT_CHECKOUT_PARENT_DIR:-/tmp/git}"
@ -1689,7 +1698,12 @@ __check_end_of_life_versions() {
# Ubuntu versions not supported
#
# < 12.04
if [ "$DISTRO_MAJOR_VERSION" -lt 12 ]; then
# 13.x, 15.x
# 12.10, 14.10
if [ "$DISTRO_MAJOR_VERSION" -lt 12 ] || \
[ "$DISTRO_MAJOR_VERSION" -eq 13 ] || \
[ "$DISTRO_MAJOR_VERSION" -eq 15 ] || \
([ "$DISTRO_MAJOR_VERSION" -lt 16 ] && [ "$DISTRO_MINOR_VERSION" -eq 10 ]); then
echoerror "End of life distributions are not supported."
echoerror "Please consider upgrading to the next stable. See:"
echoerror " https://wiki.ubuntu.com/Releases"
@ -1726,7 +1740,7 @@ __check_end_of_life_versions() {
fedora)
# Fedora lower than 18 are no longer supported
if [ "$DISTRO_MAJOR_VERSION" -lt 18 ]; then
if [ "$DISTRO_MAJOR_VERSION" -lt 23 ]; then
echoerror "End of life distributions are not supported."
echoerror "Please consider upgrading to the next stable. See:"
echoerror " https://fedoraproject.org/wiki/Releases"
@ -2284,49 +2298,30 @@ __enable_universe_repository() {
echodebug "Enabling the universe repository"
# Ubuntu versions higher than 12.04 do not live in the old repositories
if [ "$DISTRO_MAJOR_VERSION" -gt 12 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 12 ] && [ "$DISTRO_MINOR_VERSION" -gt 04 ]); then
add-apt-repository -y "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
elif [ "$DISTRO_MAJOR_VERSION" -lt 11 ] && [ "$DISTRO_MINOR_VERSION" -lt 10 ]; then
# Below Ubuntu 11.10, the -y flag to add-apt-repository is not supported
add-apt-repository "deb http://old-releases.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
else
add-apt-repository -y "deb http://old-releases.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
fi
add-apt-repository -y "deb http://old-releases.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
add-apt-repository -y "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) universe" || return 1
return 0
}
install_ubuntu_deps() {
if [ "$DISTRO_MAJOR_VERSION" -gt 12 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 12 ] && [ "$DISTRO_MINOR_VERSION" -eq 10 ]); then
if [ "$DISTRO_MAJOR_VERSION" -gt 12 ]; then
# Above Ubuntu 12.04 add-apt-repository is in a different package
__apt_get_install_noinput software-properties-common || return 1
else
__apt_get_install_noinput python-software-properties || return 1
fi
if [ $_DISABLE_REPOS -eq $BS_FALSE ]; then
__enable_universe_repository || return 1
# Versions starting with 2015.5.6 and 2015.8.1 are hosted at repo.saltstack.com
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|latest|archive\/)')" = "" ]; then
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|2016\.11|latest|archive\/)')" = "" ]; then
if [ "$DISTRO_MAJOR_VERSION" -lt 14 ]; then
echoinfo "Installing Python Requests/Chardet from Chris Lea's PPA repository"
if [ "$DISTRO_MAJOR_VERSION" -gt 11 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 11 ] && [ "$DISTRO_MINOR_VERSION" -gt 04 ]); then
# Above Ubuntu 11.04 add a -y flag
add-apt-repository -y "ppa:chris-lea/python-requests" || return 1
add-apt-repository -y "ppa:chris-lea/python-chardet" || return 1
add-apt-repository -y "ppa:chris-lea/python-urllib3" || return 1
add-apt-repository -y "ppa:chris-lea/python-crypto" || return 1
else
add-apt-repository "ppa:chris-lea/python-requests" || return 1
add-apt-repository "ppa:chris-lea/python-chardet" || return 1
add-apt-repository "ppa:chris-lea/python-urllib3" || return 1
add-apt-repository "ppa:chris-lea/python-crypto" || return 1
fi
add-apt-repository -y "ppa:chris-lea/python-requests" || return 1
add-apt-repository -y "ppa:chris-lea/python-chardet" || return 1
add-apt-repository -y "ppa:chris-lea/python-urllib3" || return 1
add-apt-repository -y "ppa:chris-lea/python-crypto" || return 1
fi
fi
@ -2337,7 +2332,7 @@ install_ubuntu_deps() {
# Minimal systems might not have upstart installed, install it
__PACKAGES="upstart"
if [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
__PACKAGES="${__PACKAGES} python2.7"
fi
if [ "$_VIRTUALENV_DIR" != "null" ]; then
@ -2349,6 +2344,9 @@ install_ubuntu_deps() {
# requests is still used by many salt modules
__PACKAGES="${__PACKAGES} python-requests"
# YAML module is used for generating custom master/minion configs
__PACKAGES="${__PACKAGES} python-yaml"
# Additionally install procps and pciutils which allows for Docker bootstraps. See 366#issuecomment-39666813
__PACKAGES="${__PACKAGES} procps pciutils"
@ -2365,7 +2363,7 @@ install_ubuntu_deps() {
}
install_ubuntu_stable_deps() {
if ([ "${_SLEEP}" -eq "${__DEFAULT_SLEEP}" ] && [ "$DISTRO_MAJOR_VERSION" -lt 15 ]); then
if [ "${_SLEEP}" -eq "${__DEFAULT_SLEEP}" ] && [ "$DISTRO_MAJOR_VERSION" -lt 16 ]; then
# The user did not pass a custom sleep value as an argument, let's increase the default value
echodebug "On Ubuntu systems we increase the default sleep value to 10."
echodebug "See https://github.com/saltstack/salt/issues/12248 for more info."
@ -2408,12 +2406,12 @@ install_ubuntu_stable_deps() {
fi
# Versions starting with 2015.5.6, 2015.8.1 and 2016.3.0 are hosted at repo.saltstack.com
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|latest|archive\/)')" != "" ]; then
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|2016\.11|latest|archive\/)')" != "" ]; then
# Workaround for latest non-LTS ubuntu
if [ "$DISTRO_MAJOR_VERSION" -eq 15 ]; then
if [ "$DISTRO_VERSION" = "16.10" ]; then
echowarn "Non-LTS Ubuntu detected, but stable packages requested. Trying packages from latest LTS release. You may experience problems."
UBUNTU_VERSION=14.04
UBUNTU_CODENAME=trusty
UBUNTU_VERSION=16.04
UBUNTU_CODENAME=xenial
else
UBUNTU_VERSION=$DISTRO_VERSION
UBUNTU_CODENAME=$DISTRO_CODENAME
@ -2439,12 +2437,7 @@ install_ubuntu_stable_deps() {
STABLE_PPA="saltstack/salt"
fi
if [ "$DISTRO_MAJOR_VERSION" -gt 11 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 11 ] && [ "$DISTRO_MINOR_VERSION" -gt 04 ]); then
# Above Ubuntu 11.04 add a -y flag
add-apt-repository -y "ppa:$STABLE_PPA" || return 1
else
add-apt-repository "ppa:$STABLE_PPA" || return 1
fi
add-apt-repository -y "ppa:$STABLE_PPA" || return 1
fi
apt-get update
@ -2456,24 +2449,17 @@ install_ubuntu_stable_deps() {
install_ubuntu_daily_deps() {
install_ubuntu_stable_deps || return 1
if [ "$DISTRO_MAJOR_VERSION" -ge 12 ]; then
# Above Ubuntu 11.10 add-apt-repository is in a different package
if [ "$DISTRO_MAJOR_VERSION" -gt 12 ]; then
__apt_get_install_noinput software-properties-common || return 1
else
# Ubuntu 12.04 needs python-software-properties to get add-apt-repository binary
__apt_get_install_noinput python-software-properties || return 1
fi
if [ $_DISABLE_REPOS -eq $BS_FALSE ]; then
__enable_universe_repository || return 1
# for anything up to and including 11.04 do not use the -y option
if [ "$DISTRO_MAJOR_VERSION" -gt 11 ] || ([ "$DISTRO_MAJOR_VERSION" -eq 11 ] && [ "$DISTRO_MINOR_VERSION" -gt 04 ]); then
# Above Ubuntu 11.04 add a -y flag
add-apt-repository -y ppa:saltstack/salt-daily || return 1
else
add-apt-repository ppa:saltstack/salt-daily || return 1
fi
add-apt-repository -y ppa:saltstack/salt-daily || return 1
apt-get update
fi
@ -2486,7 +2472,15 @@ install_ubuntu_daily_deps() {
install_ubuntu_git_deps() {
apt-get update
__apt_get_install_noinput git-core || return 1
if ! __check_command_exists git; then
__apt_get_install_noinput git-core || return 1
fi
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
__apt_get_install_noinput ca-certificates
fi
__git_clone_and_checkout || return 1
__PACKAGES=""
@ -2569,12 +2563,6 @@ install_ubuntu_git() {
}
install_ubuntu_stable_post() {
# Workaround for latest LTS packages on latest ubuntu. Normally packages on
# debian-based systems will automatically start the corresponding daemons
if [ "$DISTRO_MAJOR_VERSION" -lt 15 ]; then
return 0
fi
for fname in api master minion syndic; do
# Skip salt-api since the service should be opt-in and not necessarily started on boot
[ $fname = "api" ] && continue
@ -2607,7 +2595,7 @@ install_ubuntu_git_post() {
[ $fname = "minion" ] && [ "$_INSTALL_MINION" -eq $BS_FALSE ] && continue
[ $fname = "syndic" ] && [ "$_INSTALL_SYNDIC" -eq $BS_FALSE ] && continue
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
__copyfile "${_SALT_GIT_CHECKOUT_DIR}/pkg/deb/salt-${fname}.service" "/lib/systemd/system/salt-${fname}.service"
# Skip salt-api since the service should be opt-in and not necessarily started on boot
@ -2652,7 +2640,7 @@ install_ubuntu_restart_daemons() {
[ $_START_DAEMONS -eq $BS_FALSE ] && return
# Ensure upstart configs / systemd units are loaded
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
systemctl daemon-reload
elif [ -f /sbin/initctl ]; then
/sbin/initctl reload-configuration
@ -2667,7 +2655,7 @@ install_ubuntu_restart_daemons() {
[ $fname = "minion" ] && [ "$_INSTALL_MINION" -eq $BS_FALSE ] && continue
[ $fname = "syndic" ] && [ "$_INSTALL_SYNDIC" -eq $BS_FALSE ] && continue
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
echodebug "There's systemd support while checking salt-$fname"
systemctl stop salt-$fname > /dev/null 2>&1
systemctl start salt-$fname.service
@ -2711,7 +2699,7 @@ install_ubuntu_check_services() {
[ $fname = "master" ] && [ "$_INSTALL_MASTER" -eq $BS_FALSE ] && continue
[ $fname = "syndic" ] && [ "$_INSTALL_SYNDIC" -eq $BS_FALSE ] && continue
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 15 ]; then
if [ -f /bin/systemctl ] && [ "$DISTRO_MAJOR_VERSION" -ge 16 ]; then
__check_services_systemd salt-$fname || return 1
elif [ -f /sbin/initctl ] && [ -f /etc/init/salt-${fname}.conf ]; then
__check_services_upstart salt-$fname || return 1
@ -2755,6 +2743,9 @@ install_debian_deps() {
__PACKAGES="procps pciutils"
__PIP_PACKAGES=""
# YAML module is used for generating custom master/minion configs
__PACKAGES="${__PACKAGES} python-yaml"
# shellcheck disable=SC2086
__apt_get_install_noinput ${__PACKAGES} || return 1
@ -2817,7 +2808,7 @@ install_debian_7_deps() {
fi
# Versions starting with 2015.8.7 and 2016.3.0 are hosted at repo.saltstack.com
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.8|2016\.3|latest|archive\/201[5-6]\.)')" != "" ]; then
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.8|2016\.3|2016\.11|latest|archive\/201[5-6]\.)')" != "" ]; then
# amd64 is just a part of repository URI, 32-bit pkgs are hosted under the same location
SALTSTACK_DEBIAN_URL="${HTTP_VAL}://repo.saltstack.com/apt/debian/${DISTRO_MAJOR_VERSION}/${__REPO_ARCH}/${STABLE_REV}"
echo "deb $SALTSTACK_DEBIAN_URL wheezy main" > "/etc/apt/sources.list.d/saltstack.list"
@ -2841,6 +2832,9 @@ install_debian_7_deps() {
# Additionally install procps and pciutils which allows for Docker bootstraps. See 366#issuecomment-39666813
__PACKAGES='procps pciutils'
# YAML module is used for generating custom master/minion configs
__PACKAGES="${__PACKAGES} python-yaml"
# shellcheck disable=SC2086
__apt_get_install_noinput ${__PACKAGES} || return 1
@ -2896,7 +2890,7 @@ install_debian_8_deps() {
fi
# Versions starting with 2015.5.6, 2015.8.1 and 2016.3.0 are hosted at repo.saltstack.com
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|latest|archive\/201[5-6]\.)')" != "" ]; then
if [ "$(echo "$STABLE_REV" | egrep '^(2015\.5|2015\.8|2016\.3|2016\.11|latest|archive\/201[5-6]\.)')" != "" ]; then
SALTSTACK_DEBIAN_URL="${HTTP_VAL}://repo.saltstack.com/apt/debian/${DISTRO_MAJOR_VERSION}/${__REPO_ARCH}/${STABLE_REV}"
echo "deb $SALTSTACK_DEBIAN_URL jessie main" > "/etc/apt/sources.list.d/saltstack.list"
@ -2920,9 +2914,8 @@ install_debian_8_deps() {
# shellcheck disable=SC2086
__apt_get_install_noinput ${__PACKAGES} || return 1
if [ "$_UPGRADE_SYS" -eq $BS_TRUE ]; then
__apt_get_upgrade_noinput || return 1
fi
# YAML module is used for generating custom master/minion configs
__PACKAGES="${__PACKAGES} python-yaml"
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
@ -2938,10 +2931,14 @@ install_debian_git_deps() {
__apt_get_install_noinput git || return 1
fi
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
__apt_get_install_noinput ca-certificates
fi
__git_clone_and_checkout || return 1
__PACKAGES="libzmq3 libzmq3-dev lsb-release python-apt python-backports.ssl-match-hostname python-crypto"
__PACKAGES="${__PACKAGES} python-jinja2 python-msgpack python-requests python-tornado"
__PACKAGES="${__PACKAGES} python-jinja2 python-msgpack python-requests"
__PACKAGES="${__PACKAGES} python-tornado python-yaml python-zmq"
if [ "$_INSTALL_CLOUD" -eq $BS_TRUE ]; then
@ -2975,9 +2972,14 @@ install_debian_8_git_deps() {
__apt_get_install_noinput git || return 1
fi
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
__apt_get_install_noinput ca-certificates
fi
__git_clone_and_checkout || return 1
__PACKAGES='libzmq3 libzmq3-dev lsb-release python-apt python-crypto python-jinja2 python-msgpack python-requests python-yaml python-zmq'
__PACKAGES="libzmq3 libzmq3-dev lsb-release python-apt python-crypto python-jinja2 python-msgpack"
__PACKAGES="${__PACKAGES} python-requests python-systemd python-yaml python-zmq"
if [ "$_INSTALL_CLOUD" -eq $BS_TRUE ]; then
# Install python-libcloud if asked to
@ -3184,16 +3186,7 @@ install_debian_check_services() {
# Fedora Install Functions
#
FEDORA_PACKAGE_MANAGER="yum"
__fedora_get_package_manager() {
if [ "$DISTRO_MAJOR_VERSION" -ge 22 ] || __check_command_exists dnf; then
FEDORA_PACKAGE_MANAGER="dnf"
fi
}
install_fedora_deps() {
__fedora_get_package_manager
if [ $_DISABLE_REPOS -eq $BS_FALSE ]; then
if [ "$_ENABLE_EXTERNAL_ZMQ_REPOS" -eq $BS_TRUE ]; then
@ -3203,32 +3196,25 @@ install_fedora_deps() {
__install_saltstack_copr_salt_repository || return 1
fi
__PACKAGES="yum-utils PyYAML libyaml python-crypto python-jinja2 python-zmq"
if [ "$DISTRO_MAJOR_VERSION" -ge 23 ]; then
__PACKAGES="${__PACKAGES} python2-msgpack python2-requests"
else
__PACKAGES="${__PACKAGES} python-msgpack python-requests"
fi
__PACKAGES="yum-utils PyYAML libyaml python-crypto python-jinja2 python-zmq python2-msgpack python2-requests"
# shellcheck disable=SC2086
$FEDORA_PACKAGE_MANAGER install -y ${__PACKAGES} || return 1
dnf install -y ${__PACKAGES} || return 1
if [ "$_UPGRADE_SYS" -eq $BS_TRUE ]; then
$FEDORA_PACKAGE_MANAGER -y update || return 1
dnf -y update || return 1
fi
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
# shellcheck disable=SC2086
$FEDORA_PACKAGE_MANAGER install -y ${_EXTRA_PACKAGES} || return 1
dnf install -y ${_EXTRA_PACKAGES} || return 1
fi
return 0
}
install_fedora_stable() {
__fedora_get_package_manager
__PACKAGES=""
if [ "$_INSTALL_CLOUD" -eq $BS_TRUE ];then
@ -3245,7 +3231,7 @@ install_fedora_stable() {
fi
# shellcheck disable=SC2086
$FEDORA_PACKAGE_MANAGER install -y ${__PACKAGES} || return 1
dnf install -y ${__PACKAGES} || return 1
return 0
}
@ -3267,11 +3253,15 @@ install_fedora_stable_post() {
}
install_fedora_git_deps() {
__fedora_get_package_manager
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
dnf install -y ca-certificates || return 1
fi
install_fedora_deps || return 1
if ! __check_command_exists git; then
$FEDORA_PACKAGE_MANAGER install -y git || return 1
dnf install -y git || return 1
fi
__git_clone_and_checkout || return 1
@ -3299,7 +3289,7 @@ install_fedora_git_deps() {
fi
# shellcheck disable=SC2086
$FEDORA_PACKAGE_MANAGER install -y ${__PACKAGES} || return 1
dnf install -y ${__PACKAGES} || return 1
if [ "${__PIP_PACKAGES}" != "" ]; then
# shellcheck disable=SC2086,SC2090
@ -3449,7 +3439,13 @@ __install_saltstack_rhel_repository() {
repo_url="repo.saltstack.com"
fi
base_url="${HTTP_VAL}://${repo_url}/yum/redhat/\$releasever/\$basearch/${repo_rev}/"
# Cloud Linux $releasever = 7.x, which doesn't exist in repo.saltstack.com, so we need this to be "7"
if [ "${DISTRO_NAME}" = "Cloud Linux" ] && [ "${DISTRO_MAJOR_VERSION}" = "7" ]; then
base_url="${HTTP_VAL}://${repo_url}/yum/redhat/${DISTRO_MAJOR_VERSION}/\$basearch/${repo_rev}/"
else
base_url="${HTTP_VAL}://${repo_url}/yum/redhat/\$releasever/\$basearch/${repo_rev}/"
fi
fetch_url="${HTTP_VAL}://${repo_url}/yum/redhat/${DISTRO_MAJOR_VERSION}/${CPU_ARCH_L}/${repo_rev}/"
if [ "${DISTRO_MAJOR_VERSION}" -eq 5 ]; then
@ -3528,14 +3524,23 @@ install_centos_stable_deps() {
__PACKAGES="yum-utils chkconfig"
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Also installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
__PACKAGES="${__PACKAGES} ${_EXTRA_PACKAGES}"
# YAML module is used for generating custom master/minion configs
if [ "$DISTRO_MAJOR_VERSION" -eq 5 ]; then
__PACKAGES="${__PACKAGES} python26-PyYAML"
else
__PACKAGES="${__PACKAGES} PyYAML"
fi
# shellcheck disable=SC2086
__yum_install_noinput ${__PACKAGES} || return 1
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
# shellcheck disable=SC2086
__yum_install_noinput ${_EXTRA_PACKAGES} || return 1
fi
return 0
}
@ -3574,7 +3579,7 @@ install_centos_stable_post() {
[ $fname = "syndic" ] && [ "$_INSTALL_SYNDIC" -eq $BS_FALSE ] && continue
if [ -f /bin/systemctl ]; then
/usr/systemctl is-enabled salt-${fname}.service > /dev/null 2>&1 || (
/bin/systemctl is-enabled salt-${fname}.service > /dev/null 2>&1 || (
/bin/systemctl preset salt-${fname}.service > /dev/null 2>&1 &&
/bin/systemctl enable salt-${fname}.service > /dev/null 2>&1
)
@ -3593,6 +3598,14 @@ install_centos_stable_post() {
}
install_centos_git_deps() {
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
if [ "$DISTRO_MAJOR_VERSION" -gt 5 ]; then
__yum_install_noinput ca-certificates || return 1
else
__yum_install_noinput "openssl.${CPU_ARCH_L}" || return 1
fi
fi
install_centos_stable_deps || return 1
if ! __check_command_exists git; then
@ -3604,10 +3617,10 @@ install_centos_git_deps() {
__PACKAGES=""
if [ "$DISTRO_MAJOR_VERSION" -eq 5 ]; then
__PACKAGES="${__PACKAGES} python26-PyYAML python26 python26-requests"
__PACKAGES="${__PACKAGES} python26-crypto python26-jinja2 python26-msgpack python26-tornado python26-zmq"
__PACKAGES="${__PACKAGES} python26 python26-crypto python26-jinja2 python26-msgpack python26-requests"
__PACKAGES="${__PACKAGES} python26-tornado python26-zmq"
else
__PACKAGES="${__PACKAGES} PyYAML python-crypto python-futures python-msgpack python-zmq python-jinja2"
__PACKAGES="${__PACKAGES} python-crypto python-futures python-msgpack python-zmq python-jinja2"
__PACKAGES="${__PACKAGES} python-requests python-tornado"
fi
@ -4082,6 +4095,69 @@ install_scientific_linux_check_services() {
#
#######################################################################################################################
#######################################################################################################################
#
# CloudLinux Install Functions
#
install_cloud_linux_stable_deps() {
install_centos_stable_deps || return 1
return 0
}
install_cloud_linux_git_deps() {
install_centos_git_deps || return 1
return 0
}
install_cloud_linux_testing_deps() {
install_centos_testing_deps || return 1
return 0
}
install_cloud_linux_stable() {
install_centos_stable || return 1
return 0
}
install_cloud_linux_git() {
install_centos_git || return 1
return 0
}
install_cloud_linux_testing() {
install_centos_testing || return 1
return 0
}
install_cloud_linux_stable_post() {
install_centos_stable_post || return 1
return 0
}
install_cloud_linux_git_post() {
install_centos_git_post || return 1
return 0
}
install_cloud_linux_testing_post() {
install_centos_testing_post || return 1
return 0
}
install_cloud_linux_restart_daemons() {
install_centos_restart_daemons || return 1
return 0
}
install_cloud_linux_check_services() {
install_centos_check_services || return 1
return 0
}
#
# End of CloudLinux Install Functions
#
#######################################################################################################################
#######################################################################################################################
#
# Amazon Linux AMI Install Functions
@ -4089,6 +4165,10 @@ install_scientific_linux_check_services() {
install_amazon_linux_ami_deps() {
# We need to install yum-utils before doing anything else when installing on
# Amazon Linux ECS-optimized images. See issue #974.
yum -y install yum-utils
ENABLE_EPEL_CMD=""
if [ $_DISABLE_REPOS -eq $BS_TRUE ]; then
ENABLE_EPEL_CMD="--enablerepo=${_EPEL_REPO}"
@ -4133,6 +4213,10 @@ _eof
}
install_amazon_linux_ami_git_deps() {
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
yum -y install ca-certificates || return 1
fi
install_amazon_linux_ami_deps || return 1
ENABLE_EPEL_CMD=""
@ -4238,6 +4322,9 @@ install_arch_linux_stable_deps() {
pacman-db-upgrade || return 1
fi
# YAML module is used for generating custom master/minion configs
pacman -Sy --noconfirm --needed python2-yaml
if [ "$_UPGRADE_SYS" -eq $BS_TRUE ]; then
pacman -Syyu --noconfirm --needed || return 1
fi
@ -4262,7 +4349,7 @@ install_arch_linux_git_deps() {
fi
pacman -R --noconfirm python2-distribute
pacman -Sy --noconfirm --needed python2-crypto python2-setuptools python2-jinja \
python2-markupsafe python2-msgpack python2-psutil python2-yaml \
python2-markupsafe python2-msgpack python2-psutil \
python2-pyzmq zeromq python2-requests python2-systemd || return 1
__git_clone_and_checkout || return 1
@ -4293,7 +4380,7 @@ install_arch_linux_stable() {
pacman -S --noconfirm --needed bash || return 1
pacman -Su --noconfirm || return 1
# We can now resume regular salt update
pacman -Syu --noconfirm salt-zmq || return 1
pacman -Syu --noconfirm salt || return 1
return 0
}
@ -4515,6 +4602,10 @@ install_freebsd_9_stable_deps() {
# shellcheck disable=SC2086
/usr/local/sbin/pkg install ${FROM_FREEBSD} -y swig || return 1
# YAML module is used for generating custom master/minion configs
# shellcheck disable=SC2086
/usr/local/sbin/pkg install ${FROM_FREEBSD} -y py27-yaml || return 1
if [ "${_EXTRA_PACKAGES}" != "" ]; then
echoinfo "Installing the following extra packages as requested: ${_EXTRA_PACKAGES}"
# shellcheck disable=SC2086
@ -5027,8 +5118,7 @@ __ZYPPER_REQUIRES_REPLACE_FILES=-1
__set_suse_pkg_repo() {
suse_pkg_url_path="${DISTRO_REPO}/systemsmanagement:saltstack.repo"
if [ "$_DOWNSTREAM_PKG_REPO" -eq $BS_TRUE ]; then
# FIXME: cleartext download over unsecure protocol (HTTP)
suse_pkg_url_base="http://download.opensuse.org/repositories/systemsmanagement:saltstack"
suse_pkg_url_base="http://download.opensuse.org/repositories/systemsmanagement:/saltstack"
else
suse_pkg_url_base="${HTTP_VAL}://repo.saltstack.com/opensuse"
fi
@ -5127,6 +5217,10 @@ install_opensuse_stable_deps() {
}
install_opensuse_git_deps() {
if [ "$_INSECURE_DL" -eq $BS_FALSE ] && [ "${_SALT_REPO_URL%%://*}" = "https" ]; then
__zypper_install ca-certificates || return 1
fi
install_opensuse_stable_deps || return 1
if ! __check_command_exists git; then
@ -5917,7 +6011,7 @@ config_salt() {
# Copy the minions configuration if found
# Explicitly check for custom master config to avoid moving the minion config
elif [ -f "$_TEMP_CONFIG_DIR/minion" ] && [ "$_CUSTOM_MASTER_CONFIG" = "null" ]; then
__movefile "$_TEMP_CONFIG_DIR/minion" "$_SALT_ETC_DIR" "$_CONFIG_ONLY" || return 1
__movefile "$_TEMP_CONFIG_DIR/minion" "$_SALT_ETC_DIR" "$_FORCE_OVERWRITE" || return 1
CONFIGURED_ANYTHING=$BS_TRUE
fi
@ -6008,9 +6102,6 @@ config_salt() {
exit 0
fi
# Create default logs directory if not exists
mkdir -p /var/log/salt
return 0
}
#
@ -6116,7 +6207,7 @@ for FUNC_NAME in $(__strip_duplicates "$DEP_FUNC_NAMES"); do
done
echodebug "DEPS_INSTALL_FUNC=${DEPS_INSTALL_FUNC}"
# Let's get the minion config function
# Let's get the Salt config function
CONFIG_FUNC_NAMES="config_${DISTRO_NAME_L}${PREFIXED_DISTRO_MAJOR_VERSION}_${ITYPE}_salt"
CONFIG_FUNC_NAMES="$CONFIG_FUNC_NAMES config_${DISTRO_NAME_L}${PREFIXED_DISTRO_MAJOR_VERSION}${PREFIXED_DISTRO_MINOR_VERSION}_${ITYPE}_salt"
CONFIG_FUNC_NAMES="$CONFIG_FUNC_NAMES config_${DISTRO_NAME_L}${PREFIXED_DISTRO_MAJOR_VERSION}_salt"
@ -6265,6 +6356,16 @@ if [ "$_CUSTOM_MASTER_CONFIG" != "null" ] || [ "$_CUSTOM_MINION_CONFIG" != "null
if [ "$_TEMP_CONFIG_DIR" = "null" ]; then
_TEMP_CONFIG_DIR="$_SALT_ETC_DIR"
fi
if [ "$_CONFIG_ONLY" -eq $BS_TRUE ]; then
# Execute function to satisfy dependencies for configuration step
echoinfo "Running ${DEPS_INSTALL_FUNC}()"
$DEPS_INSTALL_FUNC
if [ $? -ne 0 ]; then
echoerror "Failed to run ${DEPS_INSTALL_FUNC}()!!!"
exit 1
fi
fi
fi
# Configure Salt
@ -6277,7 +6378,21 @@ if [ "$CONFIG_SALT_FUNC" != "null" ] && [ "$_TEMP_CONFIG_DIR" != "null" ]; then
fi
fi
# Pre-Seed master keys
# Drop the master address if passed
if [ "$_SALT_MASTER_ADDRESS" != "null" ]; then
[ ! -d "$_SALT_ETC_DIR/minion.d" ] && mkdir -p "$_SALT_ETC_DIR/minion.d"
cat <<_eof > $_SALT_ETC_DIR/minion.d/99-master-address.conf
master: $_SALT_MASTER_ADDRESS
_eof
fi
# Drop the minion id if passed
if [ "$_SALT_MINION_ID" != "null" ]; then
[ ! -d "$_SALT_ETC_DIR" ] && mkdir -p "$_SALT_ETC_DIR"
echo "$_SALT_MINION_ID" > "$_SALT_ETC_DIR/minion_id"
fi
# Pre-seed master keys
if [ "$PRESEED_MASTER_FUNC" != "null" ] && [ "$_TEMP_KEYS_DIR" != "null" ]; then
echoinfo "Running ${PRESEED_MASTER_FUNC}()"
$PRESEED_MASTER_FUNC
@ -6298,29 +6413,6 @@ if [ "$_CONFIG_ONLY" -eq $BS_FALSE ]; then
fi
fi
# Ensure that the cachedir exists
# (Workaround for https://github.com/saltstack/salt/issues/6502)
if [ "$_INSTALL_MINION" -eq $BS_TRUE ]; then
if [ ! -d "${_SALT_CACHE_DIR}/minion/proc" ]; then
echodebug "Creating salt's cachedir"
mkdir -p "${_SALT_CACHE_DIR}/minion/proc"
fi
fi
# Drop the master address if passed
if [ "$_SALT_MASTER_ADDRESS" != "null" ]; then
[ ! -d "$_SALT_ETC_DIR/minion.d" ] && mkdir -p "$_SALT_ETC_DIR/minion.d"
cat <<_eof > $_SALT_ETC_DIR/minion.d/99-master-address.conf
master: $_SALT_MASTER_ADDRESS
_eof
fi
# Drop the minion id if passed
if [ "$_SALT_MINION_ID" != "null" ]; then
[ ! -d "$_SALT_ETC_DIR" ] && mkdir -p "$_SALT_ETC_DIR"
echo "$_SALT_MINION_ID" > "$_SALT_ETC_DIR/minion_id"
fi
# Run any post install function. Only execute function if not in config mode only
if [ "$POST_INSTALL_FUNC" != "null" ] && [ "$_CONFIG_ONLY" -eq $BS_FALSE ]; then
echoinfo "Running ${POST_INSTALL_FUNC}()"

View File

@ -498,11 +498,6 @@ VALID_OPTS = {
# http://api.zeromq.org/3-2:zmq-setsockopt
'pub_hwm': int,
# ZMQ HWM for SaltEvent pub socket
'salt_event_pub_hwm': int,
# ZMQ HWM for EventPublisher pub socket
'event_publisher_pub_hwm': int,
# IPC buffer size
# Refs https://github.com/saltstack/salt/issues/34215
'ipc_write_buffer': int,
@ -1162,10 +1157,6 @@ DEFAULT_MINION_OPTS = {
'sudo_user': '',
'http_request_timeout': 1 * 60 * 60.0, # 1 hour
'http_max_body': 100 * 1024 * 1024 * 1024, # 100GB
# ZMQ HWM for SaltEvent pub socket - different for minion vs. master
'salt_event_pub_hwm': 2000,
# ZMQ HWM for EventPublisher pub socket - different for minion vs. master
'event_publisher_pub_hwm': 1000,
'event_match_type': 'startswith',
'minion_restart_command': [],
'pub_ret': True,
@ -1183,10 +1174,6 @@ DEFAULT_MASTER_OPTS = {
'publish_port': 4505,
'zmq_backlog': 1000,
'pub_hwm': 1000,
# ZMQ HWM for SaltEvent pub socket - different for minion vs. master
'salt_event_pub_hwm': 2000,
# ZMQ HWM for EventPublisher pub socket - different for minion vs. master
'event_publisher_pub_hwm': 1000,
'auth_mode': 1,
'user': 'root',
'worker_threads': 5,

View File

@ -98,18 +98,6 @@ class NetapiClient(object):
local = salt.client.get_local_client(mopts=self.opts)
return local.cmd(*args, **kwargs)
def local_batch(self, *args, **kwargs):
'''
Run :ref:`execution modules <all-salt.modules>` against batches of minions
Wraps :py:meth:`salt.client.LocalClient.cmd_batch`
:return: Returns the result from the exeuction module for each batch of
returns
'''
local = salt.client.get_local_client(mopts=self.opts)
return local.cmd_batch(*args, **kwargs)
def local_subset(self, *args, **kwargs):
'''
Run :ref:`execution modules <all-salt.modules>` against subsets of minions
@ -129,7 +117,8 @@ class NetapiClient(object):
:return: Returns the result from the salt-ssh command
'''
ssh_client = salt.client.ssh.client.SSHClient(mopts=self.opts)
ssh_client = salt.client.ssh.client.SSHClient(mopts=self.opts,
disable_custom_roster=True)
return ssh_client.cmd_sync(kwargs)
def runner(self, fun, timeout=None, **kwargs):

View File

@ -191,7 +191,6 @@ a return like::
# Import Python libs
import time
import math
import fnmatch
import logging
from copy import copy
@ -230,7 +229,6 @@ logger = logging.getLogger()
# # all of these require coordinating minion stuff
# - "local" (done)
# - "local_async" (done)
# - "local_batch" (done)
# # master side
# - "runner" (done)
@ -252,7 +250,6 @@ class SaltClientsMixIn(object):
SaltClientsMixIn.__saltclients = {
'local': local_client.run_job_async,
# not the actual client we'll use.. but its what we'll use to get args
'local_batch': local_client.cmd_batch,
'local_async': local_client.run_job_async,
'runner': salt.runner.RunnerClient(opts=self.application.opts).cmd_async,
'runner_async': None, # empty, since we use the same client as `runner`
@ -390,30 +387,6 @@ class EventListener(object):
del self.timeout_map[future]
# TODO: move to a utils function within salt-- the batching stuff is a bit tied together
def get_batch_size(batch, num_minions):
'''
Return the batch size that you should have
batch: string
num_minions: int
'''
# figure out how many we can keep in flight
partition = lambda x: float(x) / 100.0 * num_minions
try:
if '%' in batch:
res = partition(float(batch.strip('%')))
if res < 1:
return int(math.ceil(res))
else:
return int(res)
else:
return int(batch)
except ValueError:
print(('Invalid batch data sent: {0}\nData must be in the form'
'of %10, 10% or 3').format(batch))
class BaseSaltAPIHandler(tornado.web.RequestHandler, SaltClientsMixIn): # pylint: disable=W0223
ct_out_map = (
('application/json', json.dumps),
@ -809,7 +782,7 @@ class SaltAPIHandler(BaseSaltAPIHandler, SaltClientsMixIn): # pylint: disable=W
Content-Type: application/json
Content-Length: 83
{"clients": ["local", "local_batch", "local_async", "runner", "runner_async"], "return": "Welcome"}
{"clients": ["local", "local_async", "runner", "runner_async"], "return": "Welcome"}
'''
ret = {"clients": list(self.saltclients.keys()),
"return": "Welcome"}
@ -927,57 +900,6 @@ class SaltAPIHandler(BaseSaltAPIHandler, SaltClientsMixIn): # pylint: disable=W
self.write(self.serialize({'return': ret}))
self.finish()
@tornado.gen.coroutine
def _disbatch_local_batch(self, chunk):
'''
Disbatch local client batched commands
'''
f_call = salt.utils.format_call(self.saltclients['local_batch'], chunk)
# ping all the minions (to see who we have to talk to)
# Don't catch any exception, since we won't know what to do, we'll
# let the upper level deal with this one
ping_ret = yield self._disbatch_local({'tgt': chunk['tgt'],
'fun': 'test.ping',
'expr_form': f_call['kwargs']['expr_form']})
chunk_ret = {}
if not isinstance(ping_ret, dict):
raise tornado.gen.Return(chunk_ret)
minions = list(ping_ret.keys())
maxflight = get_batch_size(f_call['kwargs']['batch'], len(minions))
inflight_futures = []
# override the expr_form
f_call['kwargs']['expr_form'] = 'list'
# do this batch
while len(minions) > 0 or len(inflight_futures) > 0:
# if you have more to go, lets disbatch jobs
while len(inflight_futures) < maxflight and len(minions) > 0:
minion_id = minions.pop(0)
batch_chunk = dict(chunk)
batch_chunk['tgt'] = [minion_id]
batch_chunk['expr_form'] = 'list'
future = self._disbatch_local(batch_chunk)
inflight_futures.append(future)
# if we have nothing to wait for, don't wait
if len(inflight_futures) == 0:
continue
# wait until someone is done
finished_future = yield Any(inflight_futures)
try:
b_ret = finished_future.result()
except TimeoutException:
break
chunk_ret.update(b_ret)
inflight_futures.remove(finished_future)
raise tornado.gen.Return(chunk_ret)
@tornado.gen.coroutine
def _disbatch_local(self, chunk):
'''

View File

@ -19,19 +19,43 @@ log = logging.getLogger(__name__)
def get_roster_file(options):
if options.get('roster_file'):
template = options.get('roster_file')
elif 'config_dir' in options.get('__master_opts__', {}):
template = os.path.join(options['__master_opts__']['config_dir'],
'roster')
elif 'config_dir' in options:
template = os.path.join(options['config_dir'], 'roster')
else:
template = os.path.join(salt.syspaths.CONFIG_DIR, 'roster')
'''
Find respective roster file.
:param options:
:return:
'''
template = None
# The __disable_custom_roster flag is always True if the Salt SSH client comes
# from the Salt API. In that case there is no way to define a custom 'roster_file';
# instead, the file needs to be chosen from the already validated rosters
# (see the /etc/salt/master config).
if options.get('__disable_custom_roster') and options.get('roster_file'):
roster = options.get('roster_file').strip('/')
for roster_location in options.get('rosters'):
r_file = os.path.join(roster_location, roster)
if os.path.isfile(r_file):
template = r_file
break
del options['roster_file']
if not template:
if options.get('roster_file'):
template = options.get('roster_file')
elif 'config_dir' in options.get('__master_opts__', {}):
template = os.path.join(options['__master_opts__']['config_dir'],
'roster')
elif 'config_dir' in options:
template = os.path.join(options['config_dir'], 'roster')
else:
template = os.path.join(salt.syspaths.CONFIG_DIR, 'roster')
if not os.path.isfile(template):
raise IOError('No roster file found')
if not os.access(template, os.R_OK):
raise IOError('Access denied to roster "{0}"'.format(template))
return template
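How this restricted branch gets triggered, as a hedged sketch tying it to the
``SSHClient`` change earlier in this diff (the config path and the example
``roster_file`` value are assumptions for illustration):

.. code-block:: python

    import salt.config
    import salt.client.ssh.client

    # The Salt API (NetapiClient.ssh) now constructs the SSH client like this,
    # which sets opts['__disable_custom_roster'] = True.
    master_opts = salt.config.client_config('/etc/salt/master')   # path assumed
    client = salt.client.ssh.client.SSHClient(mopts=master_opts,
                                              disable_custom_roster=True)

    # With that flag set, get_roster_file() above only accepts a 'roster_file'
    # found under one of the configured 'rosters' locations, e.g.
    # roster_file='/foo/roster' resolves to /etc/salt/roster.d/foo/roster.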

View File

@ -113,12 +113,12 @@ There is more documentation about this feature in the
Special files can be managed via the ``mknod`` function. This function will
create and enforce the permissions on a special file. The function supports the
creation of character devices, block devices, and fifo pipes. The function will
creation of character devices, block devices, and FIFO pipes. The function will
create the directory structure up to the special file if it is needed on the
minion. The function will not overwrite or operate on (change major/minor
numbers) existing special files with the exception of user, group, and
permissions. In most cases the creation of some special files requires root
permisisons on the minion. This would require that the minion to be run as the
permissions on the minion. This would require that the minion be run as the
root user. Here is an example of a character device:
.. code-block:: yaml
@ -1352,7 +1352,8 @@ def managed(name,
Default context passed to the template.
backup
Overrides the default backup mode for this specific file.
Overrides the default backup mode for this specific file. See
:ref:`backup_mode documentation <file-state-backups>` for more details.
show_changes
Output a unified diff of the old file and the new file. If ``False``
@ -2469,6 +2470,10 @@ def recurse(name,
Set this to True if empty directories should also be created
(default is False)
backup
Overrides the default backup mode for all replaced files. See
:ref:`backup_mode documentation <file-state-backups>` for more details.
include_pat
When copying, include only this pattern from the source. Default
is glob match; if prefixed with 'E@', then regexp match.

View File

@ -48,13 +48,12 @@ class TestSaltAPIHandler(SaltnadoTestCase):
)
self.assertEqual(response.code, 200)
response_obj = json.loads(response.body)
self.assertEqual(response_obj['clients'],
['runner',
'runner_async',
'local_async',
'local',
'local_batch']
)
self.assertItemsEqual(response_obj['clients'],
['runner',
'runner_async',
'local_async',
'local']
)
self.assertEqual(response_obj['return'], 'Welcome')
def test_post_no_auth(self):
@ -152,68 +151,6 @@ class TestSaltAPIHandler(SaltnadoTestCase):
)
self.assertEqual(response.code, 400)
# local_batch tests
@skipIf(True, 'to be reenabled when #23623 is merged')
def test_simple_local_batch_post(self):
'''
Basic post against local_batch
'''
low = [{'client': 'local_batch',
'tgt': '*',
'fun': 'test.ping',
}]
response = self.fetch('/',
method='POST',
body=json.dumps(low),
headers={'Content-Type': self.content_type_map['json'],
saltnado.AUTH_TOKEN_HEADER: self.token['token']},
connect_timeout=30,
request_timeout=30,
)
response_obj = json.loads(response.body)
self.assertEqual(response_obj['return'], [{'minion': True, 'sub_minion': True}])
# local_batch tests
@skipIf(True, 'to be reenabled when #23623 is merged')
def test_full_local_batch_post(self):
'''
Test full parallelism of local_batch
'''
low = [{'client': 'local_batch',
'tgt': '*',
'fun': 'test.ping',
'batch': '100%',
}]
response = self.fetch('/',
method='POST',
body=json.dumps(low),
headers={'Content-Type': self.content_type_map['json'],
saltnado.AUTH_TOKEN_HEADER: self.token['token']},
connect_timeout=30,
request_timeout=30,
)
response_obj = json.loads(response.body)
self.assertEqual(response_obj['return'], [{'minion': True, 'sub_minion': True}])
def test_simple_local_batch_post_no_tgt(self):
'''
Local_batch testing with no tgt
'''
low = [{'client': 'local_batch',
'tgt': 'minion_we_dont_have',
'fun': 'test.ping',
}]
response = self.fetch('/',
method='POST',
body=json.dumps(low),
headers={'Content-Type': self.content_type_map['json'],
saltnado.AUTH_TOKEN_HEADER: self.token['token']},
connect_timeout=30,
request_timeout=30,
)
response_obj = json.loads(response.body)
self.assertEqual(response_obj['return'], [{}])
# local_async tests
def test_simple_local_async_post(self):
low = [{'client': 'local_async',
@ -435,7 +372,7 @@ class TestMinionSaltAPIHandler(SaltnadoTestCase):
make sure you get an error
'''
# get a token for this test
low = [{'client': 'local_batch',
low = [{'client': 'local',
'tgt': '*',
'fun': 'test.ping',
}]