Mirror of https://github.com/valitydev/salt.git (synced 2024-11-09 01:36:48 +00:00)

Merge pull request #21158 from terminalmage/2015.2-develop

Merge 2015.2 branch into develop

Commit 60b28a0f94
@@ -1,21 +1,43 @@
.. _file-server-backends:

====================
File Server Backends
====================

In Salt 0.12.0, the modular fileserver was introduced. This feature added the
ability for the Salt Master to integrate different file server backends. File
server backends allow the Salt file server to act as a transparent bridge to
external resources. A good example of this is the :mod:`git
<salt.fileserver.git>` backend, which allows Salt to serve files sourced from
one or more git repositories, but there are several others as well. Click
:ref:`here <all-salt.fileserver>` for a full list of Salt's fileserver
backends.

Enabling a Fileserver Backend
-----------------------------

Fileserver backends can be enabled with the :conf_master:`fileserver_backend`
option.

.. code-block:: yaml

    fileserver_backend:
      - git

See the :ref:`documentation <all-salt.fileserver>` for each backend to find the
correct value to add to :conf_master:`fileserver_backend` in order to enable
them.

Using Multiple Backends
-----------------------

If :conf_master:`fileserver_backend` is not defined in the Master config file,
Salt will use the :mod:`roots <salt.fileserver.roots>` backend, but the
:conf_master:`fileserver_backend` option supports multiple backends. When more
than one backend is in use, the files from the enabled backends are merged into
a single virtual filesystem. When a file is requested, the backends will be
searched in order for that file, and the first backend to match will be the one
which returns the file.

.. code-block:: yaml
@@ -24,16 +46,56 @@ priority:
      - git

With this configuration, the environments and files defined in the
:conf_master:`file_roots` parameter will be searched first, and if the file is
not found then the git repositories defined in :conf_master:`gitfs_remotes`
will be searched.

Environments
------------

Just as the order of the values in :conf_master:`fileserver_backend` matters,
so too does the order in which different sources are defined within a
fileserver environment. For example, given the below :conf_master:`file_roots`
configuration, if both ``/srv/salt/dev/foo.txt`` and ``/srv/salt/prod/foo.txt``
exist on the Master, then ``salt://foo.txt`` would point to
``/srv/salt/dev/foo.txt`` in the ``dev`` environment, but it would point to
``/srv/salt/prod/foo.txt`` in the ``base`` environment.

.. code-block:: yaml

    file_roots:
      base:
        - /srv/salt/prod
      qa:
        - /srv/salt/qa
        - /srv/salt/prod
      dev:
        - /srv/salt/dev
        - /srv/salt/qa
        - /srv/salt/prod

Similarly, when using the :mod:`git <salt.fileserver.gitfs>` backend, if both
repositories defined below have a ``hotfix23`` branch/tag, and both of them
also contain the file ``bar.txt`` in the root of the repository at that
branch/tag, then ``salt://bar.txt`` in the ``hotfix23`` environment would be
served from the ``first`` repository.

.. code-block:: yaml

    gitfs_remotes:
      - https://mydomain.tld/repos/first.git
      - https://mydomain.tld/repos/second.git

.. note::

    Environments map differently based on the fileserver backend. For instance,
    the mappings are explicitly defined in the :mod:`roots
    <salt.fileserver.roots>` backend, while in the VCS backends (:mod:`git
    <salt.fileserver.gitfs>`, :mod:`hg <salt.fileserver.hgfs>`, :mod:`svn
    <salt.fileserver.svnfs>`) the environments are created from
    branches/tags/bookmarks/etc. For the :mod:`minion
    <salt.fileserver.minionfs>` backend, the files are all in a single
    environment, which is specified by the :conf_master:`minionfs_env` option.

    See the documentation for each backend for a more detailed explanation of
    how environments are mapped.
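When several backends are enabled, it can be handy to confirm which
environments each one is actually exposing. The following is a minimal sketch
using the ``fileserver`` runner; the ``backend`` argument syntax assumes a
2014.7-or-later Master and may differ slightly between releases.

.. code-block:: bash

    # List environments provided by all enabled backends
    salt-run fileserver.envs

    # Restrict the listing to a single backend (here: gitfs)
    salt-run fileserver.envs backend=git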
@@ -13,7 +13,6 @@ Follow one of the below links for further information and examples
    :template: autosummary.rst.tmpl

    compact
    highstate
    json_out
    key
@@ -1,6 +0,0 @@
==================
salt.output.grains
==================

.. automodule:: salt.output.grains
    :members:
@@ -32,6 +32,8 @@ Misc Fixes/Additions
  updates!)
- Joyent now requires a ``keyname`` to be specified in the provider
  configuration. This change was necessitated upstream by the 7.0+ API.
- Add ``args`` argument to ``cmd.script_retcode`` to match ``cmd.script`` in
  the :py:mod:`cmd module <salt.cmd.cmdmod>`. (:issue:`21122`)

Deprecations
============
@@ -122,6 +122,30 @@ For APT-based distros such as Ubuntu and Debian:

    # apt-get install python-dulwich

.. important::

    If switching to Dulwich from GitPython/pygit2, or switching from
    GitPython/pygit2 to Dulwich, it is necessary to clear the gitfs cache to
    avoid unpredictable behavior. This is probably a good idea whenever
    switching to a new :conf_master:`gitfs_provider`, but it is less important
    when switching between GitPython and pygit2.

    Beginning in version 2015.2.0, the gitfs cache can be easily cleared using
    the :mod:`fileserver.clear_cache <salt.runners.fileserver.clear_cache>`
    runner.

    .. code-block:: bash

        salt-run fileserver.clear_cache backend=git

    If the Master is running an earlier version, then the cache can be cleared
    by removing the ``gitfs`` and ``file_lists/gitfs`` directories (both paths
    relative to the master cache directory, usually
    ``/var/cache/salt/master``).

    .. code-block:: bash

        rm -rf /var/cache/salt/master{,/file_lists}/gitfs

Simple Configuration
====================
@@ -157,6 +181,14 @@ master:
   Information on how to authenticate to SSH remotes can be found :ref:`here
   <gitfs-authentication>`.

   .. note::

      Dulwich does not recognize ``ssh://`` URLs; ``git+ssh://`` must be used
      instead. Salt version 2015.2.0 and later will automatically add the
      ``git+`` to the beginning of these URLs before fetching, but earlier
      Salt versions will fail to fetch unless the URL is specified using
      ``git+ssh://``.
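   As a concrete illustration of the URL form described in the note above, a
   :conf_master:`gitfs_remotes` entry for an SSH remote might look like the
   following sketch (the host and repository path are placeholders, not taken
   from this walkthrough):

   .. code-block:: yaml

      gitfs_remotes:
        # Dulwich requires the git+ssh:// form rather than ssh://
        - git+ssh://git@mydomain.tld/repos/first.git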
3. Restart the master to load the new configuration.
@@ -208,7 +208,23 @@ will directly correspond to a parameter in an LXC configuration file (see ``man
- **flags** - Corresponds to **lxc.network.flags**

Interface-specific options (MAC address, IPv4/IPv6, etc.) must be passed on a
container-by-container basis, for instance using the ``nic_opts`` argument to
:mod:`lxc.create <salt.modules.lxc.create>`:

.. code-block:: bash

    salt myminion lxc.create container1 profile=centos network_profile=centos nic_opts='{eth0: {ipv4: 10.0.0.20/24, gateway: 10.0.0.1}}'

.. warning::

    The ``ipv4``, ``ipv6``, ``gateway``, and ``link`` (bridge) settings in
    network profiles / nic_opts will only work if the container doesn't redefine
    the network configuration (for example in
    ``/etc/sysconfig/network-scripts/ifcfg-<interface_name>`` on RHEL/CentOS,
    or ``/etc/network/interfaces`` on Debian/Ubuntu/etc.). Use these with
    caution. The container images installed using the ``download`` template,
    for instance, typically are configured for eth0 to use DHCP, which will
    conflict with static IP addresses set at the container level.


Creating a Container on the CLI
@@ -404,15 +420,15 @@ New functions have been added to mimic the behavior of the functions in the
equivalents:


=======================================  ======================================================  ===================================================
Description                              :mod:`cmd <salt.modules.cmdmod>` module                 :mod:`lxc <salt.modules.lxc>` module
=======================================  ======================================================  ===================================================
Run a command and get all output         :mod:`cmd.run <salt.modules.cmdmod.run>`                :mod:`lxc.run <salt.modules.lxc.run>`
Run a command and get just stdout        :mod:`cmd.run_stdout <salt.modules.cmdmod.run_stdout>`  :mod:`lxc.run_stdout <salt.modules.lxc.run_stdout>`
Run a command and get just stderr        :mod:`cmd.run_stderr <salt.modules.cmdmod.run_stderr>`  :mod:`lxc.run_stderr <salt.modules.lxc.run_stderr>`
Run a command and get just the retcode   :mod:`cmd.retcode <salt.modules.cmdmod.retcode>`        :mod:`lxc.retcode <salt.modules.lxc.retcode>`
Run a command and get all information    :mod:`cmd.run_all <salt.modules.cmdmod.run_all>`        :mod:`lxc.run_all <salt.modules.lxc.run_all>`
=======================================  ======================================================  ===================================================

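The new ``lxc`` wrappers are invoked like their ``cmd`` counterparts, with the
container name as the first argument. A hedged sketch follows; the container
name and commands are placeholders, not taken from this release note.

.. code-block:: bash

    # Run a command inside the container and return all output
    salt myminion lxc.run container1 'ip addr show eth0'

    # Return only the exit code
    salt myminion lxc.retcode container1 'test -e /etc/salt/minion'
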
2014.7.x and Earlier
@@ -33,6 +33,18 @@ same relative path in more than one root, then the top-most match "wins". For
example, if ``/srv/salt/foo.txt`` and ``/mnt/salt-nfs/base/foo.txt`` both
exist, then ``salt://foo.txt`` will point to ``/srv/salt/foo.txt``.

.. note::

    When using multiple fileserver backends, the order in which they are listed
    in the :conf_master:`fileserver_backend` parameter also matters. If both
    ``roots`` and ``git`` backends contain a file with the same relative path,
    and ``roots`` appears before ``git`` in the
    :conf_master:`fileserver_backend` list, then the file in ``roots`` will
    "win", and the file in gitfs will be ignored.

    A more thorough explanation of how Salt's modular fileserver works can be
    found :ref:`here <file-server-backends>`. We recommend reading this.
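To make the ordering described in the note above explicit, a master
configuration along these lines (a sketch; adjust to your own backends) causes
files in :conf_master:`file_roots` to shadow identically-named files served
from gitfs:

.. code-block:: yaml

    fileserver_backend:
      - roots   # searched first; its files "win" on conflicts
      - git     # consulted only if roots does not have the file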
Environment configuration
=========================
@@ -192,4 +204,4 @@ who are using Salt, we have a very :ref:`active community <salt-community>`
and we'd love to hear from you.

In addition, by continuing to :doc:`part 5 <states_pt5>`, you can learn about
the powerful orchestration of which Salt is capable.
@@ -490,7 +490,7 @@ class SSH(object):
        # Save the invocation information
        argv = self.opts['argv']

        if self.opts.get('raw_shell', False):
            fun = 'ssh._raw'
            args = argv
        else:
@@ -687,7 +687,7 @@ class Single(object):
        '''
        stdout = stderr = retcode = None

        if self.opts.get('raw_shell', False):
            cmd_str = ' '.join([self._escape_arg(arg) for arg in self.argv])
            stdout, stderr, retcode = self.shell.exec_cmd(cmd_str)
@@ -96,6 +96,7 @@ try:
    import Crypto
    # PKCS1_v1_5 was added in PyCrypto 2.5
    from Crypto.Cipher import PKCS1_v1_5  # pylint: disable=E0611
    from Crypto.Hash import SHA  # pylint: disable=E0611,W0611
    HAS_PYCRYPTO = True
except ImportError:
    HAS_PYCRYPTO = False
@@ -2070,6 +2071,8 @@ def create(vm_=None, call=None):
            )
        )
    vm_['key_filename'] = key_filename
    # wait_for_instance requires private_key
    vm_['private_key'] = key_filename

    # Get SSH Gateway config early to verify the private_key,
    # if used, exists or not. We don't want to deploy an instance
@@ -17,7 +17,10 @@ import salt.utils.cloud
import salt.config as config

# Import pyrax libraries
# This is typically against SaltStack coding styles,
# it should be 'import salt.utils.openstack.pyrax as suop'. Something
# in the loader is creating a name clash and making that form fail
from salt.utils.openstack import pyrax as suop


# Only load this module if the OPENSTACK configurations are in place
@@ -17,7 +17,7 @@
#          CREATED: 10/15/2012 09:49:37 PM WEST
#======================================================================================================================
set -o nounset                              # Treat unset variables as an error
__ScriptVersion="2015.02.28"
__ScriptName="bootstrap-salt.sh"

#======================================================================================================================
@@ -2039,13 +2039,9 @@ _eof
    fi

    # Debian Backports
    if [ "$(grep -R 'squeeze-backports' /etc/apt | grep -v "^#")" = "" ]; then
        echo "deb http://http.debian.net/debian-backports squeeze-backports main" >> \
            /etc/apt/sources.list.d/backports.list

    fi

    # Saltstack's Stable Debian repository
@@ -2098,6 +2094,12 @@ install_debian_7_deps() {
    # Install Keys
    __apt_get_install_noinput debian-archive-keyring && apt-get update

    # Debian Backports
    if [ "$(grep -R 'wheezy-backports' /etc/apt | grep -v "^#")" = "" ]; then
        echo "deb http://http.debian.net/debian wheezy-backports main" >> \
            /etc/apt/sources.list.d/backports.list
    fi

    # Saltstack's Stable Debian repository
    if [ "$(grep -R 'wheezy-saltstack' /etc/apt)" = "" ]; then
        echo "deb http://debian.saltstack.com/debian wheezy-saltstack main" >> \
@@ -233,8 +233,10 @@ def fileserver_update(fileserver):
    '''
    try:
        if not fileserver.servers:
            log.error(
                'No fileservers loaded, the master will not be able to '
                'serve files to minions'
            )
            raise SaltMasterError('No fileserver backends available')
        fileserver.update()
    except Exception as exc:
@@ -80,6 +80,12 @@ class MinionError(SaltException):
    '''


class FileserverConfigError(SaltException):
    '''
    Used when invalid fileserver settings are detected
    '''


class SaltInvocationError(SaltException, TypeError):
    '''
    Used when the wrong number of arguments are sent to modules or invalid
@@ -19,6 +19,7 @@ import salt.utils
# Import 3rd-party libs
import salt.ext.six as six


log = logging.getLogger(__name__)
@@ -285,11 +286,22 @@ class Fileserver(object):
        ret = []
        if not back:
            back = self.opts['fileserver_backend']
        if isinstance(back, six.string_types):
            back = back.split(',')
        if all((x.startswith('-') for x in back)):
            # Only subtracting backends from enabled ones
            ret = self.opts['fileserver_backend']
            for sub in back:
                if '{0}.envs'.format(sub[1:]) in self.servers:
                    ret.remove(sub[1:])
                elif '{0}.envs'.format(sub[1:-2]) in self.servers:
                    ret.remove(sub[1:-2])
        else:
            for sub in back:
                if '{0}.envs'.format(sub) in self.servers:
                    ret.append(sub)
                elif '{0}.envs'.format(sub[:-2]) in self.servers:
                    ret.append(sub[:-2])
        return ret

    def master_opts(self, load):
@@ -298,21 +310,98 @@ class Fileserver(object):
        '''
        return self.opts

    def clear_cache(self, back=None):
        '''
        Clear the cache of all of the fileserver backends that support the
        clear_cache function or the named backend(s) only.
        '''
        back = self._gen_back(back)
        cleared = []
        errors = []
        for fsb in back:
            fstr = '{0}.clear_cache'.format(fsb)
            if fstr in self.servers:
                log.debug('Clearing {0} fileserver cache'.format(fsb))
                failed = self.servers[fstr]()
                if failed:
                    errors.extend(failed)
                else:
                    cleared.append(
                        'The {0} fileserver cache was successfully cleared'
                        .format(fsb)
                    )
        return cleared, errors

    def lock(self, back=None, remote=None):
        '''
        ``remote`` can either be a dictionary containing repo configuration
        information, or a pattern. If the latter, then remotes for which the
        URL matches the pattern will be locked.
        '''
        back = self._gen_back(back)
        locked = []
        errors = []
        for fsb in back:
            fstr = '{0}.lock'.format(fsb)
            if fstr in self.servers:
                msg = 'Setting update lock for {0} remotes'.format(fsb)
                if remote:
                    if not isinstance(remote, six.string_types):
                        errors.append(
                            'Badly formatted remote pattern \'{0}\''
                            .format(remote)
                        )
                        continue
                    else:
                        msg += ' matching {0}'.format(remote)
                log.debug(msg)
                good, bad = self.servers[fstr](remote=remote)
                locked.extend(good)
                errors.extend(bad)
        return locked, errors

    def clear_lock(self, back=None, remote=None):
        '''
        Clear the update lock for the enabled fileserver backends

        back
            Only clear the update lock for the specified backend(s). The
            default is to clear the lock for all enabled backends

        remote
            If not None, then any remotes which contain the passed string will
            have their lock cleared.
        '''
        back = self._gen_back(back)
        cleared = []
        errors = []
        for fsb in back:
            fstr = '{0}.clear_lock'.format(fsb)
            if fstr in self.servers:
                msg = 'Clearing update lock for {0} remotes'.format(fsb)
                if remote:
                    msg += ' matching {0}'.format(remote)
                log.debug(msg)
                good, bad = self.servers[fstr](remote=remote)
                cleared.extend(good)
                errors.extend(bad)
        return cleared, errors

    def update(self, back=None):
        '''
        Update all of the enabled fileserver backends which support the update
        function, or the named backend(s) only.
        '''
        back = self._gen_back(back)
        for fsb in back:
            fstr = '{0}.update'.format(fsb)
            if fstr in self.servers:
                log.debug('Updating {0} fileserver cache'.format(fsb))
                self.servers[fstr]()

    def envs(self, back=None, sources=False):
        '''
        Return the environments for the named backend or all backends
        '''
        back = self._gen_back(back)
        ret = set()
@@ -448,7 +537,7 @@ class Fileserver(object):
        ret = set()
        if 'saltenv' not in load:
            return []
        for fsb in self._gen_back(load.pop('fsbackend', None)):
            fstr = '{0}.file_list'.format(fsb)
            if fstr in self.servers:
                ret.update(self.servers[fstr](load))
@@ -504,7 +593,7 @@ class Fileserver(object):
        ret = set()
        if 'saltenv' not in load:
            return []
        for fsb in self._gen_back(load.pop('fsbackend', None)):
            fstr = '{0}.dir_list'.format(fsb)
            if fstr in self.servers:
                ret.update(self.servers[fstr](load))
@@ -532,7 +621,7 @@ class Fileserver(object):
        ret = {}
        if 'saltenv' not in load:
            return {}
        for fsb in self._gen_back(load.pop('fsbackend', None)):
            symlstr = '{0}.symlink_list'.format(fsb)
            if symlstr in self.servers:
                ret = self.servers[symlstr](load)

(File diff suppressed because it is too large.)
@@ -3,7 +3,12 @@
Mercurial Fileserver Backend

To enable, add ``hg`` to the :conf_master:`fileserver_backend` option in the
Master config file.

.. code-block:: yaml

    fileserver_backend:
      - hg

After enabling this backend, branches, bookmarks, and tags in a remote
mercurial repository are exposed to salt as different environments. This
@@ -30,12 +35,15 @@ will set the desired branch method. Possible values are: ``branches``,
# Import python libs
from __future__ import absolute_import
import copy
import errno
import fnmatch
import glob
import hashlib
import logging
import os
import shutil
from datetime import datetime

from salt.exceptions import FileserverConfigError

VALID_BRANCH_METHODS = ('branches', 'bookmarks', 'mixed')
PER_REMOTE_PARAMS = ('base', 'branch_method', 'mountpoint', 'root')
@@ -163,6 +171,15 @@ def _get_ref(repo, name):
    return False


def _failhard():
    '''
    Fatal fileserver configuration issue, raise an exception
    '''
    raise FileserverConfigError(
        'Failed to load hg fileserver backend'
    )


def init():
    '''
    Return a list of hglib objects for the various hgfs remotes
@@ -186,11 +203,13 @@ def init():
            )
            if not per_remote_conf:
                log.error(
                    'Invalid per-remote configuration for hgfs remote {0}. If '
                    'no per-remote parameters are being specified, there may '
                    'be a trailing colon after the URL, which should be '
                    'removed. Check the master configuration file.'
                    .format(repo_url)
                )
                _failhard()

            branch_method = \
                per_remote_conf.get('branch_method',
@@ -202,8 +221,9 @@ def init():
                    .format(branch_method, repo_url,
                            ', '.join(VALID_BRANCH_METHODS))
                )
                _failhard()

            per_remote_errors = False
            for param in (x for x in per_remote_conf
                          if x not in PER_REMOTE_PARAMS):
                log.error(
@@ -213,17 +233,20 @@ def init():
                        param, repo_url, ', '.join(PER_REMOTE_PARAMS)
                    )
                )
                per_remote_errors = True
            if per_remote_errors:
                _failhard()

            repo_conf.update(per_remote_conf)
        else:
            repo_url = remote

        if not isinstance(repo_url, six.string_types):
            log.error(
                'Invalid hgfs remote {0}. Remotes must be strings, you may '
                'need to enclose the URL in quotes'.format(repo_url)
            )
            _failhard()

        try:
            repo_conf['mountpoint'] = salt.utils.strip_proto(
@@ -252,11 +275,24 @@ def init():
                'delete this directory on the master to continue to use this '
                'hgfs remote.'.format(rp_, repo_url)
            )
            _failhard()
        except Exception as exc:
            log.error(
                'Exception \'{0}\' encountered while initializing hgfs remote '
                '{1}'.format(exc, repo_url)
            )
            _failhard()

        try:
            refs = repo.config(names='paths')
        except hglib.error.CommandError:
            refs = None

        # Do NOT put this if statement inside the except block above. Earlier
        # versions of hglib did not raise an exception, so we need to do it
        # this way to support both older and newer hglib.
        if not refs:
            # Write an hgrc defining the remote URL
            hgconfpath = os.path.join(rp_, '.hg', 'hgrc')
            with salt.utils.fopen(hgconfpath, 'w+') as hgconfig:
                hgconfig.write('[paths]\n')
@@ -266,7 +302,10 @@ def init():
            'repo': repo,
            'url': repo_url,
            'hash': repo_hash,
            'cachedir': rp_,
            'lockfile': os.path.join(__opts__['cachedir'],
                                     'hgfs',
                                     '{0}.update.lk'.format(repo_hash))
        })
        repos.append(repo_conf)
        repo.close()
@@ -287,27 +326,164 @@ def init():
    return repos


def _clear_old_remotes():
    '''
    Remove cache directories for remotes no longer configured
    '''
    bp_ = os.path.join(__opts__['cachedir'], 'hgfs')
    try:
        cachedir_ls = os.listdir(bp_)
    except OSError:
        cachedir_ls = []
    repos = init()
    # Remove actively-used remotes from list
    for repo in repos:
        try:
            cachedir_ls.remove(repo['hash'])
        except ValueError:
            pass
    to_remove = []
    for item in cachedir_ls:
        if item in ('hash', 'refs'):
            continue
        path = os.path.join(bp_, item)
        if os.path.isdir(path):
            to_remove.append(path)
    failed = []
    if to_remove:
        for rdir in to_remove:
            try:
                shutil.rmtree(rdir)
            except OSError as exc:
                log.error(
                    'Unable to remove old hgfs remote cachedir {0}: {1}'
                    .format(rdir, exc)
                )
                failed.append(rdir)
            else:
                log.debug('hgfs removed old cachedir {0}'.format(rdir))
    for fdir in failed:
        to_remove.remove(fdir)
    return bool(to_remove), repos


def clear_cache():
    '''
    Completely clear hgfs cache
    '''
    fsb_cachedir = os.path.join(__opts__['cachedir'], 'hgfs')
    list_cachedir = os.path.join(__opts__['cachedir'], 'file_lists/hgfs')
    errors = []
    for rdir in (fsb_cachedir, list_cachedir):
        if os.path.exists(rdir):
            try:
                shutil.rmtree(rdir)
            except OSError as exc:
                errors.append('Unable to delete {0}: {1}'.format(rdir, exc))
    return errors


def clear_lock(remote=None):
    '''
    Clear update.lk

    ``remote`` can either be a dictionary containing repo configuration
    information, or a pattern. If the latter, then remotes for which the URL
    matches the pattern will be locked.
    '''
    def _do_clear_lock(repo):
        def _add_error(errlist, repo, exc):
            msg = ('Unable to remove update lock for {0} ({1}): {2} '
                   .format(repo['url'], repo['lockfile'], exc))
            log.debug(msg)
            errlist.append(msg)
        success = []
        failed = []
        if os.path.exists(repo['lockfile']):
            try:
                os.remove(repo['lockfile'])
            except OSError as exc:
                if exc.errno == errno.EISDIR:
                    # Somehow this path is a directory. Should never happen
                    # unless some wiseguy manually creates a directory at this
                    # path, but just in case, handle it.
                    try:
                        shutil.rmtree(repo['lockfile'])
                    except OSError as exc:
                        _add_error(failed, repo, exc)
                else:
                    _add_error(failed, repo, exc)
            else:
                msg = 'Removed lock for {0}'.format(repo['url'])
                log.debug(msg)
                success.append(msg)
        return success, failed

    if isinstance(remote, dict):
        return _do_clear_lock(remote)

    cleared = []
    errors = []
    for repo in init():
        if remote:
            try:
                if not fnmatch.fnmatch(repo['url'], remote):
                    continue
            except TypeError:
                # remote was non-string, try again
                if not fnmatch.fnmatch(repo['url'], six.text_type(remote)):
                    continue
        success, failed = _do_clear_lock(repo)
        cleared.extend(success)
        errors.extend(failed)
    return cleared, errors


def lock(remote=None):
    '''
    Place an update.lk

    ``remote`` can either be a dictionary containing repo configuration
    information, or a pattern. If the latter, then remotes for which the URL
    matches the pattern will be locked.
    '''
    def _do_lock(repo):
        success = []
        failed = []
        if not os.path.exists(repo['lockfile']):
            try:
                with salt.utils.fopen(repo['lockfile'], 'w+') as fp_:
                    fp_.write('')
            except (IOError, OSError) as exc:
                msg = ('Unable to set update lock for {0} ({1}): {2} '
                       .format(repo['url'], repo['lockfile'], exc))
                log.debug(msg)
                failed.append(msg)
            else:
                msg = 'Set lock for {0}'.format(repo['url'])
                log.debug(msg)
                success.append(msg)
        return success, failed

    if isinstance(remote, dict):
        return _do_lock(remote)

    locked = []
    errors = []
    for repo in init():
        if remote:
            try:
                if not fnmatch.fnmatch(repo['url'], remote):
                    continue
            except TypeError:
                # remote was non-string, try again
                if not fnmatch.fnmatch(repo['url'], six.text_type(remote)):
                    continue
        success, failed = _do_lock(repo)
        locked.extend(success)
        errors.extend(failed)

    return locked, errors


def update():
@@ -317,13 +493,27 @@ def update():
    # data for the fileserver event
    data = {'changed': False,
            'backend': 'hgfs'}
    # _clear_old_remotes runs init(), so use the value from there to avoid a
    # second init()
    data['changed'], repos = _clear_old_remotes()
    for repo in repos:
        if os.path.exists(repo['lockfile']):
            log.warning(
                'Update lockfile is present for hgfs remote {0}, skipping. '
                'If this warning persists, it is possible that the update '
                'process was interrupted. Removing {1} or running '
                '\'salt-run fileserver.clear_lock hgfs\' will allow updates '
                'to continue for this remote.'
                .format(repo['url'], repo['lockfile'])
            )
            continue
        _, errors = lock(repo)
        if errors:
            log.error('Unable to set update lock for hgfs remote {0}, '
                      'skipping.'.format(repo['url']))
            continue
        log.debug('hgfs is fetching from {0}'.format(repo['url']))
        repo['repo'].open()
        curtip = repo['repo'].tip()
        try:
            repo['repo'].pull()
@@ -338,10 +528,7 @@ def update():
        if curtip[1] != newtip[1]:
            data['changed'] = True
        repo['repo'].close()
        clear_lock(repo)

    env_cache = os.path.join(__opts__['cachedir'], 'hgfs/envs.p')
    if data.get('changed', False) is True or not os.path.isfile(env_cache):
@@ -1,11 +1,20 @@
# -*- coding: utf-8 -*-
'''
Fileserver backend which serves files pushed to the Master

The :mod:`cp.push <salt.modules.cp.push>` function allows Minions to push files
up to the Master. Using this backend, these pushed files are exposed to other
Minions via the Salt fileserver.

To enable minionfs, :conf_master:`file_recv` needs to be set to ``True`` in
the master config file (otherwise :mod:`cp.push <salt.modules.cp.push>` will
not be allowed to push files to the Master), and ``minion`` must be added to
the :conf_master:`fileserver_backend` list.

.. code-block:: yaml

    fileserver_backend:
      - minion

Other minionfs settings include: :conf_master:`minionfs_whitelist`,
:conf_master:`minionfs_blacklist`, :conf_master:`minionfs_mountpoint`, and
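A minimal usage sketch follows; the minion ID and file path are placeholders.
Once ``file_recv`` is enabled and the backend is active, a file pushed from a
minion becomes retrievable from the fileserver under that minion's ID (with the
default mountpoint).

.. code-block:: bash

    # Have the minion push a local file up to the Master
    salt 'webserver1' cp.push /etc/nginx/nginx.conf

    # Other minions can then (by default) reference it as:
    #   salt://webserver1/etc/nginx/nginx.conf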
@@ -2,8 +2,18 @@
'''
The default file server backend

This fileserver backend serves files from the Master's local filesystem. If
:conf_master:`fileserver_backend` is not defined in the Master config file,
then this backend is enabled by default. If it *is* defined then ``roots`` must
be in the :conf_master:`fileserver_backend` list to enable this backend.

.. code-block:: yaml

    fileserver_backend:
      - roots

Fileserver environments are defined using the :conf_master:`file_roots`
configuration option.
'''
from __future__ import absolute_import
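As an illustrative sketch of the option mentioned above (the path is a
placeholder, not mandated by this backend), a single-environment
:conf_master:`file_roots` layout might look like:

.. code-block:: yaml

    file_roots:
      base:
        - /srv/salt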
@@ -1,15 +1,17 @@
# -*- coding: utf-8 -*-
'''
Amazon S3 Fileserver Backend

This backend exposes directories in S3 buckets as Salt environments. To enable
this backend, add ``s3`` to the :conf_master:`fileserver_backend` option in the
Master config file.

.. code-block:: yaml

    fileserver_backend:
      - s3

S3 credentials must also be set in the master config file:

.. code-block:: yaml
@@ -19,14 +21,6 @@ S3 credentials can be set in the master config file like so:
Alternatively, if on EC2 these credentials can be automatically loaded from
instance metadata.

This fileserver supports two modes of operation for the buckets:

1. :strong:`A single bucket per environment`
@@ -3,9 +3,14 @@
Subversion Fileserver Backend

After enabling this backend, branches and tags in a remote subversion
repository are exposed to salt as different environments. To enable this
backend, add ``svn`` to the :conf_master:`fileserver_backend` option in the
Master config file.

.. code-block:: yaml

    fileserver_backend:
      - svn

This backend assumes a standard svn layout with directories for ``branches``,
``tags``, and ``trunk``, at the repository root.
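A minimal remote definition for this backend might look like the following
sketch; the server name, URL scheme, and repository path are placeholders and
should be replaced with your own repository location.

.. code-block:: yaml

    svnfs_remotes:
      - svn://svnserver.tld/repos/myrepo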
@@ -13,7 +18,6 @@ This backend assumes a standard svn layout with directories for ``branches``,
:depends: - subversion
          - pysvn

.. versionchanged:: 2014.7.0
    The paths to the trunk, branches, and tags have been made configurable, via
    the config options :conf_master:`svnfs_trunk`,
@@ -26,11 +30,14 @@ This backend assumes a standard svn layout with directories for ``branches``,
# Import python libs
from __future__ import absolute_import
import copy
import errno
import fnmatch
import hashlib
import logging
import os
import shutil
from datetime import datetime

from salt.exceptions import FileserverConfigError

PER_REMOTE_PARAMS = ('mountpoint', 'root', 'trunk', 'branches', 'tags')
@@ -100,6 +107,15 @@ def _rev(repo):
    return None


def _failhard():
    '''
    Fatal fileserver configuration issue, raise an exception
    '''
    raise FileserverConfigError(
        'Failed to load svn fileserver backend'
    )


def init():
    '''
    Return the list of svn remotes and their configuration information
@@ -125,10 +141,12 @@ def init():
                log.error(
                    'Invalid per-remote configuration for remote {0}. If no '
                    'per-remote parameters are being specified, there may be '
                    'a trailing colon after the URL, which should be removed. '
                    'Check the master configuration file.'.format(repo_url)
                )
                _failhard()

            per_remote_errors = False
            for param in (x for x in per_remote_conf
                          if x not in PER_REMOTE_PARAMS):
                log.error(
@@ -138,17 +156,20 @@ def init():
                        param, repo_url, ', '.join(PER_REMOTE_PARAMS)
                    )
                )
                per_remote_errors = True
            if per_remote_errors:
                _failhard()

            repo_conf.update(per_remote_conf)
        else:
            repo_url = remote

        if not isinstance(repo_url, six.string_types):
            log.error(
                'Invalid svnfs remote {0}. Remotes must be strings, you may '
                'need to enclose the URL in quotes'.format(repo_url)
            )
            _failhard()

        try:
            repo_conf['mountpoint'] = salt.utils.strip_proto(
@@ -175,7 +196,7 @@ def init():
                'Failed to initialize svnfs remote {0!r}: {1}'
                .format(repo_url, exc)
            )
            _failhard()
        else:
            # Confirm that there is an svn checkout at the necessary path by
            # running pysvn.Client().status()
@@ -188,13 +209,14 @@ def init():
                    'manually delete this directory on the master to continue '
                    'to use this svnfs remote.'.format(rp_, repo_url)
                )
                _failhard()

        repo_conf.update({
            'repo': rp_,
            'url': repo_url,
            'hash': repo_hash,
            'cachedir': rp_,
            'lockfile': os.path.join(rp_, 'update.lk')
        })
        repos.append(repo_conf)
@ -218,27 +240,164 @@ def init():
|
|||||||
return repos
|
return repos
|
||||||
|
|
||||||
|
|
||||||
def purge_cache():
|
def _clear_old_remotes():
|
||||||
'''
|
'''
|
||||||
Purge the fileserver cache
|
Remove cache directories for remotes no longer configured
|
||||||
'''
|
'''
|
||||||
bp_ = os.path.join(__opts__['cachedir'], 'svnfs')
|
bp_ = os.path.join(__opts__['cachedir'], 'svnfs')
|
||||||
try:
|
try:
|
||||||
remove_dirs = os.listdir(bp_)
|
cachedir_ls = os.listdir(bp_)
|
||||||
except OSError:
|
except OSError:
|
||||||
remove_dirs = []
|
cachedir_ls = []
|
||||||
for repo in init():
|
repos = init()
|
||||||
|
# Remove actively-used remotes from list
|
||||||
|
for repo in repos:
|
||||||
try:
|
try:
|
||||||
remove_dirs.remove(repo['hash'])
|
cachedir_ls.remove(repo['hash'])
|
||||||
except ValueError:
|
except ValueError:
|
||||||
pass
|
pass
|
||||||
remove_dirs = [os.path.join(bp_, rdir) for rdir in remove_dirs
|
to_remove = []
|
||||||
if rdir not in ('hash', 'refs', 'envs.p', 'remote_map.txt')]
|
for item in cachedir_ls:
|
||||||
if remove_dirs:
|
if item in ('hash', 'refs'):
|
||||||
for rdir in remove_dirs:
|
continue
|
||||||
shutil.rmtree(rdir)
|
path = os.path.join(bp_, item)
|
||||||
return True
|
if os.path.isdir(path):
|
||||||
return False
|
to_remove.append(path)
|
||||||
|
failed = []
|
||||||
|
if to_remove:
|
||||||
|
for rdir in to_remove:
|
||||||
|
try:
|
||||||
|
shutil.rmtree(rdir)
|
||||||
|
except OSError as exc:
|
||||||
|
log.error(
|
||||||
|
'Unable to remove old svnfs remote cachedir {0}: {1}'
|
||||||
|
.format(rdir, exc)
|
||||||
|
)
|
||||||
|
failed.append(rdir)
|
||||||
|
else:
|
||||||
|
log.debug('svnfs removed old cachedir {0}'.format(rdir))
|
||||||
|
for fdir in failed:
|
||||||
|
to_remove.remove(fdir)
|
||||||
|
return bool(to_remove), repos
|
||||||
|
|
||||||

+def clear_cache():
+    '''
+    Completely clear svnfs cache
+    '''
+    fsb_cachedir = os.path.join(__opts__['cachedir'], 'svnfs')
+    list_cachedir = os.path.join(__opts__['cachedir'], 'file_lists/svnfs')
+    errors = []
+    for rdir in (fsb_cachedir, list_cachedir):
+        if os.path.exists(rdir):
+            try:
+                shutil.rmtree(rdir)
+            except OSError as exc:
+                errors.append('Unable to delete {0}: {1}'.format(rdir, exc))
+    return errors
+
+
+def clear_lock(remote=None):
+    '''
+    Clear update.lk
+
+    ``remote`` can either be a dictionary containing repo configuration
+    information, or a pattern. If the latter, then remotes for which the URL
+    matches the pattern will be locked.
+    '''
+    def _do_clear_lock(repo):
+        def _add_error(errlist, repo, exc):
+            msg = ('Unable to remove update lock for {0} ({1}): {2} '
+                   .format(repo['url'], repo['lockfile'], exc))
+            log.debug(msg)
+            errlist.append(msg)
+        success = []
+        failed = []
+        if os.path.exists(repo['lockfile']):
+            try:
+                os.remove(repo['lockfile'])
+            except OSError as exc:
+                if exc.errno == errno.EISDIR:
+                    # Somehow this path is a directory. Should never happen
+                    # unless some wiseguy manually creates a directory at this
+                    # path, but just in case, handle it.
+                    try:
+                        shutil.rmtree(repo['lockfile'])
+                    except OSError as exc:
+                        _add_error(failed, repo, exc)
+                else:
+                    _add_error(failed, repo, exc)
+            else:
+                msg = 'Removed lock for {0}'.format(repo['url'])
+                log.debug(msg)
+                success.append(msg)
+        return success, failed
+
+    if isinstance(remote, dict):
+        return _do_clear_lock(remote)
+
+    cleared = []
+    errors = []
+    for repo in init():
+        if remote:
+            try:
+                if remote not in repo['url']:
+                    continue
+            except TypeError:
+                # remote was non-string, try again
+                if six.text_type(remote) not in repo['url']:
+                    continue
+        success, failed = _do_clear_lock(repo)
+        cleared.extend(success)
+        errors.extend(failed)
+    return cleared, errors
+
+
+def lock(remote=None):
+    '''
+    Place an update.lk
+
+    ``remote`` can either be a dictionary containing repo configuration
+    information, or a pattern. If the latter, then remotes for which the URL
+    matches the pattern will be locked.
+    '''
+    def _do_lock(repo):
+        success = []
+        failed = []
+        if not os.path.exists(repo['lockfile']):
+            try:
+                with salt.utils.fopen(repo['lockfile'], 'w+') as fp_:
+                    fp_.write('')
+            except (IOError, OSError) as exc:
+                msg = ('Unable to set update lock for {0} ({1}): {2} '
+                       .format(repo['url'], repo['lockfile'], exc))
+                log.debug(msg)
+                failed.append(msg)
+            else:
+                msg = 'Set lock for {0}'.format(repo['url'])
+                log.debug(msg)
+                success.append(msg)
+        return success, failed
+
+    if isinstance(remote, dict):
+        return _do_lock(remote)
+
+    locked = []
+    errors = []
+    for repo in init():
+        if remote:
+            try:
+                if not fnmatch.fnmatch(repo['url'], remote):
+                    continue
+            except TypeError:
+                # remote was non-string, try again
+                if not fnmatch.fnmatch(repo['url'], six.text_type(remote)):
+                    continue
+        success, failed = _do_lock(repo)
+        locked.extend(success)
+        errors.extend(failed)
+
+    return locked, errors


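Both new helpers accept either a single repo configuration dictionary (as built by ``init()``) or a pattern matched against remote URLs; note that ``lock`` uses fnmatch-style globbing while ``clear_lock`` does a plain substring match. A hedged usage sketch (the repo dict and paths below are illustrative):

.. code-block:: python

    # With a full repo config dict, e.g. one entry returned by init():
    repo = {'url': 'svn://svn.example.com/repo',
            'lockfile': '/var/cache/salt/master/svnfs/abc123/update.lk'}
    made, failed = lock(repo)

    # With a glob pattern, every matching remote gets locked:
    made, failed = lock('svn://svn.example.com/*')

    # clear_lock() matches remotes by substring instead:
    cleared, errors = clear_lock('example.com')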
 def update():
@@ -248,12 +407,26 @@ def update():
     # data for the fileserver event
     data = {'changed': False,
             'backend': 'svnfs'}
-    pid = os.getpid()
-    data['changed'] = purge_cache()
-    for repo in init():
-        lk_fn = os.path.join(repo['repo'], 'update.lk')
-        with salt.utils.fopen(lk_fn, 'w+') as fp_:
-            fp_.write(str(pid))
+    # _clear_old_remotes runs init(), so use the value from there to avoid a
+    # second init()
+    data['changed'], repos = _clear_old_remotes()
+    for repo in repos:
+        if os.path.exists(repo['lockfile']):
+            log.warning(
+                'Update lockfile is present for svnfs remote {0}, skipping. '
+                'If this warning persists, it is possible that the update '
+                'process was interrupted. Removing {1} or running '
+                '\'salt-run fileserver.clear_lock svnfs\' will allow updates '
+                'to continue for this remote.'
+                .format(repo['url'], repo['lockfile'])
+            )
+            continue
+        _, errors = lock(repo)
+        if errors:
+            log.error('Unable to set update lock for svnfs remote {0}, '
+                      'skipping.'.format(repo['url']))
+            continue
+        log.debug('svnfs is fetching from {0}'.format(repo['url']))
         old_rev = _rev(repo)
         try:
             CLIENT.update(repo['repo'])
@@ -262,10 +435,6 @@ def update():
                 'Error updating svnfs remote {0} (cachedir: {1}): {2}'
                 .format(repo['url'], repo['cachedir'], exc)
             )
-            try:
-                os.remove(lk_fn)
-            except (OSError, IOError):
-                pass

         new_rev = _rev(repo)
         if any((x is None for x in (old_rev, new_rev))):
@@ -274,6 +443,8 @@ def update():
         if new_rev != old_rev:
             data['changed'] = True

+        clear_lock(repo)
+
     env_cache = os.path.join(__opts__['cachedir'], 'svnfs/envs.p')
     if data.get('changed', False) is True or not os.path.isfile(env_cache):
         env_cachedir = os.path.dirname(env_cache)
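The reworked ``update()`` loop follows a simple lock-file discipline: skip a remote whose lock is already held, otherwise take the lock, fetch, and clear the lock once the revision comparison is done. A generic sketch of that pattern, independent of svnfs (the function and key names here are placeholders, not Salt APIs):

.. code-block:: python

    import logging
    import os

    log = logging.getLogger(__name__)


    def guarded_update(repos, fetch):
        '''Run fetch(repo) for each repo unless its lockfile is present.'''
        changed = False
        for repo in repos:
            if os.path.exists(repo['lockfile']):
                log.warning('Lockfile present for %s, skipping', repo['url'])
                continue
            with open(repo['lockfile'], 'w'):
                pass                       # take the lock
            try:
                changed |= fetch(repo)     # True if new revisions arrived
            finally:
                try:
                    os.remove(repo['lockfile'])   # release the lock
                except OSError:
                    pass
        return changed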
@@ -388,7 +559,7 @@ def find_file(path, tgt_env='base', **kwargs):  # pylint: disable=W0613
     '''
     Find the first file to match the path and ref. This operates similarly to
     the roots file sever but with assumptions of the directory structure
-    based of svn standard practices.
+    based on svn standard practices.
     '''
     fnd = {'path': '',
            'rel': ''}
@@ -55,6 +55,7 @@ import salt.utils.process
 import salt.utils.zeromq
 import salt.utils.jid
 from salt.defaults import DEFAULT_TARGET_DELIM
+from salt.exceptions import FileserverConfigError
 from salt.utils.debug import enable_sigusr1_handler, enable_sigusr2_handler, inspect_stack
 from salt.utils.event import tagify
 from salt.utils.master import ConnectedCache
@@ -360,6 +361,13 @@ class Master(SMaster):
                 'Failed to load fileserver backends, the configured backends '
                 'are: {0}'.format(', '.join(self.opts['fileserver_backend']))
             )
+        else:
+            # Run init() for all backends which support the function, to
+            # double-check configuration
+            try:
+                fileserver.init()
+            except FileserverConfigError as exc:
+                errors.append('{0}'.format(exc))
         if not self.opts['fileserver_backend']:
             errors.append('No fileserver backends are configured')
         if errors:
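The added ``else`` branch validates fileserver configuration up front by running each backend's ``init()`` and turning any ``FileserverConfigError`` into a startup error instead of a mid-run failure. A minimal sketch of the same pattern (the ``backends`` mapping and exception class here are illustrative):

.. code-block:: python

    class FileserverConfigError(Exception):
        '''Raised by a backend's init() when its configuration is invalid.'''


    def validate_backends(backends):
        '''
        Run init() for every backend that defines one and collect error
        strings instead of letting the first bad backend abort startup.
        '''
        errors = []
        for name, module in backends.items():
            init = getattr(module, 'init', None)
            if init is None:
                continue
            try:
                init()
            except FileserverConfigError as exc:
                errors.append('{0}: {1}'.format(name, exc))
        return errors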
@@ -42,28 +42,50 @@ def __virtual__():
     return 'chocolatey'


+def _clear_context():
+    '''
+    Clear variables stored in __context__. Run this function when a new version
+    of chocolatey is installed.
+    '''
+    for var in (x for x in __context__ if x.startswith('chocolatey.')):
+        __context__.pop(var)
+
+
+def _yes():
+    '''
+    Returns ['--yes'] if on v0.9.9.0 or later, otherwise returns an empty list
+    '''
+    if 'chocolatey._yes' in __context__:
+        return __context__['chocolatey._yes']
+    if _LooseVersion(chocolatey_version()) >= _LooseVersion('0.9.9'):
+        answer = ['--yes']
+    else:
+        answer = []
+    __context__['chocolatey._yes'] = answer
+    return answer
+
+
 def _find_chocolatey():
     '''
     Returns the full path to chocolatey.bat on the host.
     '''
-    try:
+    if 'chocolatey._path' in __context__:
         return __context__['chocolatey._path']
-    except KeyError:
-        choc_defaults = ['C:\\Chocolatey\\bin\\chocolatey.bat',
-                         'C:\\ProgramData\\Chocolatey\\bin\\chocolatey.exe', ]
+    choc_defaults = ['C:\\Chocolatey\\bin\\chocolatey.bat',
+                     'C:\\ProgramData\\Chocolatey\\bin\\chocolatey.exe', ]

     choc_path = __salt__['cmd.which']('chocolatey.exe')
     if not choc_path:
         for choc_dir in choc_defaults:
             if __salt__['cmd.has_exec'](choc_dir):
                 choc_path = choc_dir
     if not choc_path:
         err = ('Chocolatey not installed. Use chocolatey.bootstrap to '
                'install the Chocolatey package manager.')
         log.error(err)
         raise CommandExecutionError(err)
     __context__['chocolatey._path'] = choc_path
     return choc_path


 def chocolatey_version():
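These helpers cache their results in ``__context__`` so the Chocolatey path and version are only probed once per run, and ``_yes()`` lets each command append ``--yes`` only on releases that understand it. A sketch of how the install functions later in this diff use them (``name`` is whatever package was requested):

.. code-block:: python

    # Inside an execution function such as install():
    cmd = [_find_chocolatey(), 'install', name]
    cmd.extend(_yes())           # ['--yes'] on Chocolatey >= 0.9.9, [] otherwise
    result = __salt__['cmd.run_all'](cmd, python_shell=False)

    if result['retcode'] != 0:
        raise CommandExecutionError(
            'Running chocolatey failed: {0}'.format(result['stderr']))
    elif name == 'chocolatey':
        _clear_context()         # cached path/version are stale after a self-upgrade
    return result['stdout']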
@ -78,20 +100,23 @@ def chocolatey_version():
|
|||||||
|
|
||||||
salt '*' chocolatey.chocolatey_version
|
salt '*' chocolatey.chocolatey_version
|
||||||
'''
|
'''
|
||||||
try:
|
if 'chocolatey._version' in __context__:
|
||||||
return __context__['chocolatey._version']
|
return __context__['chocolatey._version']
|
||||||
except KeyError:
|
cmd = [_find_chocolatey(), 'help']
|
||||||
cmd = [_find_chocolatey(), 'help']
|
out = __salt__['cmd.run'](cmd, python_shell=False)
|
||||||
out = __salt__['cmd.run'](cmd, python_shell=False)
|
for line in out.splitlines():
|
||||||
for line in out.splitlines():
|
line = line.lower()
|
||||||
if line.lower().startswith('version: '):
|
if line.startswith('chocolatey v'):
|
||||||
try:
|
__context__['chocolatey._version'] = line[12:]
|
||||||
__context__['chocolatey._version'] = \
|
return __context__['chocolatey._version']
|
||||||
line.split(None, 1)[-1].strip("'")
|
elif line.startswith('version: '):
|
||||||
return __context__['chocolatey._version']
|
try:
|
||||||
except Exception:
|
__context__['chocolatey._version'] = \
|
||||||
pass
|
line.split(None, 1)[-1].strip("'")
|
||||||
raise CommandExecutionError('Unable to determine Chocolatey version')
|
return __context__['chocolatey._version']
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
raise CommandExecutionError('Unable to determine Chocolatey version')
|
||||||
|
|
||||||
|
|
||||||
def bootstrap(force=False):
|
def bootstrap(force=False):
|
||||||
@ -193,12 +218,16 @@ def bootstrap(force=False):
|
|||||||
return result['stdout']
|
return result['stdout']
|
||||||
|
|
||||||
|
|
||||||
def list_(filter=None, all_versions=False, pre_versions=False, source=None, local_only=False):
|
def list_(narrow=None,
|
||||||
|
all_versions=False,
|
||||||
|
pre_versions=False,
|
||||||
|
source=None,
|
||||||
|
local_only=False):
|
||||||
'''
|
'''
|
||||||
Instructs Chocolatey to pull a vague package list from the repository.
|
Instructs Chocolatey to pull a vague package list from the repository.
|
||||||
|
|
||||||
filter
|
narrow
|
||||||
Term used to filter down results. Searches against name/description/tag.
|
Term used to narrow down results. Searches against name/description/tag.
|
||||||
|
|
||||||
all_versions
|
all_versions
|
||||||
Display all available package versions in results. Defaults to False.
|
Display all available package versions in results. Defaults to False.
|
||||||
@ -217,13 +246,13 @@ def list_(filter=None, all_versions=False, pre_versions=False, source=None, loca
|
|||||||
|
|
||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
|
||||||
salt '*' chocolatey.list <filter>
|
salt '*' chocolatey.list <narrow>
|
||||||
salt '*' chocolatey.list <filter> all_versions=True
|
salt '*' chocolatey.list <narrow> all_versions=True
|
||||||
'''
|
'''
|
||||||
choc_path = _find_chocolatey()
|
choc_path = _find_chocolatey()
|
||||||
cmd = [choc_path, 'list']
|
cmd = [choc_path, 'list']
|
||||||
if filter:
|
if narrow:
|
||||||
cmd.extend([filter])
|
cmd.append(narrow)
|
||||||
if salt.utils.is_true(all_versions):
|
if salt.utils.is_true(all_versions):
|
||||||
cmd.append('-AllVersions')
|
cmd.append('-AllVersions')
|
||||||
if salt.utils.is_true(pre_versions):
|
if salt.utils.is_true(pre_versions):
|
||||||
@ -255,7 +284,8 @@ def list_(filter=None, all_versions=False, pre_versions=False, source=None, loca
|
|||||||
|
|
||||||
def list_webpi():
|
def list_webpi():
|
||||||
'''
|
'''
|
||||||
Instructs Chocolatey to pull a full package list from the Microsoft Web PI repository.
|
Instructs Chocolatey to pull a full package list from the Microsoft Web PI
|
||||||
|
repository.
|
||||||
|
|
||||||
CLI Example:
|
CLI Example:
|
||||||
|
|
||||||
@ -298,7 +328,13 @@ def list_windowsfeatures():
|
|||||||
return result['stdout']
|
return result['stdout']
|
||||||
|
|
||||||
|
|
||||||
def install(name, version=None, source=None, force=False, install_args=None, override_args=False, force_x86=False):
|
def install(name,
|
||||||
|
version=None,
|
||||||
|
source=None,
|
||||||
|
force=False,
|
||||||
|
install_args=None,
|
||||||
|
override_args=False,
|
||||||
|
force_x86=False):
|
||||||
'''
|
'''
|
||||||
Instructs Chocolatey to install a package.
|
Instructs Chocolatey to install a package.
|
||||||
|
|
||||||
@ -350,12 +386,15 @@ def install(name, version=None, source=None, force=False, install_args=None, ove
|
|||||||
cmd.extend(['-OverrideArguments'])
|
cmd.extend(['-OverrideArguments'])
|
||||||
if force_x86:
|
if force_x86:
|
||||||
cmd.extend(['-forcex86'])
|
cmd.extend(['-forcex86'])
|
||||||
|
cmd.extend(_yes())
|
||||||
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
||||||
|
|
||||||
if result['retcode'] != 0:
|
if result['retcode'] != 0:
|
||||||
err = 'Running chocolatey failed: {0}'.format(result['stderr'])
|
err = 'Running chocolatey failed: {0}'.format(result['stderr'])
|
||||||
log.error(err)
|
log.error(err)
|
||||||
raise CommandExecutionError(err)
|
raise CommandExecutionError(err)
|
||||||
|
elif name == 'chocolatey':
|
||||||
|
_clear_context()
|
||||||
|
|
||||||
return result['stdout']
|
return result['stdout']
|
||||||
|
|
||||||
@ -389,6 +428,7 @@ def install_cygwin(name, install_args=None, override_args=False):
|
|||||||
cmd.extend(['-InstallArguments', install_args])
|
cmd.extend(['-InstallArguments', install_args])
|
||||||
if override_args:
|
if override_args:
|
||||||
cmd.extend(['-OverrideArguments'])
|
cmd.extend(['-OverrideArguments'])
|
||||||
|
cmd.extend(_yes())
|
||||||
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
||||||
|
|
||||||
if result['retcode'] != 0:
|
if result['retcode'] != 0:
|
||||||
@ -436,6 +476,7 @@ def install_gem(name, version=None, install_args=None, override_args=False):
|
|||||||
cmd.extend(['-InstallArguments', install_args])
|
cmd.extend(['-InstallArguments', install_args])
|
||||||
if override_args:
|
if override_args:
|
||||||
cmd.extend(['-OverrideArguments'])
|
cmd.extend(['-OverrideArguments'])
|
||||||
|
cmd.extend(_yes())
|
||||||
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
||||||
|
|
||||||
if result['retcode'] != 0:
|
if result['retcode'] != 0:
|
||||||
@ -485,6 +526,8 @@ def install_missing(name, version=None, source=None):
|
|||||||
cmd.extend(['-Version', version])
|
cmd.extend(['-Version', version])
|
||||||
if source:
|
if source:
|
||||||
cmd.extend(['-Source', source])
|
cmd.extend(['-Source', source])
|
||||||
|
# Shouldn't need this as this code should never run on v0.9.9 and newer
|
||||||
|
cmd.extend(_yes())
|
||||||
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
||||||
|
|
||||||
if result['retcode'] != 0:
|
if result['retcode'] != 0:
|
||||||
@ -531,6 +574,7 @@ def install_python(name, version=None, install_args=None, override_args=False):
|
|||||||
cmd.extend(['-InstallArguments', install_args])
|
cmd.extend(['-InstallArguments', install_args])
|
||||||
if override_args:
|
if override_args:
|
||||||
cmd.extend(['-OverrideArguments'])
|
cmd.extend(['-OverrideArguments'])
|
||||||
|
cmd.extend(_yes())
|
||||||
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
||||||
|
|
||||||
if result['retcode'] != 0:
|
if result['retcode'] != 0:
|
||||||
@ -557,6 +601,7 @@ def install_windowsfeatures(name):
|
|||||||
'''
|
'''
|
||||||
choc_path = _find_chocolatey()
|
choc_path = _find_chocolatey()
|
||||||
cmd = [choc_path, 'windowsfeatures', name]
|
cmd = [choc_path, 'windowsfeatures', name]
|
||||||
|
cmd.extend(_yes())
|
||||||
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
||||||
|
|
||||||
if result['retcode'] != 0:
|
if result['retcode'] != 0:
|
||||||
@ -596,6 +641,7 @@ def install_webpi(name, install_args=None, override_args=False):
|
|||||||
cmd.extend(['-InstallArguments', install_args])
|
cmd.extend(['-InstallArguments', install_args])
|
||||||
if override_args:
|
if override_args:
|
||||||
cmd.extend(['-OverrideArguments'])
|
cmd.extend(['-OverrideArguments'])
|
||||||
|
cmd.extend(_yes())
|
||||||
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
||||||
|
|
||||||
if result['retcode'] != 0:
|
if result['retcode'] != 0:
|
||||||
@ -643,6 +689,7 @@ def uninstall(name, version=None, uninstall_args=None, override_args=False):
|
|||||||
cmd.extend(['-UninstallArguments', uninstall_args])
|
cmd.extend(['-UninstallArguments', uninstall_args])
|
||||||
if override_args:
|
if override_args:
|
||||||
cmd.extend(['-OverrideArguments'])
|
cmd.extend(['-OverrideArguments'])
|
||||||
|
cmd.extend(_yes())
|
||||||
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
||||||
|
|
||||||
if result['retcode'] != 0:
|
if result['retcode'] != 0:
|
||||||
@ -682,6 +729,7 @@ def update(name, source=None, pre_versions=False):
|
|||||||
cmd.extend(['-Source', source])
|
cmd.extend(['-Source', source])
|
||||||
if salt.utils.is_true(pre_versions):
|
if salt.utils.is_true(pre_versions):
|
||||||
cmd.append('-PreRelease')
|
cmd.append('-PreRelease')
|
||||||
|
cmd.extend(_yes())
|
||||||
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
result = __salt__['cmd.run_all'](cmd, python_shell=False)
|
||||||
|
|
||||||
if result['retcode'] != 0:
|
if result['retcode'] != 0:
|
||||||
|
@ -172,6 +172,7 @@ def _run(cmd,
|
|||||||
timeout=None,
|
timeout=None,
|
||||||
with_communicate=True,
|
with_communicate=True,
|
||||||
reset_system_locale=True,
|
reset_system_locale=True,
|
||||||
|
ignore_retcode=False,
|
||||||
saltenv='base',
|
saltenv='base',
|
||||||
use_vt=False):
|
use_vt=False):
|
||||||
'''
|
'''
|
||||||
@ -462,7 +463,10 @@ def _run(cmd,
|
|||||||
finally:
|
finally:
|
||||||
proc.close(terminate=True, kill=True)
|
proc.close(terminate=True, kill=True)
|
||||||
try:
|
try:
|
||||||
__context__['retcode'] = ret['retcode']
|
if ignore_retcode:
|
||||||
|
__context__['retcode'] = 0
|
||||||
|
else:
|
||||||
|
__context__['retcode'] = ret['retcode']
|
||||||
except NameError:
|
except NameError:
|
||||||
# Ignore the context error during grain generation
|
# Ignore the context error during grain generation
|
||||||
pass
|
pass
|
||||||
@ -626,6 +630,7 @@ def run(cmd,
|
|||||||
output_loglevel=output_loglevel,
|
output_loglevel=output_loglevel,
|
||||||
timeout=timeout,
|
timeout=timeout,
|
||||||
reset_system_locale=reset_system_locale,
|
reset_system_locale=reset_system_locale,
|
||||||
|
ignore_retcode=ignore_retcode,
|
||||||
saltenv=saltenv,
|
saltenv=saltenv,
|
||||||
use_vt=use_vt)
|
use_vt=use_vt)
|
||||||
|
|
||||||
@ -818,6 +823,7 @@ def run_stdout(cmd,
|
|||||||
output_loglevel=output_loglevel,
|
output_loglevel=output_loglevel,
|
||||||
timeout=timeout,
|
timeout=timeout,
|
||||||
reset_system_locale=reset_system_locale,
|
reset_system_locale=reset_system_locale,
|
||||||
|
ignore_retcode=ignore_retcode,
|
||||||
saltenv=saltenv,
|
saltenv=saltenv,
|
||||||
use_vt=use_vt)
|
use_vt=use_vt)
|
||||||
|
|
||||||
@ -905,6 +911,7 @@ def run_stderr(cmd,
|
|||||||
output_loglevel=output_loglevel,
|
output_loglevel=output_loglevel,
|
||||||
timeout=timeout,
|
timeout=timeout,
|
||||||
reset_system_locale=reset_system_locale,
|
reset_system_locale=reset_system_locale,
|
||||||
|
ignore_retcode=ignore_retcode,
|
||||||
use_vt=use_vt,
|
use_vt=use_vt,
|
||||||
saltenv=saltenv)
|
saltenv=saltenv)
|
||||||
|
|
||||||
@ -992,6 +999,7 @@ def run_all(cmd,
|
|||||||
output_loglevel=output_loglevel,
|
output_loglevel=output_loglevel,
|
||||||
timeout=timeout,
|
timeout=timeout,
|
||||||
reset_system_locale=reset_system_locale,
|
reset_system_locale=reset_system_locale,
|
||||||
|
ignore_retcode=ignore_retcode,
|
||||||
saltenv=saltenv,
|
saltenv=saltenv,
|
||||||
use_vt=use_vt)
|
use_vt=use_vt)
|
||||||
|
|
||||||
@ -1076,6 +1084,7 @@ def retcode(cmd,
|
|||||||
output_loglevel=output_loglevel,
|
output_loglevel=output_loglevel,
|
||||||
timeout=timeout,
|
timeout=timeout,
|
||||||
reset_system_locale=reset_system_locale,
|
reset_system_locale=reset_system_locale,
|
||||||
|
ignore_retcode=ignore_retcode,
|
||||||
saltenv=saltenv,
|
saltenv=saltenv,
|
||||||
use_vt=use_vt)
|
use_vt=use_vt)
|
||||||
|
|
||||||
@ -1246,6 +1255,7 @@ def script(source,
|
|||||||
|
|
||||||
|
|
||||||
def script_retcode(source,
|
def script_retcode(source,
|
||||||
|
args=None,
|
||||||
cwd=None,
|
cwd=None,
|
||||||
stdin=None,
|
stdin=None,
|
||||||
runas=None,
|
runas=None,
|
||||||
@ -1278,6 +1288,8 @@ def script_retcode(source,
|
|||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
|
||||||
salt '*' cmd.script_retcode salt://scripts/runme.sh
|
salt '*' cmd.script_retcode salt://scripts/runme.sh
|
||||||
|
salt '*' cmd.script_retcode salt://scripts/runme.sh 'arg1 arg2 "arg 3"'
|
||||||
|
salt '*' cmd.script_retcode salt://scripts/windows_task.ps1 args=' -Input c:\\tmp\\infile.txt' shell='powershell'
|
||||||
|
|
||||||
A string of standard input can be specified for the command to be run using
|
A string of standard input can be specified for the command to be run using
|
||||||
the ``stdin`` parameter. This can be useful in cases where sensitive
|
the ``stdin`` parameter. This can be useful in cases where sensitive
|
||||||
@ -1303,6 +1315,7 @@ def script_retcode(source,
|
|||||||
saltenv = __env__
|
saltenv = __env__
|
||||||
|
|
||||||
return script(source=source,
|
return script(source=source,
|
||||||
|
args=args,
|
||||||
cwd=cwd,
|
cwd=cwd,
|
||||||
stdin=stdin,
|
stdin=stdin,
|
||||||
runas=runas,
|
runas=runas,
|
||||||
|
@ -12,6 +12,8 @@ import logging
|
|||||||
|
|
||||||
# Import salt libs
|
# Import salt libs
|
||||||
import salt.utils
|
import salt.utils
|
||||||
|
import salt.utils.cloud
|
||||||
|
import salt._compat
|
||||||
import salt.syspaths as syspaths
|
import salt.syspaths as syspaths
|
||||||
import salt.utils.sdb as sdb
|
import salt.utils.sdb as sdb
|
||||||
|
|
||||||
@ -381,6 +383,9 @@ def gather_bootstrap_script(bootstrap=None):
|
|||||||
'''
|
'''
|
||||||
Download the salt-bootstrap script, and return its location
|
Download the salt-bootstrap script, and return its location
|
||||||
|
|
||||||
|
bootstrap
|
||||||
|
URL of alternate bootstrap script
|
||||||
|
|
||||||
CLI Example:
|
CLI Example:
|
||||||
|
|
||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
salt/modules/container_resource.py  (new file, 99 lines)
@@ -0,0 +1,99 @@
|
# -*- coding: utf-8 -*-
|
||||||
|
'''
|
||||||
|
Common resources for LXC and systemd-nspawn containers
|
||||||
|
|
||||||
|
These functions are not designed to be called directly, but instead from the
|
||||||
|
:mod:`lxc <salt.modules.lxc>` and the (future) :mod:`nspawn
|
||||||
|
<salt.modules.nspawn>` execution modules.
|
||||||
|
'''
|
||||||
|
|
||||||
|
# Import python libs
|
||||||
|
from __future__ import absolute_import
|
||||||
|
import logging
|
||||||
|
import time
|
||||||
|
import traceback
|
||||||
|
|
||||||
|
# Import salt libs
|
||||||
|
from salt.exceptions import SaltInvocationError
|
||||||
|
from salt.utils import vt
|
||||||
|
|
||||||
|
log = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
def run(name,
|
||||||
|
cmd,
|
||||||
|
output=None,
|
||||||
|
no_start=False,
|
||||||
|
stdin=None,
|
||||||
|
python_shell=True,
|
||||||
|
output_loglevel='debug',
|
||||||
|
ignore_retcode=False,
|
||||||
|
use_vt=False):
|
||||||
|
'''
|
||||||
|
Common logic for running shell commands in containers
|
||||||
|
|
||||||
|
Requires the full command to be passed to :mod:`cmd.run
|
||||||
|
<salt.modules.cmdmod.run>`/:mod:`cmd.run_all <salt.modules.cmdmod.run_all>`
|
||||||
|
'''
|
||||||
|
valid_output = ('stdout', 'stderr', 'retcode', 'all')
|
||||||
|
if output is None:
|
||||||
|
cmd_func = 'cmd.run'
|
||||||
|
elif output not in valid_output:
|
||||||
|
raise SaltInvocationError(
|
||||||
|
'\'output\' param must be one of the following: {0}'
|
||||||
|
.format(', '.join(valid_output))
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
cmd_func = 'cmd.run_all'
|
||||||
|
|
||||||
|
if not use_vt:
|
||||||
|
ret = __salt__[cmd_func](cmd,
|
||||||
|
stdin=stdin,
|
||||||
|
python_shell=python_shell,
|
||||||
|
output_loglevel=output_loglevel,
|
||||||
|
ignore_retcode=ignore_retcode)
|
||||||
|
else:
|
||||||
|
stdout, stderr = '', ''
|
||||||
|
try:
|
||||||
|
proc = vt.Terminal(cmd,
|
||||||
|
shell=python_shell,
|
||||||
|
log_stdin_level=output_loglevel if
|
||||||
|
output_loglevel == 'quiet'
|
||||||
|
else 'info',
|
||||||
|
log_stdout_level=output_loglevel,
|
||||||
|
log_stderr_level=output_loglevel,
|
||||||
|
log_stdout=True,
|
||||||
|
log_stderr=True,
|
||||||
|
stream_stdout=False,
|
||||||
|
stream_stderr=False)
|
||||||
|
# Consume output
|
||||||
|
while proc.has_unread_data:
|
||||||
|
try:
|
||||||
|
cstdout, cstderr = proc.recv()
|
||||||
|
if cstdout:
|
||||||
|
stdout += cstdout
|
||||||
|
if cstderr:
|
||||||
|
if output is None:
|
||||||
|
stdout += cstderr
|
||||||
|
else:
|
||||||
|
stderr += cstderr
|
||||||
|
time.sleep(0.5)
|
||||||
|
except KeyboardInterrupt:
|
||||||
|
break
|
||||||
|
ret = stdout if output is None \
|
||||||
|
else {'retcode': proc.exitstatus,
|
||||||
|
'pid': 2,
|
||||||
|
'stdout': stdout,
|
||||||
|
'stderr': stderr}
|
||||||
|
except vt.TerminalException:
|
||||||
|
trace = traceback.format_exc()
|
||||||
|
log.error(trace)
|
||||||
|
ret = stdout if output is None \
|
||||||
|
else {'retcode': 127,
|
||||||
|
'pid': 2,
|
||||||
|
'stdout': stdout,
|
||||||
|
'stderr': stderr}
|
||||||
|
finally:
|
||||||
|
proc.terminate()
|
||||||
|
|
||||||
|
return ret
|
@ -1193,7 +1193,7 @@ def replace(path,
|
|||||||
raise SaltInvocationError('Choose between append or prepend_if_not_found')
|
raise SaltInvocationError('Choose between append or prepend_if_not_found')
|
||||||
|
|
||||||
flags_num = _get_flags(flags)
|
flags_num = _get_flags(flags)
|
||||||
cpattern = re.compile(pattern, flags_num)
|
cpattern = re.compile(str(pattern), flags_num)
|
||||||
if bufsize == 'file':
|
if bufsize == 'file':
|
||||||
bufsize = os.path.getsize(path)
|
bufsize = os.path.getsize(path)
|
||||||
|
|
||||||
|
@@ -267,7 +267,7 @@ def remove(mod, persist=False, comment=True):
        salt '*' kmod.remove kvm
    '''
    pre_mods = lsmod()
-    __salt__['cmd.run_all']('modprobe -r {0}'.format(mod), python_shell=False)
+    __salt__['cmd.run_all']('rmmod {0}'.format(mod), python_shell=False)
    post_mods = lsmod()
    mods = _rm_mods(pre_mods, post_mods)
    persist_mods = set()
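The return value of ``remove`` is still derived by diffing the loaded-module list before and after the command runs, so only the switch from ``modprobe -r`` to ``rmmod`` changes which modules actually get unloaded. ``_rm_mods`` is not shown in this hunk; a plausible sketch of that diffing step (the ``'module'`` key is an assumption about what ``lsmod()`` returns):

.. code-block:: python

    def _rm_mods(pre_mods, post_mods):
        '''
        Return the names of modules present before the removal
        but absent afterwards.
        '''
        pre = set(mod['module'] for mod in pre_mods)
        post = set(mod['module'] for mod in post_mods)
        return pre - post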
(File diff suppressed because it is too large)
@ -228,11 +228,15 @@ def create(name,
|
|||||||
For more info, read the ``mdadm(8)`` manpage
|
For more info, read the ``mdadm(8)`` manpage
|
||||||
'''
|
'''
|
||||||
opts = []
|
opts = []
|
||||||
|
raid_devices = len(devices)
|
||||||
|
|
||||||
for key in kwargs:
|
for key in kwargs:
|
||||||
if not key.startswith('__'):
|
if not key.startswith('__'):
|
||||||
opts.append('--{0}'.format(key))
|
opts.append('--{0}'.format(key))
|
||||||
if kwargs[key] is not True:
|
if kwargs[key] is not True:
|
||||||
opts.append(str(kwargs[key]))
|
opts.append(str(kwargs[key]))
|
||||||
|
if key == 'spare-devices':
|
||||||
|
raid_devices -= int(kwargs[key])
|
||||||
|
|
||||||
cmd = ['mdadm',
|
cmd = ['mdadm',
|
||||||
'-C', name,
|
'-C', name,
|
||||||
@ -240,7 +244,7 @@ def create(name,
|
|||||||
'-v'] + opts + [
|
'-v'] + opts + [
|
||||||
'-l', str(level),
|
'-l', str(level),
|
||||||
'-e', metadata,
|
'-e', metadata,
|
||||||
'-n', str(len(devices))] + devices
|
'-n', str(raid_devices)] + devices
|
||||||
|
|
||||||
cmd_str = ' '.join(cmd)
|
cmd_str = ' '.join(cmd)
|
||||||
|
|
||||||
|
@ -17,12 +17,13 @@ from __future__ import absolute_import
|
|||||||
|
|
||||||
# Import python libs
|
# Import python libs
|
||||||
import logging
|
import logging
|
||||||
|
from distutils.version import LooseVersion # pylint: disable=import-error,no-name-in-module
|
||||||
import json
|
import json
|
||||||
from distutils.version import StrictVersion # pylint: disable=import-error,no-name-in-module
|
|
||||||
|
|
||||||
# Import salt libs
|
# Import salt libs
|
||||||
from salt.ext.six import string_types
|
from salt.ext.six import string_types
|
||||||
|
|
||||||
|
|
||||||
# Import third party libs
|
# Import third party libs
|
||||||
try:
|
try:
|
||||||
import pymongo
|
import pymongo
|
||||||
@ -172,7 +173,7 @@ def user_list(user=None, password=None, host=None, port=None, database='admin'):
|
|||||||
output = []
|
output = []
|
||||||
mongodb_version = mdb.eval('db.version()')
|
mongodb_version = mdb.eval('db.version()')
|
||||||
|
|
||||||
if StrictVersion(mongodb_version) >= StrictVersion('2.6'):
|
if LooseVersion(mongodb_version) >= LooseVersion('2.6'):
|
||||||
for user in mdb.eval('db.getUsers()'):
|
for user in mdb.eval('db.getUsers()'):
|
||||||
output.append([
|
output.append([
|
||||||
('user', user['user']),
|
('user', user['user']),
|
||||||
@@ -24,6 +24,20 @@ def __virtual__():
     return __virtualname__ if __opts__.get('transport', '') == 'zeromq' else False


+def _parse_args(arg):
+    '''
+    yamlify `arg` and ensure it's outermost datatype is a list
+    '''
+    yaml_args = salt.utils.args.yamlify_arg(arg)
+
+    if yaml_args is None:
+        return []
+    elif not isinstance(yaml_args, list):
+        return [yaml_args]
+    else:
+        return yaml_args
+
+
 def _publish(
         tgt,
         fun,
@@ -56,12 +70,7 @@ def _publish(
         log.info('Cannot publish publish calls. Returning {}')
         return {}

-    if not isinstance(arg, list):
-        arg = [salt.utils.args.yamlify_arg(arg)]
-    else:
-        arg = [salt.utils.args.yamlify_arg(x) for x in arg]
-    if len(arg) == 1 and arg[0] is None:
-        arg = []
+    arg = _parse_args(arg)

     log.info('Publishing {0!r} to {master_uri}'.format(fun, **__opts__))
     auth = salt.crypt.SAuth(__opts__)
@@ -246,12 +255,7 @@ def runner(fun, arg=None, timeout=5):

         salt publish.runner manage.down
     '''
-    if not isinstance(arg, list):
-        arg = [salt.utils.args.yamlify_arg(arg)]
-    else:
-        arg = [salt.utils.args.yamlify_arg(x) for x in arg]
-    if len(arg) == 1 and arg[0] is None:
-        arg = []
+    arg = _parse_args(arg)

     if 'master_uri' not in __opts__:
         return 'No access to master. If using salt-call with --local, please remove.'
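``_parse_args`` folds the old isinstance/None juggling into one helper that always hands back a list. Roughly, for the inputs a CLI user is likely to pass (the exact YAML coercions performed by ``yamlify_arg`` are an assumption here):

.. code-block:: python

    _parse_args(None)            # -> []
    _parse_args('test.ping')     # -> ['test.ping']
    _parse_args(['foo', 'bar'])  # -> ['foo', 'bar']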
@ -23,6 +23,20 @@ def __virtual__():
|
|||||||
return __virtualname__ if __opts__.get('transport', '') == 'raet' else False
|
return __virtualname__ if __opts__.get('transport', '') == 'raet' else False
|
||||||
|
|
||||||
|
|
||||||
|
def _parse_args(arg):
|
||||||
|
'''
|
||||||
|
yamlify `arg` and ensure it's outermost datatype is a list
|
||||||
|
'''
|
||||||
|
yaml_args = salt.utils.args.yamlify_arg(arg)
|
||||||
|
|
||||||
|
if yaml_args is None:
|
||||||
|
return []
|
||||||
|
elif not isinstance(yaml_args, list):
|
||||||
|
return [yaml_args]
|
||||||
|
else:
|
||||||
|
return yaml_args
|
||||||
|
|
||||||
|
|
||||||
def _publish(
|
def _publish(
|
||||||
tgt,
|
tgt,
|
||||||
fun,
|
fun,
|
||||||
@ -54,9 +68,7 @@ def _publish(
|
|||||||
log.info('Function name is \'publish.publish\'. Returning {}')
|
log.info('Function name is \'publish.publish\'. Returning {}')
|
||||||
return {}
|
return {}
|
||||||
|
|
||||||
arg = [salt.utils.args.yamlify_arg(arg)]
|
arg = _parse_args(arg)
|
||||||
if len(arg) == 1 and arg[0] is None:
|
|
||||||
arg = []
|
|
||||||
|
|
||||||
load = {'cmd': 'minion_pub',
|
load = {'cmd': 'minion_pub',
|
||||||
'fun': fun,
|
'fun': fun,
|
||||||
@ -191,9 +203,7 @@ def runner(fun, arg=None, timeout=5):
|
|||||||
|
|
||||||
salt publish.runner manage.down
|
salt publish.runner manage.down
|
||||||
'''
|
'''
|
||||||
arg = [salt.utils.args.yamlify_arg(arg)]
|
arg = _parse_args(arg)
|
||||||
if len(arg) == 1 and arg[0] is None:
|
|
||||||
arg = []
|
|
||||||
|
|
||||||
load = {'cmd': 'minion_runner',
|
load = {'cmd': 'minion_runner',
|
||||||
'fun': fun,
|
'fun': fun,
|
||||||
|
@ -265,7 +265,7 @@ def _format_host(host, data):
|
|||||||
subsequent_indent=u' ' * 14
|
subsequent_indent=u' ' * 14
|
||||||
)
|
)
|
||||||
hstrs.append(
|
hstrs.append(
|
||||||
u' {colors[YELLOW]} Warnings: {0}{colors[ENDC]}'.format(
|
u' {colors[LIGHT_RED]} Warnings: {0}{colors[ENDC]}'.format(
|
||||||
wrapper.fill('\n'.join(ret['warnings'])).lstrip(),
|
wrapper.fill('\n'.join(ret['warnings'])).lstrip(),
|
||||||
colors=colors
|
colors=colors
|
||||||
)
|
)
|
||||||
@ -340,7 +340,7 @@ def _format_host(host, data):
|
|||||||
if num_warnings:
|
if num_warnings:
|
||||||
hstrs.append(
|
hstrs.append(
|
||||||
colorfmt.format(
|
colorfmt.format(
|
||||||
colors['YELLOW'],
|
colors['LIGHT_RED'],
|
||||||
_counts(rlabel['warnings'], num_warnings),
|
_counts(rlabel['warnings'], num_warnings),
|
||||||
colors
|
colors
|
||||||
)
|
)
|
||||||
|
@ -8,30 +8,24 @@ from __future__ import absolute_import
|
|||||||
import salt.fileserver
|
import salt.fileserver
|
||||||
|
|
||||||
|
|
||||||
def dir_list(saltenv='base', outputter='nested'):
|
|
||||||
'''
|
|
||||||
List all directories in the given environment
|
|
||||||
|
|
||||||
CLI Example:
|
|
||||||
|
|
||||||
.. code-block:: bash
|
|
||||||
|
|
||||||
salt-run fileserver.dir_list
|
|
||||||
salt-run fileserver.dir_list saltenv=prod
|
|
||||||
'''
|
|
||||||
fileserver = salt.fileserver.Fileserver(__opts__)
|
|
||||||
load = {'saltenv': saltenv}
|
|
||||||
output = fileserver.dir_list(load=load)
|
|
||||||
|
|
||||||
if outputter:
|
|
||||||
return {'outputter': outputter, 'data': output}
|
|
||||||
else:
|
|
||||||
return output
|
|
||||||
|
|
||||||
|
|
||||||
def envs(backend=None, sources=False, outputter='nested'):
|
def envs(backend=None, sources=False, outputter='nested'):
|
||||||
'''
|
'''
|
||||||
Return the environments for the named backend or all back-ends
|
Return the available fileserver environments. If no backend is provided,
|
||||||
|
then the environments for all configured backends will be returned.
|
||||||
|
|
||||||
|
backend
|
||||||
|
Narrow fileserver backends to a subset of the enabled ones.
|
||||||
|
|
||||||
|
.. versionchanged:: 2015.2.0::
|
||||||
|
If all passed backends start with a minus sign (``-``), then these
|
||||||
|
backends will be excluded from the enabled backends. However, if
|
||||||
|
there is a mix of backends with and without a minus sign (ex:
|
||||||
|
``backend=-roots,git``) then the ones starting with a minus
|
||||||
|
sign will be disregarded.
|
||||||
|
|
||||||
|
Additionally, fileserver backends can now be passed as a
|
||||||
|
comma-separated list. In earlier versions, they needed to be passed
|
||||||
|
as a python list (ex: ``backend="['roots', 'git']"``)
|
||||||
|
|
||||||
CLI Example:
|
CLI Example:
|
||||||
|
|
||||||
@ -39,7 +33,8 @@ def envs(backend=None, sources=False, outputter='nested'):
|
|||||||
|
|
||||||
salt-run fileserver.envs
|
salt-run fileserver.envs
|
||||||
salt-run fileserver.envs outputter=nested
|
salt-run fileserver.envs outputter=nested
|
||||||
salt-run fileserver.envs backend='["root", "git"]'
|
salt-run fileserver.envs backend=roots,git
|
||||||
|
salt-run fileserver.envs git
|
||||||
'''
|
'''
|
||||||
fileserver = salt.fileserver.Fileserver(__opts__)
|
fileserver = salt.fileserver.Fileserver(__opts__)
|
||||||
output = fileserver.envs(back=backend, sources=sources)
|
output = fileserver.envs(back=backend, sources=sources)
|
||||||
@ -50,40 +45,71 @@ def envs(backend=None, sources=False, outputter='nested'):
|
|||||||
return output
|
return output
|
||||||
|
|
||||||
|
|
||||||
def file_list(saltenv='base', outputter='nested'):
|
def file_list(saltenv='base', backend=None, outputter='nested'):
|
||||||
'''
|
'''
|
||||||
Return a list of files from the dominant environment
|
Return a list of files from the salt fileserver
|
||||||
|
|
||||||
CLI Example:
|
saltenv : base
|
||||||
|
The salt fileserver environment to be listed
|
||||||
|
|
||||||
|
backend
|
||||||
|
Narrow fileserver backends to a subset of the enabled ones. If all
|
||||||
|
passed backends start with a minus sign (``-``), then these backends
|
||||||
|
will be excluded from the enabled backends. However, if there is a mix
|
||||||
|
of backends with and without a minus sign (ex:
|
||||||
|
``backend=-roots,git``) then the ones starting with a minus sign will
|
||||||
|
be disregarded.
|
||||||
|
|
||||||
|
.. versionadded:: 2015.2.0
|
||||||
|
|
||||||
|
CLI Examples:
|
||||||
|
|
||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
|
||||||
salt-run fileserver.file_list
|
salt-run fileserver.file_list
|
||||||
salt-run fileserver.file_list saltenv=prod
|
salt-run fileserver.file_list saltenv=prod
|
||||||
|
salt-run fileserver.file_list saltenv=dev backend=git
|
||||||
|
salt-run fileserver.file_list base hg,roots
|
||||||
|
salt-run fileserver.file_list -git
|
||||||
'''
|
'''
|
||||||
fileserver = salt.fileserver.Fileserver(__opts__)
|
fileserver = salt.fileserver.Fileserver(__opts__)
|
||||||
load = {'saltenv': saltenv}
|
load = {'saltenv': saltenv, 'fsbackend': backend}
|
||||||
output = fileserver.file_list(load=load)
|
output = fileserver.file_list(load=load)
|
||||||
|
|
||||||
if outputter:
|
if outputter:
|
||||||
return {'outputter': outputter, 'data': output}
|
salt.output.display_output(output, outputter, opts=__opts__)
|
||||||
else:
|
return output
|
||||||
return output
|
|
||||||
|
|
||||||
|
|
||||||
def symlink_list(saltenv='base', outputter='nested'):
|
def symlink_list(saltenv='base', backend=None, outputter='nested'):
|
||||||
'''
|
'''
|
||||||
Return a list of symlinked files and dirs
|
Return a list of symlinked files and dirs
|
||||||
|
|
||||||
|
saltenv : base
|
||||||
|
The salt fileserver environment to be listed
|
||||||
|
|
||||||
|
backend
|
||||||
|
Narrow fileserver backends to a subset of the enabled ones. If all
|
||||||
|
passed backends start with a minus sign (``-``), then these backends
|
||||||
|
will be excluded from the enabled backends. However, if there is a mix
|
||||||
|
of backends with and without a minus sign (ex:
|
||||||
|
``backend=-roots,git``) then the ones starting with a minus sign will
|
||||||
|
be disregarded.
|
||||||
|
|
||||||
|
.. versionadded:: 2015.2.0
|
||||||
|
|
||||||
CLI Example:
|
CLI Example:
|
||||||
|
|
||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
|
||||||
salt-run fileserver.symlink_list
|
salt-run fileserver.symlink_list
|
||||||
salt-run fileserver.symlink_list saltenv=prod
|
salt-run fileserver.symlink_list saltenv=prod
|
||||||
|
salt-run fileserver.symlink_list saltenv=dev backend=git
|
||||||
|
salt-run fileserver.symlink_list base hg,roots
|
||||||
|
salt-run fileserver.symlink_list -git
|
||||||
'''
|
'''
|
||||||
fileserver = salt.fileserver.Fileserver(__opts__)
|
fileserver = salt.fileserver.Fileserver(__opts__)
|
||||||
load = {'saltenv': saltenv}
|
load = {'saltenv': saltenv, 'fsbackend': backend}
|
||||||
output = fileserver.symlink_list(load=load)
|
output = fileserver.symlink_list(load=load)
|
||||||
|
|
||||||
if outputter:
|
if outputter:
|
||||||
@ -92,19 +118,230 @@ def symlink_list(saltenv='base', outputter='nested'):
|
|||||||
return output
|
return output
|
||||||
|
|
||||||
|
|
||||||
|
def dir_list(saltenv='base', backend=None, outputter='nested'):
|
||||||
|
'''
|
||||||
|
Return a list of directories in the given environment
|
||||||
|
|
||||||
|
saltenv : base
|
||||||
|
The salt fileserver environment to be listed
|
||||||
|
|
||||||
|
backend
|
||||||
|
Narrow fileserver backends to a subset of the enabled ones. If all
|
||||||
|
passed backends start with a minus sign (``-``), then these backends
|
||||||
|
will be excluded from the enabled backends. However, if there is a mix
|
||||||
|
of backends with and without a minus sign (ex:
|
||||||
|
``backend=-roots,git``) then the ones starting with a minus sign will
|
||||||
|
be disregarded.
|
||||||
|
|
||||||
|
.. versionadded:: 2015.2.0
|
||||||
|
|
||||||
|
CLI Example:
|
||||||
|
|
||||||
|
.. code-block:: bash
|
||||||
|
|
||||||
|
salt-run fileserver.dir_list
|
||||||
|
salt-run fileserver.dir_list saltenv=prod
|
||||||
|
salt-run fileserver.dir_list saltenv=dev backend=git
|
||||||
|
salt-run fileserver.dir_list base hg,roots
|
||||||
|
salt-run fileserver.dir_list -git
|
||||||
|
'''
|
||||||
|
fileserver = salt.fileserver.Fileserver(__opts__)
|
||||||
|
load = {'saltenv': saltenv, 'fsbackend': backend}
|
||||||
|
output = fileserver.dir_list(load=load)
|
||||||
|
|
||||||
|
if outputter:
|
||||||
|
salt.output.display_output(output, outputter, opts=__opts__)
|
||||||
|
return output
|
||||||
|
|
||||||
|
|
||||||
|
def empty_dir_list(saltenv='base', backend=None, outputter='nested'):
|
||||||
|
'''
|
||||||
|
.. versionadded:: 2015.2.0
|
||||||
|
|
||||||
|
Return a list of empty directories in the given environment
|
||||||
|
|
||||||
|
saltenv : base
|
||||||
|
The salt fileserver environment to be listed
|
||||||
|
|
||||||
|
backend
|
||||||
|
Narrow fileserver backends to a subset of the enabled ones. If all
|
||||||
|
passed backends start with a minus sign (``-``), then these backends
|
||||||
|
will be excluded from the enabled backends. However, if there is a mix
|
||||||
|
of backends with and without a minus sign (ex:
|
||||||
|
``backend=-roots,git``) then the ones starting with a minus sign will
|
||||||
|
be disregarded.
|
||||||
|
|
||||||
|
.. note::
|
||||||
|
|
||||||
|
Some backends (such as :mod:`git <salt.fileserver.gitfs>` and
|
||||||
|
:mod:`hg <salt.fileserver.hgfs>`) do not support empty directories.
|
||||||
|
So, passing ``backend=git`` or ``backend=hg`` will result in an
|
||||||
|
empty list being returned.
|
||||||
|
|
||||||
|
CLI Example:
|
||||||
|
|
||||||
|
.. code-block:: bash
|
||||||
|
|
||||||
|
salt-run fileserver.empty_dir_list
|
||||||
|
salt-run fileserver.empty_dir_list saltenv=prod
|
||||||
|
salt-run fileserver.empty_dir_list backend=roots
|
||||||
|
'''
|
||||||
|
fileserver = salt.fileserver.Fileserver(__opts__)
|
||||||
|
load = {'saltenv': saltenv, 'fsbackend': backend}
|
||||||
|
output = fileserver.file_list_emptydirs(load=load)
|
||||||
|
|
||||||
|
if outputter:
|
||||||
|
salt.output.display_output(output, outputter, opts=__opts__)
|
||||||
|
return output
|
||||||
|
|
||||||
|
|
||||||
def update(backend=None):
|
def update(backend=None):
|
||||||
'''
|
'''
|
||||||
Update all of the file-servers that support the update function or the
|
Update the fileserver cache. If no backend is provided, then the cache for
|
||||||
named fileserver only.
|
all configured backends will be updated.
|
||||||
|
|
||||||
|
backend
|
||||||
|
Narrow fileserver backends to a subset of the enabled ones.
|
||||||
|
|
||||||
|
.. versionchanged:: 2015.2.0
|
||||||
|
If all passed backends start with a minus sign (``-``), then these
|
||||||
|
backends will be excluded from the enabled backends. However, if
|
||||||
|
there is a mix of backends with and without a minus sign (ex:
|
||||||
|
``backend=-roots,git``) then the ones starting with a minus
|
||||||
|
sign will be disregarded.
|
||||||
|
|
||||||
|
Additionally, fileserver backends can now be passed as a
|
||||||
|
comma-separated list. In earlier versions, they needed to be passed
|
||||||
|
as a python list (ex: ``backend="['roots', 'git']"``)
|
||||||
|
|
||||||
CLI Example:
|
CLI Example:
|
||||||
|
|
||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
|
||||||
salt-run fileserver.update
|
salt-run fileserver.update
|
||||||
salt-run fileserver.update backend='["root", "git"]'
|
salt-run fileserver.update backend=roots,git
|
||||||
'''
|
'''
|
||||||
fileserver = salt.fileserver.Fileserver(__opts__)
|
fileserver = salt.fileserver.Fileserver(__opts__)
|
||||||
fileserver.update(back=backend)
|
fileserver.update(back=backend)
|
||||||
|
|
||||||
return True
|
return True
|
||||||
|
|
||||||
|
|
||||||
|
def clear_cache(backend=None):
|
||||||
|
'''
|
||||||
|
.. versionadded:: 2015.2.0
|
||||||
|
|
||||||
|
Clear the fileserver cache from VCS fileserver backends (:mod:`git
|
||||||
|
<salt.fileserver.gitfs>`, :mod:`hg <salt.fileserver.hgfs>`, :mod:`svn
|
||||||
|
<salt.fileserver.svnfs>`). Executing this runner with no arguments will
|
||||||
|
clear the cache for all enabled VCS fileserver backends, but this
|
||||||
|
can be narrowed using the ``backend`` argument.
|
||||||
|
|
||||||
|
backend
|
||||||
|
Only clear the update lock for the specified backend(s). If all passed
|
||||||
|
backends start with a minus sign (``-``), then these backends will be
|
||||||
|
excluded from the enabled backends. However, if there is a mix of
|
||||||
|
backends with and without a minus sign (ex: ``backend=-roots,git``)
|
||||||
|
then the ones starting with a minus sign will be disregarded.
|
||||||
|
|
||||||
|
CLI Example:
|
||||||
|
|
||||||
|
.. code-block:: bash
|
||||||
|
|
||||||
|
salt-run fileserver.clear_cache
|
||||||
|
salt-run fileserver.clear_cache backend=git,hg
|
||||||
|
salt-run fileserver.clear_cache hg
|
||||||
|
salt-run fileserver.clear_cache -roots
|
||||||
|
'''
|
||||||
|
fileserver = salt.fileserver.Fileserver(__opts__)
|
||||||
|
cleared, errors = fileserver.clear_cache(back=backend)
|
||||||
|
ret = {}
|
||||||
|
if cleared:
|
||||||
|
ret['cleared'] = cleared
|
||||||
|
if errors:
|
||||||
|
ret['errors'] = errors
|
||||||
|
if not ret:
|
||||||
|
ret = 'No cache was cleared'
|
||||||
|
salt.output.display_output(ret, 'nested', opts=__opts__)
|
||||||
|
|
||||||
|
|
||||||
|
def clear_lock(backend=None, remote=None):
|
||||||
|
'''
|
||||||
|
.. versionadded:: 2015.2.0
|
||||||
|
|
||||||
|
Clear the fileserver update lock from VCS fileserver backends (:mod:`git
|
||||||
|
<salt.fileserver.gitfs>`, :mod:`hg <salt.fileserver.hgfs>`, :mod:`svn
|
||||||
|
<salt.fileserver.svnfs>`). This should only need to be done if a fileserver
|
||||||
|
update was interrupted and a remote is not updating (generating a warning
|
||||||
|
in the Master's log file). Executing this runner with no arguments will
|
||||||
|
remove all update locks from all enabled VCS fileserver backends, but this
|
||||||
|
can be narrowed by using the following arguments:
|
||||||
|
|
||||||
|
backend
|
||||||
|
Only clear the update lock for the specified backend(s).
|
||||||
|
|
||||||
|
remote
|
||||||
|
If not None, then any remotes which contain the passed string will have
|
||||||
|
their lock cleared. For example, a ``remote`` value of **github** will
|
||||||
|
remove the lock from all github.com remotes.
|
||||||
|
|
||||||
|
CLI Example:
|
||||||
|
|
||||||
|
.. code-block:: bash
|
||||||
|
|
||||||
|
salt-run fileserver.clear_lock
|
||||||
|
salt-run fileserver.clear_lock backend=git,hg
|
||||||
|
salt-run fileserver.clear_lock backend=git remote=github
|
||||||
|
salt-run fileserver.clear_lock remote=bitbucket
|
||||||
|
'''
|
||||||
|
fileserver = salt.fileserver.Fileserver(__opts__)
|
||||||
|
cleared, errors = fileserver.clear_lock(back=backend, remote=remote)
|
||||||
|
ret = {}
|
||||||
|
if cleared:
|
||||||
|
ret['cleared'] = cleared
|
||||||
|
if errors:
|
||||||
|
ret['errors'] = errors
|
||||||
|
if not ret:
|
||||||
|
ret = 'No locks were removed'
|
||||||
|
salt.output.display_output(ret, 'nested', opts=__opts__)
|
||||||
|
|
||||||
|
|
||||||
|
def lock(backend=None, remote=None):
|
||||||
|
'''
|
||||||
|
.. versionadded:: 2015.2.0
|
||||||
|
|
||||||
|
Set a fileserver update lock for VCS fileserver backends (:mod:`git
|
||||||
|
<salt.fileserver.gitfs>`, :mod:`hg <salt.fileserver.hgfs>`, :mod:`svn
|
||||||
|
<salt.fileserver.svnfs>`).
|
||||||
|
|
||||||
|
.. note::
|
||||||
|
|
||||||
|
This will only operate on enabled backends (those configured in
|
||||||
|
:master_conf:`fileserver_backend`).
|
||||||
|
|
||||||
|
backend
|
||||||
|
Only set the update lock for the specified backend(s).
|
||||||
|
|
||||||
|
remote
|
||||||
|
If not None, then any remotes which contain the passed string will have
|
||||||
|
their lock cleared. For example, a ``remote`` value of ``*github.com*``
|
||||||
|
will remove the lock from all github.com remotes.
|
||||||
|
|
||||||
|
CLI Example:
|
||||||
|
|
||||||
|
.. code-block:: bash
|
||||||
|
|
||||||
|
salt-run fileserver.lock
|
||||||
|
salt-run fileserver.lock backend=git,hg
|
||||||
|
salt-run fileserver.lock backend=git remote='*github.com*'
|
||||||
|
salt-run fileserver.lock remote=bitbucket
|
||||||
|
'''
|
||||||
|
fileserver = salt.fileserver.Fileserver(__opts__)
|
||||||
|
locked, errors = fileserver.lock(back=backend, remote=remote)
|
||||||
|
ret = {}
|
||||||
|
if locked:
|
||||||
|
ret['locked'] = locked
|
||||||
|
if errors:
|
||||||
|
ret['errors'] = errors
|
||||||
|
if not ret:
|
||||||
|
ret = 'No locks were set'
|
||||||
|
salt.output.display_output(ret, 'nested', opts=__opts__)
|
||||||
|
@ -289,7 +289,8 @@ def init(names, host=None, saltcloud_mode=False, quiet=False, **kwargs):
|
|||||||
if not container.get('result', False):
|
if not container.get('result', False):
|
||||||
error = container
|
error = container
|
||||||
else:
|
else:
|
||||||
error = 'Invalid return for {0}'.format(container_name)
|
error = 'Invalid return for {0}: {1} {2}'.format(
|
||||||
|
container_name, container, sub_ret)
|
||||||
else:
|
else:
|
||||||
error = sub_ret
|
error = sub_ret
|
||||||
if not error:
|
if not error:
|
||||||
|
@ -63,7 +63,7 @@ def pv_present(name, **kwargs):
|
|||||||
|
|
||||||
if __salt__['lvm.pvdisplay'](name):
|
if __salt__['lvm.pvdisplay'](name):
|
||||||
ret['comment'] = 'Created Physical Volume {0}'.format(name)
|
ret['comment'] = 'Created Physical Volume {0}'.format(name)
|
||||||
ret['changes'] = changes
|
ret['changes']['created'] = changes
|
||||||
else:
|
else:
|
||||||
ret['comment'] = 'Failed to create Physical Volume {0}'.format(name)
|
ret['comment'] = 'Failed to create Physical Volume {0}'.format(name)
|
||||||
ret['result'] = False
|
ret['result'] = False
|
||||||
@ -96,7 +96,7 @@ def pv_absent(name):
|
|||||||
ret['result'] = False
|
ret['result'] = False
|
||||||
else:
|
else:
|
||||||
ret['comment'] = 'Removed Physical Volume {0}'.format(name)
|
ret['comment'] = 'Removed Physical Volume {0}'.format(name)
|
||||||
ret['changes'] = changes
|
ret['changes']['removed'] = changes
|
||||||
return ret
|
return ret
|
||||||
|
|
||||||
|
|
||||||
@ -159,7 +159,7 @@ def vg_present(name, devices=None, **kwargs):
|
|||||||
|
|
||||||
if __salt__['lvm.vgdisplay'](name):
|
if __salt__['lvm.vgdisplay'](name):
|
||||||
ret['comment'] = 'Created Volume Group {0}'.format(name)
|
ret['comment'] = 'Created Volume Group {0}'.format(name)
|
||||||
ret['changes'] = changes
|
ret['changes']['created'] = changes
|
||||||
else:
|
else:
|
||||||
ret['comment'] = 'Failed to create Volume Group {0}'.format(name)
|
ret['comment'] = 'Failed to create Volume Group {0}'.format(name)
|
||||||
ret['result'] = False
|
ret['result'] = False
|
||||||
@ -189,7 +189,7 @@ def vg_absent(name):
|
|||||||
|
|
||||||
if not __salt__['lvm.vgdisplay'](name):
|
if not __salt__['lvm.vgdisplay'](name):
|
||||||
ret['comment'] = 'Removed Volume Group {0}'.format(name)
|
ret['comment'] = 'Removed Volume Group {0}'.format(name)
|
||||||
ret['changes'] = changes
|
ret['changes']['removed'] = changes
|
||||||
else:
|
else:
|
||||||
ret['comment'] = 'Failed to remove Volume Group {0}'.format(name)
|
ret['comment'] = 'Failed to remove Volume Group {0}'.format(name)
|
||||||
ret['result'] = False
|
ret['result'] = False
|
||||||
@ -258,7 +258,7 @@ def lv_present(name,
|
|||||||
|
|
||||||
if __salt__['lvm.lvdisplay'](lvpath):
|
if __salt__['lvm.lvdisplay'](lvpath):
|
||||||
ret['comment'] = 'Created Logical Volume {0}'.format(name)
|
ret['comment'] = 'Created Logical Volume {0}'.format(name)
|
||||||
ret['changes'] = changes
|
ret['changes']['created'] = changes
|
||||||
else:
|
else:
|
||||||
ret['comment'] = 'Failed to create Logical Volume {0}'.format(name)
|
ret['comment'] = 'Failed to create Logical Volume {0}'.format(name)
|
||||||
ret['result'] = False
|
ret['result'] = False
|
||||||
@ -292,7 +292,7 @@ def lv_absent(name, vgname=None):
|
|||||||
|
|
||||||
if not __salt__['lvm.lvdisplay'](lvpath):
|
if not __salt__['lvm.lvdisplay'](lvpath):
|
||||||
ret['comment'] = 'Removed Logical Volume {0}'.format(name)
|
ret['comment'] = 'Removed Logical Volume {0}'.format(name)
|
||||||
ret['changes'] = changes
|
ret['changes']['removed'] = changes
|
||||||
else:
|
else:
|
||||||
ret['comment'] = 'Failed to remove Logical Volume {0}'.format(name)
|
ret['comment'] = 'Failed to remove Logical Volume {0}'.format(name)
|
||||||
ret['result'] = False
|
ret['result'] = False
|
||||||
|
@@ -201,6 +201,8 @@ def mounted(name,
         'reconnect',
         'retry',
         'soft',
+        'auto',
+        'users',
     ]
     # options which are provided as key=value (e.g. password=Zohp5ohb)
     mount_invisible_keys = [
@@ -227,6 +229,9 @@ def mounted(name,
                 if size_match.group('size_unit') == 'g':
                     converted_size = int(size_match.group('size_value')) * 1024 * 1024
                     opt = "size={0}k".format(converted_size)
+                # make cifs option user synonym for option username which is reported by /proc/mounts
+                if fstype in ['cifs'] and opt.split('=')[0] == 'user':
+                    opt = "username={0}".format(opt.split('=')[1])
 
                 if opt not in active[real_name]['opts'] \
                         and ('superopts' in active[real_name]
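The second mount hunk treats a configured cifs 'user' option as a synonym for the 'username' option that /proc/mounts reports, so the comparison against the active mount options does not flag a spurious change. A standalone sketch of that normalization (hypothetical helper, not part of the diff):

    def normalize_cifs_opt(fstype, opt):
        # /proc/mounts reports the cifs credential option as 'username=...',
        # so rewrite a configured 'user=...' before comparing.
        if fstype == 'cifs' and opt.split('=')[0] == 'user':
            return 'username={0}'.format(opt.split('=')[1])
        return opt

    # normalize_cifs_opt('cifs', 'user=bob') -> 'username=bob'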
@@ -1390,13 +1390,10 @@ def is_fcntl_available(check_sunos=False):
     Simple function to check if the `fcntl` module is available or not.
 
     If `check_sunos` is passed as `True` an additional check to see if host is
-    SunOS is also made. For additional information check commit:
-    http://goo.gl/159FF8
+    SunOS is also made. For additional information see: http://goo.gl/159FF8
     '''
-    if HAS_FCNTL is False:
+    if check_sunos and is_sunos():
         return False
-    if check_sunos is True:
-        return HAS_FCNTL and is_sunos()
     return HAS_FCNTL
 
 
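The rewrite collapses the two separate checks into a single early return: when check_sunos is set and the host is SunOS, fcntl is reported as unavailable; otherwise the result simply reflects whether the fcntl import succeeded. A sketch of the new logic in isolation (the module-level pieces are stubbed here for illustration, they are not part of the diff):

    HAS_FCNTL = True            # stub: whether 'import fcntl' succeeded at module load
    def is_sunos():             # stub: platform check
        return False

    def is_fcntl_available(check_sunos=False):
        # SunOS never gets fcntl-based locking, regardless of HAS_FCNTL
        if check_sunos and is_sunos():
            return False
        return HAS_FCNTL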
@@ -1039,7 +1039,7 @@ def deploy_script(host,
         if key_filename:
             log.debug('Using {0} as the key_filename'.format(key_filename))
             ssh_kwargs['key_filename'] = key_filename
-        elif password and 'has_ssh_agent' in kwargs and kwargs['has_ssh_agent'] is False:
+        elif password and kwargs.get('has_ssh_agent', False) is False:
             log.debug('Using {0} as the password'.format(password))
             ssh_kwargs['password'] = password
 
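Note that kwargs.get('has_ssh_agent', False) is not a pure simplification: a missing key now behaves like an explicit False, so the password branch is also taken when the caller never set has_ssh_agent at all. A quick illustration (values are made up, not from the diff):

    password = 'hunter2'
    for kwargs in ({}, {'has_ssh_agent': False}, {'has_ssh_agent': True}):
        old = bool(password and 'has_ssh_agent' in kwargs and kwargs['has_ssh_agent'] is False)
        new = bool(password and kwargs.get('has_ssh_agent', False) is False)
        print(kwargs, old, new)
    # {}                        -> old: False  new: True
    # {'has_ssh_agent': False}  -> old: True   new: True
    # {'has_ssh_agent': True}   -> old: False  new: False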
@@ -371,8 +371,12 @@ class SerializerExtension(Extension, object):
         return Markup(json.dumps(value, sort_keys=sort_keys, indent=indent).strip())
 
     def format_yaml(self, value, flow_style=True):
-        return Markup(yaml.dump(value, default_flow_style=flow_style,
-                                Dumper=OrderedDictDumper).strip())
+        yaml_txt = yaml.dump(value, default_flow_style=flow_style,
+                             Dumper=OrderedDictDumper).strip()
+        if yaml_txt.endswith('\n...\n'):
+            log.info('Yaml filter ended with "\n...\n". This trailing string '
+                     'will be removed in Boron.')
+        return Markup(yaml_txt)
 
     def format_python(self, value):
         return Markup(pprint.pformat(value).strip())
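For context on the warning added here: PyYAML emits an explicit end-of-document marker when dumping a bare scalar, which is the trailing '...' the log message refers to. A minimal illustration, independent of the diff:

    import yaml

    # a plain scalar gets an explicit '...' document-end marker appended
    print(repr(yaml.dump('hello', default_flow_style=True)))   # 'hello\n...\n'
    print(repr(yaml.dump({'a': 1}, default_flow_style=True)))  # '{a: 1}\n'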
@@ -5,6 +5,7 @@ Manage the master configuration file
 from __future__ import absolute_import
 
 # Import python libs
+import logging
 import os
 
 # Import third party libs
@@ -13,6 +14,8 @@ import yaml
 # Import salt libs
 import salt.config
 
+log = logging.getLogger(__name__)
+
 
 def values():
     '''
@@ -39,3 +42,48 @@ def apply(key, value):
     data[key] = value
     with salt.utils.fopen(path, 'w+') as fp_:
         fp_.write(yaml.dump(data, default_flow_style=False))
+
+
+def update_config(file_name, yaml_contents):
+    '''
+    Update master config with
+    ``yaml_contents``.
+
+
+    Writes ``yaml_contents`` to a file named
+    ``file_name.conf`` under the folder
+    specified by ``default_include``.
+    This folder is named ``master.d`` by
+    default. Please look at
+    http://docs.saltstack.com/en/latest/ref/configuration/master.html#include-configuration
+    for more information.
+
+
+    Example low data
+    data = {
+        'username': 'salt',
+        'password': 'salt',
+        'fun': 'config.update_config',
+        'file_name': 'gui',
+        'yaml_contents': {'id': 1},
+        'client': 'wheel',
+        'eauth': 'pam',
+    }
+    '''
+    file_name = '{0}{1}'.format(file_name, '.conf')
+    dir_path = os.path.join(__opts__['config_dir'],
+                            os.path.dirname(__opts__['default_include']))
+    try:
+        yaml_out = yaml.safe_dump(yaml_contents, default_flow_style=False)
+
+        if not os.path.exists(dir_path):
+            log.debug('Creating directory {0}'.format(dir_path))
+            os.makedirs(dir_path, 755)
+
+        file_path = os.path.join(dir_path, file_name)
+        with salt.utils.fopen(file_path, 'w') as fp_:
+            fp_.write(yaml_out)
+
+        return 'Wrote {0}'.format(file_name)
+    except (IOError, OSError, yaml.YAMLError, ValueError) as err:
+        return str(err)
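A sketch of what the 'Example low data' in the docstring would produce, assuming the default ``default_include`` of ``master.d/*.conf`` (illustrative; the exact path depends on the master's config_dir):

    import yaml

    # update_config('gui', {'id': 1}) writes <config_dir>/master.d/gui.conf
    # containing the safe-dumped yaml_contents, i.e. a single line: id: 1
    print(yaml.safe_dump({'id': 1}, default_flow_style=False))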
setup.py (1 line changed)
@@ -849,6 +849,7 @@ class SaltDistribution(distutils.dist.Distribution):
 
         if IS_WINDOWS_PLATFORM:
             freezer_includes.extend([
+                'imp',
                 'win32api',
                 'win32file',
                 'win32con',
@@ -13,6 +13,7 @@ import string
 from salttesting.helpers import ensure_in_syspath, expensiveTest
+
 ensure_in_syspath('../../../')
 
 # Import Salt Libs
 import integration
 from salt.config import cloud_providers_config
@@ -32,6 +33,7 @@ def __random_name(size=6):
 
 # Create the cloud instance name to be used throughout the tests
 INSTANCE_NAME = __random_name()
+PROVIDER_NAME = 'digital_ocean'
 
 
 class DigitalOceanTest(integration.ShellCase):
@@ -47,36 +49,62 @@ class DigitalOceanTest(integration.ShellCase):
         super(DigitalOceanTest, self).setUp()
 
         # check if appropriate cloud provider and profile files are present
-        profile_str = 'digitalocean-config:'
-        provider = 'digital_ocean'
+        profile_str = 'digitalocean-config'
         providers = self.run_cloud('--list-providers')
-        if profile_str not in providers:
+        if profile_str + ':' not in providers:
             self.skipTest(
                 'Configuration file for {0} was not found. Check {0}.conf files '
                 'in tests/integration/files/conf/cloud.*.d/ to run these tests.'
-                .format(provider)
+                .format(PROVIDER_NAME)
             )
 
-        # check if client_key and api_key are present
+        # check if client_key, api_key, ssh_key_file, and ssh_key_name are present
         path = os.path.join(integration.FILES,
                             'conf',
                             'cloud.providers.d',
-                            provider + '.conf')
+                            PROVIDER_NAME + '.conf')
         config = cloud_providers_config(path)
 
-        api = config['digitalocean-config']['digital_ocean']['api_key']
-        client = config['digitalocean-config']['digital_ocean']['client_key']
-        ssh_file = config['digitalocean-config']['digital_ocean']['ssh_key_file']
-        ssh_name = config['digitalocean-config']['digital_ocean']['ssh_key_name']
+        api = config[profile_str][PROVIDER_NAME]['api_key']
+        client = config[profile_str][PROVIDER_NAME]['client_key']
+        ssh_file = config[profile_str][PROVIDER_NAME]['ssh_key_file']
+        ssh_name = config[profile_str][PROVIDER_NAME]['ssh_key_name']
 
         if api == '' or client == '' or ssh_file == '' or ssh_name == '':
             self.skipTest(
                 'A client key, an api key, an ssh key file, and an ssh key name '
                 'must be provided to run these tests. Check '
                 'tests/integration/files/conf/cloud.providers.d/{0}.conf'
-                .format(provider)
+                .format(PROVIDER_NAME)
             )
 
+    def test_list_images(self):
+        '''
+        Tests the return of running the --list-images command for digital ocean
+        '''
+        image_name = '14.10 x64'
+        ret_str = ' {0}'.format(image_name)
+        list_images = self.run_cloud('--list-images {0}'.format(PROVIDER_NAME))
+        self.assertIn(ret_str, list_images)
+
+    def test_list_locations(self):
+        '''
+        Tests the return of running the --list-locations command for digital ocean
+        '''
+        location_name = 'San Francisco 1'
+        ret_str = ' {0}'.format(location_name)
+        list_locations = self.run_cloud('--list-locations {0}'.format(PROVIDER_NAME))
+        self.assertIn(ret_str, list_locations)
+
+    def test_list_sizes(self):
+        '''
+        Tests the return of running the --list-sizes command for digital ocean
+        '''
+        size_name = '16GB'
+        ret_str = ' {0}'.format(size_name)
+        list_sizes = self.run_cloud('--list-sizes {0}'.format(PROVIDER_NAME))
+        self.assertIn(ret_str, list_sizes)
+
     def test_instance(self):
         '''
         Test creating an instance on DigitalOcean
@@ -101,10 +129,9 @@ class DigitalOceanTest(integration.ShellCase):
         except AssertionError:
             raise
 
-    def tearDown(self):
-        '''
-        Clean up after tests
-        '''
+        # Final clean-up of created instance, in case something went wrong.
+        # This was originally in a tearDown function, but that didn't make sense
+        # To run this for each test when not all tests create instances.
         query = self.run_cloud('--query')
         ret_str = ' {0}:'.format(INSTANCE_NAME)
 
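For reference, the config[profile_str][PROVIDER_NAME][...] lookups above assume that the parsed provider configuration nests its options under the provider alias and then the driver name, roughly this shape (placeholder values, not from the diff):

    config = {
        'digitalocean-config': {
            'digital_ocean': {
                'api_key': '',
                'client_key': '',
                'ssh_key_file': '',
                'ssh_key_name': '',
            },
        },
    }
    api = config['digitalocean-config']['digital_ocean']['api_key']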
tests/integration/cloud/providers/joyent.py (new file, 115 lines)
@@ -0,0 +1,115 @@
+# -*- coding: utf-8 -*-
+'''
+    :codeauthor: :email:`Nicole Thomas <nicole@saltstack.com>`
+'''
+
+# Import Python Libs
+import os
+import random
+import string
+
+# Import Salt Testing Libs
+from salttesting.helpers import ensure_in_syspath, expensiveTest
+
+ensure_in_syspath('../../../')
+
+# Import Salt Libs
+import integration
+from salt.config import cloud_providers_config
+
+
+def __random_name(size=6):
+    '''
+    Generates a random cloud instance name
+    '''
+    return 'CLOUD-TEST-' + ''.join(
+        random.choice(string.ascii_uppercase + string.digits)
+        for x in range(size)
+    )
+
+# Create the cloud instance name to be used throughout the tests
+INSTANCE_NAME = __random_name()
+
+
+class JoyentTest(integration.ShellCase):
+    '''
+    Integration tests for the Joyent cloud provider in Salt-Cloud
+    '''
+
+    @expensiveTest
+    def setUp(self):
+        '''
+        Sets up the test requirements
+        '''
+        super(JoyentTest, self).setUp()
+
+        # check if appropriate cloud provider and profile files are present
+        profile_str = 'joyent-config:'
+        provider = 'joyent'
+        providers = self.run_cloud('--list-providers')
+        if profile_str not in providers:
+            self.skipTest(
+                'Configuration file for {0} was not found. Check {0}.conf files '
+                'in tests/integration/files/conf/cloud.*.d/ to run these tests.'
+                .format(provider)
+            )
+
+        # check if user, password, private_key, and keyname are present
+        path = os.path.join(integration.FILES,
+                            'conf',
+                            'cloud.providers.d',
+                            provider + '.conf')
+        config = cloud_providers_config(path)
+
+        user = config['joyent-config'][provider]['user']
+        password = config['joyent-config'][provider]['password']
+        private_key = config['joyent-config'][provider]['private_key']
+        keyname = config['joyent-config'][provider]['keyname']
+
+        if user == '' or password == '' or private_key == '' or keyname == '':
+            self.skipTest(
+                'A user name, password, private_key file path, and a key name '
+                'must be provided to run these tests. Check '
+                'tests/integration/files/conf/cloud.providers.d/{0}.conf'
+                .format(provider)
+            )
+
+    def test_instance(self):
+        '''
+        Test creating and deleting instance on Joyent
+        '''
+
+        # create the instance
+        instance = self.run_cloud('-p joyent-test {0}'.format(INSTANCE_NAME))
+        ret_str = ' {0}'.format(INSTANCE_NAME)
+
+        # check if instance with salt installed returned
+        try:
+            self.assertIn(ret_str, instance)
+        except AssertionError:
+            self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME))
+            raise
+
+        # delete the instance
+        delete = self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME))
+        ret_str = ' True'
+        try:
+            self.assertIn(ret_str, delete)
+        except AssertionError:
+            raise
+
+    def tearDown(self):
+        '''
+        Clean up after tests
+        '''
+        query = self.run_cloud('--query')
+        ret_str = ' {0}:'.format(INSTANCE_NAME)
+
+        # if test instance is still present, delete it
+        if ret_str in query:
+            self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME))
+
+
+if __name__ == '__main__':
+    from integration import run_tests
+    run_tests(JoyentTest)
@@ -0,0 +1,5 @@
+joyent-test:
+  provider: joyent-config
+  size: Extra Small 512 MB
+  image: ubuntu-certified-14.04
+  location: us-east-1
@@ -0,0 +1,6 @@
+joyent-config:
+  provider: joyent
+  user: ''
+  password: ''
+  private_key: ''
+  keyname: ''
@@ -44,7 +44,21 @@ class CMDModuleTest(integration.ModuleCase):
         self.assertEqual(
             self.run_function('cmd.run',
                               ['echo $SHELL',
-                               'shell={0}'.format(shell)], python_shell=True).rstrip(), shell)
+                               'shell={0}'.format(shell)],
+                              python_shell=True).rstrip(), shell)
+        self.assertEqual(self.run_function('cmd.run',
+                         ['ls / | grep etc'],
+                         python_shell=True), 'etc')
+        self.assertEqual(self.run_function('cmd.run',
+                         ['echo {{grains.id}} | awk "{print $1}"'],
+                         template='jinja',
+                         python_shell=True), 'minion')
+        self.assertEqual(self.run_function('cmd.run',
+                         ['grep f'],
+                         stdin='one\ntwo\nthree\nfour\nfive\n'), 'four\nfive')
+        self.assertEqual(self.run_function('cmd.run',
+                         ['echo "a=b" | sed -e s/=/:/g'],
+                         python_shell=True), 'a:b')
 
     @patch('pwd.getpwnam')
     @patch('subprocess.Popen')
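The new assertions all lean on shell features (pipes, sed, awk, $SHELL expansion), which is why each one passes python_shell=True. Roughly speaking, that flag makes cmd.run hand the whole string to a shell, much like shell=True in subprocess; a minimal standalone illustration (not Salt's implementation):

    import subprocess

    # with a real shell the pipe works; without one, '|' would just be another argument
    print(subprocess.check_output('ls / | grep etc', shell=True).decode().strip())   # etc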
@@ -49,6 +49,39 @@ class PublishModuleTest(integration.ModuleCase,
         self.assertEqual(ret['__pub_id'], 'minion')
         self.assertEqual(ret['__pub_fun'], 'test.kwarg')
 
+    def test_publish_yaml_args(self):
+        '''
+        test publish.publish yaml args formatting
+        '''
+        ret = self.run_function('publish.publish', ['minion', 'test.ping'])
+        self.assertEqual(ret, {'minion': True})
+
+        test_args_list = ['saltines, si', 'crackers, nein', 'cheese, indeed']
+        test_args = '["{args[0]}", "{args[1]}", "{args[2]}"]'.format(args=test_args_list)
+        ret = self.run_function(
+            'publish.publish',
+            ['minion', 'test.arg', test_args]
+        )
+        ret = ret['minion']
+
+        check_true = (
+            '__pub_arg',
+            '__pub_fun',
+            '__pub_id',
+            '__pub_jid',
+            '__pub_ret',
+            '__pub_tgt',
+            '__pub_tgt_type',
+        )
+        for name in check_true:
+            if name not in ret['kwargs']:
+                print(name)
+            self.assertTrue(name in ret['kwargs'])
+
+        self.assertEqual(ret['args'], test_args_list)
+        self.assertEqual(ret['kwargs']['__pub_id'], 'minion')
+        self.assertEqual(ret['kwargs']['__pub_fun'], 'test.arg')
+
     def test_full_data(self):
         '''
         publish.full_data
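The new test passes its arguments as a single YAML-formatted string and then checks that ret['args'] comes back as the parsed list. A quick illustration of that round trip, independent of the diff:

    import yaml

    test_args_list = ['saltines, si', 'crackers, nein', 'cheese, indeed']
    test_args = '["{args[0]}", "{args[1]}", "{args[2]}"]'.format(args=test_args_list)
    print(test_args)                  # ["saltines, si", "crackers, nein", "cheese, indeed"]
    print(yaml.safe_load(test_args))  # ['saltines, si', 'crackers, nein', 'cheese, indeed']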
@@ -68,6 +68,7 @@ class SysModuleTest(integration.ModuleCase):
             'runtests_decorators.missing_depends_will_fallback',
             'yumpkg.expand_repo_def',
             'yumpkg5.expand_repo_def',
+            'container_resource.run',
         )
 
         for fun in docs:
@@ -22,7 +22,7 @@ class ManageTest(integration.ShellCase):
         fileserver.dir_list
         '''
         ret = self.run_run_plus(fun='fileserver.dir_list')
-        self.assertIsInstance(ret['fun'], dict)
+        self.assertIsInstance(ret['fun'], list)
 
     def test_envs(self):
         '''
@@ -39,7 +39,7 @@ class ManageTest(integration.ShellCase):
         fileserver.file_list
         '''
         ret = self.run_run_plus(fun='fileserver.file_list')
-        self.assertIsInstance(ret['fun'], dict)
+        self.assertIsInstance(ret['fun'], list)
 
     def test_symlink_list(self):
         '''
@@ -24,7 +24,7 @@ class BatchTestCase(TestCase):
     '''
 
     def setUp(self):
-        opts = {'batch': '', 'conf_file': {}, 'tgt': '', 'timeout': ''}
+        opts = {'batch': '', 'conf_file': {}, 'tgt': '', 'transport': ''}
         mock_client = MagicMock()
         with patch('salt.client.get_local_client', MagicMock(return_value=mock_client)):
             with patch('salt.client.LocalClient.cmd_iter', MagicMock(return_value=[])):