Mirror of https://github.com/valitydev/salt.git (synced 2024-11-07 08:58:59 +00:00)

Merge branch 'develop' into source-ip-port-opts
Commit c843914045
.github/stale.yml (vendored): 4 changes
@@ -1,8 +1,8 @@
# Probot Stale configuration file

# Number of days of inactivity before an issue becomes stale
# 875 is approximately 2 years and 5 months
daysUntilStale: 875
# 860 is approximately 2 years and 4 months
daysUntilStale: 860

# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7
.gitignore (vendored): 1 change
@@ -88,6 +88,7 @@ tests/integration/cloud/providers/logs

# Private keys from the integration tests
tests/integration/cloud/providers/pki/minions
/helpers/

# Ignore tox virtualenvs
/.tox/
@@ -1292,7 +1292,7 @@ The password used for HTTP proxy access.

    proxy_password: obolus

Minion Module Management
Minion Execution Module Management
========================

.. conf_minion:: disable_modules

@@ -1300,11 +1300,12 @@ Minion Module Management
``disable_modules``
-------------------

Default: ``[]`` (all modules are enabled by default)
Default: ``[]`` (all execution modules are enabled by default)

The event may occur in which the administrator desires that a minion should not
be able to execute a certain module. The ``sys`` module is built into the minion
and cannot be disabled.
be able to execute a certain module.

However, the ``sys`` module is built into the minion and cannot be disabled.

This setting can also tune the minion. Because all modules are loaded into system
memory, disabling modules will lower the minion's memory footprint.
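As an aside (not part of the change above), a minion config snippet of the
following shape disables specific execution modules; the module names below are
placeholders only:

.. code-block:: yaml

    disable_modules:
      - test
      - solr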
@@ -1343,7 +1344,8 @@ Default: ``[]`` (Module whitelisting is disabled. Adding anything to the config
will cause only the listed modules to be enabled. Modules not in the list will
not be loaded.)

This option is the reverse of disable_modules.
This option is the reverse of disable_modules. If enabled, only execution modules in this
list will be loaded and executed on the minion.

Note that this is a very large hammer and it can be quite difficult to keep the minion working
the way you think it should since Salt uses many modules internally itself. At a bare minimum
@@ -1836,9 +1838,15 @@ enabled and can be disabled by changing this value to ``False``.
If ``extmod_whitelist`` is specified, modules which are not whitelisted will also be cleaned here.

.. conf_minion:: environment
.. conf_minion:: saltenv

``environment``
---------------
``saltenv``
-----------

.. versionchanged:: Oxygen
    Renamed from ``environment`` to ``saltenv``. If ``environment`` is used,
    ``saltenv`` will take its value. If both are used, ``environment`` will be
    ignored and ``saltenv`` will be used.

Normally the minion is not isolated to any single environment on the master
when running states, but the environment can be isolated on the minion side
@@ -1847,7 +1855,25 @@ environments is to isolate via the top file.

.. code-block:: yaml

    environment: dev
    saltenv: dev

.. conf_minion:: lock_saltenv

``lock_saltenv``
----------------

.. versionadded:: Oxygen

Default: ``False``

For purposes of running states, this option prevents using the ``saltenv``
argument to manually set the environment. This is useful to keep a minion which
has the :conf_minion:`saltenv` option set to ``dev`` from running states from
an environment other than ``dev``.

.. code-block:: yaml

    lock_saltenv: True

.. conf_minion:: snapper_states
@@ -29,7 +29,7 @@ the salt-master:

    my-saltify-config:
      minion:
        master: 111.222.333.444
      provider: saltify
      driver: saltify

However, if you wish to use the more advanced capabilities of salt-cloud, such as
rebooting, listing, and disconnecting machines, then the salt master must fill
@@ -47,6 +47,7 @@ Available in
- State Modules
- Returners
- Runners
- SDB Modules

``__salt__`` contains the execution module functions. This allows for all
functions to be called as they have been set up by the salt loader.
@ -137,6 +137,43 @@ can specify the "name" argument to avoid conflicting IDs:
|
||||
- kwarg:
|
||||
remove_existing: true
|
||||
|
||||
.. _orchestrate-runner-fail-functions:
|
||||
|
||||
Fail Functions
|
||||
**************
|
||||
|
||||
When running a remote execution function in orchestration, certain return
|
||||
values for those functions may indicate failure, while the function itself
|
||||
doesn't set a return code. For those circumstances, using a "fail function"
|
||||
allows for a more flexible means of assessing success or failure.
|
||||
|
||||
A fail function can be written as part of a :ref:`custom execution module
|
||||
<writing-execution-modules>`. The function should accept one argument, and
|
||||
return a boolean result. For example:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
def check_func_result(retval):
|
||||
if some_condition:
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
|
||||
The function can then be referenced in orchestration SLS like so:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
do_stuff:
|
||||
salt.function:
|
||||
- name: modname.funcname
|
||||
- tgt: '*'
|
||||
- fail_function: mymod.check_func_result
|
||||
|
||||
.. important::
|
||||
Fail functions run *on the master*, so they must be synced using ``salt-run
|
||||
saltutil.sync_modules``.
|
||||
|
||||
State
|
||||
^^^^^
|
||||
|
||||
@ -221,6 +258,7 @@ To execute with pillar data.
|
||||
salt-run state.orch orch.deploy pillar='{"servers": "newsystem1",
|
||||
"master": "mymaster"}'
|
||||
|
||||
.. _orchestrate-runner-return-codes-runner-wheel:
|
||||
|
||||
Return Codes in Runner/Wheel Jobs
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
@ -298,3 +336,270 @@ Given the above setup, the orchestration will be carried out as follows:
|
||||
.. note::
|
||||
|
||||
Remember, salt-run is always executed on the master.
|
||||
|
||||
.. _orchestrate-runner-parsing-results-programatically:
|
||||
|
||||
Parsing Results Programmatically
--------------------------------
|
||||
|
||||
Orchestration jobs return output in a specific data structure. That data
|
||||
structure is represented differently depending on the outputter used. With the
|
||||
default outputter for orchestration, you get a nice human-readable output.
|
||||
Assume the following orchestration SLS:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
good_state:
|
||||
salt.state:
|
||||
- tgt: myminion
|
||||
- sls:
|
||||
- succeed_with_changes
|
||||
|
||||
bad_state:
|
||||
salt.state:
|
||||
- tgt: myminion
|
||||
- sls:
|
||||
- fail_with_changes
|
||||
|
||||
mymod.myfunc:
|
||||
salt.function:
|
||||
- tgt: myminion
|
||||
|
||||
mymod.myfunc_false_result:
|
||||
salt.function:
|
||||
- tgt: myminion
|
||||
|
||||
|
||||
Running this using the default outputter would produce output which looks like
|
||||
this:
|
||||
|
||||
.. code-block:: text
|
||||
|
||||
fa5944a73aa8_master:
|
||||
----------
|
||||
ID: good_state
|
||||
Function: salt.state
|
||||
Result: True
|
||||
Comment: States ran successfully. Updating myminion.
|
||||
Started: 21:08:02.681604
|
||||
Duration: 265.565 ms
|
||||
Changes:
|
||||
myminion:
|
||||
----------
|
||||
ID: test succeed with changes
|
||||
Function: test.succeed_with_changes
|
||||
Result: True
|
||||
Comment: Success!
|
||||
Started: 21:08:02.835893
|
||||
Duration: 0.375 ms
|
||||
Changes:
|
||||
----------
|
||||
testing:
|
||||
----------
|
||||
new:
|
||||
Something pretended to change
|
||||
old:
|
||||
Unchanged
|
||||
|
||||
Summary for myminion
|
||||
------------
|
||||
Succeeded: 1 (changed=1)
|
||||
Failed: 0
|
||||
------------
|
||||
Total states run: 1
|
||||
Total run time: 0.375 ms
|
||||
----------
|
||||
ID: bad_state
|
||||
Function: salt.state
|
||||
Result: False
|
||||
Comment: Run failed on minions: myminion
|
||||
Started: 21:08:02.947702
|
||||
Duration: 177.01 ms
|
||||
Changes:
|
||||
myminion:
|
||||
----------
|
||||
ID: test fail with changes
|
||||
Function: test.fail_with_changes
|
||||
Result: False
|
||||
Comment: Failure!
|
||||
Started: 21:08:03.116634
|
||||
Duration: 0.502 ms
|
||||
Changes:
|
||||
----------
|
||||
testing:
|
||||
----------
|
||||
new:
|
||||
Something pretended to change
|
||||
old:
|
||||
Unchanged
|
||||
|
||||
Summary for myminion
|
||||
------------
|
||||
Succeeded: 0 (changed=1)
|
||||
Failed: 1
|
||||
------------
|
||||
Total states run: 1
|
||||
Total run time: 0.502 ms
|
||||
----------
|
||||
ID: mymod.myfunc
|
||||
Function: salt.function
|
||||
Result: True
|
||||
Comment: Function ran successfully. Function mymod.myfunc ran on myminion.
|
||||
Started: 21:08:03.125011
|
||||
Duration: 159.488 ms
|
||||
Changes:
|
||||
myminion:
|
||||
True
|
||||
----------
|
||||
ID: mymod.myfunc_false_result
|
||||
Function: salt.function
|
||||
Result: False
|
||||
Comment: Running function mymod.myfunc_false_result failed on minions: myminion. Function mymod.myfunc_false_result ran on myminion.
|
||||
Started: 21:08:03.285148
|
||||
Duration: 176.787 ms
|
||||
Changes:
|
||||
myminion:
|
||||
False
|
||||
|
||||
Summary for fa5944a73aa8_master
|
||||
------------
|
||||
Succeeded: 2 (changed=4)
|
||||
Failed: 2
|
||||
------------
|
||||
Total states run: 4
|
||||
Total run time: 778.850 ms
|
||||
|
||||
|
||||
However, using the ``json`` outputter, you can get the output in an easily
|
||||
loadable and parsable format:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt-run state.orchestrate test --out=json
|
||||
|
||||
.. code-block:: json
|
||||
|
||||
{
|
||||
"outputter": "highstate",
|
||||
"data": {
|
||||
"fa5944a73aa8_master": {
|
||||
"salt_|-good_state_|-good_state_|-state": {
|
||||
"comment": "States ran successfully. Updating myminion.",
|
||||
"name": "good_state",
|
||||
"start_time": "21:35:16.868345",
|
||||
"result": true,
|
||||
"duration": 267.299,
|
||||
"__run_num__": 0,
|
||||
"__jid__": "20171130213516897392",
|
||||
"__sls__": "test",
|
||||
"changes": {
|
||||
"ret": {
|
||||
"myminion": {
|
||||
"test_|-test succeed with changes_|-test succeed with changes_|-succeed_with_changes": {
|
||||
"comment": "Success!",
|
||||
"name": "test succeed with changes",
|
||||
"start_time": "21:35:17.022592",
|
||||
"result": true,
|
||||
"duration": 0.362,
|
||||
"__run_num__": 0,
|
||||
"__sls__": "succeed_with_changes",
|
||||
"changes": {
|
||||
"testing": {
|
||||
"new": "Something pretended to change",
|
||||
"old": "Unchanged"
|
||||
}
|
||||
},
|
||||
"__id__": "test succeed with changes"
|
||||
}
|
||||
}
|
||||
},
|
||||
"out": "highstate"
|
||||
},
|
||||
"__id__": "good_state"
|
||||
},
|
||||
"salt_|-bad_state_|-bad_state_|-state": {
|
||||
"comment": "Run failed on minions: test",
|
||||
"name": "bad_state",
|
||||
"start_time": "21:35:17.136511",
|
||||
"result": false,
|
||||
"duration": 197.635,
|
||||
"__run_num__": 1,
|
||||
"__jid__": "20171130213517202203",
|
||||
"__sls__": "test",
|
||||
"changes": {
|
||||
"ret": {
|
||||
"myminion": {
|
||||
"test_|-test fail with changes_|-test fail with changes_|-fail_with_changes": {
|
||||
"comment": "Failure!",
|
||||
"name": "test fail with changes",
|
||||
"start_time": "21:35:17.326268",
|
||||
"result": false,
|
||||
"duration": 0.509,
|
||||
"__run_num__": 0,
|
||||
"__sls__": "fail_with_changes",
|
||||
"changes": {
|
||||
"testing": {
|
||||
"new": "Something pretended to change",
|
||||
"old": "Unchanged"
|
||||
}
|
||||
},
|
||||
"__id__": "test fail with changes"
|
||||
}
|
||||
}
|
||||
},
|
||||
"out": "highstate"
|
||||
},
|
||||
"__id__": "bad_state"
|
||||
},
|
||||
"salt_|-mymod.myfunc_|-mymod.myfunc_|-function": {
|
||||
"comment": "Function ran successfully. Function mymod.myfunc ran on myminion.",
|
||||
"name": "mymod.myfunc",
|
||||
"start_time": "21:35:17.334373",
|
||||
"result": true,
|
||||
"duration": 151.716,
|
||||
"__run_num__": 2,
|
||||
"__jid__": "20171130213517361706",
|
||||
"__sls__": "test",
|
||||
"changes": {
|
||||
"ret": {
|
||||
"myminion": true
|
||||
},
|
||||
"out": "highstate"
|
||||
},
|
||||
"__id__": "mymod.myfunc"
|
||||
},
|
||||
"salt_|-mymod.myfunc_false_result-mymod.myfunc_false_result-function": {
|
||||
"comment": "Running function mymod.myfunc_false_result failed on minions: myminion. Function mymod.myfunc_false_result ran on myminion.",
|
||||
"name": "mymod.myfunc_false_result",
|
||||
"start_time": "21:35:17.486625",
|
||||
"result": false,
|
||||
"duration": 174.241,
|
||||
"__run_num__": 3,
|
||||
"__jid__": "20171130213517536270",
|
||||
"__sls__": "test",
|
||||
"changes": {
|
||||
"ret": {
|
||||
"myminion": false
|
||||
},
|
||||
"out": "highstate"
|
||||
},
|
||||
"__id__": "mymod.myfunc_false_result"
|
||||
}
|
||||
}
|
||||
},
|
||||
"retcode": 1
|
||||
}
|
||||
|
||||
|
||||
The Oxygen release includes a couple of fixes to make parsing this data easier
and more accurate. The first is the ability to set a :ref:`return code
<orchestrate-runner-return-codes-runner-wheel>` in a custom runner or wheel
function, as noted above. The second is a change to how failures are included
in the return data. Prior to the Oxygen release, minions that failed a
``salt.state`` orchestration job would show up in the ``comment`` field of the
return data, in a human-readable string that was not easily parsed. They are
now included in the ``changes`` dictionary alongside the minions that
succeeded. In addition, ``salt.function`` jobs which failed because the
:ref:`fail function <orchestrate-runner-fail-functions>` returned ``False``
used to handle their failures in the same way ``salt.state`` jobs did, and this
has likewise been corrected.
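As an illustration only (not part of the Salt documentation above), the JSON
structure shown can be walked with a few lines of Python; the file name and the
keys relied on (``data``, ``result``, ``__id__``) are taken from the example
output above:

.. code-block:: python

    # Sketch: summarize an orchestration run captured with --out=json
    import json

    with open('orch_output.json') as handle:  # assumed dump of the JSON above
        ret = json.load(handle)

    for master_id, states in ret['data'].items():
        for state_key, state_ret in states.items():
            status = 'OK' if state_ret['result'] else 'FAILED'
            print('{0}: {1} ({2})'.format(master_id, state_ret['__id__'], status))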
@ -65,6 +65,7 @@ noon PST so the Stormpath external authentication module has been removed.
|
||||
|
||||
https://stormpath.com/oktaplusstormpath
|
||||
|
||||
|
||||
New (Proxy) Minion Configuration Options
|
||||
----------------------------------------
|
||||
|
||||
@ -76,6 +77,37 @@ or port, the following options have been added:
|
||||
- :conf_minion:`source_ret_port`
|
||||
- :conf_minion:`source_publish_port`
|
||||
|
||||
:conf_minion:`environment` config option renamed to :conf_minion:`saltenv`
|
||||
--------------------------------------------------------------------------
|
||||
|
||||
The :conf_minion:`environment` config option predates referring to a salt
fileserver environment as a **saltenv**. To pin a minion to a single
environment for running states, one would use :conf_minion:`environment`, but
overriding that environment would be done with the ``saltenv`` argument. For
consistency, :conf_minion:`environment` is now simply referred to as
:conf_minion:`saltenv`. There are no plans to deprecate or remove
:conf_minion:`environment`; if it is used, it will log a warning and its value
will be used as :conf_minion:`saltenv`.
:conf_minion:`lock_saltenv` config option added
-----------------------------------------------

If set to ``True``, this option will prevent a minion from allowing the
``saltenv`` argument to override the value set in :conf_minion:`saltenv` when
running states.
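For illustration, a minion config that pins the environment and locks it could
look like this (the values are examples only):

.. code-block:: yaml

    saltenv: dev
    lock_saltenv: True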
Failed Minions for State/Function Orchestration Jobs Added to Changes Dictionary
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
For orchestration jobs which run states (or run remote execution functions and
|
||||
also use a :ref:`fail function <orchestrate-runner-fail-functions>` to indicate
|
||||
success or failure), minions which have ``False`` results were previously
|
||||
included as a formatted string in the comment field of the return for that
|
||||
orchestration job. This made the failed returns difficult to :ref:`parse
|
||||
programmatically <orchestrate-runner-parsing-results-programatically>`. The
|
||||
failed returns in these cases are now included in the changes dictionary,
|
||||
making for much easier parsing.
|
||||
|
||||
New Grains
|
||||
----------
|
||||
|
||||
|
@ -78,7 +78,7 @@ UNIX systems
|
||||
|
||||
**BSD**:
|
||||
|
||||
- OpenBSD (``pip`` installation)
|
||||
- OpenBSD
|
||||
- FreeBSD 9/10/11
|
||||
|
||||
**SunOS**:
|
||||
@ -272,66 +272,118 @@ Here's a summary of the command line options:
|
||||
|
||||
$ sh bootstrap-salt.sh -h
|
||||
|
||||
Usage : bootstrap-salt.sh [options] <install-type> <install-type-args>
|
||||
|
||||
Installation types:
|
||||
- stable (default)
|
||||
- stable [version] (ubuntu specific)
|
||||
- daily (ubuntu specific)
|
||||
- testing (redhat specific)
|
||||
- git
|
||||
- stable Install latest stable release. This is the default
|
||||
install type
|
||||
- stable [branch] Install latest version on a branch. Only supported
|
||||
for packages available at repo.saltstack.com
|
||||
- stable [version] Install a specific version. Only supported for
|
||||
packages available at repo.saltstack.com
|
||||
- daily Ubuntu specific: configure SaltStack Daily PPA
|
||||
- testing RHEL-family specific: configure EPEL testing repo
|
||||
- git Install from the head of the develop branch
|
||||
- git [ref] Install from any git ref (such as a branch, tag, or
|
||||
commit)
|
||||
|
||||
Examples:
|
||||
- bootstrap-salt.sh
|
||||
- bootstrap-salt.sh stable
|
||||
- bootstrap-salt.sh stable 2014.7
|
||||
- bootstrap-salt.sh stable 2017.7
|
||||
- bootstrap-salt.sh stable 2017.7.2
|
||||
- bootstrap-salt.sh daily
|
||||
- bootstrap-salt.sh testing
|
||||
- bootstrap-salt.sh git
|
||||
- bootstrap-salt.sh git develop
|
||||
- bootstrap-salt.sh git v0.17.0
|
||||
- bootstrap-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357
|
||||
- bootstrap-salt.sh git 2017.7
|
||||
- bootstrap-salt.sh git v2017.7.2
|
||||
- bootstrap-salt.sh git 06f249901a2e2f1ed310d58ea3921a129f214358
|
||||
|
||||
Options:
|
||||
-h Display this message
|
||||
-v Display script version
|
||||
-n No colours.
|
||||
-D Show debug output.
|
||||
-c Temporary configuration directory
|
||||
-g Salt repository URL. (default: git://github.com/saltstack/salt.git)
|
||||
-G Instead of cloning from git://github.com/saltstack/salt.git, clone from https://github.com/saltstack/salt.git (Usually necessary on systems which have the regular git protocol port blocked, where https usually is not)
|
||||
-k Temporary directory holding the minion keys which will pre-seed
|
||||
the master.
|
||||
-s Sleep time used when waiting for daemons to start, restart and when checking
|
||||
for the services running. Default: 3
|
||||
-M Also install salt-master
|
||||
-S Also install salt-syndic
|
||||
-N Do not install salt-minion
|
||||
-X Do not start daemons after installation
|
||||
-C Only run the configuration function. This option automatically
|
||||
bypasses any installation.
|
||||
-P Allow pip based installations. On some distributions the required salt
|
||||
packages or its dependencies are not available as a package for that
|
||||
distribution. Using this flag allows the script to use pip as a last
|
||||
resort method. NOTE: This only works for functions which actually
|
||||
implement pip based installations.
|
||||
-F Allow copied files to overwrite existing(config, init.d, etc)
|
||||
-U If set, fully upgrade the system prior to bootstrapping salt
|
||||
-K If set, keep the temporary files in the temporary directories specified
|
||||
with -c and -k.
|
||||
-I If set, allow insecure connections while downloading any files. For
|
||||
example, pass '--no-check-certificate' to 'wget' or '--insecure' to 'curl'
|
||||
-A Pass the salt-master DNS name or IP. This will be stored under
|
||||
${BS_SALT_ETC_DIR}/minion.d/99-master-address.conf
|
||||
-i Pass the salt-minion id. This will be stored under
|
||||
${BS_SALT_ETC_DIR}/minion_id
|
||||
-L Install the Apache Libcloud package if possible(required for salt-cloud)
|
||||
-p Extra-package to install while installing salt dependencies. One package
|
||||
per -p flag. You're responsible for providing the proper package name.
|
||||
-d Disable check_service functions. Setting this flag disables the
|
||||
'install_<distro>_check_services' checks. You can also do this by
|
||||
touching /tmp/disable_salt_checks on the target host. Defaults ${BS_FALSE}
|
||||
-H Use the specified http proxy for the installation
|
||||
-Z Enable external software source for newer ZeroMQ(Only available for RHEL/CentOS/Fedora/Ubuntu based distributions)
|
||||
-b Assume that dependencies are already installed and software sources are set up.
|
||||
If git is selected, git tree is still checked out as dependency step.
|
||||
-h Display this message
|
||||
-v Display script version
|
||||
-n No colours
|
||||
-D Show debug output
|
||||
-c Temporary configuration directory
|
||||
-g Salt Git repository URL. Default: https://github.com/saltstack/salt.git
|
||||
-w Install packages from downstream package repository rather than
|
||||
upstream, saltstack package repository. This is currently only
|
||||
implemented for SUSE.
|
||||
-k Temporary directory holding the minion keys which will pre-seed
|
||||
the master.
|
||||
-s Sleep time used when waiting for daemons to start, restart and when
|
||||
checking for the services running. Default: 3
|
||||
-L Also install salt-cloud and required python-libcloud package
|
||||
-M Also install salt-master
|
||||
-S Also install salt-syndic
|
||||
-N Do not install salt-minion
|
||||
-X Do not start daemons after installation
|
||||
-d Disables checking if Salt services are enabled to start on system boot.
|
||||
You can also do this by touching /tmp/disable_salt_checks on the target
|
||||
host. Default: ${BS_FALSE}
|
||||
-P Allow pip based installations. On some distributions the required salt
|
||||
packages or its dependencies are not available as a package for that
|
||||
distribution. Using this flag allows the script to use pip as a last
|
||||
resort method. NOTE: This only works for functions which actually
|
||||
implement pip based installations.
|
||||
-U If set, fully upgrade the system prior to bootstrapping Salt
|
||||
-I If set, allow insecure connections while downloading any files. For
|
||||
example, pass '--no-check-certificate' to 'wget' or '--insecure' to
|
||||
'curl'. On Debian and Ubuntu, using this option with -U allows to obtain
|
||||
GnuPG archive keys insecurely if distro has changed release signatures.
|
||||
-F Allow copied files to overwrite existing (config, init.d, etc)
|
||||
-K If set, keep the temporary files in the temporary directories specified
|
||||
with -c and -k
|
||||
-C Only run the configuration function. Implies -F (forced overwrite).
|
||||
To overwrite Master or Syndic configs, -M or -S, respectively, must
|
||||
also be specified. Salt installation will be omitted, but some of the
|
||||
dependencies could be installed to write configuration with -j or -J.
|
||||
-A Pass the salt-master DNS name or IP. This will be stored under
|
||||
${BS_SALT_ETC_DIR}/minion.d/99-master-address.conf
|
||||
-i Pass the salt-minion id. This will be stored under
|
||||
${BS_SALT_ETC_DIR}/minion_id
|
||||
-p Extra-package to install while installing Salt dependencies. One package
|
||||
per -p flag. You're responsible for providing the proper package name.
|
||||
-H Use the specified HTTP proxy for all download URLs (including https://).
|
||||
For example: http://myproxy.example.com:3128
|
||||
-Z Enable additional package repository for newer ZeroMQ
|
||||
(only available for RHEL/CentOS/Fedora/Ubuntu based distributions)
|
||||
-b Assume that dependencies are already installed and software sources are
|
||||
set up. If git is selected, git tree is still checked out as dependency
|
||||
step.
|
||||
-f Force shallow cloning for git installations.
|
||||
This may result in an "n/a" in the version number.
|
||||
-l Disable ssl checks. When passed, switches "https" calls to "http" where
|
||||
possible.
|
||||
-V Install Salt into virtualenv
|
||||
(only available for Ubuntu based distributions)
|
||||
-a Pip install all Python pkg dependencies for Salt. Requires -V to install
|
||||
all pip pkgs into the virtualenv.
|
||||
(Only available for Ubuntu based distributions)
|
||||
-r Disable all repository configuration performed by this script. This
|
||||
option assumes all necessary repository configuration is already present
|
||||
on the system.
|
||||
-R Specify a custom repository URL. Assumes the custom repository URL
|
||||
points to a repository that mirrors Salt packages located at
|
||||
repo.saltstack.com. The option passed with -R replaces the
|
||||
"repo.saltstack.com". If -R is passed, -r is also set. Currently only
|
||||
works on CentOS/RHEL and Debian based distributions.
|
||||
-J Replace the Master config file with data passed in as a JSON string. If
|
||||
a Master config file is found, a reasonable effort will be made to save
|
||||
the file with a ".bak" extension. If used in conjunction with -C or -F,
|
||||
no ".bak" file will be created as either of those options will force
|
||||
a complete overwrite of the file.
|
||||
-j Replace the Minion config file with data passed in as a JSON string. If
|
||||
a Minion config file is found, a reasonable effort will be made to save
|
||||
the file with a ".bak" extension. If used in conjunction with -C or -F,
|
||||
no ".bak" file will be created as either of those options will force
|
||||
a complete overwrite of the file.
|
||||
-q Quiet salt installation from git (setup.py install -q)
|
||||
-x Changes the python version used to install a git version of salt. Currently
|
||||
this is considered experimental and has only been tested on Centos 6. This
|
||||
only works for git installations.
|
||||
-y Installs a different python version on host. Currently this has only been
|
||||
tested with Centos 6 and is considered experimental. This will install the
|
||||
ius repo on the box if disable repo is false. This must be used in conjunction
|
||||
with -x <pythonversion>. For example:
|
||||
sh bootstrap.sh -P -y -x python2.7 git v2016.11.3
|
||||
The above will install python27 and install the git version of salt using the
|
||||
python2.7 executable. This only works for git and pip installations.
|
||||
|
@ -1050,7 +1050,7 @@ class Single(object):
|
||||
popts,
|
||||
opts_pkg[u'grains'],
|
||||
opts_pkg[u'id'],
|
||||
opts_pkg.get(u'environment', u'base')
|
||||
opts_pkg.get(u'saltenv', u'base')
|
||||
)
|
||||
pillar_data = pillar.compile_pillar()
|
||||
|
||||
|
@ -2473,7 +2473,12 @@ def create(vm_):
|
||||
# Either a datacenter or a folder can be optionally specified when cloning, required when creating.
|
||||
# If not specified when cloning, the existing VM/template\'s parent folder is used.
|
||||
if folder:
|
||||
folder_ref = salt.utils.vmware.get_mor_by_property(si, vim.Folder, folder, container_ref=container_ref)
|
||||
folder_parts = folder.split('/')
|
||||
search_reference = container_ref
|
||||
for folder_part in folder_parts:
|
||||
if folder_part:
|
||||
folder_ref = salt.utils.vmware.get_mor_by_property(si, vim.Folder, folder_part, container_ref=search_reference)
|
||||
search_reference = folder_ref
|
||||
if not folder_ref:
|
||||
log.error("Specified folder: '{0}' does not exist".format(folder))
|
||||
log.debug("Using folder in which {0} {1} is present".format(clone_type, vm_['clonefrom']))
|
||||
@ -3690,6 +3695,49 @@ def remove_all_snapshots(name, kwargs=None, call=None):
|
||||
return 'removed all snapshots'
|
||||
|
||||
|
||||
def convert_to_template(name, kwargs=None, call=None):
|
||||
'''
|
||||
Convert the specified virtual machine to template.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt-cloud -a convert_to_template vmname
|
||||
'''
|
||||
if call != 'action':
|
||||
raise SaltCloudSystemExit(
|
||||
'The convert_to_template action must be called with '
|
||||
'-a or --action.'
|
||||
)
|
||||
|
||||
vm_ref = salt.utils.vmware.get_mor_by_property(_get_si(), vim.VirtualMachine, name)
|
||||
|
||||
if vm_ref.config.template:
|
||||
raise SaltCloudSystemExit(
|
||||
'{0} already a template'.format(
|
||||
name
|
||||
)
|
||||
)
|
||||
|
||||
try:
|
||||
vm_ref.MarkAsTemplate()
|
||||
except Exception as exc:
|
||||
log.error(
|
||||
'Error while converting VM to template {0}: {1}'.format(
|
||||
name,
|
||||
exc
|
||||
),
|
||||
# Show the traceback if the debug logging level is enabled
|
||||
exc_info_on_loglevel=logging.DEBUG
|
||||
)
|
||||
return 'failed to convert to template'
|
||||
|
||||
return '{0} converted to template'.format(
|
||||
name
|
||||
)
|
||||
|
||||
|
||||
def add_host(kwargs=None, call=None):
|
||||
'''
|
||||
Add a host system to the specified cluster or datacenter in this VMware environment
|
||||
|
@ -255,7 +255,10 @@ VALID_OPTS = {
|
||||
'autoload_dynamic_modules': bool,
|
||||
|
||||
# Force the minion into a single environment when it fetches files from the master
|
||||
'environment': str,
|
||||
'saltenv': str,
|
||||
|
||||
# Prevent saltenv from being overriden on the command line
|
||||
'lock_saltenv': bool,
|
||||
|
||||
# Force the minion into a single pillar root when it fetches pillar data from the master
|
||||
'pillarenv': str,
|
||||
@ -1194,7 +1197,8 @@ DEFAULT_MINION_OPTS = {
|
||||
'random_startup_delay': 0,
|
||||
'failhard': False,
|
||||
'autoload_dynamic_modules': True,
|
||||
'environment': None,
|
||||
'saltenv': None,
|
||||
'lock_saltenv': False,
|
||||
'pillarenv': None,
|
||||
'pillarenv_from_saltenv': False,
|
||||
'pillar_opts': False,
|
||||
@ -1471,7 +1475,8 @@ DEFAULT_MASTER_OPTS = {
|
||||
},
|
||||
'top_file_merging_strategy': 'merge',
|
||||
'env_order': [],
|
||||
'environment': None,
|
||||
'saltenv': None,
|
||||
'lock_saltenv': False,
|
||||
'default_top': 'base',
|
||||
'file_client': 'local',
|
||||
'git_pillar_base': 'master',
|
||||
@ -3609,6 +3614,24 @@ def apply_minion_config(overrides=None,
|
||||
if overrides:
|
||||
opts.update(overrides)
|
||||
|
||||
if u'environment' in opts:
|
||||
if u'saltenv' in opts:
|
||||
log.warning(
|
||||
u'The \'saltenv\' and \'environment\' minion config options '
|
||||
u'cannot both be used. Ignoring \'environment\' in favor of '
|
||||
u'\'saltenv\'.',
|
||||
)
|
||||
# Set environment to saltenv in case someone's custom module is
|
||||
# referencing __opts__['environment']
|
||||
opts[u'environment'] = opts[u'saltenv']
|
||||
else:
|
||||
log.warning(
|
||||
u'The \'environment\' minion config option has been renamed '
|
||||
u'to \'saltenv\'. Using %s as the \'saltenv\' config value.',
|
||||
opts[u'environment']
|
||||
)
|
||||
opts[u'saltenv'] = opts[u'environment']
|
||||
|
||||
opts['__cli'] = os.path.basename(sys.argv[0])
|
||||
|
||||
# No ID provided. Will getfqdn save us?
|
||||
@ -3761,6 +3784,24 @@ def apply_master_config(overrides=None, defaults=None):
|
||||
if overrides:
|
||||
opts.update(overrides)
|
||||
|
||||
if u'environment' in opts:
|
||||
if u'saltenv' in opts:
|
||||
log.warning(
|
||||
u'The \'saltenv\' and \'environment\' master config options '
|
||||
u'cannot both be used. Ignoring \'environment\' in favor of '
|
||||
u'\'saltenv\'.',
|
||||
)
|
||||
# Set environment to saltenv in case someone's custom runner is
|
||||
# referencing __opts__['environment']
|
||||
opts[u'environment'] = opts[u'saltenv']
|
||||
else:
|
||||
log.warning(
|
||||
u'The \'environment\' master config option has been renamed '
|
||||
u'to \'saltenv\'. Using %s as the \'saltenv\' config value.',
|
||||
opts[u'environment']
|
||||
)
|
||||
opts[u'saltenv'] = opts[u'environment']
|
||||
|
||||
if len(opts['sock_dir']) > len(opts['cachedir']) + 10:
|
||||
opts['sock_dir'] = os.path.join(opts['cachedir'], '.salt-unix')
|
||||
|
||||
|
salt/config/schemas/esxvm.py (new file): 395 lines
@ -0,0 +1,395 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
:codeauthor: :email:`Agnes Tevesz (agnes.tevesz@morganstanley.com)`
|
||||
|
||||
salt.config.schemas.esxvm
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
ESX Virtual Machine configuration schemas
|
||||
'''
|
||||
|
||||
# Import Python libs
|
||||
from __future__ import absolute_import
|
||||
|
||||
from salt.utils.schema import (DefinitionsSchema,
|
||||
ComplexSchemaItem,
|
||||
ArrayItem,
|
||||
IntegerItem,
|
||||
NumberItem,
|
||||
BooleanItem,
|
||||
StringItem,
|
||||
IPv4Item,
|
||||
AnyOfItem,
|
||||
NullItem)
|
||||
|
||||
|
||||
class ESXVirtualMachineSerialBackingItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine Serial Port Backing
|
||||
'''
|
||||
title = 'ESX Virtual Machine Serial Port Backing'
|
||||
description = 'ESX virtual machine serial port backing'
|
||||
required = True
|
||||
|
||||
uri = StringItem()
|
||||
direction = StringItem(enum=('client', 'server'))
|
||||
filename = StringItem()
|
||||
|
||||
|
||||
class ESXVirtualMachineDeviceConnectionItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine Serial Port Connection
|
||||
'''
|
||||
title = 'ESX Virtual Machine Serial Port Connection'
|
||||
description = 'ESX virtual machine serial port connection'
|
||||
required = True
|
||||
|
||||
allow_guest_control = BooleanItem(default=True)
|
||||
start_connected = BooleanItem(default=True)
|
||||
|
||||
|
||||
class ESXVirtualMachinePlacementSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine Placement
|
||||
'''
|
||||
title = 'ESX Virtual Machine Placement Information'
|
||||
description = 'ESX virtual machine placement property'
|
||||
required = True
|
||||
|
||||
cluster = StringItem(title='Virtual Machine Cluster',
|
||||
description='Cluster of the virtual machine if it is placed to a cluster')
|
||||
host = StringItem(title='Virtual Machine Host',
|
||||
description='Host of the virtual machine if it is placed to a standalone host')
|
||||
resourcepool = StringItem(title='Virtual Machine Resource Pool',
|
||||
description='Resource pool of the virtual machine if it is placed to a resource pool')
|
||||
folder = StringItem(title='Virtual Machine Folder',
|
||||
description='Folder of the virtual machine where it should be deployed, default is the datacenter vmFolder')
|
||||
|
||||
|
||||
class ESXVirtualMachineCdDriveClientSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine CD Drive Client
|
||||
'''
|
||||
title = 'ESX Virtual Machine Serial CD Client'
|
||||
description = 'ESX virtual machine CD/DVD drive client properties'
|
||||
|
||||
mode = StringItem(required=True, enum=('passthrough', 'atapi'))
|
||||
|
||||
|
||||
class ESXVirtualMachineCdDriveIsoSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine CD Drive ISO
|
||||
'''
|
||||
title = 'ESX Virtual Machine Serial CD ISO'
|
||||
description = 'ESX virtual machine CD/DVD drive ISO properties'
|
||||
|
||||
path = StringItem(required=True)
|
||||
|
||||
|
||||
class ESXVirtualMachineCdDriveSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine CD Drives
|
||||
'''
|
||||
title = 'ESX Virtual Machine Serial CD'
|
||||
description = 'ESX virtual machine CD/DVD drive properties'
|
||||
|
||||
adapter = StringItem(title='Virtual Machine CD/DVD Adapter',
|
||||
description='Unique adapter name for virtual machine cd/dvd drive',
|
||||
required=True)
|
||||
controller = StringItem(required=True)
|
||||
device_type = StringItem(title='Virtual Machine Device Type',
|
||||
description='CD/DVD drive of the virtual machine if it is placed to a cluster',
|
||||
required=True,
|
||||
default='client_device',
|
||||
enum=('datastore_iso_file', 'client_device'))
|
||||
client_device = ESXVirtualMachineCdDriveClientSchemaItem()
|
||||
datastore_iso_file = ESXVirtualMachineCdDriveIsoSchemaItem()
|
||||
connectable = ESXVirtualMachineDeviceConnectionItem()
|
||||
|
||||
|
||||
class ESXVirtualMachineSerialSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine Serial Port
|
||||
'''
|
||||
title = 'ESX Virtual Machine Serial Port Configuration'
|
||||
description = 'ESX virtual machine serial port properties'
|
||||
|
||||
type = StringItem(title='Virtual Machine Serial Port Type',
|
||||
required=True,
|
||||
enum=('network', 'pipe', 'file', 'device'))
|
||||
adapter = StringItem(title='Virtual Machine Serial Port Name',
|
||||
description='Unique adapter name for virtual machine serial port'
|
||||
'for creation an arbitrary value should be specified',
|
||||
required=True)
|
||||
backing = ESXVirtualMachineSerialBackingItem()
|
||||
connectable = ESXVirtualMachineDeviceConnectionItem()
|
||||
yield_port = BooleanItem(title='Serial Port Yield',
|
||||
description='Serial port yield',
|
||||
default=False)
|
||||
|
||||
|
||||
class ESXVirtualMachineScsiSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine SCSI Controller
|
||||
'''
|
||||
title = 'ESX Virtual Machine SCSI Controller Configuration'
|
||||
description = 'ESX virtual machine scsi controller properties'
|
||||
required = True
|
||||
|
||||
adapter = StringItem(title='Virtual Machine SCSI Controller Name',
|
||||
description='Unique SCSI controller name'
|
||||
'for creation an arbitrary value should be specified',
|
||||
required=True)
|
||||
type = StringItem(title='Virtual Machine SCSI type',
|
||||
description='Type of the SCSI controller',
|
||||
required=True,
|
||||
enum=('lsilogic', 'lsilogic_sas', 'paravirtual', 'buslogic'))
|
||||
bus_sharing = StringItem(title='Virtual Machine SCSI bus sharing',
|
||||
description='Sharing type of the SCSI bus',
|
||||
required=True,
|
||||
enum=('virtual_sharing', 'physical_sharing', 'no_sharing'))
|
||||
bus_number = NumberItem(title='Virtual Machine SCSI bus number',
|
||||
description='Unique bus number of the SCSI device',
|
||||
required=True)
|
||||
|
||||
|
||||
class ESXVirtualMachineSataSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine SATA Controller
|
||||
'''
|
||||
title = 'ESX Virtual Machine SATA Controller Configuration'
|
||||
description = 'ESX virtual machine SATA controller properties'
|
||||
required = False
|
||||
adapter = StringItem(title='Virtual Machine SATA Controller Name',
|
||||
description='Unique SATA controller name'
|
||||
'for creation an arbitrary value should be specified',
|
||||
required=True)
|
||||
bus_number = NumberItem(title='Virtual Machine SATA bus number',
|
||||
description='Unique bus number of the SATA device',
|
||||
required=True)
|
||||
|
||||
|
||||
class ESXVirtualMachineDiskSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine Disk
|
||||
'''
|
||||
title = 'ESX Virtual Machine Disk Configuration'
|
||||
description = 'ESX virtual machine disk properties'
|
||||
required = True
|
||||
|
||||
size = NumberItem(title='Disk size',
|
||||
description='Size of the disk in GB',
|
||||
required=True)
|
||||
unit = StringItem(title='Disk size unit',
|
||||
description='Unit of the disk size, to VMware a '
|
||||
'GB is the same as GiB = 1024MiB',
|
||||
required=False,
|
||||
default='GB',
|
||||
enum=('KB', 'MB', 'GB'))
|
||||
adapter = StringItem(title='Virtual Machine Adapter Name',
|
||||
description='Unique adapter name for virtual machine'
|
||||
'for creation an arbitrary value should be specified',
|
||||
required=True)
|
||||
filename = StringItem(title='Virtual Machine Disk File',
|
||||
description='File name of the virtual machine vmdk')
|
||||
datastore = StringItem(title='Virtual Machine Disk Datastore',
|
||||
description='Disk datastore where the virtual machine files will be placed',
|
||||
required=True)
|
||||
address = StringItem(title='Virtual Machine SCSI Address',
|
||||
description='Address of the SCSI adapter for the virtual machine',
|
||||
pattern=r'\d:\d')
|
||||
thin_provision = BooleanItem(title='Virtual Machine Disk Provision Type',
|
||||
description='Provision type of the disk',
|
||||
default=True,
|
||||
required=False)
|
||||
eagerly_scrub = AnyOfItem(required=False,
|
||||
items=[BooleanItem(), NullItem()])
|
||||
controller = StringItem(title='Virtual Machine SCSI Adapter',
|
||||
description='Name of the SCSI adapter where the disk will be connected',
|
||||
required=True)
|
||||
|
||||
|
||||
class ESXVirtualMachineNicMapSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine Nic Map
|
||||
'''
|
||||
title = 'ESX Virtual Machine Nic Configuration'
|
||||
description = 'ESX Virtual Machine nic properties'
|
||||
required = False
|
||||
|
||||
domain = StringItem()
|
||||
gateway = IPv4Item()
|
||||
ip_addr = IPv4Item()
|
||||
subnet_mask = IPv4Item()
|
||||
|
||||
|
||||
class ESXVirtualMachineInterfaceSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine Network Interface
|
||||
'''
|
||||
title = 'ESX Virtual Machine Network Interface Configuration'
|
||||
description = 'ESX Virtual Machine network adapter properties'
|
||||
required = True
|
||||
|
||||
name = StringItem(title='Virtual Machine Port Group',
|
||||
description='Specifies the port group name for the virtual machine connection',
|
||||
required=True)
|
||||
adapter = StringItem(title='Virtual Machine Network Adapter',
|
||||
description='Unique name of the network adapter, '
|
||||
'for creation an arbitrary value should be specified',
|
||||
required=True)
|
||||
adapter_type = StringItem(title='Virtual Machine Adapter Type',
|
||||
description='Network adapter type of the virtual machine',
|
||||
required=True,
|
||||
enum=('vmxnet', 'vmxnet2', 'vmxnet3', 'e1000', 'e1000e'),
|
||||
default='vmxnet3')
|
||||
switch_type = StringItem(title='Virtual Machine Switch Type',
|
||||
description='Specifies the type of the virtual switch for the virtual machine connection',
|
||||
required=True,
|
||||
default='standard',
|
||||
enum=('standard', 'distributed'))
|
||||
mac = StringItem(title='Virtual Machine MAC Address',
|
||||
description='Mac address of the virtual machine',
|
||||
required=False,
|
||||
pattern='^([0-9a-f]{1,2}[:]){5}([0-9a-f]{1,2})$')
|
||||
mapping = ESXVirtualMachineNicMapSchemaItem()
|
||||
connectable = ESXVirtualMachineDeviceConnectionItem()
|
||||
|
||||
|
||||
class ESXVirtualMachineMemorySchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine Memory
|
||||
'''
|
||||
title = 'ESX Virtual Machine Memory Configuration'
|
||||
description = 'ESX Virtual Machine memory property'
|
||||
required = True
|
||||
|
||||
size = IntegerItem(title='Memory size',
|
||||
description='Size of the memory',
|
||||
required=True)
|
||||
|
||||
unit = StringItem(title='Memory unit',
|
||||
description='Unit of the memory, to VMware a '
|
||||
'GB is the same as GiB = 1024MiB',
|
||||
required=False,
|
||||
default='MB',
|
||||
enum=('MB', 'GB'))
|
||||
hotadd = BooleanItem(required=False, default=False)
|
||||
reservation_max = BooleanItem(required=False, default=False)
|
||||
|
||||
|
||||
class ESXVirtualMachineCpuSchemaItem(ComplexSchemaItem):
|
||||
'''
|
||||
Configuration Schema Item for ESX Virtual Machine CPU
|
||||
'''
|
||||
title = 'ESX Virtual Machine CPU Configuration'
description = 'ESX Virtual Machine CPU property'
|
||||
required = True
|
||||
|
||||
count = IntegerItem(title='CPU core count',
|
||||
description='CPU core count',
|
||||
required=True)
|
||||
cores_per_socket = IntegerItem(title='CPU cores per socket',
|
||||
description='CPU cores per socket count',
|
||||
required=False)
|
||||
nested = BooleanItem(title='Virtual Machine Nested Property',
|
||||
description='Nested virtualization support',
|
||||
default=False)
|
||||
hotadd = BooleanItem(title='Virtual Machine CPU hot add',
|
||||
description='CPU hot add',
|
||||
default=False)
|
||||
hotremove = BooleanItem(title='Virtual Machine CPU hot remove',
|
||||
description='CPU hot remove',
|
||||
default=False)
|
||||
|
||||
|
||||
class ESXVirtualMachineConfigSchema(DefinitionsSchema):
|
||||
'''
|
||||
Configuration Schema for ESX Virtual Machines
|
||||
'''
|
||||
title = 'ESX Virtual Machine Configuration Schema'
|
||||
description = 'ESX Virtual Machine configuration schema'
|
||||
|
||||
vm_name = StringItem(title='Virtual Machine name',
|
||||
description='Name of the virtual machine',
|
||||
required=True)
|
||||
cpu = ESXVirtualMachineCpuSchemaItem()
|
||||
memory = ESXVirtualMachineMemorySchemaItem()
|
||||
image = StringItem(title='Virtual Machine guest OS',
|
||||
description='Guest OS type',
|
||||
required=True)
|
||||
version = StringItem(title='Virtual Machine hardware version',
|
||||
description='Container hardware version property',
|
||||
required=True)
|
||||
interfaces = ArrayItem(items=ESXVirtualMachineInterfaceSchemaItem(),
|
||||
min_items=1,
|
||||
required=False,
|
||||
unique_items=True)
|
||||
disks = ArrayItem(items=ESXVirtualMachineDiskSchemaItem(),
|
||||
min_items=1,
|
||||
required=False,
|
||||
unique_items=True)
|
||||
scsi_devices = ArrayItem(items=ESXVirtualMachineScsiSchemaItem(),
|
||||
min_items=1,
|
||||
required=False,
|
||||
unique_items=True)
|
||||
serial_ports = ArrayItem(items=ESXVirtualMachineSerialSchemaItem(),
|
||||
min_items=0,
|
||||
required=False,
|
||||
unique_items=True)
|
||||
cd_dvd_drives = ArrayItem(items=ESXVirtualMachineCdDriveSchemaItem(),
|
||||
min_items=0,
|
||||
required=False,
|
||||
unique_items=True)
|
||||
sata_controllers = ArrayItem(items=ESXVirtualMachineSataSchemaItem(),
|
||||
min_items=0,
|
||||
required=False,
|
||||
unique_items=True)
|
||||
datacenter = StringItem(title='Virtual Machine Datacenter',
|
||||
description='Datacenter of the virtual machine',
|
||||
required=True)
|
||||
datastore = StringItem(title='Virtual Machine Datastore',
|
||||
description='Datastore of the virtual machine',
|
||||
required=True)
|
||||
placement = ESXVirtualMachinePlacementSchemaItem()
|
||||
template = BooleanItem(title='Virtual Machine Template',
|
||||
description='Template to create the virtual machine from',
|
||||
default=False)
|
||||
tools = BooleanItem(title='Virtual Machine VMware Tools',
|
||||
description='Install VMware tools on the guest machine',
|
||||
default=False)
|
||||
power_on = BooleanItem(title='Virtual Machine Power',
|
||||
description='Power on virtual machine after creation',
|
||||
default=False)
|
||||
deploy = BooleanItem(title='Virtual Machine Deploy Salt',
|
||||
description='Deploy salt after successful installation',
|
||||
default=False)
|
||||
|
||||
|
||||
class ESXVirtualMachineRemoveSchema(DefinitionsSchema):
|
||||
'''
|
||||
Remove Schema for ESX Virtual Machines to delete or unregister virtual machines
|
||||
'''
|
||||
name = StringItem(title='Virtual Machine name',
|
||||
description='Name of the virtual machine',
|
||||
required=True)
|
||||
datacenter = StringItem(title='Virtual Machine Datacenter',
|
||||
description='Datacenter of the virtual machine',
|
||||
required=True)
|
||||
placement = AnyOfItem(required=False,
|
||||
items=[ESXVirtualMachinePlacementSchemaItem(), NullItem()])
|
||||
power_off = BooleanItem(title='Power off vm',
|
||||
description='Power off vm before delete operation',
|
||||
required=False)
|
||||
|
||||
|
||||
class ESXVirtualMachineDeleteSchema(ESXVirtualMachineRemoveSchema):
|
||||
'''
|
||||
Deletion Schema for ESX Virtual Machines
|
||||
'''
|
||||
|
||||
|
||||
class ESXVirtualMachineUnregisterSchema(ESXVirtualMachineRemoveSchema):
|
||||
'''
|
||||
Unregister Schema for ESX Virtual Machines
|
||||
'''
|
@ -737,7 +737,7 @@ class SaltLoadPillar(ioflo.base.deeding.Deed):
|
||||
'dst': (master.name, None, 'remote_cmd')}
|
||||
load = {'id': self.opts.value['id'],
|
||||
'grains': self.grains.value,
|
||||
'saltenv': self.opts.value['environment'],
|
||||
'saltenv': self.opts.value['saltenv'],
|
||||
'ver': '2',
|
||||
'cmd': '_pillar'}
|
||||
self.road_stack.value.transmit({'route': route, 'load': load},
|
||||
|
@ -397,6 +397,18 @@ class TemplateError(SaltException):
|
||||
'''
|
||||
|
||||
|
||||
class ArgumentValueError(CommandExecutionError):
|
||||
'''
|
||||
Used when an invalid argument was passed to a command execution
|
||||
'''
|
||||
|
||||
|
||||
class CheckError(CommandExecutionError):
|
||||
'''
|
||||
Used when a check fails
|
||||
'''
|
||||
|
||||
|
||||
# Validation related exceptions
|
||||
class InvalidConfigError(CommandExecutionError):
|
||||
'''
|
||||
@ -404,12 +416,6 @@ class InvalidConfigError(CommandExecutionError):
|
||||
'''
|
||||
|
||||
|
||||
class ArgumentValueError(CommandExecutionError):
|
||||
'''
|
||||
Used when an invalid argument was passed to a command execution
|
||||
'''
|
||||
|
||||
|
||||
class InvalidEntityError(CommandExecutionError):
|
||||
'''
|
||||
Used when an entity fails validation
|
||||
@ -443,13 +449,25 @@ class VMwareObjectRetrievalError(VMwareSaltError):
|
||||
'''
|
||||
|
||||
|
||||
class VMwareObjectNotFoundError(VMwareSaltError):
|
||||
'''
|
||||
Used when a VMware object was not found
|
||||
'''
|
||||
|
||||
|
||||
class VMwareObjectExistsError(VMwareSaltError):
|
||||
'''
|
||||
Used when a VMware object exists
|
||||
Used when a VMware object already exists
|
||||
'''
|
||||
|
||||
|
||||
class VMwareObjectNotFoundError(VMwareSaltError):
|
||||
class VMwareMultipleObjectsError(VMwareObjectRetrievalError):
|
||||
'''
|
||||
Used when multiple objects were retrieved (and one was expected)
|
||||
'''
|
||||
|
||||
|
||||
class VMwareNotFoundError(VMwareSaltError):
|
||||
'''
|
||||
Used when a VMware object was not found
|
||||
'''
|
||||
@ -461,7 +479,31 @@ class VMwareApiError(VMwareSaltError):
|
||||
'''
|
||||
|
||||
|
||||
class VMwareFileNotFoundError(VMwareApiError):
|
||||
'''
|
||||
Used when representing a generic VMware error when a file is not found
|
||||
'''
|
||||
|
||||
|
||||
class VMwareSystemError(VMwareSaltError):
|
||||
'''
|
||||
Used when representing a generic VMware system error
|
||||
'''
|
||||
|
||||
|
||||
class VMwarePowerOnError(VMwareSaltError):
|
||||
'''
|
||||
Used when error occurred during power on
|
||||
'''
|
||||
|
||||
|
||||
class VMwareVmRegisterError(VMwareSaltError):
|
||||
'''
|
||||
Used when a configuration parameter is incorrect
|
||||
'''
|
||||
|
||||
|
||||
class VMwareVmCreationError(VMwareSaltError):
|
||||
'''
|
||||
Used when a configuration parameter is incorrect
|
||||
'''
|
||||
|
@ -474,8 +474,14 @@ def _sunos_memdata():
|
||||
grains['mem_total'] = int(comps[2].strip())
|
||||
|
||||
swap_cmd = salt.utils.path.which('swap')
|
||||
swap_total = __salt__['cmd.run']('{0} -s'.format(swap_cmd)).split()[1]
|
||||
grains['swap_total'] = int(swap_total) // 1024
|
||||
swap_data = __salt__['cmd.run']('{0} -s'.format(swap_cmd)).split()
|
||||
try:
|
||||
swap_avail = int(swap_data[-2][:-1])
|
||||
swap_used = int(swap_data[-4][:-1])
|
||||
swap_total = (swap_avail + swap_used) // 1024
|
||||
except ValueError:
|
||||
swap_total = None
|
||||
grains['swap_total'] = swap_total
|
||||
return grains
|
||||
|
||||
|
||||
@ -1476,6 +1482,9 @@ def os_data():
|
||||
grains['init'] = 'supervisord'
|
||||
elif init_cmdline == ['runit']:
|
||||
grains['init'] = 'runit'
|
||||
elif '/sbin/my_init' in init_cmdline:
|
||||
# Phusion base Docker containers use runit for service management, but my_init as PID 1
|
||||
grains['init'] = 'runit'
|
||||
else:
|
||||
log.info(
|
||||
'Could not determine init system from command line: ({0})'
|
||||
|
@ -884,6 +884,7 @@ def sdb(opts, functions=None, whitelist=None, utils=None):
|
||||
u'__sdb__': functions,
|
||||
u'__opts__': opts,
|
||||
u'__utils__': utils,
|
||||
u'__salt__': minion_mods(opts, utils),
|
||||
},
|
||||
whitelist=whitelist,
|
||||
)
|
||||
@ -1593,8 +1594,10 @@ class LazyLoader(salt.utils.lazy.LazyDict):
|
||||
Load a single item if you have it
|
||||
'''
|
||||
# if the key doesn't have a '.' then it isn't valid for this mod dict
|
||||
if not isinstance(key, six.string_types) or u'.' not in key:
|
||||
raise KeyError
|
||||
if not isinstance(key, six.string_types):
|
||||
raise KeyError(u'The key must be a string.')
|
||||
if u'.' not in key:
|
||||
raise KeyError(u'The key \'%s\' should contain a \'.\'', key)
|
||||
mod_name, _ = key.split(u'.', 1)
|
||||
if mod_name in self.missing_modules:
|
||||
return True
|
||||
|
@ -766,8 +766,8 @@ class SMinion(MinionBase):
|
||||
if not os.path.isdir(pdir):
|
||||
os.makedirs(pdir, 0o700)
|
||||
ptop = os.path.join(pdir, u'top.sls')
|
||||
if self.opts[u'environment'] is not None:
|
||||
penv = self.opts[u'environment']
|
||||
if self.opts[u'saltenv'] is not None:
|
||||
penv = self.opts[u'saltenv']
|
||||
else:
|
||||
penv = u'base'
|
||||
cache_top = {penv: {self.opts[u'id']: [u'cache']}}
|
||||
@ -803,7 +803,7 @@ class SMinion(MinionBase):
|
||||
self.opts,
|
||||
self.opts[u'grains'],
|
||||
self.opts[u'id'],
|
||||
self.opts[u'environment'],
|
||||
self.opts[u'saltenv'],
|
||||
pillarenv=self.opts.get(u'pillarenv'),
|
||||
).compile_pillar()
|
||||
|
||||
@ -1174,7 +1174,7 @@ class Minion(MinionBase):
|
||||
self.opts,
|
||||
self.opts[u'grains'],
|
||||
self.opts[u'id'],
|
||||
self.opts[u'environment'],
|
||||
self.opts[u'saltenv'],
|
||||
pillarenv=self.opts.get(u'pillarenv')
|
||||
).compile_pillar()
|
||||
|
||||
@ -2062,7 +2062,7 @@ class Minion(MinionBase):
|
||||
self.opts,
|
||||
self.opts[u'grains'],
|
||||
self.opts[u'id'],
|
||||
self.opts[u'environment'],
|
||||
self.opts[u'saltenv'],
|
||||
pillarenv=self.opts.get(u'pillarenv'),
|
||||
).compile_pillar()
|
||||
except SaltClientError:
|
||||
@ -2097,12 +2097,16 @@ class Minion(MinionBase):
|
||||
self.schedule.run_job(name)
|
||||
elif func == u'disable_job':
|
||||
self.schedule.disable_job(name, persist)
|
||||
elif func == u'postpone_job':
|
||||
self.schedule.postpone_job(name, data)
|
||||
elif func == u'reload':
|
||||
self.schedule.reload(schedule)
|
||||
elif func == u'list':
|
||||
self.schedule.list(where)
|
||||
elif func == u'save_schedule':
|
||||
self.schedule.save_schedule()
|
||||
elif func == u'get_next_fire_time':
|
||||
self.schedule.get_next_fire_time(name)
|
||||
|
||||
def manage_beacons(self, tag, data):
|
||||
'''
|
||||
@ -3379,7 +3383,7 @@ class ProxyMinion(Minion):
|
||||
self.opts,
|
||||
self.opts[u'grains'],
|
||||
self.opts[u'id'],
|
||||
saltenv=self.opts[u'environment'],
|
||||
saltenv=self.opts[u'saltenv'],
|
||||
pillarenv=self.opts.get(u'pillarenv'),
|
||||
).compile_pillar()
|
||||
|
||||
@ -3421,7 +3425,7 @@ class ProxyMinion(Minion):
|
||||
# we can then sync any proxymodules down from the master
|
||||
# we do a sync_all here in case proxy code was installed by
|
||||
# SPM or was manually placed in /srv/salt/_modules etc.
|
||||
self.functions[u'saltutil.sync_all'](saltenv=self.opts[u'environment'])
|
||||
self.functions[u'saltutil.sync_all'](saltenv=self.opts[u'saltenv'])
|
||||
|
||||
# Pull in the utils
|
||||
self.utils = salt.loader.utils(self.opts)
|
||||
|
@ -214,14 +214,12 @@ def __virtual__():
|
||||
'''
|
||||
ret = ansible is not None
|
||||
msg = not ret and "Ansible is not installed on this system" or None
|
||||
if msg:
|
||||
log.warning(msg)
|
||||
else:
|
||||
if ret:
|
||||
global _resolver
|
||||
global _caller
|
||||
_resolver = AnsibleModuleResolver(__opts__).resolve().install()
|
||||
_caller = AnsibleModuleCaller(_resolver)
|
||||
_set_callables(list())
|
||||
_set_callables(list())
|
||||
|
||||
return ret, msg
|
||||
|
||||
|
@ -219,7 +219,7 @@ def _gather_pillar(pillarenv, pillar_override):
|
||||
__opts__,
|
||||
__grains__,
|
||||
__opts__['id'],
|
||||
__opts__['environment'],
|
||||
__opts__['saltenv'],
|
||||
pillar_override=pillar_override,
|
||||
pillarenv=pillarenv
|
||||
)
|
||||
@ -589,11 +589,17 @@ def _run(cmd,
|
||||
out = proc.stdout.decode(__salt_system_encoding__)
|
||||
except AttributeError:
|
||||
out = u''
|
||||
except UnicodeDecodeError:
|
||||
log.error('UnicodeDecodeError while decoding output of cmd {0}'.format(cmd))
|
||||
out = proc.stdout.decode(__salt_system_encoding__, 'replace')
|
||||
|
||||
try:
|
||||
err = proc.stderr.decode(__salt_system_encoding__)
|
||||
except AttributeError:
|
||||
err = u''
|
||||
except UnicodeDecodeError:
|
||||
log.error('UnicodeDecodeError while decoding error of cmd {0}'.format(cmd))
|
||||
err = proc.stderr.decode(__salt_system_encoding__, 'replace')
|
||||
|
||||
if rstrip:
|
||||
if out is not None:
|
||||
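
The hunk above falls back to decoding with ``errors='replace'`` when command output is not valid in the system encoding. A small self-contained sketch of that fallback (the byte string is made up for illustration):

.. code-block:: python

    raw = b'command output with a stray byte \xff in it'
    try:
        out = raw.decode('utf-8')
    except UnicodeDecodeError:
        # undecodable bytes become U+FFFD instead of raising
        out = raw.decode('utf-8', 'replace')
    print(out)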
|
@ -49,7 +49,7 @@ def _gather_pillar(pillarenv, pillar_override):
|
||||
__opts__,
|
||||
__grains__,
|
||||
__opts__['id'],
|
||||
__opts__['environment'],
|
||||
__opts__['saltenv'],
|
||||
pillar_override=pillar_override,
|
||||
pillarenv=pillarenv
|
||||
)
|
||||
|
@ -5290,7 +5290,7 @@ def _gather_pillar(pillarenv, pillar_override, **grains):
|
||||
grains,
|
||||
# Not sure if these two are correct
|
||||
__opts__['id'],
|
||||
__opts__['environment'],
|
||||
__opts__['saltenv'],
|
||||
pillar_override=pillar_override,
|
||||
pillarenv=pillarenv
|
||||
)
|
||||
|
29
salt/modules/esxvm.py
Normal file
@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
'''
Module used to access the esx proxy connection methods
'''
from __future__ import absolute_import

# Import python libs
import logging
import salt.utils


log = logging.getLogger(__name__)

__proxyenabled__ = ['esxvm']
# Define the module's virtual name
__virtualname__ = 'esxvm'


def __virtual__():
    '''
    Only work on proxy
    '''
    if salt.utils.platform.is_proxy():
        return __virtualname__
    return False


def get_details():
    return __proxy__['esxvm.get_details']()
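
A hedged usage sketch for the new ``esxvm`` module: it only loads on an esxvm proxy minion and simply delegates to the matching proxymodule function. The proxy minion ID ``esx-vm01`` below is hypothetical.

.. code-block:: python

    import salt.client

    local = salt.client.LocalClient()
    # 'esx-vm01' is a hypothetical esxvm proxy minion ID
    print(local.cmd('esx-vm01', 'esxvm.get_details'))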
@ -225,7 +225,7 @@ def send(tag,
|
||||
data_dict['pillar'] = __pillar__
|
||||
|
||||
if with_env_opts:
|
||||
data_dict['saltenv'] = __opts__.get('environment', 'base')
|
||||
data_dict['saltenv'] = __opts__.get('saltenv', 'base')
|
||||
data_dict['pillarenv'] = __opts__.get('pillarenv')
|
||||
|
||||
if kwargs:
|
||||
|
@ -552,11 +552,11 @@ def lsattr(path):
|
||||
raise SaltInvocationError("File or directory does not exist.")
|
||||
|
||||
cmd = ['lsattr', path]
|
||||
result = __salt__['cmd.run'](cmd, python_shell=False)
|
||||
result = __salt__['cmd.run'](cmd, ignore_retcode=True, python_shell=False)
|
||||
|
||||
results = {}
|
||||
for line in result.splitlines():
|
||||
if not line.startswith('lsattr'):
|
||||
if not line.startswith('lsattr: '):
|
||||
vals = line.split(None, 1)
|
||||
results[vals[1]] = re.findall(r"[acdijstuADST]", vals[0])
|
||||
|
||||
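
To illustrate the parsing above (and why lines beginning with ``lsattr: `` are now skipped), a quick stand-alone sketch on canned output; both sample lines are made up:

.. code-block:: python

    import re

    sample = (
        "lsattr: Operation not supported While reading flags on /proc/version\n"
        "----i---------e---- /etc/resolv.conf"
    )
    results = {}
    for line in sample.splitlines():
        if not line.startswith('lsattr: '):
            vals = line.split(None, 1)
            results[vals[1]] = re.findall(r"[acdijstuADST]", vals[0])
    print(results)   # {'/etc/resolv.conf': ['i']}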
@ -5203,13 +5203,18 @@ def manage_file(name,
|
||||
'Replace symbolic link with regular file'
|
||||
|
||||
if salt.utils.platform.is_windows():
|
||||
ret = check_perms(name,
|
||||
ret,
|
||||
kwargs.get('win_owner'),
|
||||
kwargs.get('win_perms'),
|
||||
kwargs.get('win_deny_perms'),
|
||||
None,
|
||||
kwargs.get('win_inheritance'))
|
||||
# This function resides in win_file.py and will be available
|
||||
# on Windows. The local function will be overridden
|
||||
# pylint: disable=E1120,E1121,E1123
|
||||
ret = check_perms(
|
||||
path=name,
|
||||
ret=ret,
|
||||
owner=kwargs.get('win_owner'),
|
||||
grant_perms=kwargs.get('win_perms'),
|
||||
deny_perms=kwargs.get('win_deny_perms'),
|
||||
inheritance=kwargs.get('win_inheritance', True),
|
||||
reset=kwargs.get('win_perms_reset', False))
|
||||
# pylint: enable=E1120,E1121,E1123
|
||||
else:
|
||||
ret, _ = check_perms(name, ret, user, group, mode, attrs, follow_symlinks)
|
||||
|
||||
@ -5250,13 +5255,15 @@ def manage_file(name,
|
||||
if salt.utils.platform.is_windows():
|
||||
# This function resides in win_file.py and will be available
|
||||
# on Windows. The local function will be overridden
|
||||
# pylint: disable=E1121
|
||||
makedirs_(name,
|
||||
kwargs.get('win_owner'),
|
||||
kwargs.get('win_perms'),
|
||||
kwargs.get('win_deny_perms'),
|
||||
kwargs.get('win_inheritance'))
|
||||
# pylint: enable=E1121
|
||||
# pylint: disable=E1120,E1121,E1123
|
||||
makedirs_(
|
||||
path=name,
|
||||
owner=kwargs.get('win_owner'),
|
||||
grant_perms=kwargs.get('win_perms'),
|
||||
deny_perms=kwargs.get('win_deny_perms'),
|
||||
inheritance=kwargs.get('win_inheritance', True),
|
||||
reset=kwargs.get('win_perms_reset', False))
|
||||
# pylint: enable=E1120,E1121,E1123
|
||||
else:
|
||||
makedirs_(name, user=user, group=group, mode=dir_mode)
|
||||
|
||||
@ -5369,13 +5376,18 @@ def manage_file(name,
|
||||
mode = oct((0o777 ^ mask) & 0o666)
|
||||
|
||||
if salt.utils.platform.is_windows():
|
||||
ret = check_perms(name,
|
||||
ret,
|
||||
kwargs.get('win_owner'),
|
||||
kwargs.get('win_perms'),
|
||||
kwargs.get('win_deny_perms'),
|
||||
None,
|
||||
kwargs.get('win_inheritance'))
|
||||
# This function resides in win_file.py and will be available
|
||||
# on Windows. The local function will be overridden
|
||||
# pylint: disable=E1120,E1121,E1123
|
||||
ret = check_perms(
|
||||
path=name,
|
||||
ret=ret,
|
||||
owner=kwargs.get('win_owner'),
|
||||
grant_perms=kwargs.get('win_perms'),
|
||||
deny_perms=kwargs.get('win_deny_perms'),
|
||||
inheritance=kwargs.get('win_inheritance', True),
|
||||
reset=kwargs.get('win_perms_reset', False))
|
||||
# pylint: enable=E1120,E1121,E1123
|
||||
else:
|
||||
ret, _ = check_perms(name, ret, user, group, mode, attrs)
|
||||
|
||||
|
199
salt/modules/glanceng.py
Normal file
@ -0,0 +1,199 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Glance module for interacting with OpenStack Glance
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
|
||||
Example configuration
|
||||
|
||||
.. code-block:: yaml
|
||||
glance:
|
||||
cloud: default
|
||||
|
||||
.. code-block:: yaml
|
||||
glance:
|
||||
auth:
|
||||
username: admin
|
||||
password: password123
|
||||
user_domain_name: mydomain
|
||||
project_name: myproject
|
||||
project_domain_name: myproject
|
||||
auth_url: https://example.org:5000/v3
|
||||
identity_api_version: 3
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
HAS_SHADE = False
|
||||
try:
|
||||
import shade
|
||||
HAS_SHADE = True
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
__virtualname__ = 'glanceng'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only load this module if shade python module is installed
|
||||
'''
|
||||
if HAS_SHADE:
|
||||
return __virtualname__
|
||||
return (False, 'The glanceng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def compare_changes(obj, **kwargs):
|
||||
'''
|
||||
Compare two dicts returning only keys that exist in the first dict and are
|
||||
different in the second one
|
||||
'''
|
||||
changes = {}
|
||||
for k, v in obj.items():
|
||||
if k in kwargs:
|
||||
if v != kwargs[k]:
|
||||
changes[k] = kwargs[k]
|
||||
return changes
|
||||
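
A pure-Python sketch of what ``compare_changes`` returns, using made-up image attributes:

.. code-block:: python

    def compare_changes(obj, **kwargs):
        changes = {}
        for k, v in obj.items():
            if k in kwargs and v != kwargs[k]:
                changes[k] = kwargs[k]
        return changes

    existing = {'name': 'cirros', 'min_ram': 0, 'visibility': 'private'}
    # keys absent from the first dict (e.g. 'owner') are ignored
    print(compare_changes(existing, min_ram=1024, owner='admin'))
    # {'min_ram': 1024}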
|
||||
|
||||
def _clean_kwargs(keep_name=False, **kwargs):
|
||||
'''
|
||||
Sanitize the arguments for use with shade
|
||||
'''
|
||||
if 'name' in kwargs and not keep_name:
|
||||
kwargs['name_or_id'] = kwargs.pop('name')
|
||||
|
||||
return __utils__['args.clean_kwargs'](**kwargs)
|
||||
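
A sketch of the ``name`` to ``name_or_id`` rewrite performed by ``_clean_kwargs``; the double-underscore filter is a simplified stand-in for ``__utils__['args.clean_kwargs']``, which drops Salt-injected ``__pub_*``-style keys:

.. code-block:: python

    def _clean_kwargs(keep_name=False, **kwargs):
        if 'name' in kwargs and not keep_name:
            kwargs['name_or_id'] = kwargs.pop('name')
        # simplified stand-in for __utils__['args.clean_kwargs']
        return {k: v for k, v in kwargs.items() if not k.startswith('__')}

    print(_clean_kwargs(name='image1', min_ram=512, __pub_fun='glanceng.image_create'))
    # {'min_ram': 512, 'name_or_id': 'image1'}
    print(_clean_kwargs(keep_name=True, name='image1'))
    # {'name': 'image1'}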
|
||||
|
||||
def setup_clouds(auth=None):
|
||||
'''
|
||||
Call functions to create Shade cloud objects in __context__ to take
|
||||
advantage of Shade's in-memory caching across several states
|
||||
'''
|
||||
get_operator_cloud(auth)
|
||||
get_openstack_cloud(auth)
|
||||
|
||||
|
||||
def get_operator_cloud(auth=None):
|
||||
'''
|
||||
Return an operator_cloud
|
||||
'''
|
||||
if auth is None:
|
||||
auth = __salt__['config.option']('glance', {})
|
||||
if 'shade_opcloud' in __context__:
|
||||
if __context__['shade_opcloud'].auth == auth:
|
||||
return __context__['shade_opcloud']
|
||||
__context__['shade_opcloud'] = shade.operator_cloud(**auth)
|
||||
return __context__['shade_opcloud']
|
||||
|
||||
|
||||
def get_openstack_cloud(auth=None):
|
||||
'''
|
||||
Return an openstack_cloud
|
||||
'''
|
||||
if auth is None:
|
||||
auth = __salt__['config.option']('glance', {})
|
||||
if 'shade_oscloud' in __context__:
|
||||
if __context__['shade_oscloud'].auth == auth:
|
||||
return __context__['shade_oscloud']
|
||||
__context__['shade_oscloud'] = shade.openstack_cloud(**auth)
|
||||
return __context__['shade_oscloud']
|
||||
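
``get_operator_cloud`` and ``get_openstack_cloud`` cache the shade client in ``__context__`` and only rebuild it when the auth dict changes. A self-contained sketch of that caching pattern, with ``shade`` and ``__context__`` replaced by stand-ins:

.. code-block:: python

    _context = {}                        # stand-in for __context__

    class FakeCloud(object):             # stand-in for shade.operator_cloud(**auth)
        def __init__(self, **auth):
            self.auth = auth

    def get_operator_cloud(auth):
        cached = _context.get('shade_opcloud')
        if cached is not None and cached.auth == auth:
            return cached                # cache hit: reuse the client
        _context['shade_opcloud'] = FakeCloud(**auth)
        return _context['shade_opcloud']

    a = {'cloud': 'default'}
    print(get_operator_cloud(a) is get_operator_cloud(a))         # True (cached)
    print(get_operator_cloud(a) is get_operator_cloud({'x': 1}))  # False (rebuilt)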
|
||||
|
||||
def image_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create an image
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' glanceng.image_create name=cirros file=cirros.raw disk_format=raw
|
||||
salt '*' glanceng.image_create name=cirros file=cirros.raw disk_format=raw hw_scsi_model=virtio-scsi hw_disk_bus=scsi
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_image(**kwargs)
|
||||
|
||||
|
||||
def image_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete an image
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' glanceng.image_delete name=image1
|
||||
salt '*' glanceng.image_delete name=0e4febc2a5ab4f2c8f374b054162506d
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_image(**kwargs)
|
||||
|
||||
|
||||
def image_list(auth=None, **kwargs):
|
||||
'''
|
||||
List images
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' glanceng.image_list
|
||||
salt '*' glanceng.image_list
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_images(**kwargs)
|
||||
|
||||
|
||||
def image_search(auth=None, **kwargs):
|
||||
'''
|
||||
Search for images
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' glanceng.image_search name=image1
|
||||
salt '*' glanceng.image_search
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.search_images(**kwargs)
|
||||
|
||||
|
||||
def image_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single image
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' glanceng.image_get name=image1
|
||||
salt '*' glanceng.image_get name=0e4febc2a5ab4f2c8f374b054162506d
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_image(**kwargs)
|
||||
|
||||
|
||||
def update_image_properties(auth=None, **kwargs):
|
||||
'''
|
||||
Update properties for an image
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' glanceng.update_image_properties name=image1 hw_scsi_model=virtio-scsi hw_disk_bus=scsi
|
||||
salt '*' glanceng.update_image_properties name=0e4febc2a5ab4f2c8f374b054162506d min_ram=1024
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.update_image_properties(**kwargs)
|
867
salt/modules/keystoneng.py
Normal file
@ -0,0 +1,867 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Keystone module for interacting with OpenStack Keystone
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
|
||||
Example configuration
|
||||
|
||||
.. code-block:: yaml
|
||||
keystone:
|
||||
cloud: default
|
||||
|
||||
.. code-block:: yaml
|
||||
keystone:
|
||||
auth:
|
||||
username: admin
|
||||
password: password123
|
||||
user_domain_name: mydomain
|
||||
project_name: myproject
|
||||
project_domain_name: myproject
|
||||
auth_url: https://example.org:5000/v3
|
||||
identity_api_version: 3
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
HAS_SHADE = False
|
||||
try:
|
||||
import shade
|
||||
from shade.exc import OpenStackCloudException
|
||||
HAS_SHADE = True
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
__virtualname__ = 'keystoneng'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only load this module if shade python module is installed
|
||||
'''
|
||||
if HAS_SHADE:
|
||||
return __virtualname__
|
||||
return (False, 'The keystoneng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def compare_changes(obj, **kwargs):
|
||||
'''
|
||||
Compare two dicts returning only keys that exist in the first dict and are
|
||||
different in the second one
|
||||
'''
|
||||
changes = {}
|
||||
for k, v in obj.items():
|
||||
if k in kwargs:
|
||||
if v != kwargs[k]:
|
||||
changes[k] = kwargs[k]
|
||||
return changes
|
||||
|
||||
|
||||
def get_entity(ent_type, **kwargs):
|
||||
'''
|
||||
Attempt to query Keystone for more information about an entity
|
||||
'''
|
||||
try:
|
||||
func = 'keystoneng.{}_get'.format(ent_type)
|
||||
ent = __salt__[func](**kwargs)
|
||||
except OpenStackCloudException as e:
|
||||
# NOTE(SamYaple): If this error was something other than Forbidden we
|
||||
# reraise the issue since we are not prepared to handle it
|
||||
if 'HTTP 403' not in e.inner_exception[1][0]:
|
||||
raise
|
||||
|
||||
# NOTE(SamYaple): The user may be authorized to perform the function
|
||||
# they are trying to do, but not authorized to search. In such a
|
||||
# situation we want to trust that the user has passed a valid id, even
|
||||
# though we cannot validate that this is a valid id
|
||||
ent = kwargs['name']
|
||||
|
||||
return ent
|
||||
|
||||
|
||||
def _clean_kwargs(keep_name=False, **kwargs):
|
||||
'''
|
||||
Sanitize the arguments for use with shade
|
||||
'''
|
||||
if 'name' in kwargs and not keep_name:
|
||||
kwargs['name_or_id'] = kwargs.pop('name')
|
||||
|
||||
return __utils__['args.clean_kwargs'](**kwargs)
|
||||
|
||||
|
||||
def setup_clouds(auth=None):
|
||||
'''
|
||||
Call functions to create Shade cloud objects in __context__ to take
|
||||
advantage of Shade's in-memory caching across several states
|
||||
'''
|
||||
get_operator_cloud(auth)
|
||||
get_openstack_cloud(auth)
|
||||
|
||||
|
||||
def get_operator_cloud(auth=None):
|
||||
'''
|
||||
Return an operator_cloud
|
||||
'''
|
||||
if auth is None:
|
||||
auth = __salt__['config.option']('keystone', {})
|
||||
if 'shade_opcloud' in __context__:
|
||||
if __context__['shade_opcloud'].auth == auth:
|
||||
return __context__['shade_opcloud']
|
||||
__context__['shade_opcloud'] = shade.operator_cloud(**auth)
|
||||
return __context__['shade_opcloud']
|
||||
|
||||
|
||||
def get_openstack_cloud(auth=None):
|
||||
'''
|
||||
Return an openstack_cloud
|
||||
'''
|
||||
if auth is None:
|
||||
auth = __salt__['config.option']('keystone', {})
|
||||
if 'shade_oscloud' in __context__:
|
||||
if __context__['shade_oscloud'].auth == auth:
|
||||
return __context__['shade_oscloud']
|
||||
__context__['shade_oscloud'] = shade.openstack_cloud(**auth)
|
||||
return __context__['shade_oscloud']
|
||||
|
||||
|
||||
def group_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a group
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.group_create name=group1
|
||||
salt '*' keystoneng.group_create name=group2 domain=domain1 description='my group2'
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_group(**kwargs)
|
||||
|
||||
|
||||
def group_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a group
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.group_delete name=group1
|
||||
salt '*' keystoneng.group_delete name=group2 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
salt '*' keystoneng.group_delete name=0e4febc2a5ab4f2c8f374b054162506d
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_group(**kwargs)
|
||||
|
||||
|
||||
def group_update(auth=None, **kwargs):
|
||||
'''
|
||||
Update a group
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.group_update name=group1 description='new description'
|
||||
salt '*' keystoneng.group_create name=group2 domain_id=b62e76fbeeff4e8fb77073f591cf211e new_name=newgroupname
|
||||
salt '*' keystoneng.group_create name=0e4febc2a5ab4f2c8f374b054162506d new_name=newgroupname
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
if 'new_name' in kwargs:
|
||||
kwargs['name'] = kwargs.pop('new_name')
|
||||
return cloud.update_group(**kwargs)
|
||||
|
||||
|
||||
def group_list(auth=None, **kwargs):
|
||||
'''
|
||||
List groups
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.group_list
|
||||
salt '*' keystoneng.group_list domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_groups(**kwargs)
|
||||
|
||||
|
||||
def group_search(auth=None, **kwargs):
|
||||
'''
|
||||
Search for groups
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.group_search name=group1
|
||||
salt '*' keystoneng.group_search domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.search_groups(**kwargs)
|
||||
|
||||
|
||||
def group_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single group
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.group_get name=group1
|
||||
salt '*' keystoneng.group_get name=group2 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
salt '*' keystoneng.group_get name=0e4febc2a5ab4f2c8f374b054162506d
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_group(**kwargs)
|
||||
|
||||
|
||||
def project_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a project
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.project_create name=project1
|
||||
salt '*' keystoneng.project_create name=project2 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
salt '*' keystoneng.project_create name=project3 enabled=False description='my project3'
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_project(**kwargs)
|
||||
|
||||
|
||||
def project_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a project
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.project_delete name=project1
|
||||
salt '*' keystoneng.project_delete name=project2 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
salt '*' keystoneng.project_delete name=f315afcf12f24ad88c92b936c38f2d5a
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_project(**kwargs)
|
||||
|
||||
|
||||
def project_update(auth=None, **kwargs):
|
||||
'''
|
||||
Update a project
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.project_update name=project1 new_name=newproject
|
||||
salt '*' keystoneng.project_update name=project2 enabled=False description='new description'
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
if 'new_name' in kwargs:
|
||||
kwargs['name'] = kwargs.pop('new_name')
|
||||
return cloud.update_project(**kwargs)
|
||||
|
||||
|
||||
def project_list(auth=None, **kwargs):
|
||||
'''
|
||||
List projects
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.project_list
|
||||
salt '*' keystoneng.project_list domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_projects(**kwargs)
|
||||
|
||||
|
||||
def project_search(auth=None, **kwargs):
|
||||
'''
|
||||
Search projects
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.project_search
|
||||
salt '*' keystoneng.project_search name=project1
|
||||
salt '*' keystoneng.project_search domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.search_projects(**kwargs)
|
||||
|
||||
|
||||
def project_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single project
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.project_get name=project1
|
||||
salt '*' keystoneng.project_get name=project2 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
salt '*' keystoneng.project_get name=f315afcf12f24ad88c92b936c38f2d5a
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_project(**kwargs)
|
||||
|
||||
|
||||
def domain_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a domain
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.domain_create name=domain1
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_domain(**kwargs)
|
||||
|
||||
|
||||
def domain_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a domain
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.domain_delete name=domain1
|
||||
salt '*' keystoneng.domain_delete name=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_domain(**kwargs)
|
||||
|
||||
|
||||
def domain_update(auth=None, **kwargs):
|
||||
'''
|
||||
Update a domain
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.domain_update name=domain1 new_name=newdomain
|
||||
salt '*' keystoneng.domain_update name=domain1 enabled=True description='new description'
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
if 'new_name' in kwargs:
|
||||
kwargs['name'] = kwargs.pop('new_name')
|
||||
return cloud.update_domain(**kwargs)
|
||||
|
||||
|
||||
def domain_list(auth=None, **kwargs):
|
||||
'''
|
||||
List domains
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.domain_list
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_domains(**kwargs)
|
||||
|
||||
|
||||
def domain_search(auth=None, **kwargs):
|
||||
'''
|
||||
Search domains
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.domain_search
|
||||
salt '*' keystoneng.domain_search name=domain1
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.search_domains(**kwargs)
|
||||
|
||||
|
||||
def domain_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single domain
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.domain_get name=domain1
|
||||
salt '*' keystoneng.domain_get name=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_domain(**kwargs)
|
||||
|
||||
|
||||
def role_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a role
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.role_create name=role1
|
||||
salt '*' keystoneng.role_create name=role1 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_role(**kwargs)
|
||||
|
||||
|
||||
def role_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a role
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.role_delete name=role1 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
salt '*' keystoneng.role_delete name=1eb6edd5525e4ac39af571adee673559
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_role(**kwargs)
|
||||
|
||||
|
||||
def role_update(auth=None, **kwargs):
|
||||
'''
|
||||
Update a role
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.role_update name=role1 new_name=newrole
|
||||
salt '*' keystoneng.role_update name=1eb6edd5525e4ac39af571adee673559 new_name=newrole
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
if 'new_name' in kwargs:
|
||||
kwargs['name'] = kwargs.pop('new_name')
|
||||
return cloud.update_role(**kwargs)
|
||||
|
||||
|
||||
def role_list(auth=None, **kwargs):
|
||||
'''
|
||||
List roles
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.role_list
|
||||
salt '*' keystoneng.role_list domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_roles(**kwargs)
|
||||
|
||||
|
||||
def role_search(auth=None, **kwargs):
|
||||
'''
|
||||
Search roles
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.role_search
|
||||
salt '*' keystoneng.role_search name=role1
|
||||
salt '*' keystoneng.role_search domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.search_roles(**kwargs)
|
||||
|
||||
|
||||
def role_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single role
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.role_get name=role1
|
||||
salt '*' keystoneng.role_get name=role1 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
salt '*' keystoneng.role_get name=1eb6edd5525e4ac39af571adee673559
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_role(**kwargs)
|
||||
|
||||
|
||||
def user_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a user
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.user_create name=user1
|
||||
salt '*' keystoneng.user_create name=user2 password=1234 enabled=False
|
||||
salt '*' keystoneng.user_create name=user3 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_user(**kwargs)
|
||||
|
||||
|
||||
def user_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a user
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.user_delete name=user1
|
||||
salt '*' keystoneng.user_delete name=user2 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
salt '*' keystoneng.user_delete name=a42cbbfa1e894e839fd0f584d22e321f
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_user(**kwargs)
|
||||
|
||||
|
||||
def user_update(auth=None, **kwargs):
|
||||
'''
|
||||
Update a user
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.user_update name=user1 enabled=False description='new description'
|
||||
salt '*' keystoneng.user_update name=user1 new_name=newuser
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
if 'new_name' in kwargs:
|
||||
kwargs['name'] = kwargs.pop('new_name')
|
||||
return cloud.update_user(**kwargs)
|
||||
|
||||
|
||||
def user_list(auth=None, **kwargs):
|
||||
'''
|
||||
List users
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.user_list
|
||||
salt '*' keystoneng.user_list domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_users(**kwargs)
|
||||
|
||||
|
||||
def user_search(auth=None, **kwargs):
|
||||
'''
|
||||
Search users
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.user_search
|
||||
salt '*' keystoneng.user_search domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.search_users(**kwargs)
|
||||
|
||||
|
||||
def user_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single user
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.user_get name=user1
|
||||
salt '*' keystoneng.user_get name=user1 domain_id=b62e76fbeeff4e8fb77073f591cf211e
|
||||
salt '*' keystoneng.user_get name=02cffaa173b2460f98e40eda3748dae5
|
||||
'''
|
||||
cloud = get_openstack_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_user(**kwargs)
|
||||
|
||||
|
||||
def endpoint_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create an endpoint
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.endpoint_create interface=admin service=glance url=https://example.org:9292
|
||||
salt '*' keystoneng.endpoint_create interface=public service=glance region=RegionOne url=https://example.org:9292
|
||||
salt '*' keystoneng.endpoint_create interface=admin service=glance url=https://example.org:9292 enabled=True
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_endpoint(**kwargs)
|
||||
|
||||
|
||||
def endpoint_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete an endpoint
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.endpoint_delete id=3bee4bd8c2b040ee966adfda1f0bfca9
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_endpoint(**kwargs)
|
||||
|
||||
|
||||
def endpoint_update(auth=None, **kwargs):
|
||||
'''
|
||||
Update an endpoint
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.endpoint_update endpoint_id=4f961ad09d2d48948896bbe7c6a79717 interface=public enabled=False
|
||||
salt '*' keystoneng.endpoint_update endpoint_id=4f961ad09d2d48948896bbe7c6a79717 region=newregion
|
||||
salt '*' keystoneng.endpoint_update endpoint_id=4f961ad09d2d48948896bbe7c6a79717 service_name_or_id=glance url=https://example.org:9292
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.update_endpoint(**kwargs)
|
||||
|
||||
|
||||
def endpoint_list(auth=None, **kwargs):
|
||||
'''
|
||||
List endpoints
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.endpoint_list
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_endpoints(**kwargs)
|
||||
|
||||
|
||||
def endpoint_search(auth=None, **kwargs):
|
||||
'''
|
||||
Search endpoints
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.endpoint_search
|
||||
salt '*' keystoneng.endpoint_search id=02cffaa173b2460f98e40eda3748dae5
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.search_endpoints(**kwargs)
|
||||
|
||||
|
||||
def endpoint_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single endpoint
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.endpoint_get id=02cffaa173b2460f98e40eda3748dae5
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_endpoint(**kwargs)
|
||||
|
||||
|
||||
def service_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a service
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.service_create name=glance type=image
|
||||
salt '*' keystoneng.service_create name=glance type=image description="Image"
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_service(**kwargs)
|
||||
|
||||
|
||||
def service_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a service
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.service_delete name=glance
|
||||
salt '*' keystoneng.service_delete name=39cc1327cdf744ab815331554430e8ec
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_service(**kwargs)
|
||||
|
||||
|
||||
def service_update(auth=None, **kwargs):
|
||||
'''
|
||||
Update a service
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.service_update name=cinder type=volumev2
|
||||
salt '*' keystoneng.service_update name=cinder description='new description'
|
||||
salt '*' keystoneng.service_update name=ab4d35e269f147b3ae2d849f77f5c88f enabled=False
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.update_service(**kwargs)
|
||||
|
||||
|
||||
def service_list(auth=None, **kwargs):
|
||||
'''
|
||||
List services
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.service_list
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_services(**kwargs)
|
||||
|
||||
|
||||
def service_search(auth=None, **kwargs):
|
||||
'''
|
||||
Search services
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.service_search
|
||||
salt '*' keystoneng.service_search name=glance
|
||||
salt '*' keystoneng.service_search name=135f0403f8e544dc9008c6739ecda860
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.search_services(**kwargs)
|
||||
|
||||
|
||||
def service_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single service
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.service_get name=glance
|
||||
salt '*' keystoneng.service_get name=75a5804638944b3ab54f7fbfcec2305a
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_service(**kwargs)
|
||||
|
||||
|
||||
def role_assignment_list(auth=None, **kwargs):
|
||||
'''
|
||||
List role assignments
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.role_assignment_list
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_role_assignments(**kwargs)
|
||||
|
||||
|
||||
def role_grant(auth=None, **kwargs):
|
||||
'''
|
||||
Grant a role in a project/domain to a user/group
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.role_grant name=role1 user=user1 project=project1
|
||||
salt '*' keystoneng.role_grant name=ddbe3e0ed74e4c7f8027bad4af03339d group=user1 project=project1 domain=domain1
|
||||
salt '*' keystoneng.role_grant name=ddbe3e0ed74e4c7f8027bad4af03339d group=19573afd5e4241d8b65c42215bae9704 project=1dcac318a83b4610b7a7f7ba01465548
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.grant_role(**kwargs)
|
||||
|
||||
|
||||
def role_revoke(auth=None, **kwargs):
|
||||
'''
|
||||
Revoke a role in a project/domain from a user/group
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' keystoneng.role_revoke name=role1 user=user1 project=project1
|
||||
salt '*' keystoneng.role_revoke name=ddbe3e0ed74e4c7f8027bad4af03339d group=user1 project=project1 domain=domain1
|
||||
salt '*' keystoneng.role_revoke name=ddbe3e0ed74e4c7f8027bad4af03339d group=19573afd5e4241d8b65c42215bae9704 project=1dcac318a83b4610b7a7f7ba01465548
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.revoke_role(**kwargs)
|
478
salt/modules/neutronng.py
Normal file
@ -0,0 +1,478 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Neutron module for interacting with OpenStack Neutron
|
||||
|
||||
.. versionadded:: Nitrogen
|
||||
|
||||
:depends: shade
|
||||
|
||||
Example configuration
|
||||
|
||||
.. code-block:: yaml
|
||||
neutron:
|
||||
cloud: default
|
||||
|
||||
.. code-block:: yaml
|
||||
neutron:
|
||||
auth:
|
||||
username: admin
|
||||
password: password123
|
||||
user_domain_name: mydomain
|
||||
project_name: myproject
|
||||
project_domain_name: myproject
|
||||
auth_url: https://example.org:5000/v3
|
||||
identity_api_version: 3
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
HAS_SHADE = False
|
||||
try:
|
||||
import shade
|
||||
HAS_SHADE = True
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
__virtualname__ = 'neutronng'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only load this module if shade python module is installed
|
||||
'''
|
||||
if HAS_SHADE:
|
||||
return __virtualname__
|
||||
return (False, 'The neutronng execution module failed to \
|
||||
load: shade python module is not available')
|
||||
|
||||
|
||||
def compare_changes(obj, **kwargs):
|
||||
'''
|
||||
Compare two dicts returning only keys that exist in the first dict and are
|
||||
different in the second one
|
||||
'''
|
||||
changes = {}
|
||||
for key, value in obj.items():
|
||||
if key in kwargs:
|
||||
if value != kwargs[key]:
|
||||
changes[key] = kwargs[key]
|
||||
return changes
|
||||
|
||||
|
||||
def _clean_kwargs(keep_name=False, **kwargs):
|
||||
'''
|
||||
Sanitize the arguments for use with shade
|
||||
'''
|
||||
if 'name' in kwargs and not keep_name:
|
||||
kwargs['name_or_id'] = kwargs.pop('name')
|
||||
|
||||
return __utils__['args.clean_kwargs'](**kwargs)
|
||||
|
||||
|
||||
def setup_clouds(auth=None):
|
||||
'''
|
||||
Call functions to create Shade cloud objects in __context__ to take
|
||||
advantage of Shade's in-memory caching across several states
|
||||
'''
|
||||
get_operator_cloud(auth)
|
||||
get_openstack_cloud(auth)
|
||||
|
||||
|
||||
def get_operator_cloud(auth=None):
|
||||
'''
|
||||
Return an operator_cloud
|
||||
'''
|
||||
if auth is None:
|
||||
auth = __salt__['config.option']('neutron', {})
|
||||
if 'shade_opcloud' in __context__:
|
||||
if __context__['shade_opcloud'].auth == auth:
|
||||
return __context__['shade_opcloud']
|
||||
__context__['shade_opcloud'] = shade.operator_cloud(**auth)
|
||||
return __context__['shade_opcloud']
|
||||
|
||||
|
||||
def get_openstack_cloud(auth=None):
|
||||
'''
|
||||
Return an openstack_cloud
|
||||
'''
|
||||
if auth is None:
|
||||
auth = __salt__['config.option']('neutron', {})
|
||||
if 'shade_oscloud' in __context__:
|
||||
if __context__['shade_oscloud'].auth == auth:
|
||||
return __context__['shade_oscloud']
|
||||
__context__['shade_oscloud'] = shade.openstack_cloud(**auth)
|
||||
return __context__['shade_oscloud']
|
||||
|
||||
|
||||
def network_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a network
|
||||
|
||||
Parameters:
|
||||
Defaults: shared=False, admin_state_up=True, external=False,
|
||||
provider=None, project_id=None
|
||||
|
||||
name (string): Name of the network being created.
|
||||
shared (bool): Set the network as shared.
|
||||
admin_state_up (bool): Set the network administrative state to up.
|
||||
external (bool): Whether this network is externally accessible.
|
||||
provider (dict): A dict of network provider options.
|
||||
project_id (string): Specify the project ID this network will be created on.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.network_create name=network2 \
|
||||
shared=True admin_state_up=True external=True
|
||||
|
||||
salt '*' neutronng.network_create name=network3 \
|
||||
provider='{"network_type": "vlan",\
|
||||
"segmentation_id": "4010",\
|
||||
"physical_network": "provider"}' \
|
||||
project_id=1dcac318a83b4610b7a7f7ba01465548
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_network(**kwargs)
|
||||
|
||||
|
||||
def network_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a network
|
||||
|
||||
Parameters:
|
||||
name: Name or ID of the network being deleted.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.network_delete name=network1
|
||||
salt '*' neutronng.network_delete \
|
||||
name=1dcac318a83b4610b7a7f7ba01465548
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_network(**kwargs)
|
||||
|
||||
|
||||
def list_networks(auth=None, **kwargs):
|
||||
'''
|
||||
List networks
|
||||
|
||||
Parameters:
|
||||
Defaults: filters=None
|
||||
|
||||
filters (dict): dict of filter conditions to push down
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.list_networks
|
||||
salt '*' neutronng.list_networks \
|
||||
filters='{"tenant_id": "1dcac318a83b4610b7a7f7ba01465548"}'
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_networks(**kwargs)
|
||||
|
||||
|
||||
def network_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single network
|
||||
|
||||
Parameters:
|
||||
Defaults: filters=None
|
||||
|
||||
filters (dict): dict of filter conditions to push down
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.network_get name=XLB4
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_network(**kwargs)
|
||||
|
||||
|
||||
def subnet_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a subnet
|
||||
|
||||
Parameters:
|
||||
Defaults: cidr=None, ip_version=4, enable_dhcp=False, subnet_name=None,
|
||||
tenant_id=None, allocation_pools=None, gateway_ip=None,
|
||||
disable_gateway_ip=False, dns_nameservers=None, host_routes=None,
|
||||
ipv6_ra_mode=None, ipv6_address_mode=None,
|
||||
use_default_subnetpool=False
|
||||
|
||||
allocation_pools:
|
||||
A list of dictionaries of the start and end addresses for allocation pools.
|
||||
|
||||
dns_nameservers: A list of DNS name servers for the subnet.
|
||||
host_routes: A list of host route dictionaries for the subnet.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.subnet_create network_name_or_id=network1
|
||||
subnet_name=subnet1
|
||||
|
||||
salt '*' neutronng.subnet_create subnet_name=subnet2\
|
||||
network_name_or_id=network2 enable_dhcp=True \
|
||||
allocation_pools='[{"start": "192.168.199.2",\
|
||||
"end": "192.168.199.254"}]'\
|
||||
gateway_ip='192.168.199.1' cidr=192.168.199.0/24
|
||||
|
||||
salt '*' neutronng.subnet_create network_name_or_id=network1 \
|
||||
subnet_name=subnet1 dns_nameservers='["8.8.8.8", "8.8.8.7"]'
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.create_subnet(**kwargs)
|
||||
|
||||
|
||||
def subnet_update(auth=None, **kwargs):
|
||||
'''
|
||||
Update a subnet
|
||||
|
||||
Parameters:
|
||||
Defaults: subnet_name=None, enable_dhcp=None, gateway_ip=None,\
|
||||
disable_gateway_ip=None, allocation_pools=None, \
|
||||
dns_nameservers=None, host_routes=None
|
||||
|
||||
name: Name or ID of the subnet to update.
|
||||
subnet_name: The new name of the subnet.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.subnet_update name=subnet1 subnet_name=subnet2
|
||||
salt '*' neutronng.subnet_update name=subnet1\
|
||||
dns_nameservers='["8.8.8.8", "8.8.8.7"]'
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.update_subnet(**kwargs)
|
||||
|
||||
|
||||
def subnet_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a subnet
|
||||
|
||||
Parameters:
|
||||
name: Name or ID of the subnet to update.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.subnet_delete name=subnet1
|
||||
salt '*' neutronng.subnet_delete \
|
||||
name=1dcac318a83b4610b7a7f7ba01465548
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_subnet(**kwargs)
|
||||
|
||||
|
||||
def list_subnets(auth=None, **kwargs):
|
||||
'''
|
||||
List subnets
|
||||
|
||||
Parameters:
|
||||
Defaults: filters=None
|
||||
|
||||
filters (dict): dict of filter conditions to push down
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.list_subnets
|
||||
salt '*' neutronng.list_subnets \
|
||||
filters='{"tenant_id": "1dcac318a83b4610b7a7f7ba01465548"}'
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.list_subnets(**kwargs)
|
||||
|
||||
|
||||
def subnet_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single subnet
|
||||
|
||||
Parameters:
|
||||
Defaults: filters=None
|
||||
|
||||
filters (dict): dict of filter conditions to push down
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.subnet_get name=subnet1
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_subnet(**kwargs)
|
||||
|
||||
|
||||
def security_group_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a security group. Use security_group_get to create default.
|
||||
|
||||
Parameters:
|
||||
Defaults: project_id=None
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.security_group_create name=secgroup1 \
|
||||
description="Very secure security group"
|
||||
salt '*' neutronng.security_group_create name=secgroup1 \
|
||||
description="Very secure security group" \
|
||||
project_id=1dcac318a83b4610b7a7f7ba01465548
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.create_security_group(**kwargs)
|
||||
|
||||
|
||||
def security_group_update(secgroup=None, auth=None, **kwargs):
|
||||
'''
|
||||
Update a security group
|
||||
|
||||
secgroup: Name, ID or Raw Object of the security group to update.
|
||||
name: New name for the security group.
|
||||
description: New description for the security group.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.security_group_update secgroup=secgroup1 \
|
||||
description="Very secure security group"
|
||||
salt '*' neutronng.security_group_update secgroup=secgroup1 \
|
||||
description="Very secure security group" \
|
||||
project_id=1dcac318a83b4610b7a7f7ba01465548
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(keep_name=True, **kwargs)
|
||||
return cloud.update_security_group(secgroup, **kwargs)
|
||||
|
||||
|
||||
def security_group_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a security group
|
||||
|
||||
Parameters:
|
||||
name: The name or unique ID of the security group.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.security_group_delete name=secgroup1
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_security_group(**kwargs)
|
||||
|
||||
|
||||
def security_group_get(auth=None, **kwargs):
|
||||
'''
|
||||
Get a single security group. This will create a default security group
|
||||
if one does not exist yet for a particular project id.
|
||||
|
||||
Parameters:
|
||||
Defaults: filters=None
|
||||
|
||||
filters (dict): dict of filter conditions to push down
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.security_group_get \
|
||||
name=1dcac318a83b4610b7a7f7ba01465548
|
||||
|
||||
salt '*' neutronng.security_group_get \
|
||||
name=default\
|
||||
filters='{"tenant_id":"2e778bb64ca64a199eb526b5958d8710"}'
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.get_security_group(**kwargs)
|
||||
|
||||
|
||||
def security_group_rule_create(auth=None, **kwargs):
|
||||
'''
|
||||
Create a rule in a security group
|
||||
|
||||
Parameters:
|
||||
Defaults: port_range_min=None, port_range_max=None, protocol=None,
|
||||
remote_ip_prefix=None, remote_group_id=None, direction='ingress',
|
||||
ethertype='IPv4', project_id=None
|
||||
|
||||
secgroup_name_or_id:
|
||||
This is the Name or Id of security group you want to create a rule in.
|
||||
However, it throws errors on non-unique security group names like
|
||||
'default' even when you supply a project_id
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.security_group_rule_create\
|
||||
secgroup_name_or_id=secgroup1
|
||||
|
||||
salt '*' neutronng.security_group_rule_create\
|
||||
secgroup_name_or_id=secgroup2 port_range_min=8080\
|
||||
port_range_max=8080 direction='egress'
|
||||
|
||||
salt '*' neutronng.security_group_rule_create\
|
||||
secgroup_name_or_id=c0e1d1ce-7296-405e-919d-1c08217be529\
|
||||
protocol=icmp project_id=1dcac318a83b4610b7a7f7ba01465548
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.create_security_group_rule(**kwargs)
|
||||
|
||||
|
||||
def security_group_rule_delete(auth=None, **kwargs):
|
||||
'''
|
||||
Delete a security group
|
||||
|
||||
Parameters:
|
||||
rule_id (string): The unique ID of the security group rule.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' neutronng.security_group_rule_delete\
|
||||
rule_id=1dcac318a83b4610b7a7f7ba01465548
|
||||
|
||||
'''
|
||||
cloud = get_operator_cloud(auth)
|
||||
kwargs = _clean_kwargs(**kwargs)
|
||||
return cloud.delete_security_group_rule(**kwargs)
|
@ -237,7 +237,7 @@ def items(*args, **kwargs):
|
||||
pillarenv = kwargs.get('pillarenv')
|
||||
if pillarenv is None:
|
||||
if __opts__.get('pillarenv_from_saltenv', False):
|
||||
pillarenv = kwargs.get('saltenv') or __opts__['environment']
|
||||
pillarenv = kwargs.get('saltenv') or __opts__['saltenv']
|
||||
else:
|
||||
pillarenv = __opts__['pillarenv']
|
||||
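
A small sketch of the fallback order above: an explicit ``pillarenv`` kwarg wins, then ``saltenv`` when ``pillarenv_from_saltenv`` is set, then the configured ``pillarenv``. The opts values are illustrative:

.. code-block:: python

    opts = {'pillarenv_from_saltenv': True, 'saltenv': 'dev', 'pillarenv': None}

    def resolve_pillarenv(opts, saltenv=None, pillarenv=None):
        if pillarenv is not None:
            return pillarenv
        if opts.get('pillarenv_from_saltenv', False):
            return saltenv or opts['saltenv']
        return opts['pillarenv']

    print(resolve_pillarenv(opts))                    # 'dev'
    print(resolve_pillarenv(opts, saltenv='qa'))      # 'qa'
    print(resolve_pillarenv(opts, pillarenv='prod'))  # 'prod'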
|
||||
@ -468,7 +468,7 @@ def ext(external, pillar=None):
|
||||
__opts__,
|
||||
__grains__,
|
||||
__opts__['id'],
|
||||
__opts__['environment'],
|
||||
__opts__['saltenv'],
|
||||
ext=external,
|
||||
pillar_override=pillar)
|
||||
|
||||
|
@ -509,7 +509,7 @@ class SaltCheck(object):
|
||||
# state cache should be updated before running this method
|
||||
search_list = []
|
||||
cachedir = __opts__.get('cachedir', None)
|
||||
environment = __opts__['environment']
|
||||
environment = __opts__['saltenv']
|
||||
if environment:
|
||||
path = cachedir + os.sep + "files" + os.sep + environment
|
||||
search_list.append(path)
|
||||
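
For reference, the state-cache search path assembled above, using an assumed default ``cachedir`` and the ``base`` saltenv (both values illustrative):

.. code-block:: python

    import os

    cachedir = '/var/cache/salt/minion'   # assumed default, not taken from the diff
    environment = 'base'
    path = cachedir + os.sep + "files" + os.sep + environment
    print(path)   # /var/cache/salt/minion/files/base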
|
@ -474,7 +474,7 @@ def sync_returners(saltenv=None, refresh=True, extmod_whitelist=None, extmod_bla
|
||||
'''
|
||||
.. versionadded:: 0.10.0
|
||||
|
||||
Sync beacons from ``salt://_returners`` to the minion
|
||||
Sync returners from ``salt://_returners`` to the minion
|
||||
|
||||
saltenv
|
||||
The fileserver environment from which to sync. To sync from more than
|
||||
@ -666,7 +666,7 @@ def sync_clouds(saltenv=None, refresh=True, extmod_whitelist=None, extmod_blackl
|
||||
'''
|
||||
.. versionadded:: 2017.7.0
|
||||
|
||||
Sync utility modules from ``salt://_cloud`` to the minion
|
||||
Sync cloud modules from ``salt://_cloud`` to the minion
|
||||
|
||||
saltenv : base
|
||||
The fileserver environment from which to sync. To sync from more than
|
||||
|
@ -10,6 +10,7 @@ Module for managing the Salt schedule on a minion
|
||||
from __future__ import absolute_import
|
||||
import copy as pycopy
|
||||
import difflib
|
||||
import logging
|
||||
import os
|
||||
import yaml
|
||||
|
||||
@ -23,7 +24,6 @@ from salt.ext import six
|
||||
|
||||
__proxyenabled__ = ['*']
|
||||
|
||||
import logging
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
__func_alias__ = {
|
||||
@ -58,6 +58,7 @@ SCHEDULE_CONF = [
|
||||
'return_config',
|
||||
'return_kwargs',
|
||||
'run_on_start'
|
||||
'skip_during_range',
|
||||
]
|
||||
|
||||
|
||||
@ -353,7 +354,7 @@ def build_schedule_item(name, **kwargs):
|
||||
|
||||
for item in ['range', 'when', 'once', 'once_fmt', 'cron',
|
||||
'returner', 'after', 'return_config', 'return_kwargs',
|
||||
'until', 'run_on_start']:
|
||||
'until', 'run_on_start', 'skip_during_range']:
|
||||
if item in kwargs:
|
||||
schedule[name][item] = kwargs[item]
|
||||
|
||||
@ -951,3 +952,191 @@ def copy(name, target, **kwargs):
|
||||
ret['minions'] = minions
|
||||
return ret
|
||||
return ret
|
||||
|
||||
|
||||
def postpone_job(name, current_time, new_time, **kwargs):
|
||||
'''
|
||||
Postpone a job in the minion's schedule
|
||||
|
||||
Current time and new time should be specified as Unix timestamps
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' schedule.postpone_job job current_time new_time
|
||||
'''
|
||||
|
||||
ret = {'comment': [],
|
||||
'result': True}
|
||||
|
||||
if not name:
|
||||
ret['comment'] = 'Job name is required.'
|
||||
ret['result'] = False
|
||||
return ret
|
||||
|
||||
if not current_time:
|
||||
ret['comment'] = 'Job current time is required.'
|
||||
ret['result'] = False
|
||||
return ret
|
||||
else:
|
||||
if not isinstance(current_time, six.integer_types):
|
||||
ret['comment'] = 'Job current time must be an integer.'
|
||||
ret['result'] = False
|
||||
return ret
|
||||
|
||||
if not new_time:
|
||||
ret['comment'] = 'Job new_time is required.'
|
||||
ret['result'] = False
|
||||
return ret
|
||||
else:
|
||||
if not isinstance(new_time, six.integer_types):
|
||||
ret['comment'] = 'Job new time must be an integer.'
|
||||
ret['result'] = False
|
||||
return ret
|
||||
|
||||
if 'test' in __opts__ and __opts__['test']:
|
||||
ret['comment'] = 'Job: {0} would be postponed in schedule.'.format(name)
|
||||
else:
|
||||
|
||||
if name in list_(show_all=True, where='opts', return_yaml=False):
|
||||
event_data = {'name': name,
|
||||
'time': current_time,
|
||||
'new_time': new_time,
|
||||
'func': 'postpone_job'}
|
||||
elif name in list_(show_all=True, where='pillar', return_yaml=False):
|
||||
event_data = {'name': name,
|
||||
'time': current_time,
|
||||
'new_time': new_time,
|
||||
'where': 'pillar',
|
||||
'func': 'postpone_job'}
|
||||
else:
|
||||
ret['comment'] = 'Job {0} does not exist.'.format(name)
|
||||
ret['result'] = False
|
||||
return ret
|
||||
|
||||
try:
|
||||
eventer = salt.utils.event.get_event('minion', opts=__opts__)
|
||||
res = __salt__['event.fire'](event_data, 'manage_schedule')
|
||||
if res:
|
||||
event_ret = eventer.get_event(tag='/salt/minion/minion_schedule_postpone_job_complete', wait=30)
|
||||
if event_ret and event_ret['complete']:
|
||||
schedule = event_ret['schedule']
|
||||
# check item exists in schedule and is enabled
|
||||
if name in schedule and schedule[name]['enabled']:
|
||||
ret['result'] = True
|
||||
ret['comment'] = 'Postponed Job {0} in schedule.'.format(name)
|
||||
else:
|
||||
ret['result'] = False
|
||||
ret['comment'] = 'Failed to postpone job {0} in schedule.'.format(name)
|
||||
return ret
|
||||
except KeyError:
|
||||
# Effectively a no-op, since we can't really return without an event system
|
||||
ret['comment'] = 'Event module not available. Schedule postpone job failed.'
|
||||
return ret
|
||||
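
``current_time`` and ``new_time`` are plain Unix timestamps. A hedged sketch of how a caller might build them for the CLI example above (the job name ``highstate`` is made up):

.. code-block:: python

    import datetime
    import time

    # postpone the run scheduled for 03:00 on 2017-11-01 by one hour
    current_time = int(time.mktime(datetime.datetime(2017, 11, 1, 3, 0).timetuple()))
    new_time = current_time + 3600
    print("salt '*' schedule.postpone_job highstate {0} {1}".format(current_time, new_time))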
|
||||
|
||||
def skip_job(name, time, **kwargs):
|
||||
'''
|
||||
Skip a job in the minion's schedule at specified time.
|
||||
|
||||
Time to skip should be specified as Unix timestamps
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' schedule.skip_job job time
|
||||
'''
|
||||
|
||||
ret = {'comment': [],
|
||||
'result': True}
|
||||
|
||||
if not name:
|
||||
ret['comment'] = 'Job name is required.'
|
||||
ret['result'] = False
|
||||
|
||||
if not time:
|
||||
ret['comment'] = 'Job time is required.'
|
||||
ret['result'] = False
|
||||
|
||||
if 'test' in __opts__ and __opts__['test']:
|
||||
ret['comment'] = 'Job: {0} would be skipped in schedule.'.format(name)
|
||||
else:
|
||||
|
||||
if name in list_(show_all=True, where='opts', return_yaml=False):
|
||||
event_data = {'name': name,
|
||||
'time': time,
|
||||
'func': 'skip_job'}
|
||||
elif name in list_(show_all=True, where='pillar', return_yaml=False):
|
||||
event_data = {'name': name,
|
||||
'time': time,
|
||||
'where': 'pillar',
|
||||
'func': 'skip_job'}
|
||||
else:
|
||||
ret['comment'] = 'Job {0} does not exist.'.format(name)
|
||||
ret['result'] = False
|
||||
return ret
|
||||
|
||||
try:
|
||||
eventer = salt.utils.event.get_event('minion', opts=__opts__)
|
||||
res = __salt__['event.fire'](event_data, 'manage_schedule')
|
||||
if res:
|
||||
event_ret = eventer.get_event(tag='/salt/minion/minion_schedule_skip_job_complete', wait=30)
|
||||
if event_ret and event_ret['complete']:
|
||||
schedule = event_ret['schedule']
|
||||
# check item exists in schedule and is enabled
|
||||
if name in schedule and schedule[name]['enabled']:
|
||||
ret['result'] = True
|
||||
ret['comment'] = 'Added Skip Job {0} in schedule.'.format(name)
|
||||
else:
|
||||
ret['result'] = False
|
||||
ret['comment'] = 'Failed to skip job {0} in schedule.'.format(name)
|
||||
return ret
|
||||
except KeyError:
|
||||
# Effectively a no-op, since we can't really return without an event system
|
||||
ret['comment'] = 'Event module not available. Schedule skip job failed.'
|
||||
return ret
|
||||
|
||||
|
||||
def show_next_fire_time(name, **kwargs):
|
||||
'''
|
||||
Show the next fire time for scheduled job
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' schedule.show_next_fire_time job_name
|
||||
|
||||
'''
|
||||
|
||||
ret = {'comment': [],
|
||||
'result': True}
|
||||
|
||||
if not name:
|
||||
ret['comment'] = 'Job name is required.'
|
||||
ret['result'] = False
|
||||
|
||||
try:
|
||||
event_data = {'name': name, 'func': 'get_next_fire_time'}
|
||||
eventer = salt.utils.event.get_event('minion', opts=__opts__)
|
||||
res = __salt__['event.fire'](event_data,
|
||||
'manage_schedule')
|
||||
if res:
|
||||
event_ret = eventer.get_event(tag='/salt/minion/minion_schedule_next_fire_time_complete', wait=30)
|
||||
except KeyError:
|
||||
# Effectively a no-op, since we can't really return without an event system
|
||||
ret = {}
|
||||
ret['comment'] = 'Event module not available. Schedule show next fire time failed.'
|
||||
ret['result'] = True
|
||||
log.debug(ret['comment'])
|
||||
return ret
|
||||
|
||||
return event_ret
|
||||
|
@ -43,6 +43,7 @@ from salt.runners.state import orchestrate as _orchestrate
|
||||
|
||||
# Import 3rd-party libs
|
||||
from salt.ext import six
|
||||
import msgpack
|
||||
|
||||
__proxyenabled__ = ['*']
|
||||
|
||||
@ -165,6 +166,99 @@ def _snapper_post(opts, jid, pre_num):
|
||||
log.error('Failed to create snapper pre snapshot for jid: {0}'.format(jid))
|
||||
|
||||
|
||||
def pause(jid, state_id=None, duration=None):
|
||||
'''
|
||||
Set up a state id pause, this instructs a running state to pause at a given
|
||||
state id. This needs to pass in the jid of the running state and can
|
||||
optionally pass in a duration in seconds. If a state_id is not passed then
|
||||
the jid referenced will be paused at the begining of the next state run.
|
||||
|
||||
The given state id is the id of a given state execution, so given a state
|
||||
that looks like this:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
vim:
|
||||
pkg.installed: []
|
||||
|
||||
The state_id to pass to `pause` is `vim`
|
||||
|
||||
CLI Examples:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' state.pause 20171130110407769519
|
||||
salt '*' state.pause 20171130110407769519 vim
|
||||
salt '*' state.pause 20171130110407769519 vim 20
|
||||
'''
|
||||
jid = str(jid)
|
||||
if state_id is None:
|
||||
state_id = '__all__'
|
||||
pause_dir = os.path.join(__opts__[u'cachedir'], 'state_pause')
|
||||
pause_path = os.path.join(pause_dir, jid)
|
||||
if not os.path.exists(pause_dir):
|
||||
try:
|
||||
os.makedirs(pause_dir)
|
||||
except OSError:
|
||||
# File created in the gap
|
||||
pass
|
||||
data = {}
|
||||
if os.path.exists(pause_path):
|
||||
with salt.utils.files.fopen(pause_path, 'rb') as fp_:
|
||||
data = msgpack.loads(fp_.read())
|
||||
if state_id not in data:
|
||||
data[state_id] = {}
|
||||
if duration:
|
||||
data[state_id]['duration'] = int(duration)
|
||||
with salt.utils.files.fopen(pause_path, 'wb') as fp_:
|
||||
fp_.write(msgpack.dumps(data))
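# Illustrative sketch (not part of this diff): what the pause file written
# above contains. The jid, cachedir and state id are hypothetical examples.
import msgpack

pause_path = '/var/cache/salt/minion/state_pause/20171130110407769519'
with open(pause_path, 'rb') as fp_:
    data = msgpack.loads(fp_.read())
# e.g. {'vim': {'duration': 20}}  -> pause at state id 'vim' for 20 seconds
#      {'__all__': {}}            -> pause before the next state in the run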
|
||||
|
||||
|
||||
def resume(jid, state_id=None):
|
||||
'''
|
||||
Remove a pause from a jid, allowing it to continue. If the state_id is
|
||||
not specified then a general pause will be resumed.
|
||||
|
||||
The given state_id is the id of a given state execution, so given a state
|
||||
that looks like this:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
vim:
|
||||
pkg.installed: []
|
||||
|
||||
The state_id to pass to `rm_pause` is `vim`
|
||||
|
||||
CLI Examples:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' state.resume 20171130110407769519
|
||||
salt '*' state.resume 20171130110407769519 vim
|
||||
'''
|
||||
jid = str(jid)
|
||||
if state_id is None:
|
||||
state_id = '__all__'
|
||||
pause_dir = os.path.join(__opts__[u'cachedir'], 'state_pause')
|
||||
pause_path = os.path.join(pause_dir, jid)
|
||||
if not os.path.exists(pause_dir):
|
||||
try:
|
||||
os.makedirs(pause_dir)
|
||||
except OSError:
|
||||
# File created in the gap
|
||||
pass
|
||||
data = {}
|
||||
if os.path.exists(pause_path):
|
||||
with salt.utils.files.fopen(pause_path, 'rb') as fp_:
|
||||
data = msgpack.loads(fp_.read())
|
||||
else:
|
||||
return True
|
||||
if state_id in data:
|
||||
data.pop(state_id)
|
||||
with salt.utils.files.fopen(pause_path, 'wb') as fp_:
|
||||
fp_.write(msgpack.dumps(data))
|
||||
|
||||
|
||||
def orchestrate(mods,
|
||||
saltenv='base',
|
||||
test=None,
|
||||
@ -270,10 +364,14 @@ def _get_opts(**kwargs):
|
||||
|
||||
if 'saltenv' in kwargs:
|
||||
saltenv = kwargs['saltenv']
|
||||
if saltenv is not None and not isinstance(saltenv, six.string_types):
|
||||
opts['environment'] = str(kwargs['saltenv'])
|
||||
else:
|
||||
opts['environment'] = kwargs['saltenv']
|
||||
if saltenv is not None:
|
||||
if not isinstance(saltenv, six.string_types):
|
||||
saltenv = six.text_type(saltenv)
|
||||
if opts['lock_saltenv'] and saltenv != opts['saltenv']:
|
||||
raise CommandExecutionError(
|
||||
'lock_saltenv is enabled, saltenv cannot be changed'
|
||||
)
|
||||
opts['saltenv'] = kwargs['saltenv']
|
||||
|
||||
if 'pillarenv' in kwargs or opts.get('pillarenv_from_saltenv', False):
|
||||
pillarenv = kwargs.get('pillarenv') or kwargs.get('saltenv')
|
||||
@ -840,7 +938,7 @@ def highstate(test=None, queue=False, **kwargs):
|
||||
kwargs.pop('env')
|
||||
|
||||
if 'saltenv' in kwargs:
|
||||
opts['environment'] = kwargs['saltenv']
|
||||
opts['saltenv'] = kwargs['saltenv']
|
||||
|
||||
if 'pillarenv' in kwargs:
|
||||
opts['pillarenv'] = kwargs['pillarenv']
|
||||
@ -1032,8 +1130,8 @@ def sls(mods, test=None, exclude=None, queue=False, **kwargs):
|
||||
|
||||
# Since this is running a specific SLS file (or files), fall back to the
|
||||
# 'base' saltenv if none is configured and none was passed.
|
||||
if opts['environment'] is None:
|
||||
opts['environment'] = 'base'
|
||||
if opts['saltenv'] is None:
|
||||
opts['saltenv'] = 'base'
|
||||
|
||||
pillar_override = kwargs.get('pillar')
|
||||
pillar_enc = kwargs.get('pillar_enc')
|
||||
@ -1089,7 +1187,7 @@ def sls(mods, test=None, exclude=None, queue=False, **kwargs):
|
||||
st_.push_active()
|
||||
ret = {}
|
||||
try:
|
||||
high_, errors = st_.render_highstate({opts['environment']: mods})
|
||||
high_, errors = st_.render_highstate({opts['saltenv']: mods})
|
||||
|
||||
if errors:
|
||||
__context__['retcode'] = 1
|
||||
@ -1411,8 +1509,8 @@ def sls_id(id_, mods, test=None, queue=False, **kwargs):
|
||||
|
||||
# Since this is running a specific ID within a specific SLS file, fall back
|
||||
# to the 'base' saltenv if none is configured and none was passed.
|
||||
if opts['environment'] is None:
|
||||
opts['environment'] = 'base'
|
||||
if opts['saltenv'] is None:
|
||||
opts['saltenv'] = 'base'
|
||||
|
||||
pillar_override = kwargs.get('pillar')
|
||||
pillar_enc = kwargs.get('pillar_enc')
|
||||
@ -1446,7 +1544,7 @@ def sls_id(id_, mods, test=None, queue=False, **kwargs):
|
||||
split_mods = mods.split(',')
|
||||
st_.push_active()
|
||||
try:
|
||||
high_, errors = st_.render_highstate({opts['environment']: split_mods})
|
||||
high_, errors = st_.render_highstate({opts['saltenv']: split_mods})
|
||||
finally:
|
||||
st_.pop_active()
|
||||
errors += st_.state.verify_high(high_)
|
||||
@ -1472,7 +1570,7 @@ def sls_id(id_, mods, test=None, queue=False, **kwargs):
|
||||
if not ret:
|
||||
raise SaltInvocationError(
|
||||
'No matches for ID \'{0}\' found in SLS \'{1}\' within saltenv '
|
||||
'\'{2}\''.format(id_, mods, opts['environment'])
|
||||
'\'{2}\''.format(id_, mods, opts['saltenv'])
|
||||
)
|
||||
return ret
|
||||
|
||||
@ -1523,8 +1621,8 @@ def show_low_sls(mods, test=None, queue=False, **kwargs):
|
||||
|
||||
# Since this is dealing with a specific SLS file (or files), fall back to
|
||||
# the 'base' saltenv if none is configured and none was passed.
|
||||
if opts['environment'] is None:
|
||||
opts['environment'] = 'base'
|
||||
if opts['saltenv'] is None:
|
||||
opts['saltenv'] = 'base'
|
||||
|
||||
pillar_override = kwargs.get('pillar')
|
||||
pillar_enc = kwargs.get('pillar_enc')
|
||||
@ -1555,7 +1653,7 @@ def show_low_sls(mods, test=None, queue=False, **kwargs):
|
||||
mods = mods.split(',')
|
||||
st_.push_active()
|
||||
try:
|
||||
high_, errors = st_.render_highstate({opts['environment']: mods})
|
||||
high_, errors = st_.render_highstate({opts['saltenv']: mods})
|
||||
finally:
|
||||
st_.pop_active()
|
||||
errors += st_.state.verify_high(high_)
|
||||
@ -1594,7 +1692,7 @@ def show_sls(mods, test=None, queue=False, **kwargs):
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' state.show_sls core,edit.vim dev
|
||||
salt '*' state.show_sls core,edit.vim saltenv=dev
|
||||
'''
|
||||
if 'env' in kwargs:
|
||||
# "env" is not supported; Use "saltenv".
|
||||
@ -1610,8 +1708,8 @@ def show_sls(mods, test=None, queue=False, **kwargs):
|
||||
|
||||
# Since this is dealing with a specific SLS file (or files), fall back to
|
||||
# the 'base' saltenv if none is configured and none was passed.
|
||||
if opts['environment'] is None:
|
||||
opts['environment'] = 'base'
|
||||
if opts['saltenv'] is None:
|
||||
opts['saltenv'] = 'base'
|
||||
|
||||
pillar_override = kwargs.get('pillar')
|
||||
pillar_enc = kwargs.get('pillar_enc')
|
||||
@ -1644,7 +1742,7 @@ def show_sls(mods, test=None, queue=False, **kwargs):
|
||||
mods = mods.split(',')
|
||||
st_.push_active()
|
||||
try:
|
||||
high_, errors = st_.render_highstate({opts['environment']: mods})
|
||||
high_, errors = st_.render_highstate({opts['saltenv']: mods})
|
||||
finally:
|
||||
st_.pop_active()
|
||||
errors += st_.state.verify_high(high_)
|
||||
@ -1810,6 +1908,7 @@ def pkg(pkg_path,
|
||||
salt '*' state.pkg /tmp/salt_state.tgz 760a9353810e36f6d81416366fc426dc md5
|
||||
'''
|
||||
# TODO - Add ability to download from salt master or other source
|
||||
popts = _get_opts(**kwargs)
|
||||
if not os.path.isfile(pkg_path):
|
||||
return {}
|
||||
if not salt.utils.hashutils.get_hash(pkg_path, hash_type) == pkg_sum:
|
||||
@ -1844,7 +1943,6 @@ def pkg(pkg_path,
|
||||
with salt.utils.files.fopen(roster_grains_json, 'r') as fp_:
|
||||
roster_grains = json.load(fp_, object_hook=salt.utils.data.decode_dict)
|
||||
|
||||
popts = _get_opts(**kwargs)
|
||||
if os.path.isfile(roster_grains_json):
|
||||
popts['grains'] = roster_grains
|
||||
popts['fileclient'] = 'local'
|
||||
|
File diff suppressed because it is too large
Load Diff
@ -1218,41 +1218,55 @@ def mkdir(path,
|
||||
owner=None,
|
||||
grant_perms=None,
|
||||
deny_perms=None,
|
||||
inheritance=True):
|
||||
inheritance=True,
|
||||
reset=False):
|
||||
'''
|
||||
Ensure that the directory is available and permissions are set.
|
||||
|
||||
Args:
|
||||
|
||||
path (str): The full path to the directory.
|
||||
path (str):
|
||||
The full path to the directory.
|
||||
|
||||
owner (str): The owner of the directory. If not passed, it will be the
|
||||
account that created the directory, likely SYSTEM
|
||||
owner (str):
|
||||
The owner of the directory. If not passed, it will be the account
|
||||
that created the directory, likely SYSTEM
|
||||
|
||||
grant_perms (dict): A dictionary containing the user/group and the basic
|
||||
permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
|
||||
You can also set the ``applies_to`` setting here. The default is
|
||||
``this_folder_subfolders_files``. Specify another ``applies_to`` setting
|
||||
like this:
|
||||
grant_perms (dict):
|
||||
A dictionary containing the user/group and the basic permissions to
|
||||
grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
|
||||
set the ``applies_to`` setting here. The default is
|
||||
``this_folder_subfolders_files``. Specify another ``applies_to``
|
||||
setting like this:
|
||||
|
||||
.. code-block:: yaml
|
||||
.. code-block:: yaml
|
||||
|
||||
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
|
||||
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
|
||||
|
||||
To set advanced permissions use a list for the ``perms`` parameter, ie:
|
||||
To set advanced permissions use a list for the ``perms`` parameter,
|
||||
ie:
|
||||
|
||||
.. code-block:: yaml
|
||||
.. code-block:: yaml
|
||||
|
||||
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
|
||||
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
|
||||
|
||||
deny_perms (dict): A dictionary containing the user/group and
|
||||
permissions to deny along with the ``applies_to`` setting. Use the same
|
||||
format used for the ``grant_perms`` parameter. Remember, deny
|
||||
permissions supersede grant permissions.
|
||||
deny_perms (dict):
|
||||
A dictionary containing the user/group and permissions to deny along
|
||||
with the ``applies_to`` setting. Use the same format used for the
|
||||
``grant_perms`` parameter. Remember, deny permissions supersede
|
||||
grant permissions.
|
||||
|
||||
inheritance (bool): If True the object will inherit permissions from the
|
||||
parent, if False, inheritance will be disabled. Inheritance setting will
|
||||
not apply to parent directories if they must be created
|
||||
inheritance (bool):
|
||||
If True the object will inherit permissions from the parent, if
|
||||
``False``, inheritance will be disabled. Inheritance setting will
|
||||
not apply to parent directories if they must be created.
|
||||
|
||||
reset (bool):
|
||||
If ``True`` the existing DACL will be cleared and replaced with the
|
||||
settings defined in this function. If ``False``, new entries will be
|
||||
appended to the existing DACL. Default is ``False``.
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
Returns:
|
||||
bool: True if successful
|
||||
@ -1289,10 +1303,16 @@ def mkdir(path,
|
||||
|
||||
# Set owner
|
||||
if owner:
|
||||
salt.utils.win_dacl.set_owner(path, owner)
|
||||
salt.utils.win_dacl.set_owner(obj_name=path, principal=owner)
|
||||
|
||||
# Set permissions
|
||||
set_perms(path, grant_perms, deny_perms, inheritance)
|
||||
set_perms(
|
||||
path=path,
|
||||
grant_perms=grant_perms,
|
||||
deny_perms=deny_perms,
|
||||
inheritance=inheritance,
|
||||
reset=reset)
|
||||
|
||||
except WindowsError as exc:
|
||||
raise CommandExecutionError(exc)
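# Illustrative sketch (not part of this diff): calling the updated mkdir with
# the new ``reset`` argument, assuming the win_file module context. The path,
# owner and group below are hypothetical examples.
demo_grant = {'Users': {'perms': 'read_execute',
                        'applies_to': 'this_folder_subfolders_files'}}
mkdir(path='C:\\Temp\\salt_demo',
      owner='Administrators',
      grant_perms=demo_grant,
      inheritance=True,
      reset=False)  # reset=True would replace the existing DACL instead of appending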
|
||||
|
||||
@ -1303,49 +1323,63 @@ def makedirs_(path,
|
||||
owner=None,
|
||||
grant_perms=None,
|
||||
deny_perms=None,
|
||||
inheritance=True):
|
||||
inheritance=True,
|
||||
reset=False):
|
||||
'''
|
||||
Ensure that the parent directory containing this path is available.
|
||||
|
||||
Args:
|
||||
|
||||
path (str): The full path to the directory.
|
||||
path (str):
|
||||
The full path to the directory.
|
||||
|
||||
owner (str): The owner of the directory. If not passed, it will be the
|
||||
account that created the directory, likely SYSTEM
|
||||
.. note::
|
||||
|
||||
grant_perms (dict): A dictionary containing the user/group and the basic
|
||||
permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
|
||||
You can also set the ``applies_to`` setting here. The default is
|
||||
``this_folder_subfolders_files``. Specify another ``applies_to`` setting
|
||||
like this:
|
||||
The path must end with a trailing slash otherwise the
|
||||
directory(s) will be created up to the parent directory. For
|
||||
example if path is ``C:\\temp\\test``, then it would be treated
|
||||
as ``C:\\temp\\`` but if the path ends with a trailing slash
|
||||
like ``C:\\temp\\test\\``, then it would be treated as
|
||||
``C:\\temp\\test\\``.
|
||||
|
||||
.. code-block:: yaml
|
||||
owner (str):
|
||||
The owner of the directory. If not passed, it will be the account
|
||||
that created the directory, likely SYSTEM
|
||||
|
||||
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
|
||||
grant_perms (dict):
|
||||
A dictionary containing the user/group and the basic permissions to
|
||||
grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
|
||||
set the ``applies_to`` setting here. The default is
|
||||
``this_folder_subfolders_files``. Specify another ``applies_to``
|
||||
setting like this:
|
||||
|
||||
To set advanced permissions use a list for the ``perms`` parameter, ie:
|
||||
.. code-block:: yaml
|
||||
|
||||
.. code-block:: yaml
|
||||
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
|
||||
|
||||
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
|
||||
To set advanced permissions use a list for the ``perms`` parameter, ie:
|
||||
|
||||
deny_perms (dict): A dictionary containing the user/group and
|
||||
permissions to deny along with the ``applies_to`` setting. Use the same
|
||||
format used for the ``grant_perms`` parameter. Remember, deny
|
||||
permissions supersede grant permissions.
|
||||
.. code-block:: yaml
|
||||
|
||||
inheritance (bool): If True the object will inherit permissions from the
|
||||
parent, if False, inheritance will be disabled. Inheritance setting will
|
||||
not apply to parent directories if they must be created
|
||||
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
|
||||
|
||||
.. note::
|
||||
deny_perms (dict):
|
||||
A dictionary containing the user/group and permissions to deny along
|
||||
with the ``applies_to`` setting. Use the same format used for the
|
||||
``grant_perms`` parameter. Remember, deny permissions supersede
|
||||
grant permissions.
|
||||
|
||||
The path must end with a trailing slash otherwise the directory(s) will
|
||||
be created up to the parent directory. For example if path is
|
||||
``C:\\temp\\test``, then it would be treated as ``C:\\temp\\`` but if
|
||||
the path ends with a trailing slash like ``C:\\temp\\test\\``, then it
|
||||
would be treated as ``C:\\temp\\test\\``.
|
||||
inheritance (bool):
|
||||
If True the object will inherit permissions from the parent, if
|
||||
False, inheritance will be disabled. Inheritance setting will not
|
||||
apply to parent directories if they must be created.
|
||||
|
||||
reset (bool):
|
||||
If ``True`` the existing DACL will be cleared and replaced with the
|
||||
settings defined in this function. If ``False``, new entries will be
|
||||
appended to the existing DACL. Default is ``False``.
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
Returns:
|
||||
bool: True if successful
|
||||
@ -1405,7 +1439,13 @@ def makedirs_(path,
|
||||
for directory_to_create in directories_to_create:
|
||||
# all directories have the user, group and mode set!!
|
||||
log.debug('Creating directory: %s', directory_to_create)
|
||||
mkdir(directory_to_create, owner, grant_perms, deny_perms, inheritance)
|
||||
mkdir(
|
||||
path=directory_to_create,
|
||||
owner=owner,
|
||||
grant_perms=grant_perms,
|
||||
deny_perms=deny_perms,
|
||||
inheritance=inheritance,
|
||||
reset=reset)
|
||||
|
||||
return True
|
||||
|
||||
@ -1414,41 +1454,54 @@ def makedirs_perms(path,
|
||||
owner=None,
|
||||
grant_perms=None,
|
||||
deny_perms=None,
|
||||
inheritance=True):
|
||||
inheritance=True,
|
||||
reset=True):
|
||||
'''
|
||||
Set owner and permissions for each directory created.
|
||||
|
||||
Args:
|
||||
|
||||
path (str): The full path to the directory.
|
||||
path (str):
|
||||
The full path to the directory.
|
||||
|
||||
owner (str): The owner of the directory. If not passed, it will be the
|
||||
account that created the directory, likely SYSTEM
|
||||
owner (str):
|
||||
The owner of the directory. If not passed, it will be the account
|
||||
that created the directory, likely SYSTEM
|
||||
|
||||
grant_perms (dict): A dictionary containing the user/group and the basic
|
||||
permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
|
||||
You can also set the ``applies_to`` setting here. The default is
|
||||
``this_folder_subfolders_files``. Specify another ``applies_to`` setting
|
||||
like this:
|
||||
grant_perms (dict):
|
||||
A dictionary containing the user/group and the basic permissions to
|
||||
grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
|
||||
set the ``applies_to`` setting here. The default is
|
||||
``this_folder_subfolders_files``. Specify another ``applies_to``
|
||||
setting like this:
|
||||
|
||||
.. code-block:: yaml
|
||||
.. code-block:: yaml
|
||||
|
||||
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
|
||||
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
|
||||
|
||||
To set advanced permissions use a list for the ``perms`` parameter, ie:
|
||||
To set advanced permissions use a list for the ``perms`` parameter, ie:
|
||||
|
||||
.. code-block:: yaml
|
||||
.. code-block:: yaml
|
||||
|
||||
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
|
||||
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
|
||||
|
||||
deny_perms (dict): A dictionary containing the user/group and
|
||||
permissions to deny along with the ``applies_to`` setting. Use the same
|
||||
format used for the ``grant_perms`` parameter. Remember, deny
|
||||
permissions supersede grant permissions.
|
||||
deny_perms (dict):
|
||||
A dictionary containing the user/group and permissions to deny along
|
||||
with the ``applies_to`` setting. Use the same format used for the
|
||||
``grant_perms`` parameter. Remember, deny permissions supersede
|
||||
grant permissions.
|
||||
|
||||
inheritance (bool): If True the object will inherit permissions from the
|
||||
parent, if False, inheritance will be disabled. Inheritance setting will
|
||||
not apply to parent directories if they must be created
|
||||
inheritance (bool):
|
||||
If ``True`` the object will inherit permissions from the parent, if
|
||||
``False``, inheritance will be disabled. Inheritance setting will
|
||||
not apply to parent directories if they must be created
|
||||
|
||||
reset (bool):
|
||||
If ``True`` the existing DACL will be cleared and replaced with the
|
||||
settings defined in this function. If ``False``, new entries will be
|
||||
appended to the existing DACL. Default is ``False``.
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
Returns:
|
||||
bool: True if successful, otherwise raise an error
|
||||
@ -1482,8 +1535,15 @@ def makedirs_perms(path,
|
||||
try:
|
||||
# Create the directory here, set inherited True because this is a
|
||||
# parent directory, the inheritance setting will only apply to the
|
||||
# child directory
|
||||
makedirs_perms(head, owner, grant_perms, deny_perms, True)
|
||||
# target directory. Reset will be False as we only want to reset
|
||||
# the permissions on the target directory
|
||||
makedirs_perms(
|
||||
path=head,
|
||||
owner=owner,
|
||||
grant_perms=grant_perms,
|
||||
deny_perms=deny_perms,
|
||||
inheritance=True,
|
||||
reset=False)
|
||||
except OSError as exc:
|
||||
# be happy if someone already created the path
|
||||
if exc.errno != errno.EEXIST:
|
||||
@ -1492,7 +1552,13 @@ def makedirs_perms(path,
|
||||
return {}
|
||||
|
||||
# Make the directory
|
||||
mkdir(path, owner, grant_perms, deny_perms, inheritance)
|
||||
mkdir(
|
||||
path=path,
|
||||
owner=owner,
|
||||
grant_perms=grant_perms,
|
||||
deny_perms=deny_perms,
|
||||
inheritance=inheritance,
|
||||
reset=reset)
|
||||
|
||||
return True
|
||||
|
||||
@ -1502,66 +1568,64 @@ def check_perms(path,
|
||||
owner=None,
|
||||
grant_perms=None,
|
||||
deny_perms=None,
|
||||
inheritance=True):
|
||||
inheritance=True,
|
||||
reset=False):
|
||||
'''
|
||||
Set owner and permissions for each directory created.
|
||||
Check owner and permissions for the passed directory. This function checks
|
||||
the permissions and sets them, returning the changes made.
|
||||
|
||||
Args:
|
||||
|
||||
path (str): The full path to the directory.
|
||||
path (str):
|
||||
The full path to the directory.
|
||||
|
||||
ret (dict): A dictionary to append changes to and return. If not passed,
|
||||
will create a new dictionary to return.
|
||||
ret (dict):
|
||||
A dictionary to append changes to and return. If not passed, will
|
||||
create a new dictionary to return.
|
||||
|
||||
owner (str): The owner of the directory. If not passed, it will be the
|
||||
account that created the directory, likely SYSTEM
|
||||
owner (str):
|
||||
The owner to set for the directory.
|
||||
|
||||
grant_perms (dict): A dictionary containing the user/group and the basic
|
||||
permissions to grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
|
||||
You can also set the ``applies_to`` setting here. The default is
|
||||
``this_folder_subfolders_files``. Specify another ``applies_to`` setting
|
||||
like this:
|
||||
grant_perms (dict):
|
||||
A dictionary containing the user/group and the basic permissions to
|
||||
check/grant, ie: ``{'user': {'perms': 'basic_permission'}}``.
|
||||
Default is ``None``.
|
||||
|
||||
.. code-block:: yaml
|
||||
deny_perms (dict):
|
||||
A dictionary containing the user/group and permissions to
|
||||
check/deny. Default is ``None``.
|
||||
|
||||
{'user': {'perms': 'full_control', 'applies_to': 'this_folder'}}
|
||||
inheritance (bool):
|
||||
``True`` will check if inheritance is enabled and enable it. ``False``
|
||||
will check if inheritance is disabled and disable it. Default is
|
||||
``True``.
|
||||
|
||||
To set advanced permissions use a list for the ``perms`` parameter, ie:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
|
||||
|
||||
deny_perms (dict): A dictionary containing the user/group and
|
||||
permissions to deny along with the ``applies_to`` setting. Use the same
|
||||
format used for the ``grant_perms`` parameter. Remember, deny
|
||||
permissions supersede grant permissions.
|
||||
|
||||
inheritance (bool): If True the object will inherit permissions from the
|
||||
parent, if False, inheritance will be disabled. Inheritance setting will
|
||||
not apply to parent directories if they must be created
|
||||
reset (bool):
|
||||
``True`` will show what permissions will be removed by resetting the
|
||||
DACL. ``False`` will do nothing. Default is ``False``.
|
||||
|
||||
Returns:
|
||||
bool: True if successful, otherwise raise an error
|
||||
dict: A dictionary of changes that have been made
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# To grant the 'Users' group 'read & execute' permissions.
|
||||
salt '*' file.check_perms C:\\Temp\\ Administrators "{'Users': {'perms': 'read_execute'}}"
|
||||
# To see changes to ``C:\\Temp`` if the 'Users' group is given 'read & execute' permissions.
|
||||
salt '*' file.check_perms C:\\Temp\\ {} Administrators "{'Users': {'perms': 'read_execute'}}"
|
||||
|
||||
# Locally using salt call
|
||||
salt-call file.check_perms C:\\Temp\\ Administrators "{'Users': {'perms': 'read_execute', 'applies_to': 'this_folder_only'}}"
|
||||
salt-call file.check_perms C:\\Temp\\ {} Administrators "{'Users': {'perms': 'read_execute', 'applies_to': 'this_folder_only'}}"
|
||||
|
||||
# Specify advanced attributes with a list
|
||||
salt '*' file.check_perms C:\\Temp\\ Administrators "{'jsnuffy': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'files_only'}}"
|
||||
salt '*' file.check_perms C:\\Temp\\ {} Administrators "{'jsnuffy': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'files_only'}}"
|
||||
'''
|
||||
path = os.path.expanduser(path)
|
||||
|
||||
if not ret:
|
||||
ret = {'name': path,
|
||||
'changes': {},
|
||||
'pchanges': {},
|
||||
'comment': [],
|
||||
'result': True}
|
||||
orig_comment = ''
|
||||
@ -1571,14 +1635,16 @@ def check_perms(path,
|
||||
|
||||
# Check owner
|
||||
if owner:
|
||||
owner = salt.utils.win_dacl.get_name(owner)
|
||||
current_owner = salt.utils.win_dacl.get_owner(path)
|
||||
owner = salt.utils.win_dacl.get_name(principal=owner)
|
||||
current_owner = salt.utils.win_dacl.get_owner(obj_name=path)
|
||||
if owner != current_owner:
|
||||
if __opts__['test'] is True:
|
||||
ret['pchanges']['owner'] = owner
|
||||
else:
|
||||
try:
|
||||
salt.utils.win_dacl.set_owner(path, owner)
|
||||
salt.utils.win_dacl.set_owner(
|
||||
obj_name=path,
|
||||
principal=owner)
|
||||
ret['changes']['owner'] = owner
|
||||
except CommandExecutionError:
|
||||
ret['result'] = False
|
||||
@ -1586,7 +1652,7 @@ def check_perms(path,
|
||||
'Failed to change owner to "{0}"'.format(owner))
|
||||
|
||||
# Check permissions
|
||||
cur_perms = salt.utils.win_dacl.get_permissions(path)
|
||||
cur_perms = salt.utils.win_dacl.get_permissions(obj_name=path)
|
||||
|
||||
# Verify Deny Permissions
|
||||
changes = {}
|
||||
@ -1594,7 +1660,7 @@ def check_perms(path,
|
||||
for user in deny_perms:
|
||||
# Check that user exists:
|
||||
try:
|
||||
user_name = salt.utils.win_dacl.get_name(user)
|
||||
user_name = salt.utils.win_dacl.get_name(principal=user)
|
||||
except CommandExecutionError:
|
||||
ret['comment'].append(
|
||||
'Deny Perms: User "{0}" missing from Target System'.format(user))
|
||||
@ -1619,7 +1685,11 @@ def check_perms(path,
|
||||
# Check Perms
|
||||
if isinstance(deny_perms[user]['perms'], six.string_types):
|
||||
if not salt.utils.win_dacl.has_permission(
|
||||
path, user, deny_perms[user]['perms'], 'deny'):
|
||||
obj_name=path,
|
||||
principal=user,
|
||||
permission=deny_perms[user]['perms'],
|
||||
access_mode='deny',
|
||||
exact=False):
|
||||
changes[user] = {'perms': deny_perms[user]['perms']}
|
||||
else:
|
||||
for perm in deny_perms[user]['perms']:
|
||||
@ -1640,9 +1710,10 @@ def check_perms(path,
|
||||
changes[user]['applies_to'] = applies_to
|
||||
|
||||
if changes:
|
||||
ret['pchanges']['deny_perms'] = {}
|
||||
ret['changes']['deny_perms'] = {}
|
||||
for user in changes:
|
||||
user_name = salt.utils.win_dacl.get_name(user)
|
||||
user_name = salt.utils.win_dacl.get_name(principal=user)
|
||||
|
||||
if __opts__['test'] is True:
|
||||
ret['pchanges']['deny_perms'][user] = changes[user]
|
||||
@ -1689,7 +1760,11 @@ def check_perms(path,
|
||||
|
||||
try:
|
||||
salt.utils.win_dacl.set_permissions(
|
||||
path, user, perms, 'deny', applies_to)
|
||||
obj_name=path,
|
||||
principal=user,
|
||||
permissions=perms,
|
||||
access_mode='deny',
|
||||
applies_to=applies_to)
|
||||
ret['changes']['deny_perms'][user] = changes[user]
|
||||
except CommandExecutionError:
|
||||
ret['result'] = False
|
||||
@ -1703,7 +1778,7 @@ def check_perms(path,
|
||||
for user in grant_perms:
|
||||
# Check that user exists:
|
||||
try:
|
||||
user_name = salt.utils.win_dacl.get_name(user)
|
||||
user_name = salt.utils.win_dacl.get_name(principal=user)
|
||||
except CommandExecutionError:
|
||||
ret['comment'].append(
|
||||
'Grant Perms: User "{0}" missing from Target System'.format(user))
|
||||
@ -1729,12 +1804,19 @@ def check_perms(path,
|
||||
# Check Perms
|
||||
if isinstance(grant_perms[user]['perms'], six.string_types):
|
||||
if not salt.utils.win_dacl.has_permission(
|
||||
path, user, grant_perms[user]['perms']):
|
||||
obj_name=path,
|
||||
principal=user,
|
||||
permission=grant_perms[user]['perms'],
|
||||
access_mode='grant'):
|
||||
changes[user] = {'perms': grant_perms[user]['perms']}
|
||||
else:
|
||||
for perm in grant_perms[user]['perms']:
|
||||
if not salt.utils.win_dacl.has_permission(
|
||||
path, user, perm, exact=False):
|
||||
obj_name=path,
|
||||
principal=user,
|
||||
permission=perm,
|
||||
access_mode='grant',
|
||||
exact=False):
|
||||
if user not in changes:
|
||||
changes[user] = {'perms': []}
|
||||
changes[user]['perms'].append(grant_perms[user]['perms'])
|
||||
@ -1750,11 +1832,12 @@ def check_perms(path,
|
||||
changes[user]['applies_to'] = applies_to
|
||||
|
||||
if changes:
|
||||
ret['pchanges']['grant_perms'] = {}
|
||||
ret['changes']['grant_perms'] = {}
|
||||
for user in changes:
|
||||
user_name = salt.utils.win_dacl.get_name(user)
|
||||
user_name = salt.utils.win_dacl.get_name(principal=user)
|
||||
if __opts__['test'] is True:
|
||||
ret['changes']['grant_perms'][user] = changes[user]
|
||||
ret['pchanges']['grant_perms'][user] = changes[user]
|
||||
else:
|
||||
applies_to = None
|
||||
if 'applies_to' not in changes[user]:
|
||||
@ -1796,7 +1879,11 @@ def check_perms(path,
|
||||
|
||||
try:
|
||||
salt.utils.win_dacl.set_permissions(
|
||||
path, user, perms, 'grant', applies_to)
|
||||
obj_name=path,
|
||||
principal=user,
|
||||
permissions=perms,
|
||||
access_mode='grant',
|
||||
applies_to=applies_to)
|
||||
ret['changes']['grant_perms'][user] = changes[user]
|
||||
except CommandExecutionError:
|
||||
ret['result'] = False
|
||||
@ -1806,12 +1893,14 @@ def check_perms(path,
|
||||
|
||||
# Check inheritance
|
||||
if inheritance is not None:
|
||||
if not inheritance == salt.utils.win_dacl.get_inheritance(path):
|
||||
if not inheritance == salt.utils.win_dacl.get_inheritance(obj_name=path):
|
||||
if __opts__['test'] is True:
|
||||
ret['changes']['inheritance'] = inheritance
|
||||
ret['pchanges']['inheritance'] = inheritance
|
||||
else:
|
||||
try:
|
||||
salt.utils.win_dacl.set_inheritance(path, inheritance)
|
||||
salt.utils.win_dacl.set_inheritance(
|
||||
obj_name=path,
|
||||
enabled=inheritance)
|
||||
ret['changes']['inheritance'] = inheritance
|
||||
except CommandExecutionError:
|
||||
ret['result'] = False
|
||||
@ -1819,6 +1908,45 @@ def check_perms(path,
|
||||
'Failed to set inheritance for "{0}" to '
|
||||
'{1}'.format(path, inheritance))
|
||||
|
||||
# Check reset
|
||||
# If reset=True, which users will be removed as a result
|
||||
if reset:
|
||||
for user_name in cur_perms:
|
||||
if user_name not in grant_perms:
|
||||
if 'grant' in cur_perms[user_name] and not \
|
||||
cur_perms[user_name]['grant']['inherited']:
|
||||
if __opts__['test'] is True:
|
||||
if 'remove_perms' not in ret['pchanges']:
|
||||
ret['pchanges']['remove_perms'] = {}
|
||||
ret['pchanges']['remove_perms'].update(
|
||||
{user_name: cur_perms[user_name]})
|
||||
else:
|
||||
if 'remove_perms' not in ret['changes']:
|
||||
ret['changes']['remove_perms'] = {}
|
||||
salt.utils.win_dacl.rm_permissions(
|
||||
obj_name=path,
|
||||
principal=user_name,
|
||||
ace_type='grant')
|
||||
ret['changes']['remove_perms'].update(
|
||||
{user_name: cur_perms[user_name]})
|
||||
if user_name not in deny_perms:
|
||||
if 'deny' in cur_perms[user_name] and not \
|
||||
cur_perms[user_name]['deny']['inherited']:
|
||||
if __opts__['test'] is True:
|
||||
if 'remove_perms' not in ret['pchanges']:
|
||||
ret['pchanges']['remove_perms'] = {}
|
||||
ret['pchanges']['remove_perms'].update(
|
||||
{user_name: cur_perms[user_name]})
|
||||
else:
|
||||
if 'remove_perms' not in ret['changes']:
|
||||
ret['changes']['remove_perms'] = {}
|
||||
salt.utils.win_dacl.rm_permissions(
|
||||
obj_name=path,
|
||||
principal=user_name,
|
||||
ace_type='deny')
|
||||
ret['changes']['remove_perms'].update(
|
||||
{user_name: cur_perms[user_name]})
|
||||
|
||||
# Re-add the Original Comment if defined
|
||||
if isinstance(orig_comment, six.string_types):
|
||||
if orig_comment:
|
||||
@ -1830,25 +1958,30 @@ def check_perms(path,
|
||||
ret['comment'] = '\n'.join(ret['comment'])
|
||||
|
||||
# Set result for test = True
|
||||
if __opts__['test'] is True and ret['changes']:
|
||||
if __opts__['test'] and (ret['changes'] or ret['pchanges']):
|
||||
ret['result'] = None
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def set_perms(path, grant_perms=None, deny_perms=None, inheritance=True):
|
||||
def set_perms(path,
|
||||
grant_perms=None,
|
||||
deny_perms=None,
|
||||
inheritance=True,
|
||||
reset=False):
|
||||
'''
|
||||
Set permissions for the given path
|
||||
|
||||
Args:
|
||||
|
||||
path (str): The full path to the directory.
|
||||
path (str):
|
||||
The full path to the directory.
|
||||
|
||||
grant_perms (dict):
|
||||
A dictionary containing the user/group and the basic permissions to
|
||||
grant, ie: ``{'user': {'perms': 'basic_permission'}}``. You can also
|
||||
set the ``applies_to`` setting here. The default is
|
||||
``this_folder_subfolders_files``. Specify another ``applies_to``
|
||||
set the ``applies_to`` setting here. The default for ``applies_to``
|
||||
is ``this_folder_subfolders_files``. Specify another ``applies_to``
|
||||
setting like this:
|
||||
|
||||
.. code-block:: yaml
|
||||
@ -1863,7 +1996,10 @@ def set_perms(path, grant_perms=None, deny_perms=None, inheritance=True):
|
||||
{'user': {'perms': ['read_attributes', 'read_ea'], 'applies_to': 'this_folder'}}
|
||||
|
||||
To see a list of available attributes and applies to settings see
|
||||
the documentation for salt.utils.win_dacl
|
||||
the documentation for salt.utils.win_dacl.
|
||||
|
||||
A value of ``None`` will make no changes to the ``grant`` portion of
|
||||
the DACL. Default is ``None``.
|
||||
|
||||
deny_perms (dict):
|
||||
A dictionary containing the user/group and permissions to deny along
|
||||
@ -1871,13 +2007,27 @@ def set_perms(path, grant_perms=None, deny_perms=None, inheritance=True):
|
||||
``grant_perms`` parameter. Remember, deny permissions supersede
|
||||
grant permissions.
|
||||
|
||||
A value of ``None`` will make no changes to the ``deny`` portion of
|
||||
the DACL. Default is ``None``.
|
||||
|
||||
inheritance (bool):
|
||||
If True the object will inherit permissions from the parent, if
|
||||
False, inheritance will be disabled. Inheritance setting will not
|
||||
apply to parent directories if they must be created
|
||||
If ``True`` the object will inherit permissions from the parent, if
|
||||
``False``, inheritance will be disabled. Inheritance setting will
|
||||
not apply to parent directories if they must be created. Default is
|
||||
``False``.
|
||||
|
||||
reset (bool):
|
||||
If ``True`` the existing DACL will be cleared and replaced with the
|
||||
settings defined in this function. If ``False``, new entries will be
|
||||
appended to the existing DACL. Default is ``False``.
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
Returns:
|
||||
bool: True if successful, otherwise raise an error
|
||||
bool: True if successful
|
||||
|
||||
Raises:
|
||||
CommandExecutionError: If unsuccessful
|
||||
|
||||
CLI Example:
|
||||
|
||||
@ -1894,11 +2044,19 @@ def set_perms(path, grant_perms=None, deny_perms=None, inheritance=True):
|
||||
'''
|
||||
ret = {}
|
||||
|
||||
# Get the DACL for the directory
|
||||
dacl = salt.utils.win_dacl.dacl(path)
|
||||
if reset:
|
||||
# Get an empty DACL
|
||||
dacl = salt.utils.win_dacl.dacl()
|
||||
|
||||
# Get current file/folder permissions
|
||||
cur_perms = salt.utils.win_dacl.get_permissions(path)
|
||||
# Get an empty perms dict
|
||||
cur_perms = {}
|
||||
|
||||
else:
|
||||
# Get the DACL for the directory
|
||||
dacl = salt.utils.win_dacl.dacl(path)
|
||||
|
||||
# Get current file/folder permissions
|
||||
cur_perms = salt.utils.win_dacl.get_permissions(path)
|
||||
|
||||
# Set 'deny' perms if any
|
||||
if deny_perms is not None:
|
||||
|
@ -279,11 +279,23 @@ def _get_extra_options(**kwargs):
|
||||
'''
|
||||
ret = []
|
||||
kwargs = salt.utils.args.clean_kwargs(**kwargs)
|
||||
|
||||
# Remove already handled options from kwargs
|
||||
fromrepo = kwargs.pop('fromrepo', '')
|
||||
repo = kwargs.pop('repo', '')
|
||||
disablerepo = kwargs.pop('disablerepo', '')
|
||||
enablerepo = kwargs.pop('enablerepo', '')
|
||||
disable_excludes = kwargs.pop('disableexcludes', '')
|
||||
branch = kwargs.pop('branch', '')
|
||||
|
||||
for key, value in six.iteritems(kwargs):
|
||||
if isinstance(key, six.string_types):
|
||||
if isinstance(value, six.string_types):
|
||||
log.info('Adding extra option --%s=\'%s\'', key, value)
|
||||
ret.append('--{0}=\'{1}\''.format(key, value))
|
||||
elif value is True:
|
||||
log.info('Adding extra option --%s', key)
|
||||
ret.append('--{0}'.format(key))
|
||||
log.info('Adding extra options %s', ret)
|
||||
return ret
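# Illustrative sketch (not part of this diff): the option strings built by the
# loop above. The kwarg names and values are hypothetical examples.
demo_kwargs = {'setopt': 'obsoletes=0', 'nogpgcheck': True}
demo_ret = []
for key, value in demo_kwargs.items():
    if isinstance(value, str):
        demo_ret.append("--{0}='{1}'".format(key, value))
    elif value is True:
        demo_ret.append('--{0}'.format(key))
# demo_ret -> ["--setopt='obsoletes=0'", '--nogpgcheck']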
|
||||
|
||||
|
||||
|
@ -399,6 +399,14 @@ def _systemd_scope():
|
||||
and __salt__['config.get']('systemd.scope', True)
|
||||
|
||||
|
||||
def _clean_cache():
|
||||
'''
|
||||
Clean cached results
|
||||
'''
|
||||
for cache_name in ['pkg.list_pkgs', 'pkg.list_provides']:
|
||||
__context__.pop(cache_name, None)
|
||||
|
||||
|
||||
def list_upgrades(refresh=True, **kwargs):
|
||||
'''
|
||||
List all available package upgrades on this system
|
||||
@ -1049,6 +1057,10 @@ def install(name=None,
|
||||
operator (<, >, <=, >=, =) and a version number (ex. '>1.2.3-4').
|
||||
This parameter is ignored if ``pkgs`` or ``sources`` is passed.
|
||||
|
||||
resolve_capabilities
|
||||
If this option is set to True, zypper will take capabilities into
|
||||
account. In this case names which are just provided by a package
|
||||
will get installed. Default is False.
|
||||
|
||||
Multiple Package Installation Options:
|
||||
|
||||
@ -1164,7 +1176,10 @@ def install(name=None,
|
||||
log.info('Targeting repo \'{0}\''.format(fromrepo))
|
||||
else:
|
||||
fromrepoopt = ''
|
||||
cmd_install = ['install', '--name', '--auto-agree-with-licenses']
|
||||
cmd_install = ['install', '--auto-agree-with-licenses']
|
||||
|
||||
cmd_install.append(kwargs.get('resolve_capabilities') and '--capability' or '--name')
|
||||
|
||||
if not refresh:
|
||||
cmd_install.insert(0, '--no-refresh')
|
||||
if skip_verify:
|
||||
@ -1194,7 +1209,7 @@ def install(name=None,
|
||||
downgrades = downgrades[500:]
|
||||
__zypper__(no_repo_failure=ignore_repo_failure).call(*cmd)
|
||||
|
||||
__context__.pop('pkg.list_pkgs', None)
|
||||
_clean_cache()
|
||||
new = list_pkgs(attr=diff_attr) if not downloadonly else list_downloaded()
|
||||
|
||||
# Handle packages which report multiple new versions
|
||||
@ -1311,7 +1326,7 @@ def upgrade(refresh=True,
|
||||
old = list_pkgs()
|
||||
|
||||
__zypper__(systemd_scope=_systemd_scope()).noraise.call(*cmd_update)
|
||||
__context__.pop('pkg.list_pkgs', None)
|
||||
_clean_cache()
|
||||
new = list_pkgs()
|
||||
|
||||
# Handle packages which report multiple new versions
|
||||
@ -1360,7 +1375,7 @@ def _uninstall(name=None, pkgs=None):
|
||||
__zypper__(systemd_scope=systemd_scope).call('remove', *targets[:500])
|
||||
targets = targets[500:]
|
||||
|
||||
__context__.pop('pkg.list_pkgs', None)
|
||||
_clean_cache()
|
||||
ret = salt.utils.data.compare_dicts(old, list_pkgs())
|
||||
|
||||
if errors:
|
||||
@ -1750,7 +1765,7 @@ def list_installed_patterns():
|
||||
return _get_patterns(installed_only=True)
|
||||
|
||||
|
||||
def search(criteria, refresh=False):
|
||||
def search(criteria, refresh=False, **kwargs):
|
||||
'''
|
||||
List known packages available to the system.
|
||||
|
||||
@ -1759,26 +1774,94 @@ def search(criteria, refresh=False):
|
||||
If set to False (default) it depends on zypper if a refresh is
|
||||
executed.
|
||||
|
||||
match (str)
|
||||
One of `exact`, `words`, `substrings`. Search for an `exact` match
|
||||
or for whole `words` only. Defaults to `substrings` to match
|
||||
partial words.
|
||||
|
||||
provides (bool)
|
||||
Search for packages which provide the search strings.
|
||||
|
||||
recommends (bool)
|
||||
Search for packages which recommend the search strings.
|
||||
|
||||
requires (bool)
|
||||
Search for packages which require the search strings.
|
||||
|
||||
suggests (bool)
|
||||
Search for packages which suggest the search strings.
|
||||
|
||||
conflicts (bool)
|
||||
Search packages conflicting with search strings.
|
||||
|
||||
obsoletes (bool)
|
||||
Search for packages which obsolete the search strings.
|
||||
|
||||
file_list (bool)
|
||||
Search for a match in the file list of packages.
|
||||
|
||||
search_descriptions (bool)
|
||||
Search also in package summaries and descriptions.
|
||||
|
||||
case_sensitive (bool)
|
||||
Perform case-sensitive search.
|
||||
|
||||
installed_only (bool)
|
||||
Show only installed packages.
|
||||
|
||||
not_installed_only (bool)
|
||||
Show only packages which are not installed.
|
||||
|
||||
details (bool)
|
||||
Show version and repository
|
||||
|
||||
CLI Examples:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' pkg.search <criteria>
|
||||
'''
|
||||
ALLOWED_SEARCH_OPTIONS = {
|
||||
'provides': '--provides',
|
||||
'recommends': '--recommends',
|
||||
'requires': '--requires',
|
||||
'suggests': '--suggests',
|
||||
'conflicts': '--conflicts',
|
||||
'obsoletes': '--obsoletes',
|
||||
'file_list': '--file-list',
|
||||
'search_descriptions': '--search-descriptions',
|
||||
'case_sensitive': '--case-sensitive',
|
||||
'installed_only': '--installed-only',
|
||||
'not_installed_only': '-u',
|
||||
'details': '--details'
|
||||
}
|
||||
if refresh:
|
||||
refresh_db()
|
||||
|
||||
solvables = __zypper__.nolock.xml.call('se', criteria).getElementsByTagName('solvable')
|
||||
cmd = ['search']
|
||||
if kwargs.get('match') == 'exact':
|
||||
cmd.append('--match-exact')
|
||||
elif kwargs.get('match') == 'words':
|
||||
cmd.append('--match-words')
|
||||
elif kwargs.get('match') == 'substrings':
|
||||
cmd.append('--match-substrings')
|
||||
|
||||
for opt in kwargs:
|
||||
if opt in ALLOWED_SEARCH_OPTIONS:
|
||||
cmd.append(ALLOWED_SEARCH_OPTIONS.get(opt))
|
||||
|
||||
cmd.append(criteria)
|
||||
solvables = __zypper__.nolock.noraise.xml.call(*cmd).getElementsByTagName('solvable')
|
||||
if not solvables:
|
||||
raise CommandExecutionError(
|
||||
'No packages found matching \'{0}\''.format(criteria)
|
||||
)
|
||||
|
||||
out = {}
|
||||
for solvable in [slv for slv in solvables
|
||||
if slv.getAttribute('status') == 'not-installed'
|
||||
and slv.getAttribute('kind') == 'package']:
|
||||
out[solvable.getAttribute('name')] = {'summary': solvable.getAttribute('summary')}
|
||||
for solvable in solvables:
|
||||
out[solvable.getAttribute('name')] = dict()
|
||||
for k, v in solvable.attributes.items():
|
||||
out[solvable.getAttribute('name')][k] = v
|
||||
|
||||
return out
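# Illustrative sketch (not part of this diff): the zypper arguments the new
# search() assembles for a call such as
#   salt '*' pkg.search w3m_ssl provides=True match=exact
# The option table is abridged and the package name is a hypothetical example.
demo_allowed = {'provides': '--provides', 'details': '--details'}
demo_kwargs = {'match': 'exact', 'provides': True}
demo_cmd = ['search']
if demo_kwargs.get('match') == 'exact':
    demo_cmd.append('--match-exact')
for opt in demo_kwargs:
    if opt in demo_allowed:
        demo_cmd.append(demo_allowed[opt])
demo_cmd.append('w3m_ssl')
# demo_cmd -> ['search', '--match-exact', '--provides', 'w3m_ssl']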
|
||||
|
||||
@ -2033,3 +2116,97 @@ def list_installed_patches():
|
||||
salt '*' pkg.list_installed_patches
|
||||
'''
|
||||
return _get_patches(installed_only=True)
|
||||
|
||||
|
||||
def list_provides(**kwargs):
|
||||
'''
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
List package provides of installed packages as a dict.
|
||||
{'<provided_name>': ['<package_name>', '<package_name>', ...]}
|
||||
|
||||
CLI Examples:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' pkg.list_provides
|
||||
'''
|
||||
ret = __context__.get('pkg.list_provides')
|
||||
if not ret:
|
||||
cmd = ['rpm', '-qa', '--queryformat', '[%{PROVIDES}_|-%{NAME}\n]']
|
||||
ret = dict()
|
||||
for line in __salt__['cmd.run'](cmd, output_loglevel='trace', python_shell=False).splitlines():
|
||||
provide, realname = line.split('_|-')
|
||||
|
||||
if provide == realname:
|
||||
continue
|
||||
if provide not in ret:
|
||||
ret[provide] = list()
|
||||
ret[provide].append(realname)
|
||||
|
||||
__context__['pkg.list_provides'] = ret
|
||||
|
||||
return ret
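# Illustrative sketch (not part of this diff): parsing two sample lines of the
# rpm query format used above. The provide and package names are hypothetical
# examples.
demo_lines = 'libssl.so.1.1()(64bit)_|-libopenssl1_1\nwebclient_|-w3m'
demo_ret = {}
for line in demo_lines.splitlines():
    provide, realname = line.split('_|-')
    if provide == realname:
        continue
    demo_ret.setdefault(provide, []).append(realname)
# demo_ret -> {'libssl.so.1.1()(64bit)': ['libopenssl1_1'], 'webclient': ['w3m']}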
|
||||
|
||||
|
||||
def resolve_capabilities(pkgs, refresh, **kwargs):
|
||||
'''
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
Convert name provides in ``pkgs`` into real package names if
|
||||
``resolve_capabilities`` parameter is set to True. If
|
||||
``resolve_capabilities`` is set to False, the package list
|
||||
is returned unchanged.
|
||||
|
||||
refresh
|
||||
force a refresh if set to True.
|
||||
If set to False (default) it depends on zypper if a refresh is
|
||||
executed.
|
||||
|
||||
resolve_capabilities
|
||||
If this option is set to True, the input will be checked to see whether
|
||||
a package with this name exists. If not, this function will
|
||||
search for a package which provides this name. If one is found
|
||||
the output is exchanged with the real package name.
|
||||
If this option is set to False (the default), the input will
|
||||
be returned unchanged.
|
||||
|
||||
CLI Examples:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' pkg.resolve_capabilities resolve_capabilities=True w3m_ssl
|
||||
'''
|
||||
if refresh:
|
||||
refresh_db()
|
||||
|
||||
ret = list()
|
||||
for pkg in pkgs:
|
||||
if isinstance(pkg, dict):
|
||||
name = next(iter(pkg))
|
||||
version = pkg[name]
|
||||
else:
|
||||
name = pkg
|
||||
version = None
|
||||
|
||||
if kwargs.get('resolve_capabilities', False):
|
||||
try:
|
||||
search(name, match='exact')
|
||||
except CommandExecutionError:
|
||||
# no package with such a name found
|
||||
# search for a package which provides this name
|
||||
try:
|
||||
result = search(name, provides=True, match='exact')
|
||||
if len(result) == 1:
|
||||
name = result.keys()[0]
|
||||
elif len(result) > 1:
|
||||
log.warn("Found ambiguous match for capability '{0}'.".format(pkg))
|
||||
except CommandExecutionError as exc:
|
||||
# when search throws an exception stay with original name and version
|
||||
log.debug("Search failed with: {0}".format(exc))
|
||||
|
||||
if version:
|
||||
ret.append({name: version})
|
||||
else:
|
||||
ret.append(name)
|
||||
return ret
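# Illustrative sketch (not part of this diff): the input/output shape of
# resolve_capabilities. The names are hypothetical; it assumes the capability
# 'w3m_ssl' is provided only by the package 'w3m'.
demo_pkgs_in = [{'w3m_ssl': None}, 'vim']
# with resolve_capabilities=True the returned list would be:
demo_pkgs_out = ['w3m', 'vim']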
|
||||
|
@ -138,7 +138,7 @@ class AsyncRemotePillar(RemotePillarMixin):
|
||||
def __init__(self, opts, grains, minion_id, saltenv, ext=None, functions=None,
|
||||
pillar_override=None, pillarenv=None, extra_minion_data=None):
|
||||
self.opts = opts
|
||||
self.opts['environment'] = saltenv
|
||||
self.opts['saltenv'] = saltenv
|
||||
self.ext = ext
|
||||
self.grains = grains
|
||||
self.minion_id = minion_id
|
||||
@ -165,7 +165,7 @@ class AsyncRemotePillar(RemotePillarMixin):
|
||||
'''
|
||||
load = {'id': self.minion_id,
|
||||
'grains': self.grains,
|
||||
'saltenv': self.opts['environment'],
|
||||
'saltenv': self.opts['saltenv'],
|
||||
'pillarenv': self.opts['pillarenv'],
|
||||
'pillar_override': self.pillar_override,
|
||||
'extra_minion_data': self.extra_minion_data,
|
||||
@ -198,7 +198,7 @@ class RemotePillar(RemotePillarMixin):
|
||||
def __init__(self, opts, grains, minion_id, saltenv, ext=None, functions=None,
|
||||
pillar_override=None, pillarenv=None, extra_minion_data=None):
|
||||
self.opts = opts
|
||||
self.opts['environment'] = saltenv
|
||||
self.opts['saltenv'] = saltenv
|
||||
self.ext = ext
|
||||
self.grains = grains
|
||||
self.minion_id = minion_id
|
||||
@ -224,7 +224,7 @@ class RemotePillar(RemotePillarMixin):
|
||||
'''
|
||||
load = {'id': self.minion_id,
|
||||
'grains': self.grains,
|
||||
'saltenv': self.opts['environment'],
|
||||
'saltenv': self.opts['saltenv'],
|
||||
'pillarenv': self.opts['pillarenv'],
|
||||
'pillar_override': self.pillar_override,
|
||||
'extra_minion_data': self.extra_minion_data,
|
||||
@ -445,9 +445,9 @@ class Pillar(object):
|
||||
else:
|
||||
opts['grains'] = grains
|
||||
# Allow minion/CLI saltenv/pillarenv to take precedence over master
|
||||
opts['environment'] = saltenv \
|
||||
opts['saltenv'] = saltenv \
|
||||
if saltenv is not None \
|
||||
else opts.get('environment')
|
||||
else opts.get('saltenv')
|
||||
opts['pillarenv'] = pillarenv \
|
||||
if pillarenv is not None \
|
||||
else opts.get('pillarenv')
|
||||
|
@ -404,7 +404,7 @@ def ext_pillar(minion_id, pillar, *repos): # pylint: disable=unused-argument
|
||||
# Map env if env == '__env__' before checking the env value
|
||||
if env == '__env__':
|
||||
env = opts.get('pillarenv') \
|
||||
or opts.get('environment') \
|
||||
or opts.get('saltenv') \
|
||||
or opts.get('git_pillar_base')
|
||||
log.debug('__env__ maps to %s', env)
|
||||
|
||||
|
salt/proxy/esxvm.py (new file, 287 lines)
@ -0,0 +1,287 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Proxy Minion interface module for managing VMware ESXi virtual machines.
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
||||
- pyVmomi
|
||||
- jsonschema
|
||||
|
||||
Configuration
|
||||
=============
|
||||
To use this integration proxy module, please configure the following:
|
||||
|
||||
Pillar
|
||||
------
|
||||
|
||||
Proxy minions get their configuration from Salt's Pillar. This can now happen
|
||||
from the proxy's configuration file.
|
||||
|
||||
Example pillars:
|
||||
|
||||
``userpass`` mechanism:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
proxy:
|
||||
proxytype: esxvm
|
||||
datacenter: <datacenter name>
|
||||
vcenter: <ip or dns name of parent vcenter>
|
||||
mechanism: userpass
|
||||
username: <vCenter username>
|
||||
passwords: (required if userpass is used)
|
||||
- first_password
|
||||
- second_password
|
||||
- third_password
|
||||
|
||||
``sspi`` mechanism:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
proxy:
|
||||
proxytype: esxvm
|
||||
datacenter: <datacenter name>
|
||||
vcenter: <ip or dns name of parent vcenter>
|
||||
mechanism: sspi
|
||||
domain: <user domain>
|
||||
principal: <host kerberos principal>
|
||||
|
||||
proxytype
|
||||
^^^^^^^^^
|
||||
To use this Proxy Module, set this to ``esxvm``.
|
||||
|
||||
datacenter
|
||||
^^^^^^^^^^
|
||||
Name of the datacenter where the virtual machine should be deployed. Required.
|
||||
|
||||
vcenter
|
||||
^^^^^^^
|
||||
The location of the VMware vCenter server (host or IP) where the virtual
|
||||
machine should be managed. Required.
|
||||
|
||||
mechanism
|
||||
^^^^^^^^^
|
||||
The mechanism used to connect to the vCenter server. Supported values are
|
||||
``userpass`` and ``sspi``. Required.
|
||||
|
||||
Note:
|
||||
Connections are attempted using all (``username``, ``password``)
|
||||
combinations on proxy startup.
|
||||
|
||||
username
|
||||
^^^^^^^^
|
||||
The username used to login to the host, such as ``root``. Required if mechanism
|
||||
is ``userpass``.
|
||||
|
||||
passwords
|
||||
^^^^^^^^^
|
||||
A list of passwords to be used to try and login to the vCenter server. At least
|
||||
one password in this list is required if mechanism is ``userpass``. When the
|
||||
proxy comes up, it will try the passwords listed in order.
|
||||
|
||||
domain
|
||||
^^^^^^
|
||||
User realm domain. Required if mechanism is ``sspi``.
|
||||
|
||||
principal
|
||||
^^^^^^^^^
|
||||
Kerberos principal. Required if mechanism is ``sspi``.
|
||||
|
||||
protocol
|
||||
^^^^^^^^
|
||||
If the ESXi host is not using the default protocol, set this value to an
|
||||
alternate protocol. Default is ``https``.
|
||||
|
||||
port
|
||||
^^^^
|
||||
If the ESXi host is not using the default port, set this value to an
|
||||
alternate port. Default is ``443``.
|
||||
|
||||
Salt Proxy
|
||||
----------
|
||||
|
||||
After your pillar is in place, you can test the proxy. The proxy can run on
|
||||
any machine that has network connectivity to your Salt Master and to the
|
||||
vCenter server in the pillar. SaltStack recommends that the machine running the
|
||||
salt-proxy process also run a regular minion, though it is not strictly
|
||||
necessary.
|
||||
|
||||
To start a proxy minion one needs to establish its identity <id>:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt-proxy --proxyid <proxy_id>
|
||||
|
||||
On the machine that will run the proxy, make sure there is a configuration file
|
||||
present. By default this is ``/etc/salt/proxy``. If in a different location, the
|
||||
``<configuration_folder>`` has to be specified when running the proxy:
|
||||
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt-proxy --proxyid <proxy_id> -c <configuration_folder>
|
||||
|
||||
Commands
|
||||
--------
|
||||
|
||||
Once the proxy is running it will connect back to the specified master and
|
||||
individual commands can be run against it:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Master - minion communication
|
||||
salt <proxy_id> test.ping
|
||||
|
||||
# Test vcenter connection
|
||||
salt <proxy_id> vsphere.test_vcenter_connection
|
||||
|
||||
States
|
||||
------
|
||||
|
||||
Associated states are documented in
|
||||
:mod:`salt.states.esxvm </ref/states/all/salt.states.esxvm>`.
|
||||
Look there to find an example structure for Pillar as well as an example
|
||||
``.sls`` file for configuring an ESX virtual machine from scratch.
|
||||
'''
|
||||
|
||||
# Import Python Libs
|
||||
from __future__ import absolute_import
|
||||
import logging
|
||||
import os
|
||||
|
||||
# Import Salt Libs
|
||||
import salt.exceptions as excs
|
||||
from salt.utils.dictupdate import merge
|
||||
|
||||
# This must be present or the Salt loader won't load this module.
|
||||
__proxyenabled__ = ['esxvm']
|
||||
|
||||
|
||||
# Variables are scoped to this module so we can have persistent data
|
||||
# across calls to fns in here.
|
||||
GRAINS_CACHE = {}
|
||||
DETAILS = {}
|
||||
|
||||
|
||||
# Set up logging
|
||||
log = logging.getLogger(__name__)
|
||||
# Define the module's virtual name
|
||||
__virtualname__ = 'esxvm'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only load if the vsphere execution module is available.
|
||||
'''
|
||||
return __virtualname__
|
||||
|
||||
|
||||
def init(opts):
|
||||
'''
|
||||
This function gets called when the proxy starts up. For
|
||||
login the protocol and port are cached.
|
||||
'''
|
||||
log.debug('Initting esxvm proxy module in process '
|
||||
'{}'.format(os.getpid()))
|
||||
log.debug('Validating esxvm proxy input')
|
||||
proxy_conf = merge(opts.get('proxy', {}), __pillar__.get('proxy', {}))
|
||||
log.trace('proxy_conf = {0}'.format(proxy_conf))
|
||||
# TODO json schema validation
|
||||
|
||||
# Save mandatory fields in cache
|
||||
for key in ('vcenter', 'datacenter', 'mechanism'):
|
||||
DETAILS[key] = proxy_conf[key]
|
||||
|
||||
# Additional validation
|
||||
if DETAILS['mechanism'] == 'userpass':
|
||||
if 'username' not in proxy_conf:
|
||||
raise excs.InvalidProxyInputError(
|
||||
'Mechanism is set to \'userpass\', but no '
|
||||
'\'username\' key found in pillar for this proxy.')
|
||||
if 'passwords' not in proxy_conf:
|
||||
raise excs.InvalidProxyInputError(
|
||||
'Mechanism is set to \'userpass\', but no '
|
||||
'\'passwords\' key found in pillar for this proxy.')
|
||||
for key in ('username', 'passwords'):
|
||||
DETAILS[key] = proxy_conf[key]
|
||||
else:
|
||||
if 'domain' not in proxy_conf:
|
||||
raise excs.InvalidProxyInputError(
|
||||
'Mechanism is set to \'sspi\', but no '
|
||||
'\'domain\' key found in pillar for this proxy.')
|
||||
if 'principal' not in proxy_conf:
|
||||
raise excs.InvalidProxyInputError(
|
||||
'Mechanism is set to \'sspi\', but no '
|
||||
'\'principal\' key found in pillar for this proxy.')
|
||||
for key in ('domain', 'principal'):
|
||||
DETAILS[key] = proxy_conf[key]
|
||||
|
||||
# Save optional
|
||||
DETAILS['protocol'] = proxy_conf.get('protocol')
|
||||
DETAILS['port'] = proxy_conf.get('port')
|
||||
|
||||
# Test connection
|
||||
if DETAILS['mechanism'] == 'userpass':
|
||||
# Get the correct login details
|
||||
log.debug('Retrieving credentials and testing vCenter connection for '
|
||||
'mechanism \'userpass\''
|
||||
try:
|
||||
username, password = find_credentials()
|
||||
DETAILS['password'] = password
|
||||
except excs.SaltSystemExit as err:
|
||||
log.critical('Error: {0}'.format(err))
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
def ping():
|
||||
'''
|
||||
Returns True.
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt esx-vm test.ping
|
||||
'''
|
||||
return True
|
||||
|
||||
|
||||
def shutdown():
|
||||
'''
|
||||
Shutdown the connection to the proxy device. For this proxy,
|
||||
shutdown is a no-op.
|
||||
'''
|
||||
log.debug('ESX vm proxy shutdown() called...')
|
||||
|
||||
|
||||
def find_credentials():
|
||||
'''
|
||||
Cycle through all the possible credentials and return the first one that
|
||||
works.
|
||||
'''
|
||||
|
||||
# if the username and password were already found don't go through the
|
||||
# connection process again
|
||||
if 'username' in DETAILS and 'password' in DETAILS:
|
||||
return DETAILS['username'], DETAILS['password']
|
||||
|
||||
passwords = __pillar__['proxy']['passwords']
|
||||
for password in passwords:
|
||||
DETAILS['password'] = password
|
||||
if not __salt__['vsphere.test_vcenter_connection']():
|
||||
# We are unable to authenticate
|
||||
continue
|
||||
# If we have data returned from above, we've successfully authenticated.
|
||||
return DETAILS['username'], password
|
||||
# We've reached the end of the list without successfully authenticating.
|
||||
raise excs.VMwareConnectionError('Cannot complete login due to '
|
||||
'incorrect credentials.')
|
||||
|
||||
|
||||
def get_details():
|
||||
'''
|
||||
Function that returns the cached details
|
||||
'''
|
||||
return DETAILS
|
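As an illustrative sketch (not part of this commit), an execution module
running under this proxy would typically reach these cached connection
details through the ``__proxy__`` dunder:

.. code-block:: python

    # Hypothetical call from a vsphere-aware execution module
    details = __proxy__['esxvm.get_details']()
    vcenter, mechanism = details['vcenter'], details['mechanism']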
@ -294,9 +294,11 @@ def get_load(jid):
|
||||
if not os.path.exists(jid_dir) or not os.path.exists(load_fn):
|
||||
return {}
|
||||
serial = salt.payload.Serial(__opts__)
|
||||
ret = {}
|
||||
with salt.utils.files.fopen(os.path.join(jid_dir, LOAD_P), 'rb') as rfh:
|
||||
ret = serial.load(rfh)
|
||||
|
||||
if ret is None:
|
||||
ret = {}
|
||||
minions_cache = [os.path.join(jid_dir, MINIONS_P)]
|
||||
minions_cache.extend(
|
||||
glob.glob(os.path.join(jid_dir, SYNDIC_MINIONS_P.format('*')))
|
||||
|
@ -408,7 +408,7 @@ def sync_sdb(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
|
||||
'''
|
||||
.. versionadded:: 2017.7.0
|
||||
|
||||
Sync utils modules from ``salt://_sdb`` to the master
|
||||
Sync sdb modules from ``salt://_sdb`` to the master
|
||||
|
||||
saltenv : base
|
||||
The fileserver environment from which to sync. To sync from more than
|
||||
@ -454,7 +454,7 @@ def sync_cache(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
|
||||
'''
|
||||
.. versionadded:: 2017.7.0
|
||||
|
||||
Sync utils modules from ``salt://_cache`` to the master
|
||||
Sync cache modules from ``salt://_cache`` to the master
|
||||
|
||||
saltenv : base
|
||||
The fileserver environment from which to sync. To sync from more than
|
||||
@ -480,7 +480,7 @@ def sync_fileserver(saltenv='base', extmod_whitelist=None, extmod_blacklist=None
|
||||
'''
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
Sync utils modules from ``salt://_fileserver`` to the master
|
||||
Sync fileserver modules from ``salt://_fileserver`` to the master
|
||||
|
||||
saltenv : base
|
||||
The fileserver environment from which to sync. To sync from more than
|
||||
@ -506,7 +506,7 @@ def sync_clouds(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
|
||||
'''
|
||||
.. versionadded:: 2017.7.0
|
||||
|
||||
Sync utils modules from ``salt://_clouds`` to the master
|
||||
Sync cloud modules from ``salt://_clouds`` to the master
|
||||
|
||||
saltenv : base
|
||||
The fileserver environment from which to sync. To sync from more than
|
||||
@ -532,7 +532,7 @@ def sync_roster(saltenv='base', extmod_whitelist=None, extmod_blacklist=None):
|
||||
'''
|
||||
.. versionadded:: 2017.7.0
|
||||
|
||||
Sync utils modules from ``salt://_roster`` to the master
|
||||
Sync roster modules from ``salt://_roster`` to the master
|
||||
|
||||
saltenv : base
|
||||
The fileserver environment from which to sync. To sync from more than
|
||||
|
@ -22,7 +22,7 @@ def get(uri):
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' sdb.get sdb://mymemcached/foo
|
||||
salt-run sdb.get sdb://mymemcached/foo
|
||||
'''
|
||||
return salt.utils.sdb.sdb_get(uri, __opts__, __utils__)
|
||||
|
||||
@ -37,7 +37,7 @@ def set_(uri, value):
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' sdb.set sdb://mymemcached/foo bar
|
||||
salt-run sdb.set sdb://mymemcached/foo bar
|
||||
'''
|
||||
return salt.utils.sdb.sdb_set(uri, value, __opts__, __utils__)
|
||||
|
||||
@ -52,7 +52,7 @@ def delete(uri):
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' sdb.delete sdb://mymemcached/foo
|
||||
salt-run sdb.delete sdb://mymemcached/foo
|
||||
'''
|
||||
return salt.utils.sdb.sdb_delete(uri, __opts__, __utils__)
|
||||
|
||||
|
@ -15,6 +15,24 @@ from salt.exceptions import SaltInvocationError
|
||||
LOGGER = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def set_pause(jid, state_id, duration=None):
|
||||
'''
|
||||
Set up a state id pause; this instructs a running state to pause at a given
|
||||
state id. This needs to pass in the jid of the running state and can
|
||||
optionally pass in a duration in seconds.
|
||||
'''
|
||||
minion = salt.minion.MasterMinion(__opts__)
|
||||
minion['state.set_pause'](jid, state_id, duration)
|
||||
|
||||
|
||||
def rm_pause(jid, state_id, duration=None):
|
||||
'''
|
||||
Remove a pause from a jid, allowing it to continue
|
||||
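
A hedged CLI sketch, mirroring the ``set_pause`` example above:

.. code-block:: bash

    # Let the paused job continue past state ID "install_pkgs"
    salt-run state.rm_pause 20171130110407769519 install_pkgs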
'''
|
||||
minion = salt.minion.MasterMinion(__opts__)
|
||||
minion['state.rm_pause'](jid, state_id)
|
||||
|
||||
|
||||
def orchestrate(mods,
|
||||
saltenv='base',
|
||||
test=None,
|
||||
|
@ -33,6 +33,10 @@ Optional configuration:
|
||||
merge:
|
||||
strategy: smart
|
||||
merge_list: false
|
||||
gpg: true
|
||||
|
||||
Setting the ``gpg`` option to ``true`` (default is ``false``) will decrypt embedded
|
||||
GPG-encrypted data using the :py:mod:`GPG renderer <salt.renderers.gpg>`.
|
||||
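
As an illustrative sketch (``my-yaml-profile`` and the key path below are
made up), a value stored under a GPG-encrypted key in one of the configured
files can then be read back transparently:

.. code-block:: bash

    salt-run sdb.get sdb://my-yaml-profile/users:frank:password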
'''
|
||||
|
||||
# import python libs
|
||||
@ -44,6 +48,7 @@ import salt.loader
|
||||
import salt.utils.data
|
||||
import salt.utils.files
|
||||
import salt.utils.dictupdate
|
||||
import salt.renderers.gpg
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
@ -52,7 +57,11 @@ __func_alias__ = {
|
||||
}
|
||||
|
||||
|
||||
def set_(*args, **kwargs):
|
||||
def __virtual__():
|
||||
return True
|
||||
|
||||
|
||||
def set_(*args, **kwargs): # pylint: disable=W0613
|
||||
'''
|
||||
Setting a value is not supported; edit the YAML files directly
|
||||
'''
|
||||
@ -61,9 +70,14 @@ def set_(*args, **kwargs):
|
||||
|
||||
def get(key, profile=None): # pylint: disable=W0613
|
||||
'''
|
||||
Get a value from the REST interface
|
||||
Get a value from the dictionary
|
||||
'''
|
||||
data = _get_values(profile)
|
||||
|
||||
# Decrypt SDB data if specified in the profile
|
||||
if profile and profile.get('gpg', False):
|
||||
return salt.utils.data.traverse_dict_and_list(_decrypt(data), key, None)
|
||||
|
||||
return salt.utils.data.traverse_dict_and_list(data, key, None)
|
||||
|
||||
|
||||
@ -77,12 +91,19 @@ def _get_values(profile=None):
|
||||
ret = {}
|
||||
for fname in profile.get('files', []):
|
||||
try:
|
||||
with salt.utils.files.flopen(fname) as f:
|
||||
contents = serializers.yaml.deserialize(f)
|
||||
ret = salt.utils.dictupdate.merge(ret, contents,
|
||||
**profile.get('merge', {}))
|
||||
with salt.utils.files.flopen(fname) as yamlfile:
|
||||
contents = serializers.yaml.deserialize(yamlfile)
|
||||
ret = salt.utils.dictupdate.merge(
|
||||
ret, contents, **profile.get('merge', {}))
|
||||
except IOError:
|
||||
log.error("File not found '{0}'".format(fname))
|
||||
except TypeError:
|
||||
log.error("Error deserializing sdb file '{0}'".format(fname))
|
||||
return ret
|
||||
|
||||
|
||||
def _decrypt(data):
|
||||
'''
|
||||
Pass the dictionary through the GPG renderer to decrypt encrypted values.
|
||||
'''
|
||||
return salt.loader.render(__opts__, __salt__)['gpg'](data)
|
||||
|
105
salt/state.py
@ -763,7 +763,7 @@ class State(object):
|
||||
self.opts,
|
||||
self.opts[u'grains'],
|
||||
self.opts[u'id'],
|
||||
self.opts[u'environment'],
|
||||
self.opts[u'saltenv'],
|
||||
pillar_override=self._pillar_override,
|
||||
pillarenv=self.opts.get(u'pillarenv'))
|
||||
return pillar.compile_pillar()
|
||||
@ -1892,20 +1892,27 @@ class State(object):
|
||||
(u'onlyif' in low and u'{0[state]}.mod_run_check'.format(low) not in self.states):
|
||||
ret.update(self._run_check(low))
|
||||
|
||||
if u'saltenv' in low:
|
||||
inject_globals[u'__env__'] = six.text_type(low[u'saltenv'])
|
||||
elif isinstance(cdata[u'kwargs'].get(u'env', None), six.string_types):
|
||||
# User is using a deprecated env setting which was parsed by
|
||||
# format_call.
|
||||
# We check for a string type since module functions which
|
||||
# allow setting the OS environ also make use of the "env"
|
||||
# keyword argument, which is not a string
|
||||
inject_globals[u'__env__'] = six.text_type(cdata[u'kwargs'][u'env'])
|
||||
elif u'__env__' in low:
|
||||
# The user is passing an alternative environment using __env__
|
||||
# which is also not the appropriate choice, still, handle it
|
||||
inject_globals[u'__env__'] = six.text_type(low[u'__env__'])
|
||||
else:
|
||||
if not self.opts.get(u'lock_saltenv', False):
|
||||
# NOTE: Overriding the saltenv when lock_saltenv is blocked in
|
||||
# salt/modules/state.py, before we ever get here, but this
|
||||
# additional check keeps use of the State class outside of the
|
||||
# salt/modules/state.py from getting around this setting.
|
||||
if u'saltenv' in low:
|
||||
inject_globals[u'__env__'] = six.text_type(low[u'saltenv'])
|
||||
elif isinstance(cdata[u'kwargs'].get(u'env', None), six.string_types):
|
||||
# User is using a deprecated env setting which was parsed by
|
||||
# format_call.
|
||||
# We check for a string type since module functions which
|
||||
# allow setting the OS environ also make use of the "env"
|
||||
# keyword argument, which is not a string
|
||||
inject_globals[u'__env__'] = six.text_type(cdata[u'kwargs'][u'env'])
|
||||
elif u'__env__' in low:
|
||||
# The user is passing an alternative environment using
|
||||
# __env__ which is also not the appropriate choice, still,
|
||||
# handle it
|
||||
inject_globals[u'__env__'] = six.text_type(low[u'__env__'])
|
||||
|
||||
if u'__env__' not in inject_globals:
|
||||
# Let's use the default environment
|
||||
inject_globals[u'__env__'] = u'base'
|
||||
|
||||
@ -1918,6 +1925,8 @@ class State(object):
|
||||
if self.mocked:
|
||||
ret = mock_ret(cdata)
|
||||
else:
|
||||
# Check if this low chunk is paused
|
||||
self.check_pause(low)
|
||||
# Execute the state function
|
||||
if not low.get(u'__prereq__') and low.get(u'parallel'):
|
||||
# run the state call in parallel, but only if not in a prereq
|
||||
@ -2127,6 +2136,48 @@ class State(object):
|
||||
return not running[tag][u'result']
|
||||
return False
|
||||
|
||||
def check_pause(self, low):
|
||||
'''
|
||||
Check to see if this low chunk has been paused
|
||||
'''
|
||||
if not self.jid:
|
||||
# Can't pause on salt-ssh since we can't track continuous state
|
||||
return
|
||||
pause_path = os.path.join(self.opts[u'cachedir'], 'state_pause', self.jid)
|
||||
start = time.time()
|
||||
if os.path.isfile(pause_path):
|
||||
try:
|
||||
tries = 0
|
||||
while True:
|
||||
with salt.utils.files.fopen(pause_path, 'rb') as fp_:
|
||||
try:
|
||||
pdat = msgpack.loads(fp_.read())
|
||||
except msgpack.UnpackValueError:
|
||||
# Reading race condition
|
||||
if tries > 10:
|
||||
# Break out if there are a ton of read errors
|
||||
return
|
||||
tries += 1
|
||||
time.sleep(1)
|
||||
continue
|
||||
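# The pause file is a msgpack dict mapping state IDs (or the special
# key '__all__') to data that may include an optional 'duration' in seconds.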
id_ = low[u'__id__']
|
||||
key = u''
|
||||
if id_ in pdat:
|
||||
key = id_
|
||||
elif u'__all__' in pdat:
|
||||
key = u'__all__'
|
||||
if key:
|
||||
if u'duration' in pdat[key]:
|
||||
now = time.time()
|
||||
if now - start > pdat[key][u'duration']:
|
||||
return
|
||||
else:
|
||||
return
|
||||
time.sleep(1)
|
||||
except Exception as exc:
|
||||
log.error('Failed to read in pause data for file located at: %s', pause_path)
|
||||
return
|
||||
|
||||
def reconcile_procs(self, running):
|
||||
'''
|
||||
Check the running dict for processes and resolve them
|
||||
@ -2682,6 +2733,14 @@ class State(object):
|
||||
except OSError:
|
||||
log.debug(u'File %s does not exist, no need to cleanup', accum_data_path)
|
||||
_cleanup_accumulator_data()
|
||||
if self.jid is not None:
|
||||
pause_path = os.path.join(self.opts[u'cachedir'], u'state_pause', self.jid)
|
||||
if os.path.isfile(pause_path):
|
||||
try:
|
||||
os.remove(pause_path)
|
||||
except OSError:
|
||||
# File is not present, all is well
|
||||
pass
|
||||
|
||||
return ret
|
||||
|
||||
@ -2900,32 +2959,32 @@ class BaseHighState(object):
|
||||
found = 0 # did we find any contents in the top files?
|
||||
# Gather initial top files
|
||||
merging_strategy = self.opts[u'top_file_merging_strategy']
|
||||
if merging_strategy == u'same' and not self.opts[u'environment']:
|
||||
if merging_strategy == u'same' and not self.opts[u'saltenv']:
|
||||
if not self.opts[u'default_top']:
|
||||
raise SaltRenderError(
|
||||
u'top_file_merging_strategy set to \'same\', but no '
|
||||
u'default_top configuration option was set'
|
||||
)
|
||||
|
||||
if self.opts[u'environment']:
|
||||
if self.opts[u'saltenv']:
|
||||
contents = self.client.cache_file(
|
||||
self.opts[u'state_top'],
|
||||
self.opts[u'environment']
|
||||
self.opts[u'saltenv']
|
||||
)
|
||||
if contents:
|
||||
found = 1
|
||||
tops[self.opts[u'environment']] = [
|
||||
tops[self.opts[u'saltenv']] = [
|
||||
compile_template(
|
||||
contents,
|
||||
self.state.rend,
|
||||
self.state.opts[u'renderer'],
|
||||
self.state.opts[u'renderer_blacklist'],
|
||||
self.state.opts[u'renderer_whitelist'],
|
||||
saltenv=self.opts[u'environment']
|
||||
saltenv=self.opts[u'saltenv']
|
||||
)
|
||||
]
|
||||
else:
|
||||
tops[self.opts[u'environment']] = [{}]
|
||||
tops[self.opts[u'saltenv']] = [{}]
|
||||
|
||||
else:
|
||||
found = 0
|
||||
@ -3257,8 +3316,8 @@ class BaseHighState(object):
|
||||
matches = DefaultOrderedDict(OrderedDict)
|
||||
# pylint: disable=cell-var-from-loop
|
||||
for saltenv, body in six.iteritems(top):
|
||||
if self.opts[u'environment']:
|
||||
if saltenv != self.opts[u'environment']:
|
||||
if self.opts[u'saltenv']:
|
||||
if saltenv != self.opts[u'saltenv']:
|
||||
continue
|
||||
for match, data in six.iteritems(body):
|
||||
def _filter_matches(_match, _data, _opts):
|
||||
|
538
salt/states/esxvm.py
Normal file
@ -0,0 +1,538 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Salt state to create, update VMware ESXi Virtual Machines.
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
||||
- pyVmomi
|
||||
- jsonschema
|
||||
|
||||
States
|
||||
======
|
||||
|
||||
vm_configured
|
||||
-------------
|
||||
|
||||
Enforces correct virtual machine configuration. Creates, updates and registers
|
||||
a virtual machine.
|
||||
|
||||
This state identifies the action which should be taken for the virtual machine
|
||||
and applies that action via the create, update, register state functions.
|
||||
|
||||
Supported proxies: esxvm
|
||||
|
||||
|
||||
Example:
|
||||
|
||||
1. Get the virtual machine ``my_vm`` status with an ``esxvm`` proxy:
|
||||
|
||||
Proxy minion configuration for ``esxvm`` proxy:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
proxy:
|
||||
proxytype: esxvm
|
||||
datacenter: my_dc
|
||||
vcenter: vcenter.fake.com
|
||||
mechanism: sspi
|
||||
domain: fake.com
|
||||
principal: host
|
||||
|
||||
State configuration:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
myvm_state:
|
||||
esxvm.vm_configured:
|
||||
- vm_name: my_vm
|
||||
- cpu: {{ {'count': 4, 'cores_per_socket': 2} }}
|
||||
- memory: {{ {'size': 16384, 'unit': 'MB'} }}
|
||||
- image: rhel7_64Guest
|
||||
- version: vmx-12
|
||||
- interfaces: {{ [{
|
||||
'adapter': 'Network adapter 1',
|
||||
'name': 'my_pg1',
|
||||
'switch_type': 'distributed',
|
||||
'adapter_type': 'vmxnet3',
|
||||
'mac': '00:50:56:00:01:02',
|
||||
'connectable': { 'start_connected': true,
|
||||
'allow_guest_control': true,
|
||||
'connected': true}},
|
||||
{
|
||||
'adapter': 'Network adapter 2',
|
||||
'name': 'my_pg2',
|
||||
'switch_type': 'distributed',
|
||||
'adapter_type': 'vmxnet3',
|
||||
'mac': '00:50:56:00:01:03',
|
||||
'connectable': { 'start_connected': true,
|
||||
'allow_guest_control': true,
|
||||
'connected': true}}
|
||||
] }}
|
||||
- disks: {{ [{
|
||||
'adapter': 'Hard disk 1',
|
||||
'unit': 'MB',
|
||||
'size': 51200,
|
||||
'filename': 'my_vm/sda.vmdk',
|
||||
'datastore': 'my_datastore',
|
||||
'address': '0:0',
|
||||
'thin_provision': true,
|
||||
'eagerly_scrub': false,
|
||||
'controller': 'SCSI controller 0'},
|
||||
{
|
||||
'adapter': 'Hard disk 2',
|
||||
'unit': 'MB',
|
||||
'size': 10240,
|
||||
'filename': 'my_vm/sdb.vmdk',
|
||||
'datastore': 'my_datastore',
|
||||
'address': '0:1',
|
||||
'thin_provision': true,
|
||||
'eagerly_scrub': false,
|
||||
'controller': 'SCSI controller 0'}
|
||||
] }}
|
||||
- scsi_devices: {{ [{
|
||||
'adapter': 'SCSI controller 0',
|
||||
'type': 'paravirtual',
|
||||
'bus_sharing': 'no_sharing',
|
||||
'bus_number': 0}
|
||||
] }}
|
||||
- serial_ports: {{ [{
|
||||
'adapter': 'Serial port 1',
|
||||
'type': 'network',
|
||||
'yield': false,
|
||||
'backing': {
|
||||
'uri': 'my_uri',
|
||||
'direction': 'server',
|
||||
'filename': 'my_file'},
|
||||
'connectable': {
|
||||
'start_connected': true,
|
||||
'allow_guest_control': true,
|
||||
'connected': true}}
|
||||
] }}
|
||||
- datacenter: {{ 'my_dc' }}
|
||||
- datastore: 'my_datastore'
|
||||
- placement: {{ {'cluster': 'my_cluster'} }}
|
||||
- cd_dvd_drives: {{ [] }}
|
||||
- advanced_configs: {{ {'my_param': '1'} }}
|
||||
- template: false
|
||||
- tools: false
|
||||
- power_on: false
|
||||
- deploy: false
|
||||
|
||||
|
||||
vm_updated
|
||||
----------
|
||||
|
||||
Updates a virtual machine to a given configuration.
|
||||
|
||||
vm_created
|
||||
----------
|
||||
|
||||
Creates a virtual machine with a given configuration.
|
||||
|
||||
vm_registered
|
||||
-------------
|
||||
|
||||
Registers a virtual machine with its configuration file path.
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
||||
pyVmomi
|
||||
-------
|
||||
|
||||
PyVmomi can be installed via pip:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip install pyVmomi
|
||||
|
||||
.. note::
|
||||
|
||||
Version 6.0 of pyVmomi has some problems with SSL error handling on
|
||||
certain versions of Python. If using version 6.0 of pyVmomi, Python 2.6,
|
||||
Python 2.7.9, or newer must be present. This is due to an upstream
|
||||
dependency in pyVmomi 6.0 that is not supported in Python versions
|
||||
2.7 to 2.7.8. If the version of Python is not in the supported range,
|
||||
you will need to install an earlier version of pyVmomi.
|
||||
See `Issue #29537`_ for more information.
|
||||
|
||||
.. _Issue #29537: https://github.com/saltstack/salt/issues/29537
|
||||
|
||||
Based on the note above, to install an earlier version of pyVmomi than the
|
||||
version currently listed in PyPi, run the following:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip install pyVmomi==6.0.0.2016.4
|
||||
|
||||
Version 5.5.0.2014.1.1 is a known stable version that the original ESXi State
|
||||
Module was developed against. To be able to connect through SSPI you must
|
||||
use pyvmomi 6.0.0.2016.4 or above. The ESXVM State Module was tested with
|
||||
this version.
|
||||
|
||||
About
|
||||
-----
|
||||
|
||||
This state module was written to be used in conjunction with Salt's
|
||||
:mod:`ESXi Proxy Minion <salt.proxy.esxi>`. For a tutorial on how to use Salt's
|
||||
ESXi Proxy Minion, please refer to the
|
||||
:ref:`ESXi Proxy Minion Tutorial <tutorial-esxi-proxy>` for
|
||||
configuration examples, dependency installation instructions, how to run remote
|
||||
execution functions against ESXi hosts via a Salt Proxy Minion, and a larger state
|
||||
example.
|
||||
'''
|
||||
|
||||
# Import Python libs
|
||||
from __future__ import absolute_import
|
||||
import sys
|
||||
import logging
|
||||
|
||||
import salt.exceptions
|
||||
import salt.ext.six as six
|
||||
from salt.config.schemas.esxvm import ESXVirtualMachineConfigSchema
|
||||
|
||||
# External libraries
|
||||
try:
|
||||
import jsonschema
|
||||
HAS_JSONSCHEMA = True
|
||||
except ImportError:
|
||||
HAS_JSONSCHEMA = False
|
||||
|
||||
try:
|
||||
from pyVmomi import VmomiSupport
|
||||
HAS_PYVMOMI = True
|
||||
except ImportError:
|
||||
HAS_PYVMOMI = False
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if not HAS_JSONSCHEMA:
|
||||
return False, 'State module did not load: jsonschema not found'
|
||||
if not HAS_PYVMOMI:
|
||||
return False, 'State module did not load: pyVmomi not found'
|
||||
|
||||
# We check the supported vim versions to infer the pyVmomi version
|
||||
if 'vim25/6.0' in VmomiSupport.versionMap and \
|
||||
sys.version_info > (2, 7) and sys.version_info < (2, 7, 9):
|
||||
|
||||
return False, ('State module did not load: Incompatible versions '
|
||||
'of Python and pyVmomi present. See Issue #29537.')
|
||||
return True
|
||||
|
||||
|
||||
def vm_configured(name, vm_name, cpu, memory, image, version, interfaces,
|
||||
disks, scsi_devices, serial_ports, datacenter, datastore,
|
||||
placement, cd_dvd_drives=None, sata_controllers=None,
|
||||
advanced_configs=None, template=None, tools=True,
|
||||
power_on=False, deploy=False):
|
||||
'''
|
||||
Selects the correct operation to be executed on a virtual machine. Non-existent
|
||||
machines will be created; existing ones will be updated if the
|
||||
config differs.
|
||||
'''
|
||||
result = {'name': name,
|
||||
'result': None,
|
||||
'changes': {},
|
||||
'comment': ''}
|
||||
|
||||
log.trace('Validating virtual machine configuration')
|
||||
schema = ESXVirtualMachineConfigSchema.serialize()
|
||||
log.trace('schema = {0}'.format(schema))
|
||||
try:
|
||||
jsonschema.validate({'vm_name': vm_name,
|
||||
'cpu': cpu,
|
||||
'memory': memory,
|
||||
'image': image,
|
||||
'version': version,
|
||||
'interfaces': interfaces,
|
||||
'disks': disks,
|
||||
'scsi_devices': scsi_devices,
|
||||
'serial_ports': serial_ports,
|
||||
'cd_dvd_drives': cd_dvd_drives,
|
||||
'sata_controllers': sata_controllers,
|
||||
'datacenter': datacenter,
|
||||
'datastore': datastore,
|
||||
'placement': placement,
|
||||
'template': template,
|
||||
'tools': tools,
|
||||
'power_on': power_on,
|
||||
'deploy': deploy}, schema)
|
||||
except jsonschema.exceptions.ValidationError as exc:
|
||||
raise salt.exceptions.InvalidConfigError(exc)
|
||||
|
||||
service_instance = __salt__['vsphere.get_service_instance_via_proxy']()
|
||||
try:
|
||||
__salt__['vsphere.get_vm'](vm_name, vm_properties=['name'],
|
||||
service_instance=service_instance)
|
||||
except salt.exceptions.VMwareObjectRetrievalError:
|
||||
vm_file = __salt__['vsphere.get_vm_config_file'](
|
||||
vm_name, datacenter,
|
||||
placement, datastore,
|
||||
service_instance=service_instance)
|
||||
if vm_file:
|
||||
if __opts__['test']:
|
||||
result.update({'comment': 'The virtual machine {0}'
|
||||
' will be registered.'.format(vm_name)})
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
return result
|
||||
result = vm_registered(vm_name, datacenter, placement,
|
||||
vm_file, power_on=power_on)
|
||||
return result
|
||||
else:
|
||||
if __opts__['test']:
|
||||
result.update({'comment': 'The virtual machine {0}'
|
||||
' will be created.'.format(vm_name)})
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
return result
|
||||
if template:
|
||||
result = vm_cloned(name)
|
||||
else:
|
||||
result = vm_created(name, vm_name, cpu, memory, image, version,
|
||||
interfaces, disks, scsi_devices,
|
||||
serial_ports, datacenter, datastore,
|
||||
placement, cd_dvd_drives=cd_dvd_drives,
|
||||
advanced_configs=advanced_configs,
|
||||
power_on=power_on)
|
||||
return result
|
||||
|
||||
result = vm_updated(name, vm_name, cpu, memory, image, version,
|
||||
interfaces, disks, scsi_devices,
|
||||
serial_ports, datacenter, datastore,
|
||||
cd_dvd_drives=cd_dvd_drives,
|
||||
sata_controllers=sata_controllers,
|
||||
advanced_configs=advanced_configs,
|
||||
power_on=power_on)
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
|
||||
log.trace(result)
|
||||
return result
|
||||
|
||||
|
||||
def vm_cloned(name):
|
||||
'''
|
||||
Clones a virtual machine from a template virtual machine if it doesn't
|
||||
exist and a template is defined.
|
||||
'''
|
||||
result = {'name': name,
|
||||
'result': True,
|
||||
'changes': {},
|
||||
'comment': ''}
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def vm_updated(name, vm_name, cpu, memory, image, version, interfaces,
|
||||
disks, scsi_devices, serial_ports, datacenter, datastore,
|
||||
cd_dvd_drives=None, sata_controllers=None,
|
||||
advanced_configs=None, power_on=False):
|
||||
'''
|
||||
Updates a virtual machine configuration if there is a difference between
|
||||
the given and deployed configuration.
|
||||
'''
|
||||
result = {'name': name,
|
||||
'result': None,
|
||||
'changes': {},
|
||||
'comment': ''}
|
||||
|
||||
service_instance = __salt__['vsphere.get_service_instance_via_proxy']()
|
||||
current_config = __salt__['vsphere.get_vm_config'](
|
||||
vm_name,
|
||||
datacenter=datacenter,
|
||||
objects=False,
|
||||
service_instance=service_instance)
|
||||
|
||||
diffs = __salt__['vsphere.compare_vm_configs'](
|
||||
{'name': vm_name,
|
||||
'cpu': cpu,
|
||||
'memory': memory,
|
||||
'image': image,
|
||||
'version': version,
|
||||
'interfaces': interfaces,
|
||||
'disks': disks,
|
||||
'scsi_devices': scsi_devices,
|
||||
'serial_ports': serial_ports,
|
||||
'datacenter': datacenter,
|
||||
'datastore': datastore,
|
||||
'cd_drives': cd_dvd_drives,
|
||||
'sata_controllers': sata_controllers,
|
||||
'advanced_configs': advanced_configs},
|
||||
current_config)
|
||||
if not diffs:
|
||||
result.update({
|
||||
'result': True,
|
||||
'changes': {},
|
||||
'comment': 'Virtual machine {0} is already up to date'.format(vm_name)})
|
||||
return result
|
||||
|
||||
if __opts__['test']:
|
||||
comment = 'State vm_updated will update virtual machine \'{0}\' ' \
|
||||
'in datacenter \'{1}\':\n{2}'.format(vm_name,
|
||||
datacenter,
|
||||
'\n'.join([':\n'.join([key, difference.changes_str])
|
||||
for key, difference in six.iteritems(diffs)]))
|
||||
result.update({'result': None,
|
||||
'comment': comment})
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
return result
|
||||
|
||||
try:
|
||||
changes = __salt__['vsphere.update_vm'](vm_name, cpu, memory, image,
|
||||
version, interfaces, disks,
|
||||
scsi_devices, serial_ports,
|
||||
datacenter, datastore,
|
||||
cd_dvd_drives=cd_dvd_drives,
|
||||
sata_controllers=sata_controllers,
|
||||
advanced_configs=advanced_configs,
|
||||
service_instance=service_instance)
|
||||
except salt.exceptions.CommandExecutionError as exc:
|
||||
log.error('Error: {}'.format(str(exc)))
|
||||
if service_instance:
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
result.update({
|
||||
'result': False,
|
||||
'comment': str(exc)})
|
||||
return result
|
||||
|
||||
if power_on:
|
||||
try:
|
||||
__salt__['vsphere.power_on_vm'](vm_name, datacenter)
|
||||
except salt.exceptions.VMwarePowerOnError as exc:
|
||||
log.error('Error: {}'.format(exc))
|
||||
if service_instance:
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
result.update({
|
||||
'result': False,
|
||||
'comment': str(exc)})
|
||||
return result
|
||||
changes.update({'power_on': True})
|
||||
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
|
||||
result = {'name': name,
|
||||
'result': True,
|
||||
'changes': changes,
|
||||
'comment': 'Virtual machine '
|
||||
'{0} was updated successfully'.format(vm_name)}
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def vm_created(name, vm_name, cpu, memory, image, version, interfaces,
|
||||
disks, scsi_devices, serial_ports, datacenter, datastore,
|
||||
placement, ide_controllers=None, sata_controllers=None,
|
||||
cd_dvd_drives=None, advanced_configs=None, power_on=False):
|
||||
'''
|
||||
Creates a virtual machine with the given properties if it doesn't exist.
|
||||
'''
|
||||
result = {'name': name,
|
||||
'result': None,
|
||||
'changes': {},
|
||||
'comment': ''}
|
||||
|
||||
if __opts__['test']:
|
||||
result.update({'result': None,
|
||||
'changes': None,
|
||||
'comment': 'Virtual machine '
|
||||
'{0} will be created'.format(vm_name)})
|
||||
return result
|
||||
|
||||
service_instance = __salt__['vsphere.get_service_instance_via_proxy']()
|
||||
try:
|
||||
info = __salt__['vsphere.create_vm'](vm_name, cpu, memory, image,
|
||||
version, datacenter, datastore,
|
||||
placement, interfaces, disks,
|
||||
scsi_devices,
|
||||
serial_ports=serial_ports,
|
||||
ide_controllers=ide_controllers,
|
||||
sata_controllers=sata_controllers,
|
||||
cd_drives=cd_dvd_drives,
|
||||
advanced_configs=advanced_configs,
|
||||
service_instance=service_instance)
|
||||
except salt.exceptions.CommandExecutionError as exc:
|
||||
log.error('Error: {0}'.format(str(exc)))
|
||||
if service_instance:
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
result.update({
|
||||
'result': False,
|
||||
'comment': str(exc)})
|
||||
return result
|
||||
|
||||
if power_on:
|
||||
try:
|
||||
__salt__['vsphere.power_on_vm'](vm_name, datacenter,
|
||||
service_instance=service_instance)
|
||||
except salt.exceptions.VMwarePowerOnError as exc:
|
||||
log.error('Error: {0}'.format(exc))
|
||||
if service_instance:
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
result.update({
|
||||
'result': False,
|
||||
'comment': str(exc)})
|
||||
return result
|
||||
info['power_on'] = power_on
|
||||
|
||||
changes = {'name': vm_name, 'info': info}
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
result = {'name': name,
|
||||
'result': True,
|
||||
'changes': changes,
|
||||
'comment': 'Virtual machine '
|
||||
'{0} created successfully'.format(vm_name)}
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def vm_registered(vm_name, datacenter, placement, vm_file, power_on=False):
|
||||
'''
|
||||
Registers a virtual machine if the machine files are available on
|
||||
the main datastore.
|
||||
'''
|
||||
result = {'name': vm_name,
|
||||
'result': None,
|
||||
'changes': {},
|
||||
'comment': ''}
|
||||
|
||||
vmx_path = '{0}{1}'.format(vm_file.folderPath, vm_file.file[0].path)
|
||||
log.trace('Registering virtual machine with vmx file: {0}'.format(vmx_path))
|
||||
service_instance = __salt__['vsphere.get_service_instance_via_proxy']()
|
||||
try:
|
||||
__salt__['vsphere.register_vm'](vm_name, datacenter,
|
||||
placement, vmx_path,
|
||||
service_instance=service_instance)
|
||||
except salt.exceptions.VMwareMultipleObjectsError as exc:
|
||||
log.error('Error: {0}'.format(str(exc)))
|
||||
if service_instance:
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
result.update({'result': False,
|
||||
'comment': str(exc)})
|
||||
return result
|
||||
except salt.exceptions.VMwareVmRegisterError as exc:
|
||||
log.error('Error: {0}'.format(exc))
|
||||
if service_instance:
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
result.update({'result': False,
|
||||
'comment': str(exc)})
|
||||
return result
|
||||
|
||||
if power_on:
|
||||
try:
|
||||
__salt__['vsphere.power_on_vm'](vm_name, datacenter,
|
||||
service_instance=service_instance)
|
||||
except salt.exceptions.VMwarePowerOnError as exc:
|
||||
log.error('Error: {0}'.format(exc))
|
||||
if service_instance:
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
result.update({
|
||||
'result': False,
|
||||
'comment': str(exc)})
|
||||
return result
|
||||
__salt__['vsphere.disconnect'](service_instance)
|
||||
result.update({'result': True,
|
||||
'changes': {'name': vm_name, 'power_on': power_on},
|
||||
'comment': 'Virtual machine '
|
||||
'{0} registered successfully'.format(vm_name)})
|
||||
|
||||
return result
|
@ -761,7 +761,8 @@ def _check_directory_win(name,
|
||||
win_owner,
|
||||
win_perms=None,
|
||||
win_deny_perms=None,
|
||||
win_inheritance=None):
|
||||
win_inheritance=None,
|
||||
win_perms_reset=None):
|
||||
'''
|
||||
Check what changes need to be made on a directory
|
||||
'''
|
||||
@ -879,6 +880,20 @@ def _check_directory_win(name,
|
||||
if not win_inheritance == salt.utils.win_dacl.get_inheritance(name):
|
||||
changes['inheritance'] = win_inheritance
|
||||
|
||||
# Check reset
|
||||
if win_perms_reset:
|
||||
for user_name in perms:
|
||||
if user_name not in win_perms:
|
||||
if 'grant' in perms[user_name] and not perms[user_name]['grant']['inherited']:
|
||||
if 'remove_perms' not in changes:
|
||||
changes['remove_perms'] = {}
|
||||
changes['remove_perms'].update({user_name: perms[user_name]})
|
||||
if user_name not in win_deny_perms:
|
||||
if 'deny' in perms[user_name] and not perms[user_name]['deny']['inherited']:
|
||||
if 'remove_perms' not in changes:
|
||||
changes['remove_perms'] = {}
|
||||
changes['remove_perms'].update({user_name: perms[user_name]})
|
||||
|
||||
if changes:
|
||||
return None, 'The directory "{0}" will be changed'.format(name), changes
|
||||
|
||||
@ -1488,6 +1503,9 @@ def exists(name,
|
||||
(e.g., keytabs, private keys, etc.) have been previously satisfied before
|
||||
deployment.
|
||||
|
||||
This function does not create the file if it doesn't exist; it will return
|
||||
an error.
|
||||
|
||||
name
|
||||
Absolute path which must exist
|
||||
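
A hedged usage sketch (the path below is made up):

.. code-block:: yaml

    keytab_must_already_exist:
      file.exists:
        - name: /etc/krb5.keytab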
'''
|
||||
@ -1566,6 +1584,7 @@ def managed(name,
|
||||
win_perms=None,
|
||||
win_deny_perms=None,
|
||||
win_inheritance=True,
|
||||
win_perms_reset=False,
|
||||
**kwargs):
|
||||
r'''
|
||||
Manage a given file. This function allows for a file to be downloaded from
|
||||
@ -2072,6 +2091,13 @@ def managed(name,
|
||||
|
||||
.. versionadded:: 2017.7.0
|
||||
|
||||
win_perms_reset : False
|
||||
If ``True`` the existing DACL will be cleared and replaced with the
|
||||
settings defined in this function. If ``False``, new entries will be
|
||||
appended to the existing DACL. Default is ``False``.
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
Here's an example using the above ``win_*`` parameters:
|
||||
|
||||
.. code-block:: yaml
|
||||
@ -2314,8 +2340,13 @@ def managed(name,
|
||||
# Check and set the permissions if necessary
|
||||
if salt.utils.platform.is_windows():
|
||||
ret = __salt__['file.check_perms'](
|
||||
name, ret, win_owner, win_perms, win_deny_perms, None,
|
||||
win_inheritance)
|
||||
path=name,
|
||||
ret=ret,
|
||||
owner=win_owner,
|
||||
grant_perms=win_perms,
|
||||
deny_perms=win_deny_perms,
|
||||
inheritance=win_inheritance,
|
||||
reset=win_perms_reset)
|
||||
else:
|
||||
ret, _ = __salt__['file.check_perms'](
|
||||
name, ret, user, group, mode, attrs, follow_symlinks)
|
||||
@ -2356,8 +2387,13 @@ def managed(name,
|
||||
|
||||
if salt.utils.platform.is_windows():
|
||||
ret = __salt__['file.check_perms'](
|
||||
name, ret, win_owner, win_perms, win_deny_perms, None,
|
||||
win_inheritance)
|
||||
path=name,
|
||||
ret=ret,
|
||||
owner=win_owner,
|
||||
grant_perms=win_perms,
|
||||
deny_perms=win_deny_perms,
|
||||
inheritance=win_inheritance,
|
||||
reset=win_perms_reset)
|
||||
|
||||
if isinstance(ret['pchanges'], tuple):
|
||||
ret['result'], ret['comment'] = ret['pchanges']
|
||||
@ -2448,6 +2484,7 @@ def managed(name,
|
||||
win_perms=win_perms,
|
||||
win_deny_perms=win_deny_perms,
|
||||
win_inheritance=win_inheritance,
|
||||
win_perms_reset=win_perms_reset,
|
||||
encoding=encoding,
|
||||
encoding_errors=encoding_errors,
|
||||
**kwargs)
|
||||
@ -2517,6 +2554,7 @@ def managed(name,
|
||||
win_perms=win_perms,
|
||||
win_deny_perms=win_deny_perms,
|
||||
win_inheritance=win_inheritance,
|
||||
win_perms_reset=win_perms_reset,
|
||||
encoding=encoding,
|
||||
encoding_errors=encoding_errors,
|
||||
**kwargs)
|
||||
@ -2590,6 +2628,7 @@ def directory(name,
|
||||
win_perms=None,
|
||||
win_deny_perms=None,
|
||||
win_inheritance=True,
|
||||
win_perms_reset=False,
|
||||
**kwargs):
|
||||
r'''
|
||||
Ensure that a named directory is present and has the right perms
|
||||
@ -2751,6 +2790,13 @@ def directory(name,
|
||||
|
||||
.. versionadded:: 2017.7.0
|
||||
|
||||
win_perms_reset : False
|
||||
If ``True`` the existing DACL will be cleared and replaced with the
|
||||
settings defined in this function. If ``False``, new entries will be
|
||||
appended to the existing DACL. Default is ``False``.
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
Here's an example using the above ``win_*`` parameters:
|
||||
|
||||
.. code-block:: yaml
|
||||
@ -2855,13 +2901,23 @@ def directory(name,
|
||||
elif force:
|
||||
# Remove whatever is in the way
|
||||
if os.path.isfile(name):
|
||||
os.remove(name)
|
||||
ret['changes']['forced'] = 'File was forcibly replaced'
|
||||
if __opts__['test']:
|
||||
ret['pchanges']['forced'] = 'File was forcibly replaced'
|
||||
else:
|
||||
os.remove(name)
|
||||
ret['changes']['forced'] = 'File was forcibly replaced'
|
||||
elif __salt__['file.is_link'](name):
|
||||
__salt__['file.remove'](name)
|
||||
ret['changes']['forced'] = 'Symlink was forcibly replaced'
|
||||
if __opts__['test']:
|
||||
ret['pchanges']['forced'] = 'Symlink was forcibly replaced'
|
||||
else:
|
||||
__salt__['file.remove'](name)
|
||||
ret['changes']['forced'] = 'Symlink was forcibly replaced'
|
||||
else:
|
||||
__salt__['file.remove'](name)
|
||||
if __opts__['test']:
|
||||
ret['pchanges']['forced'] = 'Directory was forcibly replaced'
|
||||
else:
|
||||
__salt__['file.remove'](name)
|
||||
ret['changes']['forced'] = 'Directory was forcibly replaced'
|
||||
else:
|
||||
if os.path.isfile(name):
|
||||
return _error(
|
||||
@ -2874,17 +2930,26 @@ def directory(name,
|
||||
|
||||
# Check directory?
|
||||
if salt.utils.platform.is_windows():
|
||||
presult, pcomment, ret['pchanges'] = _check_directory_win(
|
||||
name, win_owner, win_perms, win_deny_perms, win_inheritance)
|
||||
presult, pcomment, pchanges = _check_directory_win(
|
||||
name=name,
|
||||
win_owner=win_owner,
|
||||
win_perms=win_perms,
|
||||
win_deny_perms=win_deny_perms,
|
||||
win_inheritance=win_inheritance,
|
||||
win_perms_reset=win_perms_reset)
|
||||
else:
|
||||
presult, pcomment, ret['pchanges'] = _check_directory(
|
||||
presult, pcomment, pchanges = _check_directory(
|
||||
name, user, group, recurse or [], dir_mode, clean, require,
|
||||
exclude_pat, max_depth, follow_symlinks)
|
||||
|
||||
if __opts__['test']:
|
||||
if pchanges:
|
||||
ret['pchanges'].update(pchanges)
|
||||
|
||||
# Don't run through the rest of the function if there are no changes to be
|
||||
# made
|
||||
if not ret['pchanges'] or __opts__['test']:
|
||||
ret['result'] = presult
|
||||
ret['comment'] = pcomment
|
||||
ret['changes'] = ret['pchanges']
|
||||
return ret
|
||||
|
||||
if not os.path.isdir(name):
|
||||
@ -2900,8 +2965,13 @@ def directory(name,
|
||||
if not os.path.isdir(drive):
|
||||
return _error(
|
||||
ret, 'Drive {0} is not mapped'.format(drive))
|
||||
__salt__['file.makedirs'](name, win_owner, win_perms,
|
||||
win_deny_perms, win_inheritance)
|
||||
__salt__['file.makedirs'](
|
||||
path=name,
|
||||
owner=win_owner,
|
||||
grant_perms=win_perms,
|
||||
deny_perms=win_deny_perms,
|
||||
inheritance=win_inheritance,
|
||||
reset=win_perms_reset)
|
||||
else:
|
||||
__salt__['file.makedirs'](name, user=user, group=group,
|
||||
mode=dir_mode)
|
||||
@ -2910,8 +2980,13 @@ def directory(name,
|
||||
ret, 'No directory to create {0} in'.format(name))
|
||||
|
||||
if salt.utils.platform.is_windows():
|
||||
__salt__['file.mkdir'](name, win_owner, win_perms, win_deny_perms,
|
||||
win_inheritance)
|
||||
__salt__['file.mkdir'](
|
||||
path=name,
|
||||
owner=win_owner,
|
||||
grant_perms=win_perms,
|
||||
deny_perms=win_deny_perms,
|
||||
inheritance=win_inheritance,
|
||||
reset=win_perms_reset)
|
||||
else:
|
||||
__salt__['file.mkdir'](name, user=user, group=group, mode=dir_mode)
|
||||
|
||||
@ -2925,7 +3000,13 @@ def directory(name,
|
||||
if not children_only:
|
||||
if salt.utils.platform.is_windows():
|
||||
ret = __salt__['file.check_perms'](
|
||||
name, ret, win_owner, win_perms, win_deny_perms, None, win_inheritance)
|
||||
path=name,
|
||||
ret=ret,
|
||||
owner=win_owner,
|
||||
grant_perms=win_perms,
|
||||
deny_perms=win_deny_perms,
|
||||
inheritance=win_inheritance,
|
||||
reset=win_perms_reset)
|
||||
else:
|
||||
ret, perms = __salt__['file.check_perms'](
|
||||
name, ret, user, group, dir_mode, None, follow_symlinks)
|
||||
@ -2996,8 +3077,13 @@ def directory(name,
|
||||
try:
|
||||
if salt.utils.platform.is_windows():
|
||||
ret = __salt__['file.check_perms'](
|
||||
full, ret, win_owner, win_perms, win_deny_perms, None,
|
||||
win_inheritance)
|
||||
path=full,
|
||||
ret=ret,
|
||||
owner=win_owner,
|
||||
grant_perms=win_perms,
|
||||
deny_perms=win_deny_perms,
|
||||
inheritance=win_inheritance,
|
||||
reset=win_perms_reset)
|
||||
else:
|
||||
ret, _ = __salt__['file.check_perms'](
|
||||
full, ret, user, group, file_mode, None, follow_symlinks)
|
||||
@ -3011,8 +3097,13 @@ def directory(name,
|
||||
try:
|
||||
if salt.utils.platform.is_windows():
|
||||
ret = __salt__['file.check_perms'](
|
||||
full, ret, win_owner, win_perms, win_deny_perms, None,
|
||||
win_inheritance)
|
||||
path=full,
|
||||
ret=ret,
|
||||
owner=win_owner,
|
||||
grant_perms=win_perms,
|
||||
deny_perms=win_deny_perms,
|
||||
inheritance=win_inheritance,
|
||||
reset=win_perms_reset)
|
||||
else:
|
||||
ret, _ = __salt__['file.check_perms'](
|
||||
full, ret, user, group, dir_mode, None, follow_symlinks)
|
||||
@ -3034,7 +3125,8 @@ def directory(name,
|
||||
if children_only:
|
||||
ret['comment'] = u'Directory {0}/* updated'.format(name)
|
||||
else:
|
||||
ret['comment'] = u'Directory {0} updated'.format(name)
|
||||
if ret['changes']:
|
||||
ret['comment'] = u'Directory {0} updated'.format(name)
|
||||
|
||||
if __opts__['test']:
|
||||
ret['comment'] = 'Directory {0} not updated'.format(name)
|
||||
|
105
salt/states/glance_image.py
Normal file
@ -0,0 +1,105 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Glance Images
|
||||
========================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.glanceng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create image:
|
||||
glance_image.present:
|
||||
- name: cirros
|
||||
- filename: cirros.raw
|
||||
- image_format: raw
|
||||
|
||||
delete image:
|
||||
glance_image.absent:
|
||||
- name: cirros
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'glance_image'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'glanceng.image_get' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The glanceng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure image exists and is up-to-date
|
||||
|
||||
name
|
||||
Name of the image
|
||||
|
||||
enabled
|
||||
Boolean to control if image is enabled
|
||||
|
||||
description
|
||||
An arbitrary description of the image
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['glanceng.setup_clouds'](auth)
|
||||
|
||||
image = __salt__['glanceng.image_get'](name=name)
|
||||
|
||||
if not image:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Image {} will be created.'.format(name)
|
||||
return ret
|
||||
|
||||
kwargs['name'] = name
|
||||
image = __salt__['glanceng.image_create'](**kwargs)
|
||||
ret['changes'] = image
|
||||
ret['comment'] = 'Created image'
|
||||
return ret
|
||||
|
||||
# TODO(SamYaple): Compare and update image properties here
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None):
|
||||
'''
|
||||
Ensure image does not exist
|
||||
|
||||
name
|
||||
Name of the image
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['glanceng.setup_clouds'](auth)
|
||||
|
||||
image = __salt__['glanceng.image_get'](name=name)
|
||||
|
||||
if image:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'name': name}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Image {} will be deleted.'.format(name)
|
||||
return ret
|
||||
|
||||
__salt__['glanceng.image_delete'](name=image)
|
||||
ret['changes']['id'] = image.id
|
||||
ret['comment'] = 'Deleted image'
|
||||
|
||||
return ret
|
122
salt/states/keystone_domain.py
Normal file
@ -0,0 +1,122 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Keystone Domains
|
||||
========================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.keystoneng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create domain:
|
||||
keystone_domain.present:
|
||||
- name: domain1
|
||||
|
||||
create domain with optional params:
|
||||
keystone_domain.present:
|
||||
- name: domain1
|
||||
- enabled: False
|
||||
- description: 'my domain'
|
||||
|
||||
delete domain:
|
||||
keystone_domain.absent:
|
||||
- name: domain1
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'keystone_domain'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'keystoneng.domain_get' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The keystoneng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure domain exists and is up-to-date
|
||||
|
||||
name
|
||||
Name of the domain
|
||||
|
||||
enabled
|
||||
Boolean to control if domain is enabled
|
||||
|
||||
description
|
||||
An arbitrary description of the domain
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
domain = __salt__['keystoneng.domain_get'](name=name)
|
||||
|
||||
if not domain:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Domain {} will be created.'.format(name)
|
||||
return ret
|
||||
|
||||
kwargs['name'] = name
|
||||
domain = __salt__['keystoneng.domain_create'](**kwargs)
|
||||
ret['changes'] = domain
|
||||
ret['comment'] = 'Created domain'
|
||||
return ret
|
||||
|
||||
changes = __salt__['keystoneng.compare_changes'](domain, **kwargs)
|
||||
if changes:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = changes
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Domain {} will be updated.'.format(name)
|
||||
return ret
|
||||
|
||||
kwargs['domain_id'] = domain.id
|
||||
__salt__['keystoneng.domain_update'](**kwargs)
|
||||
ret['changes'].update(changes)
|
||||
ret['comment'] = 'Updated domain'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None):
|
||||
'''
|
||||
Ensure domain does not exist
|
||||
|
||||
name
|
||||
Name of the domain
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
domain = __salt__['keystoneng.domain_get'](name=name)
|
||||
|
||||
if domain:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'name': name}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Domain {} will be deleted.'.format(name)
|
||||
return ret
|
||||
|
||||
__salt__['keystoneng.domain_delete'](name=domain)
|
||||
ret['changes']['id'] = domain.id
|
||||
ret['comment'] = 'Deleted domain'
|
||||
|
||||
return ret
|
185
salt/states/keystone_endpoint.py
Normal file
@ -0,0 +1,185 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Keystone Endpoints
|
||||
==========================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.keystoneng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create endpoint:
|
||||
keystone_endpoint.present:
|
||||
- name: public
|
||||
- url: https://example.org:9292
|
||||
- region: RegionOne
|
||||
- service_name: glance
|
||||
|
||||
destroy endpoint:
|
||||
keystone_endpoint.absent:
|
||||
- name: public
|
||||
- url: https://example.org:9292
|
||||
- region: RegionOne
|
||||
- service_name: glance
|
||||
|
||||
create multiple endpoints:
|
||||
keystone_endpoint.absent:
|
||||
- names:
|
||||
- public
|
||||
- admin
|
||||
- internal
|
||||
- url: https://example.org:9292
|
||||
- region: RegionOne
|
||||
- service_name: glance
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'keystone_endpoint'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'keystoneng.endpoint_get' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The keystoneng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def _common(ret, name, service_name, kwargs):
|
||||
'''
|
||||
Returns: tuple whose first element is a bool indicating success or failure
|
||||
and the second element is either a ret dict for salt or an object
|
||||
'''
|
||||
if 'interface' not in kwargs and 'public_url' not in kwargs:
|
||||
kwargs['interface'] = name
|
||||
service = __salt__['keystoneng.service_get'](name_or_id=service_name)
|
||||
|
||||
if not service:
|
||||
ret['comment'] = 'Cannot find service'
|
||||
ret['result'] = False
|
||||
return (False, ret)
|
||||
|
||||
filters = kwargs.copy()
|
||||
filters.pop('enabled', None)
|
||||
filters.pop('url', None)
|
||||
filters['service_id'] = service.id
|
||||
kwargs['service_name_or_id'] = service.id
|
||||
endpoints = __salt__['keystoneng.endpoint_search'](filters=filters)
|
||||
|
||||
if len(endpoints) > 1:
|
||||
ret['comment'] = "Multiple endpoints match criteria"
|
||||
ret['result'] = False
|
||||
return (False, ret)
|
||||
endpoint = endpoints[0] if endpoints else None
|
||||
return (True, endpoint)
|
||||
|
||||
|
||||
def present(name, service_name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure an endpoint exists and is up-to-date
|
||||
|
||||
name
|
||||
Interface name
|
||||
|
||||
url
|
||||
URL of the endpoint
|
||||
|
||||
service_name
|
||||
Service name or ID
|
||||
|
||||
region
|
||||
The region name to assign the endpoint
|
||||
|
||||
enabled
|
||||
Boolean to control if endpoint is enabled
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
success, val = _, endpoint = _common(ret, name, service_name, kwargs)
|
||||
if not success:
|
||||
return val
|
||||
|
||||
if not endpoint:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Endpoint will be created.'
|
||||
return ret
|
||||
|
||||
# NOTE(SamYaple): Endpoints are returned as a list which can contain
|
||||
# several items depending on the options passed
|
||||
endpoints = __salt__['keystoneng.endpoint_create'](**kwargs)
|
||||
if len(endpoints) == 1:
|
||||
ret['changes'] = endpoints[0]
|
||||
else:
|
||||
for i, endpoint in enumerate(endpoints):
|
||||
ret['changes'][i] = endpoint
|
||||
ret['comment'] = 'Created endpoint'
|
||||
return ret
|
||||
|
||||
changes = __salt__['keystoneng.compare_changes'](endpoint, **kwargs)
|
||||
if changes:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = changes
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Endpoint will be updated.'
|
||||
return ret
|
||||
|
||||
kwargs['endpoint_id'] = endpoint.id
|
||||
__salt__['keystoneng.endpoint_update'](**kwargs)
|
||||
ret['changes'].update(changes)
|
||||
ret['comment'] = 'Updated endpoint'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, service_name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure an endpoint does not exist
|
||||
|
||||
name
|
||||
Interface name
|
||||
|
||||
url
|
||||
URL of the endpoint
|
||||
|
||||
service_name
|
||||
Service name or ID
|
||||
|
||||
region
|
||||
The region name to assign the endpoint
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
    success, endpoint = _common(ret, name, service_name, kwargs)
    if not success:
        return endpoint
|
||||
|
||||
if endpoint:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': endpoint.id}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Endpoint will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['keystoneng.endpoint_delete'](id=endpoint.id)
|
||||
ret['changes']['id'] = endpoint.id
|
||||
ret['comment'] = 'Deleted endpoint'
|
||||
|
||||
return ret
|
140
salt/states/keystone_group.py
Normal file
@ -0,0 +1,140 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Keystone Groups
|
||||
=======================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.keystoneng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create group:
|
||||
keystone_group.present:
|
||||
- name: group1
|
||||
|
||||
delete group:
|
||||
keystone_group.absent:
|
||||
- name: group1
|
||||
|
||||
create group with optional params:
|
||||
keystone_group.present:
|
||||
- name: group1
|
||||
- domain: domain1
|
||||
- description: 'my group'
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'keystone_group'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'keystoneng.group_get' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The keystoneng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def _common(kwargs):
|
||||
'''
|
||||
Returns: None if group wasn't found, otherwise a group object
|
||||
'''
|
||||
search_kwargs = {'name': kwargs['name']}
|
||||
if 'domain' in kwargs:
|
||||
domain = __salt__['keystoneng.get_entity'](
|
||||
'domain', name=kwargs.pop('domain'))
|
||||
domain_id = domain.id if hasattr(domain, 'id') else domain
|
||||
search_kwargs['filters'] = {'domain_id': domain_id}
|
||||
kwargs['domain'] = domain
|
||||
|
||||
return __salt__['keystoneng.group_get'](**search_kwargs)
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a group exists and is up-to-date
|
||||
|
||||
name
|
||||
Name of the group
|
||||
|
||||
domain
|
||||
The name or id of the domain
|
||||
|
||||
description
|
||||
An arbitrary description of the group
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
group = _common(kwargs)
|
||||
|
||||
if group is None:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Group will be created.'
|
||||
return ret
|
||||
|
||||
group = __salt__['keystoneng.group_create'](**kwargs)
|
||||
ret['changes'] = group
|
||||
ret['comment'] = 'Created group'
|
||||
return ret
|
||||
|
||||
changes = __salt__['keystoneng.compare_changes'](group, **kwargs)
|
||||
if changes:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = changes
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Group will be updated.'
|
||||
return ret
|
||||
|
||||
__salt__['keystoneng.group_update'](**kwargs)
|
||||
ret['changes'].update(changes)
|
||||
ret['comment'] = 'Updated group'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure group does not exist
|
||||
|
||||
name
|
||||
Name of the group
|
||||
|
||||
domain
|
||||
The name or id of the domain
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
group = _common(kwargs)
|
||||
|
||||
if group:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': group.id}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Group will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['keystoneng.group_delete'](name=group)
|
||||
ret['changes']['id'] = group.id
|
||||
ret['comment'] = 'Deleted group'
|
||||
|
||||
return ret
|
141
salt/states/keystone_project.py
Normal file
@ -0,0 +1,141 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Keystone Projects
|
||||
=========================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.keystoneng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create project:
|
||||
keystone_project.present:
|
||||
- name: project1
|
||||
|
||||
delete project:
|
||||
keystone_project.absent:
|
||||
- name: project1
|
||||
|
||||
create project with optional params:
|
||||
keystone_project.present:
|
||||
- name: project1
|
||||
- domain: domain1
|
||||
- enabled: False
|
||||
- description: 'my project'
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'keystone_project'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'keystoneng.project_get' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The keystoneng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def _common(name, kwargs):
|
||||
'''
|
||||
Returns: None if project wasn't found, otherwise a project object
|
||||
'''
|
||||
search_kwargs = {'name': name}
|
||||
if 'domain' in kwargs:
|
||||
domain = __salt__['keystoneng.get_entity'](
|
||||
'domain', name=kwargs.pop('domain'))
|
||||
domain_id = domain.id if hasattr(domain, 'id') else domain
|
||||
search_kwargs['domain_id'] = domain_id
|
||||
kwargs['domain_id'] = domain_id
|
||||
|
||||
return __salt__['keystoneng.project_get'](**search_kwargs)
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a project exists and is up-to-date
|
||||
|
||||
name
|
||||
Name of the project
|
||||
|
||||
domain
|
||||
The name or id of the domain
|
||||
|
||||
description
|
||||
An arbitrary description of the project
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
project = _common(name, kwargs)
|
||||
|
||||
if project is None:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Project will be created.'
|
||||
return ret
|
||||
|
||||
project = __salt__['keystoneng.project_create'](**kwargs)
|
||||
ret['changes'] = project
|
||||
ret['comment'] = 'Created project'
|
||||
return ret
|
||||
|
||||
changes = __salt__['keystoneng.compare_changes'](project, **kwargs)
|
||||
if changes:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = changes
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Project will be updated.'
|
||||
return ret
|
||||
|
||||
__salt__['keystoneng.project_update'](**kwargs)
|
||||
ret['changes'].update(changes)
|
||||
ret['comment'] = 'Updated project'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a project does not exist
|
||||
|
||||
name
|
||||
Name of the project
|
||||
|
||||
domain
|
||||
The name or id of the domain
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
project = _common(name, kwargs)
|
||||
|
||||
if project:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': project.id}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Project will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['keystoneng.project_delete'](name=project)
|
||||
ret['changes']['id'] = project.id
|
||||
ret['comment'] = 'Deleted project'
|
||||
|
||||
return ret
|
106
salt/states/keystone_role.py
Normal file
@ -0,0 +1,106 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Keystone Roles
|
||||
======================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.keystoneng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create role:
|
||||
keystone_role.present:
|
||||
- name: role1
|
||||
|
||||
delete role:
|
||||
keystone_role.absent:
|
||||
- name: role1
|
||||
|
||||
create role with optional params:
|
||||
keystone_role.present:
|
||||
- name: role1
|
||||
- description: 'my group'
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'keystone_role'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'keystoneng.role_get' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The keystoneng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a role exists
|
||||
|
||||
name
|
||||
Name of the role
|
||||
|
||||
description
|
||||
An arbitrary description of the role
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
role = __salt__['keystoneng.role_get'](**kwargs)
|
||||
|
||||
if not role:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Role will be created.'
|
||||
return ret
|
||||
|
||||
role = __salt__['keystoneng.role_create'](**kwargs)
|
||||
ret['changes']['id'] = role.id
|
||||
ret['changes']['name'] = role.name
|
||||
ret['comment'] = 'Created role'
|
||||
return ret
|
||||
# NOTE(SamYaple): Update support pending https://review.openstack.org/#/c/496992/
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure role does not exist
|
||||
|
||||
name
|
||||
Name of the role
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
role = __salt__['keystoneng.role_get'](**kwargs)
|
||||
|
||||
if role:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': role.id}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Role will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['keystoneng.role_delete'](name=role)
|
||||
ret['changes']['id'] = role.id
|
||||
ret['comment'] = 'Deleted role'
|
||||
|
||||
return ret
|
140
salt/states/keystone_role_grant.py
Normal file
@ -0,0 +1,140 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Keystone Role Grants
|
||||
============================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.keystoneng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
    create role grant:
      keystone_role_grant.present:
        - name: role1
        - project: project1
        - user: user1

    revoke role grant:
      keystone_role_grant.absent:
        - name: role1
        - project: project1
        - user: user1

    grant role to a group in a domain:
      keystone_role_grant.present:
        - name: role1
        - domain: domain1
        - group: group1
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'keystone_role_grant'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'keystoneng.role_grant' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The keystoneng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def _get_filters(kwargs):
|
||||
role_kwargs = {'name': kwargs.pop('role')}
|
||||
if 'role_domain' in kwargs:
|
||||
domain = __salt__['keystoneng.get_entity'](
|
||||
'domain', name=kwargs.pop('role_domain'))
|
||||
if domain:
|
||||
role_kwargs['domain_id'] = domain.id \
|
||||
if hasattr(domain, 'id') else domain
|
||||
role = __salt__['keystoneng.role_get'](**role_kwargs)
|
||||
kwargs['name'] = role
|
||||
filters = {'role': role.id if hasattr(role, 'id') else role}
|
||||
|
||||
if 'domain' in kwargs:
|
||||
domain = __salt__['keystoneng.get_entity'](
|
||||
'domain', name=kwargs.pop('domain'))
|
||||
kwargs['domain'] = filters['domain'] = \
|
||||
domain.id if hasattr(domain, 'id') else domain
|
||||
|
||||
if 'project' in kwargs:
|
||||
project_kwargs = {'name': kwargs.pop('project')}
|
||||
if 'project_domain' in kwargs:
|
||||
domain = __salt__['keystoneng.get_entity'](
|
||||
'domain', name=kwargs.pop('project_domain'))
|
||||
if domain:
|
||||
project_kwargs['domain_id'] = domain.id
|
||||
project = __salt__['keystoneng.get_entity'](
|
||||
'project', **project_kwargs)
|
||||
kwargs['project'] = project
|
||||
filters['project'] = project.id if hasattr(project, 'id') else project
|
||||
|
||||
if 'user' in kwargs:
|
||||
user_kwargs = {'name': kwargs.pop('user')}
|
||||
if 'user_domain' in kwargs:
|
||||
domain = __salt__['keystoneng.get_entity'](
|
||||
'domain', name=kwargs.pop('user_domain'))
|
||||
if domain:
|
||||
user_kwargs['domain_id'] = domain.id
|
||||
user = __salt__['keystoneng.get_entity']('user', **user_kwargs)
|
||||
kwargs['user'] = user
|
||||
filters['user'] = user.id if hasattr(user, 'id') else user
|
||||
|
||||
if 'group' in kwargs:
|
||||
group_kwargs = {'name': kwargs['group']}
|
||||
if 'group_domain' in kwargs:
|
||||
domain = __salt__['keystoneng.get_entity'](
|
||||
'domain', name=kwargs.pop('group_domain'))
|
||||
if domain:
|
||||
group_kwargs['domain_id'] = domain.id
|
||||
group = __salt__['keystoneng.get_entity']('group', **group_kwargs)
|
||||
|
||||
kwargs['group'] = group
|
||||
filters['group'] = group.id if hasattr(group, 'id') else group
|
||||
|
||||
return filters, kwargs
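# A rough sketch of the shapes this helper produces (the role, project and
# user names below are hypothetical, not taken from this change):
#
#   filters, kwargs = _get_filters({'role': 'admin',
#                                   'project': 'demo',
#                                   'user': 'bob'})
#   # filters -> {'role': <role id>, 'project': <project id>, 'user': <user id>}
#   # kwargs  -> {'name': <Role object>, 'project': <Project object>,
#   #             'user': <User object>}
#
# ``filters`` feeds keystoneng.role_assignment_list, while ``kwargs`` carries
# the resolved objects for keystoneng.role_grant/role_revoke.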
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
if 'role' not in kwargs:
|
||||
kwargs['role'] = name
|
||||
filters, kwargs = _get_filters(kwargs)
|
||||
|
||||
grants = __salt__['keystoneng.role_assignment_list'](filters=filters)
|
||||
|
||||
if not grants:
|
||||
__salt__['keystoneng.role_grant'](**kwargs)
|
||||
for k, v in filters.items():
|
||||
ret['changes'][k] = v
|
||||
ret['comment'] = 'Granted role assignment'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None, **kwargs):
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
if 'role' not in kwargs:
|
||||
kwargs['role'] = name
|
||||
filters, kwargs = _get_filters(kwargs)
|
||||
|
||||
grants = __salt__['keystoneng.role_assignment_list'](filters=filters)
|
||||
|
||||
if grants:
|
||||
__salt__['keystoneng.role_revoke'](**kwargs)
|
||||
for k, v in filters.items():
|
||||
ret['changes'][k] = v
|
||||
ret['comment'] = 'Revoked role assignment'
|
||||
|
||||
return ret
|
128
salt/states/keystone_service.py
Normal file
@ -0,0 +1,128 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Keystone Services
|
||||
=========================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.keystoneng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create service:
|
||||
keystone_service.present:
|
||||
- name: glance
|
||||
- type: image
|
||||
|
||||
delete service:
|
||||
keystone_service.absent:
|
||||
- name: glance
|
||||
|
||||
create service with optional params:
|
||||
keystone_service.present:
|
||||
- name: glance
|
||||
- type: image
|
||||
- enabled: False
|
||||
- description: 'OpenStack Image'
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'keystone_service'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'keystoneng.service_get' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The keystoneng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a service exists and is up-to-date
|
||||
|
||||
name
|
||||
Name of the service
|
||||
|
||||
type
|
||||
Service type
|
||||
|
||||
enabled
|
||||
Boolean to control if service is enabled
|
||||
|
||||
description
|
||||
An arbitrary description of the service
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
service = __salt__['keystoneng.service_get'](name=name)
|
||||
|
||||
if service is None:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Service will be created.'
|
||||
return ret
|
||||
|
||||
kwargs['name'] = name
|
||||
service = __salt__['keystoneng.service_create'](**kwargs)
|
||||
ret['changes'] = service
|
||||
ret['comment'] = 'Created service'
|
||||
return ret
|
||||
|
||||
changes = __salt__['keystoneng.compare_changes'](service, **kwargs)
|
||||
if changes:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = changes
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Service will be updated.'
|
||||
return ret
|
||||
|
||||
kwargs['name'] = service
|
||||
__salt__['keystoneng.service_update'](**kwargs)
|
||||
ret['changes'].update(changes)
|
||||
ret['comment'] = 'Updated service'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None):
|
||||
'''
|
||||
Ensure service does not exist
|
||||
|
||||
name
|
||||
Name of the service
|
||||
'''
|
||||
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
service = __salt__['keystoneng.service_get'](name=name)
|
||||
|
||||
if service:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': service.id}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Service will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['keystoneng.service_delete'](name=service)
|
||||
ret['changes']['id'] = service.id
|
||||
ret['comment'] = 'Deleted service'
|
||||
|
||||
return ret
|
153
salt/states/keystone_user.py
Normal file
@ -0,0 +1,153 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Keystone Users
|
||||
======================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.keystoneng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create user:
|
||||
keystone_user.present:
|
||||
- name: user1
|
||||
|
||||
delete user:
|
||||
keystone_user.absent:
|
||||
- name: user1
|
||||
|
||||
create user with optional params:
|
||||
keystone_user.present:
|
||||
- name: user1
|
||||
- domain: domain1
|
||||
- enabled: False
|
||||
- password: password123
|
||||
- email: "user1@example.org"
|
||||
- description: 'my user'
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'keystone_user'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'keystoneng.user_get' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The keystoneng execution module failed to load: shade python module is not available')
|
||||
|
||||
|
||||
def _common(kwargs):
|
||||
'''
|
||||
Returns: None if user wasn't found, otherwise a user object
|
||||
'''
|
||||
search_kwargs = {'name': kwargs['name']}
|
||||
if 'domain' in kwargs:
|
||||
domain = __salt__['keystoneng.get_entity'](
|
||||
'domain', name=kwargs.pop('domain'))
|
||||
domain_id = domain.id if hasattr(domain, 'id') else domain
|
||||
search_kwargs['domain_id'] = domain_id
|
||||
kwargs['domain_id'] = domain_id
|
||||
|
||||
return __salt__['keystoneng.user_get'](**search_kwargs)
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a user exists and is up-to-date
|
||||
|
||||
name
|
||||
Name of the user
|
||||
|
||||
domain
|
||||
The name or id of the domain
|
||||
|
||||
enabled
|
||||
Boolean to control if the user is enabled
|
||||
|
||||
description
|
||||
An arbitrary description of the user
|
||||
|
||||
password
|
||||
The user password
|
||||
|
||||
email
|
||||
The user's email address
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
user = _common(kwargs)
|
||||
|
||||
if user is None:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'User will be created.'
|
||||
return ret
|
||||
|
||||
user = __salt__['keystoneng.user_create'](**kwargs)
|
||||
ret['changes'] = user
|
||||
ret['comment'] = 'Created user'
|
||||
return ret
|
||||
|
||||
changes = __salt__['keystoneng.compare_changes'](user, **kwargs)
|
||||
if changes:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = changes
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'User will be updated.'
|
||||
return ret
|
||||
|
||||
kwargs['name'] = user
|
||||
__salt__['keystoneng.user_update'](**kwargs)
|
||||
ret['changes'].update(changes)
|
||||
ret['comment'] = 'Updated user'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure the user does not exist
|
||||
|
||||
name
|
||||
Name of the user
|
||||
|
||||
domain
|
||||
The name or id of the domain
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['keystoneng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
user = _common(kwargs)
|
||||
|
||||
if user:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': user.id}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'User will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['keystoneng.user_delete'](name=user)
|
||||
ret['changes']['id'] = user.id
|
||||
ret['comment'] = 'Deleted user'
|
||||
|
||||
return ret
|
@ -111,6 +111,14 @@ def present(name,
|
||||
# check if user exists
|
||||
users = __salt__['mongodb.user_find'](name, user, password, host, port, database, authdb)
|
||||
if len(users) > 0:
|
||||
# check for errors returned in users e.g.
|
||||
# users= (False, 'Failed to connect to MongoDB database localhost:27017')
|
||||
# users= (False, 'not authorized on admin to execute command { usersInfo: "root" }')
|
||||
if not users[0]:
|
||||
ret['result'] = False
|
||||
ret['comment'] = "Mongo Err: "+str(users[1])
|
||||
return ret
|
||||
|
||||
# check each user occurrence
|
||||
for usr in users:
|
||||
# prepare empty list for current roles
|
||||
|
161
salt/states/neutron_network.py
Normal file
@ -0,0 +1,161 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Neutron Networks
|
||||
=========================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.neutronng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create network:
|
||||
neutron_network.present:
|
||||
- name: network1
|
||||
|
||||
delete network:
|
||||
neutron_network.absent:
|
||||
- name: network1
|
||||
|
||||
create network with optional params:
|
||||
neutron_network.present:
|
||||
- name: network1
|
||||
- vlan: 200
|
||||
- shared: False
|
||||
- external: False
|
||||
- project: project1
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'neutron_network'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'neutronng.list_networks' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The neutronng execution module failed to load:\
|
||||
shade python module is not available')
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a network exists and is up-to-date
|
||||
|
||||
name
|
||||
Name of the network
|
||||
|
||||
provider
|
||||
A dict of network provider options.
|
||||
|
||||
shared
|
||||
Set the network as shared.
|
||||
|
||||
external
|
||||
Whether this network is externally accessible.
|
||||
|
||||
admin_state_up
|
||||
Set the network administrative state to up.
|
||||
|
||||
vlan
|
||||
Vlan ID. Alias for
|
||||
provider:
|
||||
- physical_network: provider
|
||||
- network_type: vlan
|
||||
- segmentation_id: (vlan id)
|
||||
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['neutronng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
network = __salt__['neutronng.network_get'](name=name)
|
||||
|
||||
if network is None:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Network will be created.'
|
||||
return ret
|
||||
|
||||
if 'vlan' in kwargs:
|
||||
kwargs['provider'] = {"physical_network": "provider",
|
||||
"network_type": "vlan",
|
||||
"segmentation_id": kwargs['vlan']}
|
||||
del kwargs['vlan']
|
||||
|
||||
if 'project' in kwargs:
|
||||
projectname = kwargs['project']
|
||||
project = __salt__['keystoneng.project_get'](name=projectname)
|
||||
if project:
|
||||
kwargs['project_id'] = project.id
|
||||
del kwargs['project']
|
||||
else:
|
||||
ret['result'] = False
|
||||
ret['comment'] = "Project:{} not found.".format(projectname)
|
||||
return ret
|
||||
|
||||
network = __salt__['neutronng.network_create'](**kwargs)
|
||||
ret['changes'] = network
|
||||
ret['comment'] = 'Created network'
|
||||
return ret
|
||||
|
||||
changes = __salt__['neutronng.compare_changes'](network, **kwargs)
|
||||
|
||||
# there's no method for network update in shade right now;
|
||||
# can only delete and recreate
|
||||
if changes:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = changes
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Network will be updated.'
|
||||
return ret
|
||||
|
||||
__salt__['neutronng.network_delete'](name=network)
|
||||
__salt__['neutronng.network_create'](**kwargs)
|
||||
ret['changes'].update(changes)
|
||||
ret['comment'] = 'Updated network'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a network does not exist
|
||||
|
||||
name
|
||||
Name of the network
|
||||
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['neutronng.setup_clouds'](auth)
|
||||
|
||||
kwargs['name'] = name
|
||||
network = __salt__['neutronng.network_get'](name=name)
|
||||
|
||||
if network:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': network.id}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Network will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['neutronng.network_delete'](name=network)
|
||||
ret['changes']['id'] = network.id
|
||||
ret['comment'] = 'Deleted network'
|
||||
|
||||
return ret
|
158
salt/states/neutron_secgroup.py
Normal file
@ -0,0 +1,158 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Neutron Security Groups
|
||||
===============================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.neutronng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create security group:
|
||||
neutron_secgroup.present:
|
||||
- name: security_group1
|
||||
- description: "Very Secure Security Group"
|
||||
|
||||
delete security group:
|
||||
neutron_secgroup.absent:
|
||||
- name_or_id: security_group1
|
||||
- project_name: Project1
|
||||
|
||||
create security group with optional params:
|
||||
neutron_secgroup.present:
|
||||
- name: security_group1
|
||||
- description: "Very Secure Security Group"
|
||||
- project_id: 1dcac318a83b4610b7a7f7ba01465548
|
||||
|
||||
create security group with project name:
|
||||
neutron_secgroup.present:
|
||||
- name: security_group1
|
||||
- description: "Very Secure Security Group"
|
||||
- project_name: Project1
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'neutron_secgroup'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'neutronng.list_subnets' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The neutronng execution module failed to load:\
|
||||
shade python module is not available')
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a security group exists.
|
||||
|
||||
You can supply either project_name or project_id.
|
||||
|
||||
Creating a default security group will not show up as a change;
|
||||
it gets created through the lookup process.
|
||||
|
||||
name
|
||||
Name of the security group
|
||||
|
||||
description
|
||||
Description of the security group
|
||||
|
||||
project_name
|
||||
Name of Project
|
||||
|
||||
project_id
|
||||
ID of Project
|
||||
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['neutronng.setup_clouds'](auth)
|
||||
|
||||
if 'project_name' in kwargs:
|
||||
kwargs['project_id'] = kwargs['project_name']
|
||||
del kwargs['project_name']
|
||||
|
||||
project = __salt__['keystoneng.project_get'](
|
||||
name=kwargs['project_id'])
|
||||
|
||||
if project is None:
|
||||
ret['result'] = False
|
||||
ret['comment'] = "project does not exist"
|
||||
return ret
|
||||
|
||||
secgroup = __salt__['neutronng.security_group_get'](
|
||||
name=name, filters={'tenant_id': project.id})
|
||||
|
||||
if secgroup is None:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Security Group will be created.'
|
||||
return ret
|
||||
|
||||
secgroup = __salt__['neutronng.security_group_create'](**kwargs)
|
||||
ret['changes'] = secgroup
|
||||
ret['comment'] = 'Created security group'
|
||||
return ret
|
||||
|
||||
changes = __salt__['neutronng.compare_changes'](secgroup, **kwargs)
|
||||
if changes:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = changes
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Security Group will be updated.'
|
||||
return ret
|
||||
|
||||
__salt__['neutronng.security_group_update'](secgroup=secgroup, **changes)
|
||||
ret['changes'].update(changes)
|
||||
ret['comment'] = 'Updated security group'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a security group does not exist
|
||||
|
||||
name
|
||||
Name of the security group
|
||||
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['neutronng.setup_clouds'](auth)
|
||||
|
||||
kwargs['project_id'] = __salt__['keystoneng.project_get'](
|
||||
name=kwargs['project_name'])
|
||||
|
||||
secgroup = __salt__['neutronng.security_group_get'](
|
||||
name=name,
|
||||
filters={'project_id': kwargs['project_id']}
|
||||
)
|
||||
|
||||
if secgroup:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': secgroup.id}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Security group will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['neutronng.security_group_delete'](name=secgroup)
|
||||
ret['changes']['id'] = name
|
||||
ret['comment'] = 'Deleted security group'
|
||||
|
||||
return ret
|
180
salt/states/neutron_secgroup_rule.py
Normal file
@ -0,0 +1,180 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Neutron Security Group Rules
|
||||
====================================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.neutronng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create security group rule:
|
||||
neutron_secgroup_rule.present:
|
||||
- name: security_group1
|
||||
- project_name: Project1
|
||||
- protocol: icmp
|
||||
|
||||
delete security group:
|
||||
neutron_secgroup_rule.absent:
|
||||
- name_or_id: security_group1
|
||||
|
||||
create security group with optional params:
|
||||
neutron_secgroup_rule.present:
|
||||
- name: security_group1
|
||||
- description: "Very Secure Security Group"
|
||||
- project_id: 1dcac318a83b4610b7a7f7ba01465548
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'neutron_secgroup_rule'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'neutronng.list_subnets' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The neutronng execution module failed to load:\
|
||||
shade python module is not available')
|
||||
|
||||
|
||||
def _rule_compare(rule1, rule2):
|
||||
'''
|
||||
Compare the common keys of two security group rules against each other
|
||||
'''
|
||||
|
||||
commonkeys = set(rule1.keys()).intersection(rule2.keys())
|
||||
for key in commonkeys:
|
||||
if rule1[key] != rule2[key]:
|
||||
return False
|
||||
return True
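# Illustrative only (field values are hypothetical): because only the keys
# present in *both* dicts are compared, an existing rule with extra fields
# still matches a sparser requested rule.
#
#   _rule_compare({'protocol': 'icmp', 'direction': 'ingress', 'id': 'abc'},
#                 {'protocol': 'icmp'})                        # -> True
#   _rule_compare({'protocol': 'icmp'}, {'protocol': 'tcp'})   # -> False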
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a security group rule exists
|
||||
|
||||
defaults: port_range_min=None, port_range_max=None, protocol=None,
|
||||
remote_ip_prefix=None, remote_group_id=None, direction='ingress',
|
||||
ethertype='IPv4', project_id=None
|
||||
|
||||
name
|
||||
Name of the security group to associate with this rule
|
||||
|
||||
project_name
|
||||
Name of the project associated with the security group
|
||||
|
||||
protocol
|
||||
The protocol that is matched by the security group rule.
|
||||
Valid values are None, tcp, udp, and icmp.
|
||||
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['neutronng.setup_clouds'](auth)
|
||||
|
||||
if 'project_name' in kwargs:
|
||||
kwargs['project_id'] = kwargs['project_name']
|
||||
del kwargs['project_name']
|
||||
|
||||
project = __salt__['keystoneng.project_get'](
|
||||
name=kwargs['project_id'])
|
||||
|
||||
if project is None:
|
||||
ret['result'] = False
|
||||
ret['comment'] = "Project does not exist"
|
||||
return ret
|
||||
|
||||
secgroup = __salt__['neutronng.security_group_get'](
|
||||
name=name,
|
||||
filters={'tenant_id': project.id}
|
||||
)
|
||||
|
||||
if secgroup is None:
|
||||
ret['result'] = False
|
||||
ret['changes'] = {}
|
||||
ret['comment'] = 'Security Group does not exist {}'.format(name)
|
||||
return ret
|
||||
|
||||
# we have to search through all secgroup rules for a possible match
|
||||
rule_exists = None
|
||||
for rule in secgroup['security_group_rules']:
|
||||
if _rule_compare(rule, kwargs) is True:
|
||||
rule_exists = True
|
||||
|
||||
if rule_exists is None:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Security Group rule will be created.'
|
||||
return ret
|
||||
|
||||
# The variable differences are a little clumsy right now
|
||||
kwargs['secgroup_name_or_id'] = secgroup
|
||||
|
||||
new_rule = __salt__['neutronng.security_group_rule_create'](**kwargs)
|
||||
ret['changes'] = new_rule
|
||||
ret['comment'] = 'Created security group rule'
|
||||
return ret
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a security group rule does not exist
|
||||
|
||||
name
|
||||
Name of the security group containing the rule to delete
|
||||
|
||||
rule_id
|
||||
uuid of the rule to delete
|
||||
|
||||
project_id
|
||||
id of project to delete rule from
|
||||
'''
|
||||
rule_id = kwargs['rule_id']
|
||||
ret = {'name': rule_id,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['neutronng.setup_clouds'](auth)
|
||||
|
||||
secgroup = __salt__['neutronng.security_group_get'](
|
||||
name=name,
|
||||
filters={'tenant_id': kwargs['project_id']}
|
||||
)
|
||||
|
||||
# no need to delete a rule if the security group doesn't exist
|
||||
if secgroup is None:
|
||||
ret['comment'] = "security group does not exist"
|
||||
return ret
|
||||
|
||||
# This should probably be done with compare on fields instead of
|
||||
# rule_id in the future
|
||||
rule_exists = None
|
||||
for rule in secgroup['security_group_rules']:
|
||||
if _rule_compare(rule, {"id": rule_id}) is True:
|
||||
rule_exists = True
|
||||
|
||||
if rule_exists:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': kwargs['rule_id']}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Security group rule will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['neutronng.security_group_rule_delete'](rule_id=rule_id)
|
||||
ret['changes']['id'] = rule_id
|
||||
ret['comment'] = 'Deleted security group rule'
|
||||
|
||||
return ret
|
171
salt/states/neutron_subnet.py
Normal file
@ -0,0 +1,171 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Management of OpenStack Neutron Subnets
|
||||
=========================================
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
:depends: shade
|
||||
:configuration: see :py:mod:`salt.modules.neutronng` for setup instructions
|
||||
|
||||
Example States
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
create subnet:
|
||||
neutron_subnet.present:
|
||||
- name: subnet1
|
||||
- network_name_or_id: network1
|
||||
- cidr: 192.168.199.0/24
|
||||
|
||||
|
||||
delete subnet:
|
||||
neutron_subnet.absent:
|
||||
- name: subnet2
|
||||
|
||||
create subnet with optional params:
|
||||
neutron_subnet.present:
|
||||
- name: subnet1
|
||||
- network_name_or_id: network1
|
||||
- enable_dhcp: True
|
||||
- cidr: 192.168.199.0/24
|
||||
- allocation_pools:
|
||||
- start: 192.168.199.5
|
||||
end: 192.168.199.250
|
||||
- host_routes:
|
||||
- destination: 192.168.0.0/24
|
||||
nexthop: 192.168.0.1
|
||||
- gateway_ip: 192.168.199.1
|
||||
- dns_nameservers:
|
||||
- 8.8.8.8
|
||||
- 8.8.8.7
|
||||
|
||||
create ipv6 subnet:
|
||||
neutron_subnet.present:
|
||||
- name: v6subnet1
|
||||
- network_name_or_id: network1
|
||||
- ip_version: 6
|
||||
'''
|
||||
|
||||
from __future__ import absolute_import
|
||||
|
||||
__virtualname__ = 'neutron_subnet'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
if 'neutronng.list_subnets' in __salt__:
|
||||
return __virtualname__
|
||||
return (False, 'The neutronng execution module failed to load:\
|
||||
shade python module is not available')
|
||||
|
||||
|
||||
def present(name, auth=None, **kwargs):
|
||||
'''
|
||||
Ensure a subnet exists and is up-to-date
|
||||
|
||||
name
|
||||
Name of the subnet
|
||||
|
||||
network_name_or_id
|
||||
The unique name or ID of the attached network.
|
||||
If a non-unique name is supplied, an exception is raised.
|
||||
|
||||
allocation_pools
|
||||
A list of dictionaries of the start and end addresses
|
||||
for the allocation pools
|
||||
|
||||
gateway_ip
|
||||
The gateway IP address.
|
||||
|
||||
dns_nameservers
|
||||
A list of DNS name servers for the subnet.
|
||||
|
||||
host_routes
|
||||
A list of host route dictionaries for the subnet.
|
||||
|
||||
ipv6_ra_mode
|
||||
IPv6 Router Advertisement mode.
|
||||
Valid values are: 'dhcpv6-stateful', 'dhcpv6-stateless', or 'slaac'.
|
||||
|
||||
ipv6_address_mode
|
||||
IPv6 address mode.
|
||||
Valid values are: 'dhcpv6-stateful', 'dhcpv6-stateless', or 'slaac'.
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['neutronng.setup_clouds'](auth)
|
||||
|
||||
kwargs['subnet_name'] = name
|
||||
subnet = __salt__['neutronng.subnet_get'](name=name)
|
||||
|
||||
if subnet is None:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = kwargs
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Subnet will be created.'
|
||||
return ret
|
||||
|
||||
new_subnet = __salt__['neutronng.subnet_create'](**kwargs)
|
||||
ret['changes'] = new_subnet
|
||||
ret['comment'] = 'Created subnet'
|
||||
return ret
|
||||
|
||||
changes = __salt__['neutronng.compare_changes'](subnet, **kwargs)
|
||||
if changes:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = changes
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Subnet will be updated.'
|
||||
return ret
|
||||
|
||||
# update_subnet does not support changing cidr,
|
||||
# so we have to delete and recreate the subnet in this case.
|
||||
if 'cidr' in changes or 'tenant_id' in changes:
|
||||
__salt__['neutronng.subnet_delete'](name=name)
|
||||
new_subnet = __salt__['neutronng.subnet_create'](**kwargs)
|
||||
ret['changes'] = new_subnet
|
||||
ret['comment'] = 'Deleted and recreated subnet'
|
||||
return ret
|
||||
|
||||
__salt__['neutronng.subnet_update'](**kwargs)
|
||||
ret['changes'].update(changes)
|
||||
ret['comment'] = 'Updated subnet'
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def absent(name, auth=None):
|
||||
'''
|
||||
Ensure a subnet does not exist
|
||||
|
||||
name
|
||||
Name of the subnet
|
||||
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': True,
|
||||
'comment': ''}
|
||||
|
||||
__salt__['neutronng.setup_clouds'](auth)
|
||||
|
||||
subnet = __salt__['neutronng.subnet_get'](name=name)
|
||||
|
||||
if subnet:
|
||||
if __opts__['test'] is True:
|
||||
ret['result'] = None
|
||||
ret['changes'] = {'id': subnet.id}
|
||||
ret['pchanges'] = ret['changes']
|
||||
ret['comment'] = 'Subnet will be deleted.'
|
||||
return ret
|
||||
|
||||
__salt__['neutronng.subnet_delete'](name=subnet)
|
||||
ret['changes']['id'] = name
|
||||
ret['comment'] = 'Deleted subnet'
|
||||
|
||||
return ret
|
@ -508,8 +508,10 @@ def _find_install_targets(name=None,
|
||||
# add it to the kwargs.
|
||||
kwargs['refresh'] = refresh
|
||||
|
||||
resolve_capabilities = kwargs.get('resolve_capabilities', False) and 'pkg.list_provides' in __salt__
|
||||
try:
|
||||
cur_pkgs = __salt__['pkg.list_pkgs'](versions_as_list=True, **kwargs)
|
||||
cur_prov = resolve_capabilities and __salt__['pkg.list_provides'](**kwargs) or dict()
|
||||
except CommandExecutionError as exc:
|
||||
return {'name': name,
|
||||
'changes': {},
|
||||
@ -669,6 +671,9 @@ def _find_install_targets(name=None,
|
||||
failed_verify = False
|
||||
for key, val in six.iteritems(desired):
|
||||
cver = cur_pkgs.get(key, [])
|
||||
if resolve_capabilities and not cver and key in cur_prov:
|
||||
cver = cur_pkgs.get(cur_prov.get(key)[0], [])
|
||||
|
||||
# Package not yet installed, so add to targets
|
||||
if not cver:
|
||||
targets[key] = val
|
||||
@ -786,13 +791,15 @@ def _find_install_targets(name=None,
|
||||
warnings, was_refreshed)
|
||||
|
||||
|
||||
def _verify_install(desired, new_pkgs, ignore_epoch=False):
|
||||
def _verify_install(desired, new_pkgs, ignore_epoch=False, new_caps=None):
|
||||
'''
|
||||
Determine whether or not the installed packages match what was requested in
|
||||
the SLS file.
|
||||
'''
|
||||
ok = []
|
||||
failed = []
|
||||
if not new_caps:
|
||||
new_caps = dict()
|
||||
for pkgname, pkgver in desired.items():
|
||||
# FreeBSD pkg supports `openjdk` and `java/openjdk7` package names.
|
||||
# Homebrew for Mac OSX does something similar with tap names
|
||||
@ -809,6 +816,8 @@ def _verify_install(desired, new_pkgs, ignore_epoch=False):
|
||||
cver = new_pkgs.get(pkgname.split('=')[0])
|
||||
else:
|
||||
cver = new_pkgs.get(pkgname)
|
||||
if not cver and pkgname in new_caps:
|
||||
cver = new_pkgs.get(new_caps.get(pkgname)[0])
|
||||
|
||||
if not cver:
|
||||
failed.append(pkgname)
|
||||
@ -873,6 +882,26 @@ def _nested_output(obj):
|
||||
return ret
|
||||
|
||||
|
||||
def _resolve_capabilities(pkgs, refresh=False, **kwargs):
|
||||
'''
|
||||
Resolve capabilities in ``pkgs`` and exchange them with real package
|
||||
names, when the result is distinct.
|
||||
This feature is enabled by setting the parameter
``resolve_capabilities`` to True.

Return the input ``pkgs`` with capability names exchanged for real
package names and, as a second return value, a bool indicating whether
a refresh still needs to be run.

If ``resolve_capabilities`` is False (disabled) or not supported by the
implementation, the input is returned unchanged.
|
||||
'''
|
||||
if not pkgs or 'pkg.resolve_capabilities' not in __salt__:
|
||||
return pkgs, refresh
|
||||
|
||||
ret = __salt__['pkg.resolve_capabilities'](pkgs, refresh=refresh, **kwargs)
|
||||
return ret, False
|
||||
|
||||
|
||||
def installed(
|
||||
name,
|
||||
version=None,
|
||||
@ -1105,6 +1134,11 @@ def installed(
|
||||
|
||||
.. versionadded:: 2014.1.1
|
||||
|
||||
:param bool resolve_capabilities:
|
||||
Turn on resolving capabilities. This allows packages to be named by a "provides" capability or an alias name.
|
||||
|
||||
.. versionadded:: Oxygen
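
    A minimal, hedged sketch (the capability name ``perl(DBI)`` below is
    illustrative only; what resolves depends on what the package manager
    reports through ``pkg.resolve_capabilities``):

    .. code-block:: yaml

        install-dbi-provider:
          pkg.installed:
            - name: perl(DBI)
            - resolve_capabilities: True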
|
||||
|
||||
:param bool allow_updates:
|
||||
Allow the package to be updated outside Salt's control (e.g. auto
|
||||
updates on Windows). This means a package on the Minion can have a
|
||||
@ -1448,6 +1482,12 @@ def installed(
|
||||
|
||||
kwargs['saltenv'] = __env__
|
||||
refresh = salt.utils.pkg.check_refresh(__opts__, refresh)
|
||||
|
||||
# check if capabilities should be checked and modify the requested packages
|
||||
# accordingly.
|
||||
if pkgs:
|
||||
pkgs, refresh = _resolve_capabilities(pkgs, refresh=refresh, **kwargs)
|
||||
|
||||
if not isinstance(pkg_verify, list):
|
||||
pkg_verify = pkg_verify is True
|
||||
if (pkg_verify or isinstance(pkg_verify, list)) \
|
||||
@ -1707,8 +1747,13 @@ def installed(
|
||||
if __grains__['os'] == 'FreeBSD':
|
||||
kwargs['with_origin'] = True
|
||||
new_pkgs = __salt__['pkg.list_pkgs'](versions_as_list=True, **kwargs)
|
||||
if kwargs.get('resolve_capabilities', False) and 'pkg.list_provides' in __salt__:
|
||||
new_caps = __salt__['pkg.list_provides'](**kwargs)
|
||||
else:
|
||||
new_caps = {}
|
||||
ok, failed = _verify_install(desired, new_pkgs,
|
||||
ignore_epoch=ignore_epoch)
|
||||
ignore_epoch=ignore_epoch,
|
||||
new_caps=new_caps)
|
||||
modified = [x for x in ok if x in targets]
|
||||
not_modified = [x for x in ok
|
||||
if x not in targets
|
||||
@ -1927,6 +1972,11 @@ def downloaded(name,
|
||||
- dos2unix
|
||||
- salt-minion: 2015.8.5-1.el6
|
||||
|
||||
:param bool resolve_capabilities:
|
||||
Turn on resolving capabilities. This allows packages to be named by a "provides" capability or an alias name.
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: yaml
|
||||
@ -1952,11 +2002,22 @@ def downloaded(name,
|
||||
ret['comment'] = 'No packages to download provided'
|
||||
return ret
|
||||
|
||||
# If just a name (and optionally a version) is passed, just pack them into
|
||||
# the pkgs argument.
|
||||
if name and not pkgs:
|
||||
if version:
|
||||
pkgs = [{name: version}]
|
||||
version = None
|
||||
else:
|
||||
pkgs = [name]
|
||||
|
||||
    # It doesn't make sense here to receive 'downloadonly' as a kwarg
    # as we're explicitly passing 'downloadonly=True' to the execution module.
|
||||
if 'downloadonly' in kwargs:
|
||||
del kwargs['downloadonly']
|
||||
|
||||
pkgs, _refresh = _resolve_capabilities(pkgs, **kwargs)
|
||||
|
||||
# Only downloading not yet downloaded packages
|
||||
targets = _find_download_targets(name,
|
||||
version,
|
||||
@ -2203,6 +2264,10 @@ def latest(
|
||||
This parameter is available only on Debian based distributions and
|
||||
has no effect on the rest.
|
||||
|
||||
:param bool resolve_capabilities:
|
||||
Turn on resolving capabilities. This allows packages to be named by a "provides" capability or an alias name.
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
Multiple Package Installation Options:
|
||||
|
||||
@ -2300,6 +2365,10 @@ def latest(
|
||||
|
||||
kwargs['saltenv'] = __env__
|
||||
|
||||
# check if capabilities should be checked and modify the requested packages
|
||||
# accordingly.
|
||||
desired_pkgs, refresh = _resolve_capabilities(desired_pkgs, refresh=refresh, **kwargs)
|
||||
|
||||
try:
|
||||
avail = __salt__['pkg.latest_version'](*desired_pkgs,
|
||||
fromrepo=fromrepo,
|
||||
@ -2822,6 +2891,11 @@ def uptodate(name, refresh=False, pkgs=None, **kwargs):
|
||||
This parameter is available only on Debian-based distributions and
has no effect on the rest.
|
||||
|
||||
:param bool resolve_capabilities:
|
||||
Turn on resolving capabilities. This allows packages to be named by a "provides" capability or an alias name.
|
||||
|
||||
.. versionadded:: Oxygen
|
||||
|
||||
kwargs
|
||||
Any keyword arguments to pass through to ``pkg.upgrade``.
|
||||
|
||||
@ -2842,6 +2916,7 @@ def uptodate(name, refresh=False, pkgs=None, **kwargs):
|
||||
return ret
|
||||
|
||||
if isinstance(refresh, bool):
|
||||
pkgs, refresh = _resolve_capabilities(pkgs, refresh=refresh, **kwargs)
|
||||
try:
|
||||
packages = __salt__['pkg.list_upgrades'](refresh=refresh, **kwargs)
|
||||
if isinstance(pkgs, list):
|
||||
|
@ -351,7 +351,6 @@ def state(name,
|
||||
|
||||
changes = {}
|
||||
fail = set()
|
||||
failures = {}
|
||||
no_change = set()
|
||||
|
||||
if fail_minions is None:
|
||||
@ -393,7 +392,7 @@ def state(name,
|
||||
if not m_state:
|
||||
if minion not in fail_minions:
|
||||
fail.add(minion)
|
||||
failures[minion] = m_ret or 'Minion did not respond'
|
||||
changes[minion] = m_ret
|
||||
continue
|
||||
try:
|
||||
for state_item in six.itervalues(m_ret):
|
||||
@ -418,18 +417,6 @@ def state(name,
|
||||
state_ret['comment'] += ' Updating {0}.'.format(', '.join(changes))
|
||||
if no_change:
|
||||
state_ret['comment'] += ' No changes made to {0}.'.format(', '.join(no_change))
|
||||
if failures:
|
||||
state_ret['comment'] += '\nFailures:\n'
|
||||
for minion, failure in six.iteritems(failures):
|
||||
state_ret['comment'] += '\n'.join(
|
||||
(' ' * 4 + l)
|
||||
for l in salt.output.out_format(
|
||||
{minion: failure},
|
||||
'highstate',
|
||||
__opts__,
|
||||
).splitlines()
|
||||
)
|
||||
state_ret['comment'] += '\n'
|
||||
if test or __opts__.get('test'):
|
||||
if state_ret['changes'] and state_ret['result'] is True:
|
||||
# Test mode with changes is the only case where result should ever be none
|
||||
@ -570,7 +557,6 @@ def function(
|
||||
|
||||
changes = {}
|
||||
fail = set()
|
||||
failures = {}
|
||||
|
||||
if fail_minions is None:
|
||||
fail_minions = ()
|
||||
@ -598,7 +584,7 @@ def function(
|
||||
if not m_func:
|
||||
if minion not in fail_minions:
|
||||
fail.add(minion)
|
||||
failures[minion] = m_ret and m_ret or 'Minion did not respond'
|
||||
changes[minion] = m_ret
|
||||
continue
|
||||
changes[minion] = m_ret
|
||||
if not cmd_ret:
|
||||
@ -614,18 +600,6 @@ def function(
|
||||
func_ret['comment'] = 'Function ran successfully.'
|
||||
if changes:
|
||||
func_ret['comment'] += ' Function {0} ran on {1}.'.format(name, ', '.join(changes))
|
||||
if failures:
|
||||
func_ret['comment'] += '\nFailures:\n'
|
||||
for minion, failure in six.iteritems(failures):
|
||||
func_ret['comment'] += '\n'.join(
|
||||
(' ' * 4 + l)
|
||||
for l in salt.output.out_format(
|
||||
{minion: failure},
|
||||
'highstate',
|
||||
__opts__,
|
||||
).splitlines()
|
||||
)
|
||||
func_ret['comment'] += '\n'
|
||||
return func_ret
|
||||
|
||||
|
||||
|
@ -451,10 +451,10 @@ def format_call(fun,
|
||||
continue
|
||||
extra[key] = copy.deepcopy(value)
|
||||
|
||||
# We'll be showing errors to the users until Salt Oxygen comes out, after
|
||||
# We'll be showing errors to the users until Salt Fluorine comes out, after
|
||||
# which, errors will be raised instead.
|
||||
salt.utils.versions.warn_until(
|
||||
'Oxygen',
|
||||
'Fluorine',
|
||||
'It\'s time to start raising `SaltInvocationError` instead of '
|
||||
'returning warnings',
|
||||
# Let's not show the deprecation warning on the console, there's no
|
||||
@ -491,7 +491,7 @@ def format_call(fun,
|
||||
'{0}. If you were trying to pass additional data to be used '
|
||||
'in a template context, please populate \'context\' with '
|
||||
'\'key: value\' pairs. Your approach will work until Salt '
|
||||
'Oxygen is out.{1}'.format(
|
||||
'Fluorine is out.{1}'.format(
|
||||
msg,
|
||||
'' if 'full' not in ret else ' Please update your state files.'
|
||||
)
|
||||
|
@ -910,7 +910,7 @@ class GitProvider(object):
|
||||
'''
|
||||
if self.branch == '__env__':
|
||||
target = self.opts.get('pillarenv') \
|
||||
or self.opts.get('environment') \
|
||||
or self.opts.get('saltenv') \
|
||||
or 'base'
|
||||
return self.opts['{0}_base'.format(self.role)] \
|
||||
if target == 'base' \
|
||||
|
@ -89,6 +89,28 @@ localtime.
|
||||
This will schedule the command: ``state.sls httpd test=True`` at 5:00 PM on
|
||||
Monday, Wednesday and Friday, and 3:00 PM on Tuesday and Thursday.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
schedule:
|
||||
job1:
|
||||
function: state.sls
|
||||
args:
|
||||
- httpd
|
||||
kwargs:
|
||||
test: True
|
||||
when:
|
||||
- 'tea time'
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
whens:
|
||||
tea time: 1:40pm
|
||||
deployment time: Friday 5:00pm
|
||||
|
||||
The Salt scheduler also allows custom phrases to be used for the `when`
|
||||
parameter. These `whens` can be stored as either pillar values or
|
||||
grain values.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
schedule:
|
||||
@ -333,7 +355,6 @@ import logging
import errno
import random
import yaml
import copy

# Import Salt libs
import salt.config
@ -409,6 +430,7 @@ class Schedule(object):
        self.proxy = proxy
        self.functions = functions
        self.standalone = standalone
        self.skip_function = None
        if isinstance(intervals, dict):
            self.intervals = intervals
        else:
@ -745,6 +767,69 @@ class Schedule(object):
        evt.fire_event({'complete': True},
                       tag='/salt/minion/minion_schedule_saved')
    def postpone_job(self, name, data):
        '''
        Postpone a job in the scheduler.
        Ignores jobs from pillar
        '''
        time = data['time']
        new_time = data['new_time']

        # ensure job exists, then disable it
        if name in self.opts['schedule']:
            if 'skip_explicit' not in self.opts['schedule'][name]:
                self.opts['schedule'][name]['skip_explicit'] = []
            self.opts['schedule'][name]['skip_explicit'].append(time)

            if 'run_explicit' not in self.opts['schedule'][name]:
                self.opts['schedule'][name]['run_explicit'] = []
            self.opts['schedule'][name]['run_explicit'].append(new_time)

        elif name in self._get_schedule(include_opts=False):
            log.warning('Cannot modify job {0}, '
                        'it`s in the pillar!'.format(name))

        # Fire the complete event back along with updated list of schedule
        evt = salt.utils.event.get_event('minion', opts=self.opts, listen=False)
        evt.fire_event({'complete': True, 'schedule': self._get_schedule()},
                       tag='/salt/minion/minion_schedule_postpone_job_complete')

    def skip_job(self, name, data):
        '''
        Skip a job at a specific time in the scheduler.
        Ignores jobs from pillar
        '''
        time = data['time']

        # ensure job exists, then disable it
        if name in self.opts['schedule']:
            if 'skip_explicit' not in self.opts['schedule'][name]:
                self.opts['schedule'][name]['skip_explicit'] = []
            self.opts['schedule'][name]['skip_explicit'].append(time)

        elif name in self._get_schedule(include_opts=False):
            log.warning('Cannot modify job {0}, '
                        'it`s in the pillar!'.format(name))

        # Fire the complete event back along with updated list of schedule
        evt = salt.utils.event.get_event('minion', opts=self.opts, listen=False)
        evt.fire_event({'complete': True, 'schedule': self._get_schedule()},
                       tag='/salt/minion/minion_schedule_skip_job_complete')

    def get_next_fire_time(self, name):
        '''
        Disable a job in the scheduler. Ignores jobs from pillar
        '''

        schedule = self._get_schedule()
        if schedule:
            _next_fire_time = schedule[name]['_next_fire_time']

        # Fire the complete event back along with updated list of schedule
        evt = salt.utils.event.get_event('minion', opts=self.opts, listen=False)
        evt.fire_event({'complete': True, 'next_fire_time': _next_fire_time},
                       tag='/salt/minion/minion_schedule_next_fire_time_complete')
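What ``postpone_job`` records on a job's entry can be pictured with a plain
dict; the job definition and timestamps below are hypothetical:

.. code-block:: python

    # Standalone illustration of the bookkeeping done above: the original
    # fire time is skipped and the replacement time runs exactly once.
    old_time, new_time = 1512147600, 1512151200
    job = {'function': 'test.ping', 'seconds': 3600}
    job.setdefault('skip_explicit', []).append(old_time)  # do not fire then
    job.setdefault('run_explicit', []).append(new_time)   # fire once here instead
    assert job['skip_explicit'] == [old_time]
    assert job['run_explicit'] == [new_time]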
def handle_func(self, multiprocessing_enabled, func, data):
|
||||
'''
|
||||
Execute this method in a multiprocess or thread
|
||||
@ -948,11 +1033,16 @@ class Schedule(object):
                # Let's make sure we exit the process!
                sys.exit(salt.defaults.exitcodes.EX_GENERIC)

    def eval(self):
    def eval(self, now=None):
        '''
        Evaluate and execute the schedule

        :param int now: Override current time with a Unix timestamp``

        '''

        log.trace('==== evaluating schedule =====')

        def _splay(splaytime):
            '''
            Calculate splaytime
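The new ``now`` parameter exists so callers (tests in particular) can evaluate
the schedule at a fixed point in time instead of the wall clock. A standalone
sketch of the fallback it introduces:

.. code-block:: python

    import time

    def _resolve_now(now=None):
        # Mirrors the change above: honour a caller-supplied Unix timestamp,
        # otherwise fall back to the current time.
        return int(now) if now else int(time.time())

    assert _resolve_now(1512147600) == 1512147600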
@ -974,9 +1064,13 @@ class Schedule(object):
|
||||
raise ValueError('Schedule must be of type dict.')
|
||||
if 'enabled' in schedule and not schedule['enabled']:
|
||||
return
|
||||
if 'skip_function' in schedule:
|
||||
self.skip_function = schedule['skip_function']
|
||||
for job, data in six.iteritems(schedule):
|
||||
if job == 'enabled' or not data:
|
||||
continue
|
||||
if job == 'skip_function' or not data:
|
||||
continue
|
||||
if not isinstance(data, dict):
|
||||
log.error('Scheduled job "{0}" should have a dict value, not {1}'.format(job, type(data)))
|
||||
continue
|
||||
@ -1011,7 +1105,8 @@ class Schedule(object):
|
||||
'_run_on_start' not in data:
|
||||
data['_run_on_start'] = True
|
||||
|
||||
now = int(time.time())
|
||||
if not now:
|
||||
now = int(time.time())
|
||||
|
||||
if 'until' in data:
|
||||
if not _WHEN_SUPPORTED:
|
||||
@ -1065,6 +1160,23 @@ class Schedule(object):
|
||||
'", "'.join(scheduling_elements)))
|
||||
continue
|
||||
|
||||
if 'run_explicit' in data:
|
||||
_run_explicit = data['run_explicit']
|
||||
|
||||
if isinstance(_run_explicit, six.string_types):
|
||||
_run_explicit = [_run_explicit]
|
||||
|
||||
# Copy the list so we can loop through it
|
||||
for i in copy.deepcopy(_run_explicit):
|
||||
if len(_run_explicit) > 1:
|
||||
if i < now - self.opts['loop_interval']:
|
||||
_run_explicit.remove(i)
|
||||
|
||||
if _run_explicit:
|
||||
if _run_explicit[0] <= now < (_run_explicit[0] + self.opts['loop_interval']):
|
||||
run = True
|
||||
data['_next_fire_time'] = _run_explicit[0]
|
||||
|
||||
if True in [True for item in time_elements if item in data]:
|
||||
if '_seconds' not in data:
|
||||
interval = int(data.get('seconds', 0))
|
||||
@ -1153,10 +1265,11 @@ class Schedule(object):
|
||||
|
||||
# Copy the list so we can loop through it
|
||||
for i in copy.deepcopy(_when):
|
||||
if i < now and len(_when) > 1:
|
||||
# Remove all missed schedules except the latest one.
|
||||
# We need it to detect if it was triggered previously.
|
||||
_when.remove(i)
|
||||
if len(_when) > 1:
|
||||
if i < now - self.opts['loop_interval']:
|
||||
# Remove all missed schedules except the latest one.
|
||||
# We need it to detect if it was triggered previously.
|
||||
_when.remove(i)
|
||||
|
||||
if _when:
|
||||
# Grab the first element, which is the next run time or
|
||||
@ -1258,19 +1371,21 @@ class Schedule(object):
|
||||
seconds = data['_next_fire_time'] - now
|
||||
if data['_splay']:
|
||||
seconds = data['_splay'] - now
|
||||
if seconds <= 0:
|
||||
if '_seconds' in data:
|
||||
if '_seconds' in data:
|
||||
if seconds <= 0:
|
||||
run = True
|
||||
elif 'when' in data and data['_run']:
|
||||
elif 'when' in data and data['_run']:
|
||||
if data['_next_fire_time'] <= now <= (data['_next_fire_time'] + self.opts['loop_interval']):
|
||||
data['_run'] = False
|
||||
run = True
|
||||
elif 'cron' in data:
|
||||
# Reset next scheduled time because it is in the past now,
|
||||
# and we should trigger the job run, then wait for the next one.
|
||||
elif 'cron' in data:
|
||||
# Reset next scheduled time because it is in the past now,
|
||||
# and we should trigger the job run, then wait for the next one.
|
||||
if seconds <= 0:
|
||||
data['_next_fire_time'] = None
|
||||
run = True
|
||||
elif seconds == 0:
|
||||
run = True
|
||||
elif seconds == 0:
|
||||
run = True
|
||||
|
||||
if '_run_on_start' in data and data['_run_on_start']:
|
||||
run = True
|
||||
@ -1312,7 +1427,11 @@ class Schedule(object):
|
||||
if start <= now <= end:
|
||||
run = True
|
||||
else:
|
||||
run = False
|
||||
if self.skip_function:
|
||||
run = True
|
||||
func = self.skip_function
|
||||
else:
|
||||
run = False
|
||||
else:
|
||||
log.error('schedule.handle_func: Invalid range, end must be larger than start. \
|
||||
Ignoring job {0}.'.format(job))
|
||||
@ -1322,6 +1441,62 @@ class Schedule(object):
|
||||
Ignoring job {0}.'.format(job))
|
||||
continue
|
||||
|
||||
if 'skip_during_range' in data:
|
||||
if not _RANGE_SUPPORTED:
|
||||
log.error('Missing python-dateutil. Ignoring job {0}'.format(job))
|
||||
continue
|
||||
else:
|
||||
if isinstance(data['skip_during_range'], dict):
|
||||
try:
|
||||
start = int(time.mktime(dateutil_parser.parse(data['skip_during_range']['start']).timetuple()))
|
||||
except ValueError:
|
||||
log.error('Invalid date string for start in skip_during_range. Ignoring job {0}.'.format(job))
|
||||
continue
|
||||
try:
|
||||
end = int(time.mktime(dateutil_parser.parse(data['skip_during_range']['end']).timetuple()))
|
||||
except ValueError:
|
||||
log.error('Invalid date string for end in skip_during_range. Ignoring job {0}.'.format(job))
|
||||
log.error(data)
|
||||
continue
|
||||
if end > start:
|
||||
if start <= now <= end:
|
||||
if self.skip_function:
|
||||
run = True
|
||||
func = self.skip_function
|
||||
else:
|
||||
run = False
|
||||
else:
|
||||
run = True
|
||||
else:
|
||||
log.error('schedule.handle_func: Invalid range, end must be larger than start. \
|
||||
Ignoring job {0}.'.format(job))
|
||||
continue
|
||||
else:
|
||||
log.error('schedule.handle_func: Invalid, range must be specified as a dictionary. \
|
||||
Ignoring job {0}.'.format(job))
|
||||
continue
|
||||
|
||||
if 'skip_explicit' in data:
|
||||
_skip_explicit = data['skip_explicit']
|
||||
|
||||
if isinstance(_skip_explicit, six.string_types):
|
||||
_skip_explicit = [_skip_explicit]
|
||||
|
||||
# Copy the list so we can loop through it
|
||||
for i in copy.deepcopy(_skip_explicit):
|
||||
if i < now - self.opts['loop_interval']:
|
||||
_skip_explicit.remove(i)
|
||||
|
||||
if _skip_explicit:
|
||||
if _skip_explicit[0] <= now <= (_skip_explicit[0] + self.opts['loop_interval']):
|
||||
if self.skip_function:
|
||||
run = True
|
||||
func = self.skip_function
|
||||
else:
|
||||
run = False
|
||||
else:
|
||||
run = True
|
||||
|
||||
if not run:
|
||||
continue
|
||||
|
||||
@ -1374,6 +1549,7 @@ class Schedule(object):
|
||||
finally:
|
||||
if '_seconds' in data:
|
||||
data['_next_fire_time'] = now + data['_seconds']
|
||||
data['_last_run'] = now
|
||||
data['_splay'] = None
|
||||
if salt.utils.platform.is_windows():
|
||||
# Restore our function references.
|
||||
|
@ -108,6 +108,7 @@ try:
|
||||
except ImportError:
|
||||
HAS_GSSAPI = False
|
||||
|
||||
|
||||
# Get Logging Started
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
@ -1015,18 +1016,37 @@ def get_network_adapter_type(adapter_type):
|
||||
adpater_type
|
||||
The adapter type from which to obtain the network adapter type.
|
||||
'''
|
||||
if adapter_type == "vmxnet":
|
||||
if adapter_type == 'vmxnet':
|
||||
return vim.vm.device.VirtualVmxnet()
|
||||
elif adapter_type == "vmxnet2":
|
||||
elif adapter_type == 'vmxnet2':
|
||||
return vim.vm.device.VirtualVmxnet2()
|
||||
elif adapter_type == "vmxnet3":
|
||||
elif adapter_type == 'vmxnet3':
|
||||
return vim.vm.device.VirtualVmxnet3()
|
||||
elif adapter_type == "e1000":
|
||||
elif adapter_type == 'e1000':
|
||||
return vim.vm.device.VirtualE1000()
|
||||
elif adapter_type == "e1000e":
|
||||
elif adapter_type == 'e1000e':
|
||||
return vim.vm.device.VirtualE1000e()
|
||||
|
||||
|
||||
def get_network_adapter_object_type(adapter_object):
|
||||
'''
|
||||
Returns the network adapter type.
|
||||
|
||||
adapter_object
|
||||
The adapter object from which to obtain the network adapter type.
|
||||
'''
|
||||
if isinstance(adapter_object, vim.vm.device.VirtualVmxnet2):
|
||||
return 'vmxnet2'
|
||||
if isinstance(adapter_object, vim.vm.device.VirtualVmxnet3):
|
||||
return 'vmxnet3'
|
||||
if isinstance(adapter_object, vim.vm.device.VirtualVmxnet):
|
||||
return 'vmxnet'
|
||||
if isinstance(adapter_object, vim.vm.device.VirtualE1000e):
|
||||
return 'e1000e'
|
||||
if isinstance(adapter_object, vim.vm.device.VirtualE1000):
|
||||
return 'e1000'
|
||||
|
||||
|
||||
def get_dvss(dc_ref, dvs_names=None, get_all_dvss=False):
|
||||
'''
|
||||
Returns distributed virtual switches (DVSs) in a datacenter.
|
||||
@ -1354,6 +1374,52 @@ def remove_dvportgroup(portgroup_ref):
|
||||
wait_for_task(task, pg_name, str(task.__class__))
|
||||
|
||||
|
||||
def get_networks(parent_ref, network_names=None, get_all_networks=False):
|
||||
'''
|
||||
Returns networks of standard switches.
|
||||
The parent object can be a datacenter.
|
||||
|
||||
parent_ref
|
||||
The parent object reference. A datacenter object.
|
||||
|
||||
network_names
|
||||
The name of the standard switch networks. Default is None.
|
||||
|
||||
get_all_networks
|
||||
Boolean indicates whether to return all networks in the parent.
|
||||
Default is False.
|
||||
'''
|
||||
|
||||
if not isinstance(parent_ref, vim.Datacenter):
|
||||
raise salt.exceptions.ArgumentValueError(
|
||||
'Parent has to be a datacenter.')
|
||||
parent_name = get_managed_object_name(parent_ref)
|
||||
log.trace('Retrieving network from {0} \'{1}\', network_names=\'{2}\', '
|
||||
'get_all_networks={3}'.format(
|
||||
type(parent_ref).__name__, parent_name,
|
||||
','.join(network_names) if network_names else None,
|
||||
get_all_networks))
|
||||
properties = ['name']
|
||||
service_instance = get_service_instance_from_managed_object(parent_ref)
|
||||
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
|
||||
path='networkFolder',
|
||||
skip=True,
|
||||
type=vim.Datacenter,
|
||||
selectSet=[vmodl.query.PropertyCollector.TraversalSpec(
|
||||
path='childEntity',
|
||||
skip=False,
|
||||
type=vim.Folder)])
|
||||
items = [i['object'] for i in
|
||||
get_mors_with_properties(service_instance,
|
||||
vim.Network,
|
||||
container_ref=parent_ref,
|
||||
property_list=properties,
|
||||
traversal_spec=traversal_spec)
|
||||
if get_all_networks or
|
||||
(network_names and i['name'] in network_names)]
|
||||
return items
|
||||
|
||||
|
||||
def list_objects(service_instance, vim_object, properties=None):
|
||||
'''
|
||||
Returns a simple list of objects from a given service instance.
|
||||
@ -1869,6 +1935,53 @@ def list_datastores(service_instance):
|
||||
return list_objects(service_instance, vim.Datastore)
|
||||
|
||||
|
||||
def get_datastore_files(service_instance, directory, datastores, container_object, browser_spec):
|
||||
'''
|
||||
Get the files with a given browser specification from the datastore.
|
||||
|
||||
service_instance
|
||||
The Service Instance Object from which to obtain datastores.
|
||||
|
||||
directory
|
||||
The name of the directory where we would like to search
|
||||
|
||||
datastores
|
||||
Name of the datastores
|
||||
|
||||
container_object
|
||||
The base object for searches
|
||||
|
||||
browser_spec
|
||||
BrowserSpec object which defines the search criteria
|
||||
|
||||
return
|
||||
list of vim.host.DatastoreBrowser.SearchResults objects
|
||||
'''
|
||||
|
||||
files = []
|
||||
datastore_objects = get_datastores(service_instance, container_object, datastore_names=datastores)
|
||||
for datobj in datastore_objects:
|
||||
try:
|
||||
task = datobj.browser.SearchDatastore_Task(datastorePath='[{}] {}'.format(datobj.name, directory),
|
||||
searchSpec=browser_spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
try:
|
||||
files.append(salt.utils.vmware.wait_for_task(task, directory, 'query virtual machine files'))
|
||||
except salt.exceptions.VMwareFileNotFoundError:
|
||||
pass
|
||||
return files
|
||||
|
||||
|
||||
def get_datastores(service_instance, reference, datastore_names=None,
|
||||
backing_disk_ids=None, get_all_datastores=False):
|
||||
'''
|
||||
@ -2835,6 +2948,53 @@ def list_hosts(service_instance):
|
||||
return list_objects(service_instance, vim.HostSystem)
|
||||
|
||||
|
||||
def get_resource_pools(service_instance, resource_pool_names, datacenter_name=None,
|
||||
get_all_resource_pools=False):
|
||||
'''
|
||||
Retrieves resource pool objects
|
||||
|
||||
service_instance
|
||||
The service instance object to query the vCenter
|
||||
|
||||
resource_pool_names
|
||||
Resource pool names
|
||||
|
||||
datacenter_name
|
||||
Name of the datacenter where the resource pool is available
|
||||
|
||||
get_all_resource_pools
|
||||
Boolean
|
||||
|
||||
return
|
||||
Resourcepool managed object reference
|
||||
'''
|
||||
|
||||
properties = ['name']
|
||||
if not resource_pool_names:
|
||||
resource_pool_names = []
|
||||
if datacenter_name:
|
||||
container_ref = get_datacenter(service_instance, datacenter_name)
|
||||
else:
|
||||
container_ref = get_root_folder(service_instance)
|
||||
|
||||
resource_pools = get_mors_with_properties(service_instance,
|
||||
vim.ResourcePool,
|
||||
container_ref=container_ref,
|
||||
property_list=properties)
|
||||
|
||||
selected_pools = []
|
||||
for pool in resource_pools:
|
||||
if get_all_resource_pools or (pool['name'] in resource_pool_names):
|
||||
selected_pools.append(pool['object'])
|
||||
if not selected_pools:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'The resource pools with properties '
|
||||
'names={} get_all={} could not be found'.format(selected_pools,
|
||||
get_all_resource_pools))
|
||||
|
||||
return selected_pools
|
||||
|
||||
|
||||
def list_resourcepools(service_instance):
|
||||
'''
|
||||
Returns a list of resource pools associated with a given service instance.
|
||||
@ -2938,6 +3098,9 @@ def wait_for_task(task, instance_name, task_type, sleep_seconds=1, log_level='de
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.FileNotFound as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareFileNotFoundError(exc.msg)
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
@ -2961,6 +3124,9 @@ def wait_for_task(task, instance_name, task_type, sleep_seconds=1, log_level='de
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.FileNotFound as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareFileNotFoundError(exc.msg)
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
@ -2985,6 +3151,9 @@ def wait_for_task(task, instance_name, task_type, sleep_seconds=1, log_level='de
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.FileNotFound as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareFileNotFoundError(exc.msg)
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
@ -2998,3 +3167,416 @@ def wait_for_task(task, instance_name, task_type, sleep_seconds=1, log_level='de
|
||||
exc_message = '{0} ({1})'.format(exc_message,
|
||||
exc.faultMessage[0].message)
|
||||
raise salt.exceptions.VMwareApiError(exc_message)
|
||||
|
||||
|
||||
def get_vm_by_property(service_instance, name, datacenter=None, vm_properties=None,
|
||||
traversal_spec=None, parent_ref=None):
|
||||
'''
|
||||
Get virtual machine properties based on the traversal specs and properties list,
|
||||
returns Virtual Machine object with properties.
|
||||
|
||||
service_instance
|
||||
Service instance object to access vCenter
|
||||
|
||||
name
|
||||
Name of the virtual machine.
|
||||
|
||||
datacenter
|
||||
Datacenter name
|
||||
|
||||
vm_properties
|
||||
List of vm properties.
|
||||
|
||||
traversal_spec
|
||||
Traversal Spec object(s) for searching.
|
||||
|
||||
parent_ref
|
||||
Container Reference object for searching under a given object.
|
||||
'''
|
||||
if datacenter and not parent_ref:
|
||||
parent_ref = salt.utils.vmware.get_datacenter(service_instance, datacenter)
|
||||
if not vm_properties:
|
||||
vm_properties = ['name',
|
||||
'config.hardware.device',
|
||||
'summary.storage.committed',
|
||||
'summary.storage.uncommitted',
|
||||
'summary.storage.unshared',
|
||||
'layoutEx.file',
|
||||
'config.guestFullName',
|
||||
'config.guestId',
|
||||
'guest.net',
|
||||
'config.hardware.memoryMB',
|
||||
'config.hardware.numCPU',
|
||||
'config.files.vmPathName',
|
||||
'summary.runtime.powerState',
|
||||
'guest.toolsStatus']
|
||||
vm_list = salt.utils.vmware.get_mors_with_properties(service_instance,
|
||||
vim.VirtualMachine,
|
||||
vm_properties,
|
||||
container_ref=parent_ref,
|
||||
traversal_spec=traversal_spec)
|
||||
vm_formatted = [vm for vm in vm_list if vm['name'] == name]
|
||||
if not vm_formatted:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError('The virtual machine was not found.')
|
||||
elif len(vm_formatted) > 1:
|
||||
raise salt.exceptions.VMwareMultipleObjectsError('Multiple virtual machines were found with the '
|
||||
'same name, please specify a container.')
|
||||
return vm_formatted[0]
|
||||
|
||||
|
||||
def get_folder(service_instance, datacenter, placement, base_vm_name=None):
|
||||
'''
|
||||
Returns a Folder Object
|
||||
|
||||
service_instance
|
||||
Service instance object
|
||||
|
||||
datacenter
|
||||
Name of the datacenter
|
||||
|
||||
placement
|
||||
Placement dictionary
|
||||
|
||||
base_vm_name
|
||||
Existing virtual machine name (for cloning)
|
||||
'''
|
||||
log.trace('Retrieving folder information')
|
||||
if base_vm_name:
|
||||
vm_object = get_vm_by_property(service_instance, base_vm_name, vm_properties=['name'])
|
||||
vm_props = salt.utils.vmware.get_properties_of_managed_object(vm_object, properties=['parent'])
|
||||
if 'parent' in vm_props:
|
||||
folder_object = vm_props['parent']
|
||||
else:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError('The virtual machine parent '
|
||||
'object is not defined')
|
||||
elif 'folder' in placement:
|
||||
folder_objects = salt.utils.vmware.get_folders(service_instance, [placement['folder']], datacenter)
|
||||
if len(folder_objects) > 1:
|
||||
raise salt.exceptions.VMwareMultipleObjectsError('Multiple instances are available of the '
|
||||
'specified folder {0}'.format(placement['folder']))
|
||||
folder_object = folder_objects[0]
|
||||
elif datacenter:
|
||||
datacenter_object = salt.utils.vmware.get_datacenter(service_instance, datacenter)
|
||||
dc_props = salt.utils.vmware.get_properties_of_managed_object(datacenter_object, properties=['vmFolder'])
|
||||
if 'vmFolder' in dc_props:
|
||||
folder_object = dc_props['vmFolder']
|
||||
else:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError('The datacenter vm folder object is not defined')
|
||||
return folder_object
|
||||
|
||||
|
||||
def get_placement(service_instance, datacenter, placement=None):
|
||||
'''
|
||||
To create a virtual machine a resource pool needs to be supplied, we would like to use the strictest as possible.
|
||||
|
||||
datacenter
|
||||
Name of the datacenter
|
||||
|
||||
placement
|
||||
Dictionary with the placement info, cluster, host resource pool name
|
||||
|
||||
return
|
||||
Resource pool, cluster and host object if any applies
|
||||
'''
|
||||
log.trace('Retrieving placement information')
|
||||
resourcepool_object, placement_object = None, None
|
||||
if 'host' in placement:
|
||||
host_objects = get_hosts(service_instance, datacenter_name=datacenter, host_names=[placement['host']])
|
||||
if not host_objects:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError('The specified host {0} cannot be found.'.format(placement['host']))
|
||||
try:
|
||||
host_props = get_properties_of_managed_object(host_objects[0],
|
||||
properties=['resourcePool'])
|
||||
resourcepool_object = host_props['resourcePool']
|
||||
except vmodl.query.InvalidProperty:
|
||||
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
|
||||
path='parent',
|
||||
skip=True,
|
||||
type=vim.HostSystem,
|
||||
selectSet=[vmodl.query.PropertyCollector.TraversalSpec(
|
||||
path='resourcePool',
|
||||
skip=False,
|
||||
type=vim.ClusterComputeResource)])
|
||||
resourcepools = get_mors_with_properties(service_instance,
|
||||
vim.ResourcePool,
|
||||
container_ref=host_objects[0],
|
||||
property_list=['name'],
|
||||
traversal_spec=traversal_spec)
|
||||
if resourcepools:
|
||||
resourcepool_object = resourcepools[0]['object']
|
||||
else:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError(
|
||||
'The resource pool of host {0} cannot be found.'.format(placement['host']))
|
||||
placement_object = host_objects[0]
|
||||
elif 'resourcepool' in placement:
|
||||
resourcepool_objects = get_resource_pools(service_instance,
|
||||
[placement['resourcepool']],
|
||||
datacenter_name=datacenter)
|
||||
if len(resourcepool_objects) > 1:
|
||||
raise salt.exceptions.VMwareMultipleObjectsError('Multiple instances are available of the '
|
||||
'specified host {}.'.format(placement['host']))
|
||||
resourcepool_object = resourcepool_objects[0]
|
||||
res_props = get_properties_of_managed_object(resourcepool_object,
|
||||
properties=['parent'])
|
||||
if 'parent' in res_props:
|
||||
placement_object = res_props['parent']
|
||||
else:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError('The resource pool\'s parent '
|
||||
'object is not defined')
|
||||
elif 'cluster' in placement:
|
||||
datacenter_object = get_datacenter(service_instance, datacenter)
|
||||
cluster_object = get_cluster(datacenter_object, placement['cluster'])
|
||||
clus_props = get_properties_of_managed_object(cluster_object,
|
||||
properties=['resourcePool'])
|
||||
if 'resourcePool' in clus_props:
|
||||
resourcepool_object = clus_props['resourcePool']
|
||||
else:
|
||||
raise salt.exceptions.VMwareObjectRetrievalError('The cluster\'s resource pool '
|
||||
'object is not defined')
|
||||
placement_object = cluster_object
|
||||
else:
|
||||
# We are checking the schema for this object, this exception should never be raised
|
||||
raise salt.exceptions.VMwareObjectRetrievalError('Placement is not defined.')
|
||||
return (resourcepool_object, placement_object)
|
||||
|
||||
|
||||
def convert_to_kb(unit, size):
    '''
    Converts the given size to KB based on the unit, returns a long integer.

    unit
        Unit of the size eg. GB; Note: to VMware a GB is the same as GiB = 1024MiB
    size
        Number which represents the size
    '''
    if unit.lower() == 'gb':
        # vCenter needs long value
        target_size = int(size * 1024 * 1024)
    elif unit.lower() == 'mb':
        target_size = int(size * 1024)
    elif unit.lower() == 'kb':
        target_size = int(size)
    else:
        raise salt.exceptions.ArgumentValueError('The unit is not specified')
    return {'size': target_size, 'unit': 'KB'}
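A quick worked example of the conversion above: 2 GB is 2 * 1024 * 1024 =
2097152 KB. Reproduced standalone (plain ``ValueError`` instead of the Salt
exception) so the arithmetic can be checked in isolation:

.. code-block:: python

    def convert_to_kb(unit, size):
        # Same conversion factors as the helper above.
        factors = {'gb': 1024 * 1024, 'mb': 1024, 'kb': 1}
        if unit.lower() not in factors:
            raise ValueError('The unit is not specified')
        return {'size': int(size * factors[unit.lower()]), 'unit': 'KB'}

    assert convert_to_kb('GB', 2) == {'size': 2097152, 'unit': 'KB'}
    assert convert_to_kb('mb', 512) == {'size': 524288, 'unit': 'KB'}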
def power_cycle_vm(virtual_machine, action='on'):
|
||||
'''
|
||||
Powers on/off a virtual machine specified by it's name.
|
||||
|
||||
virtual_machine
|
||||
vim.VirtualMachine object to power on/off virtual machine
|
||||
|
||||
action
|
||||
Operation option to power on/off the machine
|
||||
'''
|
||||
if action == 'on':
|
||||
try:
|
||||
task = virtual_machine.PowerOn()
|
||||
task_name = 'power on'
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
elif action == 'off':
|
||||
try:
|
||||
task = virtual_machine.PowerOff()
|
||||
task_name = 'power off'
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
else:
|
||||
raise salt.exceptions.ArgumentValueError('The given action is not supported')
|
||||
try:
|
||||
wait_for_task(task, get_managed_object_name(virtual_machine), task_name)
|
||||
except salt.exceptions.VMwareFileNotFoundError as exc:
|
||||
raise salt.exceptions.VMwarePowerOnError('An error occurred during power '
|
||||
'operation, a file was not found: {0}'.format(str(exc)))
|
||||
return virtual_machine
|
||||
|
||||
|
||||
def create_vm(vm_name, vm_config_spec, folder_object, resourcepool_object, host_object=None):
|
||||
'''
|
||||
Creates virtual machine from config spec
|
||||
|
||||
vm_name
|
||||
Virtual machine name to be created
|
||||
|
||||
vm_config_spec
|
||||
Virtual Machine Config Spec object
|
||||
|
||||
folder_object
|
||||
vm Folder managed object reference
|
||||
|
||||
resourcepool_object
|
||||
Resource pool object where the machine will be created
|
||||
|
||||
host_object
|
||||
Host object where the machine will ne placed (optional)
|
||||
|
||||
return
|
||||
Virtual Machine managed object reference
|
||||
'''
|
||||
try:
|
||||
if host_object and isinstance(host_object, vim.HostSystem):
|
||||
task = folder_object.CreateVM_Task(vm_config_spec,
|
||||
pool=resourcepool_object,
|
||||
host=host_object)
|
||||
else:
|
||||
task = folder_object.CreateVM_Task(vm_config_spec,
|
||||
pool=resourcepool_object)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
vm_object = wait_for_task(task, vm_name, 'CreateVM Task', 10, 'info')
|
||||
return vm_object
|
||||
|
||||
|
||||
def register_vm(datacenter, name, vmx_path, resourcepool_object, host_object=None):
|
||||
'''
|
||||
Registers a virtual machine to the inventory with the given vmx file, on success
|
||||
it returns the vim.VirtualMachine managed object reference
|
||||
|
||||
datacenter
|
||||
Datacenter object of the virtual machine, vim.Datacenter object
|
||||
|
||||
name
|
||||
Name of the virtual machine
|
||||
|
||||
vmx_path:
|
||||
Full path to the vmx file, datastore name should be included
|
||||
|
||||
resourcepool
|
||||
Placement resource pool of the virtual machine, vim.ResourcePool object
|
||||
|
||||
host
|
||||
Placement host of the virtual machine, vim.HostSystem object
|
||||
'''
|
||||
try:
|
||||
if host_object:
|
||||
task = datacenter.vmFolder.RegisterVM_Task(path=vmx_path, name=name,
|
||||
asTemplate=False,
|
||||
host=host_object,
|
||||
pool=resourcepool_object)
|
||||
else:
|
||||
task = datacenter.vmFolder.RegisterVM_Task(path=vmx_path, name=name,
|
||||
asTemplate=False,
|
||||
pool=resourcepool_object)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
try:
|
||||
vm_ref = wait_for_task(task, name, 'RegisterVM Task')
|
||||
except salt.exceptions.VMwareFileNotFoundError as exc:
|
||||
raise salt.exceptions.VMwareVmRegisterError(
|
||||
'An error occurred during registration operation, the '
|
||||
'configuration file was not found: {0}'.format(str(exc)))
|
||||
return vm_ref
|
||||
|
||||
|
||||
def update_vm(vm_ref, vm_config_spec):
|
||||
'''
|
||||
Updates the virtual machine configuration with the given object
|
||||
|
||||
vm_ref
|
||||
Virtual machine managed object reference
|
||||
|
||||
vm_config_spec
|
||||
Virtual machine config spec object to update
|
||||
'''
|
||||
vm_name = get_managed_object_name(vm_ref)
|
||||
log.trace('Updating vm \'{0}\''.format(vm_name))
|
||||
try:
|
||||
task = vm_ref.ReconfigVM_Task(vm_config_spec)
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
vm_ref = wait_for_task(task, vm_name, 'ReconfigureVM Task')
|
||||
return vm_ref
|
||||
|
||||
|
||||
def delete_vm(vm_ref):
|
||||
'''
|
||||
Destroys the virtual machine
|
||||
|
||||
vm_ref
|
||||
Managed object reference of a virtual machine object
|
||||
'''
|
||||
vm_name = get_managed_object_name(vm_ref)
|
||||
log.trace('Destroying vm \'{0}\''.format(vm_name))
|
||||
try:
|
||||
task = vm_ref.Destroy_Task()
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
wait_for_task(task, vm_name, 'Destroy Task')
|
||||
|
||||
|
||||
def unregister_vm(vm_ref):
|
||||
'''
|
||||
Destroys the virtual machine
|
||||
|
||||
vm_ref
|
||||
Managed object reference of a virtual machine object
|
||||
'''
|
||||
vm_name = get_managed_object_name(vm_ref)
|
||||
log.trace('Destroying vm \'{0}\''.format(vm_name))
|
||||
try:
|
||||
vm_ref.UnregisterVM()
|
||||
except vim.fault.NoPermission as exc:
|
||||
log.exception(exc)
|
||||
raise salt.exceptions.VMwareApiError(
|
||||
'Not enough permissions. Required privilege: '
|
||||
'{}'.format(exc.privilegeId))
|
||||
except vim.fault.VimFault as exc:
|
||||
raise salt.exceptions.VMwareApiError(exc.msg)
|
||||
except vmodl.RuntimeFault as exc:
|
||||
raise salt.exceptions.VMwareRuntimeError(exc.msg)
|
||||
|
@ -125,3 +125,26 @@ def modules_available(*names):
        if not fnmatch.filter(list(__salt__), name):
            not_found.append(name)
    return not_found


def nonzero_retcode_return_true():
    '''
    Sets a nonzero retcode before returning. Designed to test orchestration.
    '''
    __context__['retcode'] = 1
    return True


def nonzero_retcode_return_false():
    '''
    Sets a nonzero retcode before returning. Designed to test orchestration.
    '''
    __context__['retcode'] = 1
    return False


def fail_function(*args, **kwargs):  # pylint: disable=unused-argument
    '''
    Return False no matter what is passed to it
    '''
    return False
@ -0,0 +1,2 @@
test fail with changes:
  test.fail_with_changes
11
tests/integration/files/file/base/orch/issue43204/init.sls
Normal file
@ -0,0 +1,11 @@
Step01:
  salt.state:
    - tgt: 'minion'
    - sls:
      - orch.issue43204.fail_with_changes

Step02:
  salt.function:
    - name: runtests_helpers.nonzero_retcode_return_false
    - tgt: 'minion'
    - fail_function: runtests_helpers.fail_function
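Step02 above is expected to fail even though the helper returns a value,
because the helper sets a nonzero ``retcode`` in ``__context__`` and the
runner treats that as an error. A standalone sketch of the idea
(``__context__`` is injected by the loader in a real execution module; here it
is just a plain dict):

.. code-block:: python

    __context__ = {}

    def nonzero_retcode_return_false():
        # Mirrors the test helper added in this commit: flag failure via the
        # retcode while still handing back an ordinary return value.
        __context__['retcode'] = 1
        return False

    assert nonzero_retcode_return_false() is False
    assert __context__['retcode'] == 1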
@ -6,6 +6,7 @@ Tests for the state runner
|
||||
# Import Python Libs
|
||||
from __future__ import absolute_import
|
||||
import errno
|
||||
import json
|
||||
import os
|
||||
import shutil
|
||||
import signal
|
||||
@ -81,6 +82,58 @@ class StateRunnerTest(ShellCase):
|
||||
self.assertFalse(os.path.exists('/tmp/ewu-2016-12-13'))
|
||||
self.assertNotEqual(code, 0)
|
||||
|
||||
def test_orchestrate_state_and_function_failure(self):
|
||||
'''
|
||||
Ensure that returns from failed minions are in the changes dict where
|
||||
they belong, so they can be programatically analyzed.
|
||||
|
||||
See https://github.com/saltstack/salt/issues/43204
|
||||
'''
|
||||
self.run_run('saltutil.sync_modules')
|
||||
ret = json.loads(
|
||||
'\n'.join(
|
||||
self.run_run(u'state.orchestrate orch.issue43204 --out=json')
|
||||
)
|
||||
)
|
||||
# Drill down to the changes dict
|
||||
state_ret = ret[u'data'][u'master'][u'salt_|-Step01_|-Step01_|-state'][u'changes']
|
||||
func_ret = ret[u'data'][u'master'][u'salt_|-Step02_|-runtests_helpers.nonzero_retcode_return_false_|-function'][u'changes']
|
||||
|
||||
# Remove duration and start time from the results, since they would
|
||||
# vary with each run and that would make it impossible to test.
|
||||
for item in ('duration', 'start_time'):
|
||||
state_ret['ret']['minion']['test_|-test fail with changes_|-test fail with changes_|-fail_with_changes'].pop(item)
|
||||
|
||||
self.assertEqual(
|
||||
state_ret,
|
||||
{
|
||||
u'out': u'highstate',
|
||||
u'ret': {
|
||||
u'minion': {
|
||||
u'test_|-test fail with changes_|-test fail with changes_|-fail_with_changes': {
|
||||
u'__id__': u'test fail with changes',
|
||||
u'__run_num__': 0,
|
||||
u'__sls__': u'orch.issue43204.fail_with_changes',
|
||||
u'changes': {
|
||||
u'testing': {
|
||||
u'new': u'Something pretended to change',
|
||||
u'old': u'Unchanged'
|
||||
}
|
||||
},
|
||||
u'comment': u'Failure!',
|
||||
u'name': u'test fail with changes',
|
||||
u'result': False,
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
self.assertEqual(
|
||||
func_ret,
|
||||
{u'out': u'highstate', u'ret': {u'minion': False}}
|
||||
)
|
||||
|
||||
def test_orchestrate_target_exists(self):
|
||||
'''
|
||||
test orchestration when target exists
|
||||
|
@ -37,11 +37,15 @@ _PKG_TARGETS = {
|
||||
'Debian': ['python-plist', 'apg'],
|
||||
'RedHat': ['units', 'zsh-html'],
|
||||
'FreeBSD': ['aalib', 'pth'],
|
||||
'Suse': ['aalib', 'python-pssh'],
|
||||
'Suse': ['aalib', 'rpm-python'],
|
||||
'MacOS': ['libpng', 'jpeg'],
|
||||
'Windows': ['firefox', '7zip'],
|
||||
}
|
||||
|
||||
_PKG_CAP_TARGETS = {
|
||||
'Suse': [('w3m_ssl', 'w3m')],
|
||||
}
|
||||
|
||||
_PKG_TARGETS_32 = {
|
||||
'CentOS': 'xz-devel.i686'
|
||||
}
|
||||
@ -793,3 +797,260 @@ class PkgTest(ModuleCase, SaltReturnAssertsMixin):
|
||||
self.assertEqual(ret_comment, 'An error was encountered while installing/updating group '
|
||||
'\'handle_missing_pkg_group\': Group \'handle_missing_pkg_group\' '
|
||||
'not found.')
|
||||
|
||||
@skipIf(salt.utils.platform.is_windows(), 'minion is windows')
|
||||
@requires_system_grains
|
||||
def test_pkg_cap_001_installed(self, grains=None):
|
||||
'''
|
||||
This is a destructive test as it installs and then removes a package
|
||||
'''
|
||||
# Skip test if package manager not available
|
||||
if not pkgmgr_avail(self.run_function, self.run_function('grains.items')):
|
||||
self.skipTest('Package manager is not available')
|
||||
|
||||
os_family = grains.get('os_family', '')
|
||||
pkg_cap_targets = _PKG_CAP_TARGETS.get(os_family, [])
|
||||
if not len(pkg_cap_targets) > 0:
|
||||
self.skipTest('Capability not provided')
|
||||
|
||||
target, realpkg = pkg_cap_targets[0]
|
||||
version = self.run_function('pkg.version', [target])
|
||||
realver = self.run_function('pkg.version', [realpkg])
|
||||
|
||||
# If this assert fails, we need to find new targets, this test needs to
|
||||
# be able to test successful installation of packages, so this package
|
||||
# needs to not be installed before we run the states below
|
||||
self.assertFalse(version)
|
||||
self.assertFalse(realver)
|
||||
|
||||
ret = self.run_state('pkg.installed', name=target, refresh=False, resolve_capabilities=True, test=True)
|
||||
self.assertInSaltComment("The following packages would be installed/updated: {0}".format(realpkg), ret)
|
||||
ret = self.run_state('pkg.installed', name=target, refresh=False, resolve_capabilities=True)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
ret = self.run_state('pkg.removed', name=realpkg)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
|
||||
@skipIf(salt.utils.platform.is_windows(), 'minion is windows')
|
||||
@requires_system_grains
|
||||
def test_pkg_cap_002_already_installed(self, grains=None):
|
||||
'''
|
||||
This is a destructive test as it installs and then removes a package
|
||||
'''
|
||||
# Skip test if package manager not available
|
||||
if not pkgmgr_avail(self.run_function, self.run_function('grains.items')):
|
||||
self.skipTest('Package manager is not available')
|
||||
|
||||
os_family = grains.get('os_family', '')
|
||||
pkg_cap_targets = _PKG_CAP_TARGETS.get(os_family, [])
|
||||
if not len(pkg_cap_targets) > 0:
|
||||
self.skipTest('Capability not provided')
|
||||
|
||||
target, realpkg = pkg_cap_targets[0]
|
||||
version = self.run_function('pkg.version', [target])
|
||||
realver = self.run_function('pkg.version', [realpkg])
|
||||
|
||||
# If this assert fails, we need to find new targets, this test needs to
|
||||
# be able to test successful installation of packages, so this package
|
||||
# needs to not be installed before we run the states below
|
||||
self.assertFalse(version)
|
||||
self.assertFalse(realver)
|
||||
|
||||
# install the package already
|
||||
ret = self.run_state('pkg.installed', name=realpkg, refresh=False)
|
||||
|
||||
ret = self.run_state('pkg.installed', name=target, refresh=False, resolve_capabilities=True, test=True)
|
||||
self.assertInSaltComment("All specified packages are already installed", ret)
|
||||
|
||||
ret = self.run_state('pkg.installed', name=target, refresh=False, resolve_capabilities=True)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
|
||||
self.assertInSaltComment("packages are already installed", ret)
|
||||
ret = self.run_state('pkg.removed', name=realpkg)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
|
||||
@skipIf(salt.utils.platform.is_windows(), 'minion is windows')
|
||||
@requires_system_grains
|
||||
def test_pkg_cap_003_installed_multipkg_with_version(self, grains=None):
|
||||
'''
|
||||
This is a destructive test as it installs and then removes two packages
|
||||
'''
|
||||
# Skip test if package manager not available
|
||||
if not pkgmgr_avail(self.run_function, self.run_function('grains.items')):
|
||||
self.skipTest('Package manager is not available')
|
||||
|
||||
os_family = grains.get('os_family', '')
|
||||
pkg_cap_targets = _PKG_CAP_TARGETS.get(os_family, [])
|
||||
if not len(pkg_cap_targets) > 0:
|
||||
self.skipTest('Capability not provided')
|
||||
pkg_targets = _PKG_TARGETS.get(os_family, [])
|
||||
|
||||
# Don't perform this test on FreeBSD since version specification is not
|
||||
# supported.
|
||||
if os_family == 'FreeBSD':
|
||||
return
|
||||
|
||||
# Make sure that we have targets that match the os_family. If this
|
||||
# fails then the _PKG_TARGETS dict above needs to have an entry added,
|
||||
# with two packages that are not installed before these tests are run
|
||||
self.assertTrue(bool(pkg_cap_targets))
|
||||
self.assertTrue(bool(pkg_targets))
|
||||
|
||||
if os_family == 'Arch':
|
||||
for idx in range(13):
|
||||
if idx == 12:
|
||||
raise Exception('Package database locked after 60 seconds, '
|
||||
'bailing out')
|
||||
if not os.path.isfile('/var/lib/pacman/db.lck'):
|
||||
break
|
||||
time.sleep(5)
|
||||
|
||||
capability, realpkg = pkg_cap_targets[0]
|
||||
version = latest_version(self.run_function, pkg_targets[0])
|
||||
realver = latest_version(self.run_function, realpkg)
|
||||
|
||||
# If this assert fails, we need to find new targets, this test needs to
|
||||
# be able to test successful installation of packages, so these
|
||||
# packages need to not be installed before we run the states below
|
||||
self.assertTrue(bool(version))
|
||||
self.assertTrue(bool(realver))
|
||||
|
||||
pkgs = [{pkg_targets[0]: version}, pkg_targets[1], {capability: realver}]
|
||||
ret = self.run_state('pkg.installed',
|
||||
name='test_pkg_cap_003_installed_multipkg_with_version-install',
|
||||
pkgs=pkgs,
|
||||
refresh=False)
|
||||
self.assertSaltFalseReturn(ret)
|
||||
|
||||
ret = self.run_state('pkg.installed',
|
||||
name='test_pkg_cap_003_installed_multipkg_with_version-install-capability',
|
||||
pkgs=pkgs,
|
||||
refresh=False, resolve_capabilities=True, test=True)
|
||||
self.assertInSaltComment("packages would be installed/updated", ret)
|
||||
self.assertInSaltComment("{0}={1}".format(realpkg, realver), ret)
|
||||
|
||||
ret = self.run_state('pkg.installed',
|
||||
name='test_pkg_cap_003_installed_multipkg_with_version-install-capability',
|
||||
pkgs=pkgs,
|
||||
refresh=False, resolve_capabilities=True)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
cleanup_pkgs = pkg_targets
|
||||
cleanup_pkgs.append(realpkg)
|
||||
ret = self.run_state('pkg.removed',
|
||||
name='test_pkg_cap_003_installed_multipkg_with_version-remove',
|
||||
pkgs=cleanup_pkgs)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
|
||||
@skipIf(salt.utils.platform.is_windows(), 'minion is windows')
|
||||
@requires_system_grains
|
||||
def test_pkg_cap_004_latest(self, grains=None):
|
||||
'''
|
||||
This tests pkg.latest with a package that has no epoch (or a zero
|
||||
epoch).
|
||||
'''
|
||||
# Skip test if package manager not available
|
||||
if not pkgmgr_avail(self.run_function, self.run_function('grains.items')):
|
||||
self.skipTest('Package manager is not available')
|
||||
|
||||
os_family = grains.get('os_family', '')
|
||||
pkg_cap_targets = _PKG_CAP_TARGETS.get(os_family, [])
|
||||
if not len(pkg_cap_targets) > 0:
|
||||
self.skipTest('Capability not provided')
|
||||
|
||||
target, realpkg = pkg_cap_targets[0]
|
||||
version = self.run_function('pkg.version', [target])
|
||||
realver = self.run_function('pkg.version', [realpkg])
|
||||
|
||||
# If this assert fails, we need to find new targets, this test needs to
|
||||
# be able to test successful installation of packages, so this package
|
||||
# needs to not be installed before we run the states below
|
||||
self.assertFalse(version)
|
||||
self.assertFalse(realver)
|
||||
|
||||
ret = self.run_state('pkg.latest', name=target, refresh=False, resolve_capabilities=True, test=True)
|
||||
self.assertInSaltComment("The following packages would be installed/upgraded: {0}".format(realpkg), ret)
|
||||
ret = self.run_state('pkg.latest', name=target, refresh=False, resolve_capabilities=True)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
|
||||
ret = self.run_state('pkg.latest', name=target, refresh=False, resolve_capabilities=True)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
self.assertInSaltComment("is already up-to-date", ret)
|
||||
|
||||
ret = self.run_state('pkg.removed', name=realpkg)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
|
||||
@skipIf(salt.utils.platform.is_windows(), 'minion is windows')
|
||||
@requires_system_grains
|
||||
def test_pkg_cap_005_downloaded(self, grains=None):
|
||||
'''
|
||||
This is a destructive test as it installs and then removes a package
|
||||
'''
|
||||
# Skip test if package manager not available
|
||||
if not pkgmgr_avail(self.run_function, self.run_function('grains.items')):
|
||||
self.skipTest('Package manager is not available')
|
||||
|
||||
os_family = grains.get('os_family', '')
|
||||
pkg_cap_targets = _PKG_CAP_TARGETS.get(os_family, [])
|
||||
if not len(pkg_cap_targets) > 0:
|
||||
self.skipTest('Capability not provided')
|
||||
|
||||
target, realpkg = pkg_cap_targets[0]
|
||||
version = self.run_function('pkg.version', [target])
|
||||
realver = self.run_function('pkg.version', [realpkg])
|
||||
|
||||
# If this assert fails, we need to find new targets, this test needs to
|
||||
# be able to test successful installation of packages, so this package
|
||||
# needs to not be installed before we run the states below
|
||||
self.assertFalse(version)
|
||||
self.assertFalse(realver)
|
||||
|
||||
ret = self.run_state('pkg.downloaded', name=target, refresh=False)
|
||||
self.assertSaltFalseReturn(ret)
|
||||
|
||||
ret = self.run_state('pkg.downloaded', name=target, refresh=False, resolve_capabilities=True, test=True)
|
||||
self.assertInSaltComment("The following packages would be downloaded: {0}".format(realpkg), ret)
|
||||
|
||||
ret = self.run_state('pkg.downloaded', name=target, refresh=False, resolve_capabilities=True)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
|
||||
@skipIf(salt.utils.platform.is_windows(), 'minion is windows')
|
||||
@requires_system_grains
|
||||
def test_pkg_cap_006_uptodate(self, grains=None):
|
||||
'''
|
||||
This is a destructive test as it installs and then removes a package
|
||||
'''
|
||||
# Skip test if package manager not available
|
||||
if not pkgmgr_avail(self.run_function, self.run_function('grains.items')):
|
||||
self.skipTest('Package manager is not available')
|
||||
|
||||
os_family = grains.get('os_family', '')
|
||||
pkg_cap_targets = _PKG_CAP_TARGETS.get(os_family, [])
|
||||
if not len(pkg_cap_targets) > 0:
|
||||
self.skipTest('Capability not provided')
|
||||
|
||||
target, realpkg = pkg_cap_targets[0]
|
||||
version = self.run_function('pkg.version', [target])
|
||||
realver = self.run_function('pkg.version', [realpkg])
|
||||
|
||||
# If this assert fails, we need to find new targets, this test needs to
|
||||
# be able to test successful installation of packages, so this package
|
||||
# needs to not be installed before we run the states below
|
||||
self.assertFalse(version)
|
||||
self.assertFalse(realver)
|
||||
|
||||
ret = self.run_state('pkg.installed', name=target,
|
||||
refresh=False, resolve_capabilities=True)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
ret = self.run_state('pkg.uptodate',
|
||||
name='test_pkg_cap_006_uptodate',
|
||||
pkgs=[target],
|
||||
refresh=False,
|
||||
resolve_capabilities=True)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
self.assertInSaltComment("System is already up-to-date", ret)
|
||||
ret = self.run_state('pkg.removed', name=realpkg)
|
||||
self.assertSaltTrueReturn(ret)
|
||||
ret = self.run_state('pkg.uptodate',
|
||||
name='test_pkg_cap_006_uptodate',
|
||||
refresh=False,
|
||||
test=True)
|
||||
self.assertInSaltComment("System update will be performed", ret)
|
||||
|
@ -90,7 +90,7 @@ def get_salt_vars():
            __opts__,
            __grains__,
            __opts__.get('id'),
            __opts__.get('environment'),
            __opts__.get('saltenv'),
        ).compile_pillar()
    else:
        __pillar__ = {}
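This hunk switches the pillar compilation to the renamed ``saltenv`` key. A
standalone sketch of a backward-compatible lookup during such a rename (not
code from this commit, which moves to ``saltenv`` outright):

.. code-block:: python

    def effective_saltenv(opts):
        # Prefer the new key, fall back to the legacy one for old configs.
        return opts.get('saltenv') or opts.get('environment')

    assert effective_saltenv({'environment': 'dev'}) == 'dev'
    assert effective_saltenv({'saltenv': 'qa', 'environment': 'dev'}) == 'qa'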
@ -631,6 +631,18 @@ class VMwareTestCase(ExtendedTestCase):
|
||||
call='function'
|
||||
)
|
||||
|
||||
def test_convert_to_template_call(self):
|
||||
'''
|
||||
Tests that a SaltCloudSystemExit is raised when trying to call convert_to_template
|
||||
with anything other than --action or -a.
|
||||
'''
|
||||
self.assertRaises(
|
||||
SaltCloudSystemExit,
|
||||
vmware.convert_to_template,
|
||||
name=VM_NAME,
|
||||
call='function'
|
||||
)
|
||||
|
||||
def test_avail_sizes(self):
|
||||
'''
|
||||
Tests that avail_sizes returns an empty dictionary.
|
||||
|
@ -278,6 +278,38 @@ class LazyLoaderWhitelistTest(TestCase):
|
||||
self.assertNotIn('grains.get', self.loader)
|
||||
|
||||
|
||||
class LazyLoaderSingleItem(TestCase):
|
||||
'''
|
||||
Test loading a single item via the _load() function
|
||||
'''
|
||||
@classmethod
|
||||
def setUpClass(cls):
|
||||
cls.opts = salt.config.minion_config(None)
|
||||
cls.opts['grains'] = grains(cls.opts)
|
||||
|
||||
def setUp(self):
|
||||
self.loader = LazyLoader(_module_dirs(copy.deepcopy(self.opts), 'modules', 'module'),
|
||||
copy.deepcopy(self.opts),
|
||||
tag='module')
|
||||
|
||||
def tearDown(self):
|
||||
del self.loader
|
||||
|
||||
def test_single_item_no_dot(self):
|
||||
'''
|
||||
Checks that a KeyError is raised when the function key does not contain a '.'
|
||||
'''
|
||||
with self.assertRaises(KeyError) as err:
|
||||
inspect.isfunction(self.loader['testing_no_dot'])
|
||||
|
||||
if six.PY2:
|
||||
self.assertEqual(err.exception[0],
|
||||
'The key \'%s\' should contain a \'.\'')
|
||||
else:
|
||||
self.assertEqual(str(err.exception),
|
||||
str(("The key '%s' should contain a '.'", 'testing_no_dot')))
|
||||
|
||||
|
||||
module_template = '''
|
||||
__load__ = ['test', 'test_alias']
|
||||
__func_alias__ = dict(test_alias='working_alias')
|
||||
|
@ -126,3 +126,26 @@ description:
|
||||
patch('salt.modules.ansiblegate.importlib.import_module', lambda x: x):
|
||||
with pytest.raises(LoaderError) as loader_error:
|
||||
self.resolver.load_module('something.strange')
|
||||
|
||||
def test_virtual_function_no_ansible_installed(self):
|
||||
'''
|
||||
Test Ansible module __virtual__ when ansible is not installed on the minion.
|
||||
:return:
|
||||
'''
|
||||
with patch('salt.modules.ansiblegate.ansible', None):
|
||||
assert ansible.__virtual__() == (False, 'Ansible is not installed on this system')
|
||||
|
||||
@patch('salt.modules.ansiblegate.ansible', MagicMock())
|
||||
@patch('salt.modules.ansiblegate.list', MagicMock())
|
||||
@patch('salt.modules.ansiblegate._set_callables', MagicMock())
|
||||
@patch('salt.modules.ansiblegate.AnsibleModuleCaller', MagicMock())
|
||||
def test_virtual_function_ansible_is_installed(self):
|
||||
'''
|
||||
Test Ansible module __virtual__ when ansible is installed on the minion.
|
||||
:return:
|
||||
'''
|
||||
resolver = MagicMock()
|
||||
resolver.resolve = MagicMock()
|
||||
resolver.resolve.install = MagicMock()
|
||||
with patch('salt.modules.ansiblegate.AnsibleModuleResolver', resolver):
|
||||
assert ansible.__virtual__() == (True, None)
|
||||
|
@ -25,7 +25,7 @@ import salt.utils.hashutils
|
||||
import salt.utils.odict
|
||||
import salt.utils.platform
|
||||
import salt.modules.state as state
|
||||
from salt.exceptions import SaltInvocationError
|
||||
from salt.exceptions import CommandExecutionError, SaltInvocationError
|
||||
from salt.ext import six
|
||||
|
||||
|
||||
@ -362,7 +362,7 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
|
||||
state: {
|
||||
'__opts__': {
|
||||
'cachedir': '/D',
|
||||
'environment': None,
|
||||
'saltenv': None,
|
||||
'__cli': 'salt',
|
||||
},
|
||||
'__utils__': utils,
|
||||
@ -632,7 +632,7 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
|
||||
with patch.dict(state.__opts__, {"test": "A"}):
|
||||
mock = MagicMock(
|
||||
return_value={'test': True,
|
||||
'environment': None}
|
||||
'saltenv': None}
|
||||
)
|
||||
with patch.object(state, '_get_opts', mock):
|
||||
mock = MagicMock(return_value=True)
|
||||
@ -659,7 +659,7 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
|
||||
with patch.dict(state.__opts__, {"test": "A"}):
|
||||
mock = MagicMock(
|
||||
return_value={'test': True,
|
||||
'environment': None}
|
||||
'saltenv': None}
|
||||
)
|
||||
with patch.object(state, '_get_opts', mock):
|
||||
MockState.State.flag = True
|
||||
@ -681,7 +681,7 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
|
||||
with patch.dict(state.__opts__, {"test": "A"}):
|
||||
mock = MagicMock(
|
||||
return_value={'test': True,
|
||||
'environment': None}
|
||||
'saltenv': None}
|
||||
)
|
||||
with patch.object(state, '_get_opts', mock):
|
||||
mock = MagicMock(return_value=True)
|
||||
@ -881,7 +881,7 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
|
||||
|
||||
with patch.dict(state.__opts__, {"test": None}):
|
||||
mock = MagicMock(return_value={"test": "",
|
||||
"environment": None})
|
||||
"saltenv": None})
|
||||
with patch.object(state, '_get_opts', mock):
|
||||
mock = MagicMock(return_value=True)
|
||||
with patch.object(salt.utils,
|
||||
@ -993,3 +993,82 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
        else:
            with patch('salt.utils.files.fopen', mock_open()):
                self.assertTrue(state.pkg(tar_file, 0, "md5"))

    def test_lock_saltenv(self):
        '''
        Tests lock_saltenv in each function which accepts saltenv on the CLI
        '''
        lock_msg = 'lock_saltenv is enabled, saltenv cannot be changed'
        empty_list_mock = MagicMock(return_value=[])
        with patch.dict(state.__opts__, {'lock_saltenv': True}), \
                patch.dict(state.__salt__, {'grains.get': empty_list_mock}), \
                patch.object(state, 'running', empty_list_mock):

            # Test high
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.high(
                    [{"vim": {"pkg": ["installed"]}}], saltenv='base')

            # Test template
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.template('foo', saltenv='base')

            # Test template_str
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.template_str('foo', saltenv='base')

            # Test apply_ with SLS
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.apply_('foo', saltenv='base')

            # Test apply_ with Highstate
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.apply_(saltenv='base')

            # Test highstate
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.highstate(saltenv='base')

            # Test sls
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.sls('foo', saltenv='base')

            # Test top
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.top('foo.sls', saltenv='base')

            # Test show_highstate
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.show_highstate(saltenv='base')

            # Test show_lowstate
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.show_lowstate(saltenv='base')

            # Test sls_id
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.sls_id('foo', 'bar', saltenv='base')

            # Test show_low_sls
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.show_low_sls('foo', saltenv='base')

            # Test show_sls
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.show_sls('foo', saltenv='base')

            # Test show_top
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.show_top(saltenv='base')

            # Test single
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.single('foo.bar', name='baz', saltenv='base')

            # Test pkg
            with self.assertRaisesRegex(CommandExecutionError, lock_msg):
                state.pkg(
                    '/tmp/salt_state.tgz',
                    '760a9353810e36f6d81416366fc426dc',
                    'md5',
                    saltenv='base')
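The new test above exercises the lock_saltenv minion option: when it is enabled, every state function that accepts saltenv on the CLI must refuse to switch environments. A minimal sketch of that guard follows; the helper name and placement are assumptions for illustration, not the actual code in salt/modules/state.py.

.. code-block:: python

    from salt.exceptions import CommandExecutionError

    def _enforce_lock_saltenv(opts, saltenv):
        # Reject an explicit saltenv when the minion locks its environment,
        # using the exact message the test's lock_msg pattern matches.
        if saltenv is not None and opts.get('lock_saltenv', False):
            raise CommandExecutionError(
                'lock_saltenv is enabled, saltenv cannot be changed')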
@ -916,7 +916,7 @@ class GetServiceInstanceViaProxyTestCase(TestCase, LoaderModuleMockMixin):
        }

    def test_supported_proxies(self):
        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter']
        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter', 'esxvm']
        for proxy_type in supported_proxies:
            with patch('salt.modules.vsphere.get_proxy_type',
                       MagicMock(return_value=proxy_type)):
@ -959,7 +959,7 @@ class DisconnectTestCase(TestCase, LoaderModuleMockMixin):
        }

    def test_supported_proxies(self):
        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter']
        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter', 'esxvm']
        for proxy_type in supported_proxies:
            with patch('salt.modules.vsphere.get_proxy_type',
                       MagicMock(return_value=proxy_type)):
@ -1000,7 +1000,7 @@ class TestVcenterConnectionTestCase(TestCase, LoaderModuleMockMixin):
        }

    def test_supported_proxies(self):
        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter']
        supported_proxies = ['esxi', 'esxcluster', 'esxdatacenter', 'vcenter', 'esxvm']
        for proxy_type in supported_proxies:
            with patch('salt.modules.vsphere.get_proxy_type',
                       MagicMock(return_value=proxy_type)):
@ -1076,7 +1076,7 @@ class ListDatacentersViaProxyTestCase(TestCase, LoaderModuleMockMixin):
        }

    def test_supported_proxies(self):
        supported_proxies = ['esxcluster', 'esxdatacenter', 'vcenter']
        supported_proxies = ['esxcluster', 'esxdatacenter', 'vcenter', 'esxvm']
        for proxy_type in supported_proxies:
            with patch('salt.modules.vsphere.get_proxy_type',
                       MagicMock(return_value=proxy_type)):
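All four vsphere hunks above make the same change: the per-function list of supported proxy types gains the new 'esxvm' proxy. The gating idea the tests drive is a plain membership check against the minion's proxy type; the sketch below uses illustrative names and is not the vsphere module's actual helper.

.. code-block:: python

    def _check_supported_proxy(get_proxy_type, supported_proxies):
        # Only run the calling function when the proxy backing this minion
        # is one of the supported types (now including 'esxvm').
        proxy_type = get_proxy_type()
        if proxy_type not in supported_proxies:
            raise ValueError('unsupported proxy type: {0}'.format(proxy_type))
        return proxy_type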
@ -669,8 +669,8 @@ Repository 'DUMMY' not found by its alias, number, or URI.
        zypper_mock.assert_called_once_with(
            '--no-refresh',
            'install',
            '--name',
            '--auto-agree-with-licenses',
            '--name',
            '--download-only',
            'vim'
        )
@ -699,8 +699,8 @@ Repository 'DUMMY' not found by its alias, number, or URI.
        zypper_mock.assert_called_once_with(
            '--no-refresh',
            'install',
            '--name',
            '--auto-agree-with-licenses',
            '--name',
            '--download-only',
            'vim'
        )
@ -724,8 +724,8 @@ Repository 'DUMMY' not found by its alias, number, or URI.
        zypper_mock.assert_called_once_with(
            '--no-refresh',
            'install',
            '--name',
            '--auto-agree-with-licenses',
            '--name',
            'patch:SUSE-PATCH-1234'
        )
        self.assertDictEqual(ret, {"vim": {"old": "1.1", "new": "1.2"}})
1
tests/unit/sdb/__init__.py
Normal file
@ -0,0 +1 @@
# -*- coding: utf-8 -*-
50
tests/unit/sdb/test_yaml.py
Normal file
@ -0,0 +1,50 @@
# -*- coding: utf-8 -*-
'''
Test case for the YAML SDB module
'''

# Import python libs
from __future__ import absolute_import

# Import Salt Testing libs
from tests.support.unit import skipIf, TestCase
from tests.support.mock import (
    NO_MOCK,
    NO_MOCK_REASON,
    MagicMock,
    patch)

# Import Salt libs
import salt.sdb.yaml as sdb


@skipIf(NO_MOCK, NO_MOCK_REASON)
class TestYamlRenderer(TestCase):
    '''
    Test case for the YAML SDB module
    '''

    def test_plaintext(self):
        '''
        Retrieve a value from the top level of the dictionary
        '''
        plain = {'foo': 'bar'}
        with patch('salt.sdb.yaml._get_values', MagicMock(return_value=plain)):
            self.assertEqual(sdb.get('foo'), 'bar')

    def test_nested(self):
        '''
        Retrieve a value from a nested level of the dictionary
        '''
        plain = {'foo': {'bar': 'baz'}}
        with patch('salt.sdb.yaml._get_values', MagicMock(return_value=plain)):
            self.assertEqual(sdb.get('foo:bar'), 'baz')

    def test_encrypted(self):
        '''
        Assume the content is plaintext if GPG is not configured
        '''
        plain = {'foo': 'bar'}
        with patch('salt.sdb.yaml._decrypt', MagicMock(return_value=plain)):
            with patch('salt.sdb.yaml._get_values', MagicMock(return_value=None)):
                self.assertEqual(sdb.get('foo', profile={'gpg': True}), 'bar')
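test_nested above depends on colon-delimited key traversal into the loaded YAML data. The self-contained sketch below shows that lookup idea; it is illustrative only, as the real module delegates to Salt's data-traversal utilities.

.. code-block:: python

    def traverse(data, key, delimiter=':'):
        # Walk a nested dict with a delimited key, e.g. 'foo:bar' -> 'baz'.
        for part in key.split(delimiter):
            data = data[part]
        return data

    assert traverse({'foo': {'bar': 'baz'}}, 'foo:bar') == 'baz'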
@ -815,7 +815,7 @@ class TestFileState(TestCase, LoaderModuleMockMixin):
                        'comment': comt,
                        'result': None,
                        'pchanges': p_chg,
                        'changes': {'/etc/grub.conf': {'directory': 'new'}}
                        'changes': {}
                        })
            self.assertDictEqual(filestate.directory(name,
                                                     user=user,
@ -841,6 +841,11 @@ class TestFileState(TestCase, LoaderModuleMockMixin):
                                 ret)

            recurse = ['ignore_files', 'ignore_dirs']
            ret.update({'comment': 'Must not specify "recurse" '
                                   'options "ignore_files" and '
                                   '"ignore_dirs" at the same '
                                   'time.',
                        'pchanges': {}})
            with patch.object(os.path, 'isdir', mock_t):
                self.assertDictEqual(filestate.directory
                                     (name, user=user,
@ -55,7 +55,7 @@ class PillarTestCase(TestCase):
            'os': 'Ubuntu',
        }
        pillar = salt.pillar.Pillar(opts, grains, 'mocked-minion', 'dev')
        self.assertEqual(pillar.opts['environment'], 'dev')
        self.assertEqual(pillar.opts['saltenv'], 'dev')
        self.assertEqual(pillar.opts['pillarenv'], 'dev')

    def test_ext_pillar_no_extra_minion_data_val_dict(self):
@ -416,7 +416,7 @@ class PillarTestCase(TestCase):
            'state_top': '',
            'pillar_roots': [],
            'extension_modules': '',
            'environment': 'base',
            'saltenv': 'base',
            'file_roots': [],
        }
        grains = {
@ -584,7 +584,7 @@ class RemotePillarTestCase(TestCase):

        salt.pillar.RemotePillar({}, self.grains, 'mocked-minion', 'dev')
        mock_get_extra_minion_data.assert_called_once_with(
            {'environment': 'dev'})
            {'saltenv': 'dev'})

    def test_multiple_keys_in_opts_added_to_pillar(self):
        opts = {
@ -702,7 +702,7 @@ class AsyncRemotePillarTestCase(TestCase):

        salt.pillar.RemotePillar({}, self.grains, 'mocked-minion', 'dev')
        mock_get_extra_minion_data.assert_called_once_with(
            {'environment': 'dev'})
            {'saltenv': 'dev'})

    def test_pillar_send_extra_minion_data_from_config(self):
        opts = {
293
tests/unit/utils/vmware/test_vm.py
Normal file
@ -0,0 +1,293 @@
# -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Agnes Tevesz <agnes.tevesz@morganstanley.com>`

Tests for virtual machine related functions in salt.utils.vmware
'''

# Import python libraries
from __future__ import absolute_import
import logging

# Import Salt testing libraries
from tests.support.unit import TestCase, skipIf
from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, MagicMock

from salt.exceptions import VMwareRuntimeError, VMwareApiError, ArgumentValueError

# Import Salt libraries
import salt.utils.vmware as vmware

# Import Third Party Libs
try:
    from pyVmomi import vim, vmodl
    HAS_PYVMOMI = True
except ImportError:
    HAS_PYVMOMI = False

# Get Logging Started
log = logging.getLogger(__name__)


@skipIf(NO_MOCK, NO_MOCK_REASON)
class ConvertToKbTestCase(TestCase):
    '''Tests for converting units'''

    def setUp(self):
        pass

    def test_gb_conversion_call(self):
        self.assertEqual(vmware.convert_to_kb('Gb', 10), {'size': int(10485760), 'unit': 'KB'})

    def test_mb_conversion_call(self):
        self.assertEqual(vmware.convert_to_kb('Mb', 10), {'size': int(10240), 'unit': 'KB'})

    def test_kb_conversion_call(self):
        self.assertEqual(vmware.convert_to_kb('Kb', 10), {'size': int(10), 'unit': 'KB'})

    def test_conversion_bad_input_argument_fault(self):
        self.assertRaises(ArgumentValueError, vmware.convert_to_kb, 'test', 10)


@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
@patch('salt.utils.vmware.get_managed_object_name', MagicMock())
@patch('salt.utils.vmware.wait_for_task', MagicMock())
class CreateVirtualMachineTestCase(TestCase):
    '''Tests for salt.utils.vmware.create_vm'''

    def setUp(self):
        self.vm_name = 'fake_vm'
        self.mock_task = MagicMock()
        self.mock_config_spec = MagicMock()
        self.mock_resourcepool_object = MagicMock()
        self.mock_host_object = MagicMock()
        self.mock_vm_create_task = MagicMock(return_value=self.mock_task)
        self.mock_folder_object = MagicMock(CreateVM_Task=self.mock_vm_create_task)

    def test_create_vm_pool_task_call(self):
        vmware.create_vm(self.vm_name, self.mock_config_spec,
                         self.mock_folder_object, self.mock_resourcepool_object)
        self.mock_vm_create_task.assert_called_once()

    def test_create_vm_host_task_call(self):
        vmware.create_vm(self.vm_name, self.mock_config_spec,
                         self.mock_folder_object, self.mock_resourcepool_object,
                         host_object=self.mock_host_object)
        self.mock_vm_create_task.assert_called_once()

    def test_create_vm_raise_no_permission(self):
        exception = vim.fault.NoPermission()
        exception.msg = 'vim.fault.NoPermission msg'
        self.mock_folder_object.CreateVM_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareApiError) as exc:
            vmware.create_vm(self.vm_name, self.mock_config_spec,
                             self.mock_folder_object, self.mock_resourcepool_object)
        self.assertEqual(exc.exception.strerror,
                         'Not enough permissions. Required privilege: ')

    def test_create_vm_raise_vim_fault(self):
        exception = vim.fault.VimFault()
        exception.msg = 'vim.fault.VimFault msg'
        self.mock_folder_object.CreateVM_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareApiError) as exc:
            vmware.create_vm(self.vm_name, self.mock_config_spec,
                             self.mock_folder_object, self.mock_resourcepool_object)
        self.assertEqual(exc.exception.strerror, 'vim.fault.VimFault msg')

    def test_create_vm_raise_runtime_fault(self):
        exception = vmodl.RuntimeFault()
        exception.msg = 'vmodl.RuntimeFault msg'
        self.mock_folder_object.CreateVM_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareRuntimeError) as exc:
            vmware.create_vm(self.vm_name, self.mock_config_spec,
                             self.mock_folder_object, self.mock_resourcepool_object)
        self.assertEqual(exc.exception.strerror, 'vmodl.RuntimeFault msg')

    def test_create_vm_wait_for_task(self):
        mock_wait_for_task = MagicMock()
        with patch('salt.utils.vmware.wait_for_task', mock_wait_for_task):
            vmware.create_vm(self.vm_name, self.mock_config_spec,
                             self.mock_folder_object, self.mock_resourcepool_object)
            mock_wait_for_task.assert_called_once_with(
                self.mock_task, self.vm_name, 'CreateVM Task', 10, 'info')


@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
@patch('salt.utils.vmware.get_managed_object_name', MagicMock())
@patch('salt.utils.vmware.wait_for_task', MagicMock())
class RegisterVirtualMachineTestCase(TestCase):
    '''Tests for salt.utils.vmware.register_vm'''

    def setUp(self):
        self.vm_name = 'fake_vm'
        self.mock_task = MagicMock()
        self.mock_vmx_path = MagicMock()
        self.mock_resourcepool_object = MagicMock()
        self.mock_host_object = MagicMock()
        self.mock_vm_register_task = MagicMock(return_value=self.mock_task)
        self.vm_folder_object = MagicMock(RegisterVM_Task=self.mock_vm_register_task)
        self.datacenter = MagicMock(vmFolder=self.vm_folder_object)

    def test_register_vm_pool_task_call(self):
        vmware.register_vm(self.datacenter, self.vm_name, self.mock_vmx_path,
                           self.mock_resourcepool_object)
        self.mock_vm_register_task.assert_called_once()

    def test_register_vm_host_task_call(self):
        vmware.register_vm(self.datacenter, self.vm_name, self.mock_vmx_path,
                           self.mock_resourcepool_object,
                           host_object=self.mock_host_object)
        self.mock_vm_register_task.assert_called_once()

    def test_register_vm_raise_no_permission(self):
        exception = vim.fault.NoPermission()
        self.vm_folder_object.RegisterVM_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareApiError) as exc:
            vmware.register_vm(self.datacenter, self.vm_name, self.mock_vmx_path,
                               self.mock_resourcepool_object)
        self.assertEqual(exc.exception.strerror,
                         'Not enough permissions. Required privilege: ')

    def test_register_vm_raise_vim_fault(self):
        exception = vim.fault.VimFault()
        exception.msg = 'vim.fault.VimFault msg'
        self.vm_folder_object.RegisterVM_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareApiError) as exc:
            vmware.register_vm(self.datacenter, self.vm_name, self.mock_vmx_path,
                               self.mock_resourcepool_object)
        self.assertEqual(exc.exception.strerror, 'vim.fault.VimFault msg')

    def test_register_vm_raise_runtime_fault(self):
        exception = vmodl.RuntimeFault()
        exception.msg = 'vmodl.RuntimeFault msg'
        self.vm_folder_object.RegisterVM_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareRuntimeError) as exc:
            vmware.register_vm(self.datacenter, self.vm_name, self.mock_vmx_path,
                               self.mock_resourcepool_object)
        self.assertEqual(exc.exception.strerror, 'vmodl.RuntimeFault msg')

    def test_register_vm_wait_for_task(self):
        mock_wait_for_task = MagicMock()
        with patch('salt.utils.vmware.wait_for_task', mock_wait_for_task):
            vmware.register_vm(self.datacenter, self.vm_name, self.mock_vmx_path,
                               self.mock_resourcepool_object)
            mock_wait_for_task.assert_called_once_with(
                self.mock_task, self.vm_name, 'RegisterVM Task')


@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
@patch('salt.utils.vmware.get_managed_object_name', MagicMock())
@patch('salt.utils.vmware.wait_for_task', MagicMock())
class UpdateVirtualMachineTestCase(TestCase):
    '''Tests for salt.utils.vmware.update_vm'''

    def setUp(self):
        self.mock_task = MagicMock()
        self.mock_config_spec = MagicMock()
        self.mock_vm_update_task = MagicMock(return_value=self.mock_task)
        self.mock_vm_ref = MagicMock(ReconfigVM_Task=self.mock_vm_update_task)

    def test_update_vm_task_call(self):
        vmware.update_vm(self.mock_vm_ref, self.mock_config_spec)
        self.mock_vm_update_task.assert_called_once()

    def test_update_vm_raise_vim_fault(self):
        exception = vim.fault.VimFault()
        exception.msg = 'vim.fault.VimFault'
        self.mock_vm_ref.ReconfigVM_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareApiError) as exc:
            vmware.update_vm(self.mock_vm_ref, self.mock_config_spec)
        self.assertEqual(exc.exception.strerror, 'vim.fault.VimFault')

    def test_update_vm_raise_runtime_fault(self):
        exception = vmodl.RuntimeFault()
        exception.msg = 'vmodl.RuntimeFault'
        self.mock_vm_ref.ReconfigVM_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareRuntimeError) as exc:
            vmware.update_vm(self.mock_vm_ref, self.mock_config_spec)
        self.assertEqual(exc.exception.strerror, 'vmodl.RuntimeFault')

    def test_update_vm_wait_for_task(self):
        mock_wait_for_task = MagicMock()
        with patch('salt.utils.vmware.get_managed_object_name',
                   MagicMock(return_value='my_vm')):
            with patch('salt.utils.vmware.wait_for_task', mock_wait_for_task):
                vmware.update_vm(self.mock_vm_ref, self.mock_config_spec)
                mock_wait_for_task.assert_called_once_with(
                    self.mock_task, 'my_vm', 'ReconfigureVM Task')


@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
@patch('salt.utils.vmware.get_managed_object_name', MagicMock())
@patch('salt.utils.vmware.wait_for_task', MagicMock())
class DeleteVirtualMachineTestCase(TestCase):
    '''Tests for salt.utils.vmware.delete_vm'''

    def setUp(self):
        self.mock_task = MagicMock()
        self.mock_vm_destroy_task = MagicMock(return_value=self.mock_task)
        self.mock_vm_ref = MagicMock(Destroy_Task=self.mock_vm_destroy_task)

    def test_destroy_vm_task_call(self):
        vmware.delete_vm(self.mock_vm_ref)
        self.mock_vm_destroy_task.assert_called_once()

    def test_destroy_vm_raise_vim_fault(self):
        exception = vim.fault.VimFault()
        exception.msg = 'vim.fault.VimFault'
        self.mock_vm_ref.Destroy_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareApiError) as exc:
            vmware.delete_vm(self.mock_vm_ref)
        self.assertEqual(exc.exception.strerror, 'vim.fault.VimFault')

    def test_destroy_vm_raise_runtime_fault(self):
        exception = vmodl.RuntimeFault()
        exception.msg = 'vmodl.RuntimeFault'
        self.mock_vm_ref.Destroy_Task = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareRuntimeError) as exc:
            vmware.delete_vm(self.mock_vm_ref)
        self.assertEqual(exc.exception.strerror, 'vmodl.RuntimeFault')

    def test_destroy_vm_wait_for_task(self):
        mock_wait_for_task = MagicMock()
        with patch('salt.utils.vmware.get_managed_object_name',
                   MagicMock(return_value='my_vm')):
            with patch('salt.utils.vmware.wait_for_task', mock_wait_for_task):
                vmware.delete_vm(self.mock_vm_ref)
                mock_wait_for_task.assert_called_once_with(
                    self.mock_task, 'my_vm', 'Destroy Task')


@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(not HAS_PYVMOMI, 'The \'pyvmomi\' library is missing')
@patch('salt.utils.vmware.get_managed_object_name', MagicMock())
class UnregisterVirtualMachineTestCase(TestCase):
    '''Tests for salt.utils.vmware.unregister_vm'''

    def setUp(self):
        self.mock_vm_unregister = MagicMock()
        self.mock_vm_ref = MagicMock(UnregisterVM=self.mock_vm_unregister)

    def test_unregister_vm_task_call(self):
        vmware.unregister_vm(self.mock_vm_ref)
        self.mock_vm_unregister.assert_called_once()

    def test_unregister_vm_raise_vim_fault(self):
        exception = vim.fault.VimFault()
        exception.msg = 'vim.fault.VimFault'
        self.mock_vm_ref.UnregisterVM = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareApiError) as exc:
            vmware.unregister_vm(self.mock_vm_ref)
        self.assertEqual(exc.exception.strerror, 'vim.fault.VimFault')

    def test_unregister_vm_raise_runtime_fault(self):
        exception = vmodl.RuntimeFault()
        exception.msg = 'vmodl.RuntimeFault'
        self.mock_vm_ref.UnregisterVM = MagicMock(side_effect=exception)
        with self.assertRaises(VMwareRuntimeError) as exc:
            vmware.unregister_vm(self.mock_vm_ref)
        self.assertEqual(exc.exception.strerror, 'vmodl.RuntimeFault')
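Every *_raise_* test in this new file asserts the same translation: pyVmomi faults raised by the underlying task call surface as Salt's VMwareApiError or VMwareRuntimeError, with the fault message carried in strerror. The sketch below captures that pattern; it is an assumption for illustration, not the salt.utils.vmware implementation.

.. code-block:: python

    from pyVmomi import vim, vmodl
    from salt.exceptions import VMwareApiError, VMwareRuntimeError

    def _translate_faults(task_call, *args, **kwargs):
        # Re-raise pyVmomi faults as the Salt exceptions the tests expect.
        try:
            return task_call(*args, **kwargs)
        except vim.fault.NoPermission:
            raise VMwareApiError('Not enough permissions. Required privilege: ')
        except vim.fault.VimFault as exc:
            raise VMwareApiError(exc.msg)
        except vmodl.RuntimeFault as exc:
            raise VMwareRuntimeError(exc.msg)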