Mirror of https://github.com/valitydev/salt.git
Synced 2024-11-07 08:58:59 +00:00

Merge branch 'develop' into develop

This commit is contained in: commit d386daa0af
@@ -250,9 +250,9 @@ on_saltstack = 'SALT_ON_SALTSTACK' in os.environ

 project = 'Salt'

 version = salt.version.__version__

-latest_release = '2017.7.4'  # latest release
-previous_release = '2016.11.9'  # latest release from previous branch
-previous_release_dir = '2016.11'  # path on web server for previous branch
+latest_release = '2018.3.0'  # latest release
+previous_release = '2017.7.5'  # latest release from previous branch
+previous_release_dir = '2017.7'  # path on web server for previous branch
 next_release = ''  # next release
 next_release_dir = ''  # path on web server for next release branch

@@ -33,6 +33,8 @@ A good example of this would be setting up a package manager early on:

 In this situation, the yum repo is going to be configured before other states,
 and if it fails to lay down the config file, then no other states will be
 executed.
+It is possible to override a Global Failhard (see below) by explicitly setting
+it to ``False`` in the state.

 Global Failhard
 ===============
@@ -51,4 +53,4 @@ see states failhard if an admin is not actively aware that the failhard has
 been set.

 To use the global failhard set failhard: True in the master configuration
-file.
+file.

@@ -504,6 +504,15 @@ The ``onfail`` requisite is applied in the same way as ``require`` as ``watch``:
         - onfail:
           - mount: primary_mount

+.. note::
+
+    Setting failhard (:ref:`globally <global-failhard>` or in
+    :ref:`the failing state <state-level-failhard>`) to ``True`` will cause
+    ``onfail``, ``onfail_in`` and ``onfail_any`` requisites to be ignored.
+    If you want to combine a global failhard set to True with ``onfail``,
+    ``onfail_in`` or ``onfail_any``, you will have to explicitly set failhard
+    to ``False`` (overriding the global setting) in the state that could fail.
+
 .. note::

     Beginning in the ``2016.11.0`` release of Salt, ``onfail`` uses OR logic for
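The failhard/override interaction described in this hunk can be sketched as a toy model in plain Python (a hypothetical simulation, not Salt's actual state compiler): states run in order, a failure whose effective failhard is True aborts the run, and a state-level failhard overrides the global one.

```python
def run_states(states, global_failhard=False):
    """Toy model of Salt's failhard semantics.

    Each state is a dict with 'name', 'ok' (whether it would succeed),
    and an optional state-level 'failhard' overriding the global one.
    Returns the names of the states that actually ran.
    """
    ran = []
    for state in states:
        ran.append(state['name'])
        if not state['ok']:
            # State-level failhard wins over the global setting.
            effective = state.get('failhard', global_failhard)
            if effective:
                break  # hard stop: no further states execute
    return ran

# With global failhard, a failing state stops the run...
print(run_states([{'name': 'a', 'ok': False}, {'name': 'b', 'ok': True}],
                 global_failhard=True))
# ...unless the failing state sets failhard: False explicitly,
# which also lets onfail-style handlers further down the run fire.
print(run_states([{'name': 'a', 'ok': False, 'failhard': False},
                  {'name': 'b', 'ok': True}],
                 global_failhard=True))
```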
@@ -111,7 +111,11 @@ Here is an example of a profile:
       cpu_family: INTEL_XEON
       ram: 32768
       public_lan: 1
+      public_ips:
+        - 172.217.18.174
       private_lan: 2
+      private_ips:
+        - 192.168.100.10
       public_firewall_rules:
         Allow SSH:
           protocol: TCP
@@ -152,6 +156,13 @@ command:

     # salt-cloud --list-sizes my-profitbricks-config

+.. versionadded:: Fluorine
+
+One or more public IP addresses can be reserved with the following command:
+
+.. code-block:: bash
+
+    # salt-cloud -f reserve_ipblock my-profitbricks-config location='us/ewr' size=1
+
 Profile Specifics:
 ------------------

@@ -208,6 +219,10 @@ public_lan
     LAN exists, then a new public LAN will be created. The value accepts a LAN
     ID (integer).

+.. versionadded:: Fluorine
+
+public_ips
+    Public IPs assigned to the NIC in the public LAN.
+
 public_firewall_rules
     This option allows for a list of firewall rules assigned to the public
     network interface.
@@ -227,6 +242,10 @@ private_lan
     LAN exists, then a new private LAN will be created. The value accepts a LAN
     ID (integer).

+.. versionadded:: Fluorine
+
+private_ips
+    Private IPs assigned in the private LAN. NAT setting is ignored when this setting is active.
+
 private_firewall_rules
     This option allows for a list of firewall rules assigned to the private
     network interface.
@@ -88,6 +88,33 @@ by their ``os`` grain:
     - match: grain
     - servers

+Pillar definitions can also take a keyword argument ``ignore_missing``.
+When the value of ``ignore_missing`` is ``True``, all errors for missing
+pillar files are ignored. The default value for ``ignore_missing`` is
+``False``.
+
+Here is an example using the ``ignore_missing`` keyword parameter to ignore
+errors for missing pillar files:
+
+.. code-block:: yaml
+
+    base:
+      '*':
+        - servers
+        - systems
+        - ignore_missing: True
+
+Assuming that the pillar ``servers`` exists in the fileserver backend
+and the pillar ``systems`` doesn't, all pillar data from ``servers``
+pillar is delivered to minions and no error for the missing pillar
+``systems`` is noted under the key ``_errors`` in the pillar data
+delivered to minions.
+
+Should the ``ignore_missing`` keyword parameter have the value ``False``,
+an error for the missing pillar ``systems`` would produce the value
+``Specified SLS 'systems' in environment 'base' is not available on the salt master``
+under the key ``_errors`` in the pillar data delivered to minions.
+
 ``/srv/pillar/packages.sls``

 .. code-block:: jinja
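The ``ignore_missing`` behavior documented in this hunk can be modeled with a small standalone function (a hypothetical helper for illustration, not Salt's actual pillar compiler): missing SLS entries either become ``_errors`` or are silently skipped.

```python
def compile_pillar(available, tops, ignore_missing=False):
    """Merge pillar SLS data listed in `tops`, collecting an error for
    each missing SLS unless ignore_missing is set.

    `available` maps SLS names to their data dicts.
    """
    pillar, errors = {}, []
    for sls in tops:
        if sls in available:
            pillar.update(available[sls])
        elif not ignore_missing:
            errors.append(
                "Specified SLS '{0}' in environment 'base' "
                "is not available on the salt master".format(sls))
    if errors:
        pillar['_errors'] = errors
    return pillar

data = {'servers': {'role': 'web'}}
# With ignore_missing=True the missing 'systems' pillar is skipped quietly:
print(compile_pillar(data, ['servers', 'systems'], ignore_missing=True))
# Without it, the missing pillar is reported under _errors:
print('_errors' in compile_pillar(data, ['servers', 'systems']))
```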
@@ -481,9 +481,9 @@ Configuration

 By default, automatic discovery is disabled.

-..warning::
-    Due to the current limitations that will be changing in a future, before you turn on auto-discovery,
-    make sure your network is secured and trusted.
+.. warning::
+    Due to the current limitations that will be changing in a future, before you turn on auto-discovery,
+    make sure your network is secured and trusted.

 Auto-discovery is configured on Master and Minion. Both of them are configured via the ``discovery`` option
 as follows:

@@ -9,21 +9,36 @@ Minion Startup Events
 ---------------------

 When a minion starts up it sends a notification on the event bus with a tag
-that looks like this: `salt/minion/<minion_id>/start`. For historical reasons
+that looks like this: ``salt/minion/<minion_id>/start``. For historical reasons
 the minion also sends a similar event with an event tag like this:
-`minion_start`. This duplication can cause a lot of clutter on the event bus
-when there are many minions. Set `enable_legacy_startup_events: False` in the
-minion config to ensure only the `salt/minion/<minion_id>/start` events are
+``minion_start``. This duplication can cause a lot of clutter on the event bus
+when there are many minions. Set ``enable_legacy_startup_events: False`` in the
+minion config to ensure only the ``salt/minion/<minion_id>/start`` events are
 sent.

 The new :conf_minion:`enable_legacy_startup_events` minion config option
 defaults to ``True``, but will be set to default to ``False`` beginning with
 the Neon release of Salt.

-The Salt Syndic currently sends an old style `syndic_start` event as well. The
+The Salt Syndic currently sends an old style ``syndic_start`` event as well. The
 syndic respects :conf_minion:`enable_legacy_startup_events` as well.


+Failhard changes
+----------------
+
+It is now possible to override a global failhard setting with a state-level
+failhard setting. This is most useful in the case where global failhard is set
+to ``True`` and you want the execution not to stop for a specific state that
+could fail, by setting the state-level failhard to ``False``.
+This also allows for the use of ``onfail*``-requisites, which would previously
+be ignored when a global failhard was set to ``True``.
+This is a deviation from previous behavior, where the global failhard setting
+always resulted in an immediate stop whenever any state failed (regardless
+of whether the failing state had a failhard setting of its own, or whether
+any ``onfail*``-requisites were used).
+
+
 Pass Through Options to :py:func:`file.serialize <salt.states.file.serialize>` State
 ------------------------------------------------------------------------------------

@@ -169,3 +184,12 @@ The ``trafficserver`` state had the following changes:
     function instead.

 The ``win_update`` state has been removed. Please use the ``win_wua`` state instead.
+
+Utils Deprecations
+==================
+
+The ``vault`` utils module had the following changes:
+
+- Support for specifying Vault connection data within a 'profile' has been removed.
+  Please see the :mod:`vault execution module <salt.modules.vault>` documentation for
+  details on the new configuration schema.
@@ -67,14 +67,12 @@ _su_cmd() {


 _get_pid() {
-    netstat -n $NS_NOTRIM -ap --protocol=unix 2>$ERROR_TO_DEVNULL \
-        | sed -r -e "\|\s${SOCK_DIR}/minion_event_${MINION_ID_HASH}_pub\.ipc$|"'!d; s|/.*||; s/.*\s//;' \
-        | uniq
+    cat $PID_FILE 2>/dev/null
 }


 _is_running() {
-    [ -n "$(_get_pid)" ]
+    [ -n "$(_get_pid)" ] && ps wwwaxu | grep '[s]alt-minion' | awk '{print $2}' | grep -qi "\b$(_get_pid)\b"
 }


@@ -219,7 +217,7 @@ status() {
     local retval=0
     local pid="$(_get_pid)"

-    if [ -n "$pid" ]; then
+    if _is_running; then
         # Unquote $pid here to display multiple PIDs in one line
         echo "$SERVICE:$MINION_USER:$MINION_ID is running:" $pid
     else
@@ -248,7 +248,7 @@ class LocalClient(object):

         return pub_data

-    def _check_pub_data(self, pub_data):
+    def _check_pub_data(self, pub_data, listen=True):
         '''
         Common checks on the pub_data data structure returned from running pub
         '''
@@ -281,7 +281,13 @@ class LocalClient(object):
                 print('No minions matched the target. '
                       'No command was sent, no jid was assigned.')
                 return {}
         else:
+            # don't install event subscription listeners when the request is async
+            # and doesn't care. this is important as it will create event leaks otherwise
+            if not listen:
+                return pub_data
+
             if self.opts.get('order_masters'):
                 self.event.subscribe('syndic/.*/{0}'.format(pub_data['jid']), 'regex')

             self.event.subscribe('salt/job/{0}'.format(pub_data['jid']))
@@ -336,7 +342,7 @@ class LocalClient(object):
             # Convert to generic client error and pass along message
             raise SaltClientError(general_exception)

-        return self._check_pub_data(pub_data)
+        return self._check_pub_data(pub_data, listen=listen)

     def gather_minions(self, tgt, expr_form):
         _res = salt.utils.minions.CkMinions(self.opts).check_minions(tgt, tgt_type=expr_form)
@@ -393,7 +399,7 @@ class LocalClient(object):
             # Convert to generic client error and pass along message
             raise SaltClientError(general_exception)

-        raise tornado.gen.Return(self._check_pub_data(pub_data))
+        raise tornado.gen.Return(self._check_pub_data(pub_data, listen=listen))

     def cmd_async(
             self,
@@ -425,6 +431,7 @@ class LocalClient(object):
                 tgt_type,
                 ret,
                 jid=jid,
+                listen=False,
                 **kwargs)
         try:
             return pub_data['jid']
@@ -120,7 +120,7 @@ try:
     import profitbricks
     from profitbricks.client import (
         ProfitBricksService, Server,
-        NIC, Volume, FirewallRule,
+        NIC, Volume, FirewallRule, IPBlock,
         Datacenter, LoadBalancer, LAN,
         PBNotFoundError, PBError
     )
@@ -626,6 +626,39 @@ def list_nodes_full(conn=None, call=None):
     return ret


+def reserve_ipblock(call=None, kwargs=None):
+    '''
+    Reserve the IP Block
+    '''
+    if call == 'action':
+        raise SaltCloudSystemExit(
+            'The reserve_ipblock function must be called with -f or '
+            '--function.'
+        )
+
+    conn = get_conn()
+
+    if kwargs is None:
+        kwargs = {}
+
+    ret = {}
+    ret['ips'] = []
+
+    if kwargs.get('location') is None:
+        raise SaltCloudExecutionFailure('The "location" parameter is required')
+    location = kwargs.get('location')
+
+    size = 1
+    if kwargs.get('size') is not None:
+        size = kwargs.get('size')
+
+    block = conn.reserve_ipblock(IPBlock(size=size, location=location))
+    for item in block['properties']['ips']:
+        ret['ips'].append(item)
+
+    return ret
+
+
 def show_instance(name, call=None):
     '''
     Show the details from the provider concerning an instance
@@ -677,12 +710,14 @@ def _get_nics(vm_):
         firewall_rules = []
         # Set LAN to public if it already exists, otherwise create a new
         # public LAN.
-        lan_id = set_public_lan(int(vm_['public_lan']))
         if 'public_firewall_rules' in vm_:
             firewall_rules = _get_firewall_rules(vm_['public_firewall_rules'])
-        nics.append(NIC(lan=lan_id,
-                        name='public',
-                        firewall_rules=firewall_rules))
+        nic = NIC(lan=set_public_lan(int(vm_['public_lan'])),
+                  name='public',
+                  firewall_rules=firewall_rules)
+        if 'public_ips' in vm_:
+            nic.ips = _get_ip_addresses(vm_['public_ips'])
+        nics.append(nic)

     if 'private_lan' in vm_:
         firewall_rules = []
@@ -691,7 +726,9 @@ def _get_nics(vm_):
         nic = NIC(lan=int(vm_['private_lan']),
                   name='private',
                   firewall_rules=firewall_rules)
-        if 'nat' in vm_:
+        if 'private_ips' in vm_:
+            nic.ips = _get_ip_addresses(vm_['private_ips'])
+        if 'nat' in vm_ and 'private_ips' not in vm_:
             nic.nat = vm_['nat']
         nics.append(nic)
     return nics
@@ -1180,6 +1217,17 @@ def _get_data_volumes(vm_):
     return ret


+def _get_ip_addresses(ip_addresses):
+    '''
+    Construct a list of IP addresses
+    '''
+    ret = []
+    for item in ip_addresses:
+        ret.append(item)
+
+    return ret
+
+
 def _get_firewall_rules(firewall_rules):
     '''
     Construct a list of optional firewall rules from the cloud profile.
@@ -1495,6 +1495,12 @@ def os_data():
             )
         elif salt.utils.path.which('supervisord') in init_cmdline:
             grains['init'] = 'supervisord'
+        elif salt.utils.path.which('dumb-init') in init_cmdline:
+            # https://github.com/Yelp/dumb-init
+            grains['init'] = 'dumb-init'
+        elif salt.utils.path.which('tini') in init_cmdline:
+            # https://github.com/krallin/tini
+            grains['init'] = 'tini'
         elif init_cmdline == ['runit']:
             grains['init'] = 'runit'
         elif '/sbin/my_init' in init_cmdline:
@@ -1919,16 +1925,21 @@ def fqdns():
     fqdns = set()

     addresses = salt.utils.network.ip_addrs(include_loopback=False,
-                                            interface_data=_INTERFACES)
+                                            interface_data=_INTERFACES)
     addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False,
-                                                  interface_data=_INTERFACES))
-
+                                                  interface_data=_INTERFACES))
+    err_message = 'Exception during resolving address: %s'
     for ip in addresses:
         try:
-            fqdns.add(socket.gethostbyaddr(ip)[0])
-        except (socket.error, socket.herror,
-                socket.gaierror, socket.timeout) as e:
-            log.info("Exception during resolving address: " + str(e))
+            fqdns.add(socket.getfqdn(socket.gethostbyaddr(ip)[0]))
+        except socket.herror as err:
+            if err.errno == 1:
+                # No FQDN for this IP address, so we don't need to know this all the time.
+                log.debug("Unable to resolve address %s: %s", ip, err)
+            else:
+                log.error(err_message, err)
+        except (socket.error, socket.gaierror, socket.timeout) as err:
+            log.error(err_message, err)

     grains['fqdns'] = sorted(list(fqdns))
     return grains
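The new error handling in this hunk (treat "no reverse record" as harmless, anything else as an error) can be exercised without a real network by injecting a resolver; ``collect_fqdns`` and ``fake_resolve`` below are hypothetical stand-ins, not Salt functions.

```python
import socket

def collect_fqdns(addresses, resolve):
    """Gather FQDNs, silently skipping addresses with no reverse record
    (herror errno 1) -- mirrors the grains logic sketched above."""
    fqdns = set()
    for ip in addresses:
        try:
            fqdns.add(resolve(ip))
        except socket.herror as err:
            if err.errno != 1:
                raise  # a real resolver failure, not just "no FQDN"
    return sorted(fqdns)

def fake_resolve(ip):
    # Fake reverse-DNS table standing in for socket.gethostbyaddr().
    table = {'10.0.0.1': 'web.example.com'}
    if ip not in table:
        raise socket.herror(1, 'Unknown host')
    return table[ip]

print(collect_fqdns(['10.0.0.1', '10.0.0.2'], fake_resolve))
```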
@@ -837,13 +837,10 @@ def set_multiprocessing_logging_level_by_opts(opts):
    '''
    global __MP_LOGGING_LEVEL

-    log_levels = []
-    log_levels.append(
-        LOG_LEVELS.get(opts.get('log_level', '').lower(), logging.ERROR)
-    )
-    log_levels.append(
+    log_levels = [
+        LOG_LEVELS.get(opts.get('log_level', '').lower(), logging.ERROR),
         LOG_LEVELS.get(opts.get('log_level_logfile', '').lower(), logging.ERROR)
-    )
+    ]
     for level in six.itervalues(opts.get('log_granular_levels', {})):
         log_levels.append(
             LOG_LEVELS.get(level.lower(), logging.ERROR)
@@ -699,7 +699,7 @@ def cmd_zip(zip_file, sources, template=None, cwd=None, runas=None):


 @salt.utils.decorators.depends('zipfile', fallback_function=cmd_zip)
-def zip_(zip_file, sources, template=None, cwd=None, runas=None):
+def zip_(zip_file, sources, template=None, cwd=None, runas=None, zip64=False):
     '''
     Uses the ``zipfile`` Python module to create zip files

@@ -744,6 +744,14 @@ def zip_(zip_file, sources, template=None, cwd=None, runas=None):
         Create the zip file as the specified user. Defaults to the user under
         which the minion is running.

+    zip64 : False
+        Used to enable ZIP64 support, necessary to create archives larger than
+        4 GByte in size.
+        If true, will create a ZIP file with the ZIP64 extension when the zipfile
+        is larger than 2 GB.
+        The ZIP64 extension is disabled by default in the Python native zip support
+        because the default zip and unzip commands on Unix (the InfoZIP utilities)
+        don't support these extensions.
+
     CLI Example:

@@ -788,7 +796,7 @@ def zip_(zip_file, sources, template=None, cwd=None, runas=None):
     try:
         exc = None
         archived_files = []
-        with contextlib.closing(zipfile.ZipFile(zip_file, 'w', zipfile.ZIP_DEFLATED)) as zfile:
+        with contextlib.closing(zipfile.ZipFile(zip_file, 'w', zipfile.ZIP_DEFLATED, zip64)) as zfile:
             for src in sources:
                 if cwd:
                     src = os.path.join(cwd, src)
@@ -828,9 +836,15 @@ def zip_(zip_file, sources, template=None, cwd=None, runas=None):
     if exc is not None:
         # Wait to raise the exception until euid/egid are restored to avoid
         # permission errors in writing to minion log.
-        raise CommandExecutionError(
-            'Exception encountered creating zipfile: {0}'.format(exc)
-        )
+        if exc == zipfile.LargeZipFile:
+            raise CommandExecutionError(
+                'Resulting zip file too large, would require ZIP64 support '
+                'which has not been enabled. Rerun command with zip64=True'
+            )
+        else:
+            raise CommandExecutionError(
+                'Exception encountered creating zipfile: {0}'.format(exc)
+            )

     return archived_files
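The ``zip64`` flag above maps to the stdlib ``zipfile`` module's ``allowZip64`` argument, which is what the positional fourth argument to ``ZipFile`` controls. A minimal standalone sketch:

```python
import os
import tempfile
import zipfile

# Create a small archive with ZIP64 extensions enabled (allowZip64=True).
# For archives under the ZIP64 thresholds the output is effectively the
# same; the flag only matters once the archive or a member grows too large.
src = tempfile.NamedTemporaryFile(delete=False, suffix='.txt')
src.write(b'hello zip64')
src.close()

archive_path = src.name + '.zip'
with zipfile.ZipFile(archive_path, 'w', zipfile.ZIP_DEFLATED,
                     allowZip64=True) as zf:
    zf.write(src.name, arcname='hello.txt')

with zipfile.ZipFile(archive_path) as zf:
    names = zf.namelist()
print(names)

os.unlink(src.name)
```

Without ``allowZip64``, exceeding the limits raises ``zipfile.LargeZipFile``, which is exactly the error the state maps to the "Rerun command with zip64=True" message.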
@@ -169,9 +169,7 @@ def get_all_alarms(region=None, prefix=None, key=None, keyid=None,
             continue
         name = prefix + alarm["name"]
         del alarm["name"]
-        alarm_sls = []
-        alarm_sls.append({"name": name})
-        alarm_sls.append({"attributes": alarm})
+        alarm_sls = [{"name": name}, {"attributes": alarm}]
         results["manage alarm " + name] = {"boto_cloudwatch_alarm.present":
                                            alarm_sls}
     return _safe_dump(results)
@@ -289,12 +289,14 @@ def raw_cron(user):
         # Preserve line endings
         lines = sdecode(__salt__['cmd.run_stdout'](cmd,
+                                                   runas=user,
                                                    ignore_retcode=True,
                                                    rstrip=False,
                                                    python_shell=False)).splitlines(True)
     else:
         cmd = 'crontab -u {0} -l'.format(user)
         # Preserve line endings
         lines = sdecode(__salt__['cmd.run_stdout'](cmd,
                                                    ignore_retcode=True,
                                                    rstrip=False,
                                                    python_shell=False)).splitlines(True)

salt/modules/lxd.py (new file, 3618 lines)
File diff suppressed because it is too large.
@@ -118,7 +118,7 @@ def exists(*nictag, **kwargs):
         salt '*' nictagadm.exists admin
     '''
     ret = {}
-    if len(nictag) == 0:
+    if not nictag:
         return {'Error': 'Please provide at least one nictag to check.'}

     cmd = 'nictagadm exists -l {0}'.format(' '.join(nictag))
@@ -143,14 +143,14 @@ def add(name, mac, mtu=1500):
     mac : string
         mac of parent interface or 'etherstub' to create an ether stub
     mtu : int
-        MTU
+        MTU (ignored for etherstubs)

     CLI Example:

     .. code-block:: bash

-        salt '*' nictagadm.add storage etherstub
-        salt '*' nictagadm.add trunk 'DE:AD:OO:OO:BE:EF' 9000
+        salt '*' nictagadm.add storage0 etherstub
+        salt '*' nictagadm.add trunk0 'DE:AD:OO:OO:BE:EF' 9000
     '''
     ret = {}

@@ -159,21 +159,15 @@ def add(name, mac, mtu=1500):
     if mac != 'etherstub':
         cmd = 'dladm show-phys -m -p -o address'
         res = __salt__['cmd.run_all'](cmd)
-        if mac not in res['stdout'].splitlines():
+        # dladm prints '00' as '0', so account for that.
+        if mac.replace('00', '0') not in res['stdout'].splitlines():
             return {'Error': '{0} is not present on this system.'.format(mac)}

     if mac == 'etherstub':
-        cmd = 'nictagadm add -l -p mtu={mtu} {name}'.format(
-            mtu=mtu,
-            name=name
-        )
+        cmd = 'nictagadm add -l {0}'.format(name)
         res = __salt__['cmd.run_all'](cmd)
     else:
-        cmd = 'nictagadm add -p mtu={mtu},mac={mac} {name}'.format(
-            mtu=mtu,
-            mac=mac,
-            name=name
-        )
+        cmd = 'nictagadm add -p mtu={0},mac={1} {2}'.format(mtu, mac, name)
         res = __salt__['cmd.run_all'](cmd)

     if res['retcode'] == 0:
@@ -214,7 +208,8 @@ def update(name, mac=None, mtu=None):
     else:
         cmd = 'dladm show-phys -m -p -o address'
         res = __salt__['cmd.run_all'](cmd)
-        if mac not in res['stdout'].splitlines():
+        # dladm prints '00' as '0', so account for that.
+        if mac.replace('00', '0') not in res['stdout'].splitlines():
             return {'Error': '{0} is not present on this system.'.format(mac)}

     if mac and mtu:
@@ -224,10 +219,7 @@ def update(name, mac=None, mtu=None):
     elif mtu:
         properties = "mtu={0}".format(mtu) if mtu else ""

-    cmd = 'nictagadm update -p {properties} {name}'.format(
-        properties=properties,
-        name=name
-    )
+    cmd = 'nictagadm update -p {0} {1}'.format(properties, name)
     res = __salt__['cmd.run_all'](cmd)

     if res['retcode'] == 0:
@@ -256,10 +248,7 @@ def delete(name, force=False):
     if name not in list_nictags():
         return True

-    cmd = 'nictagadm delete {force}{name}'.format(
-        force="-f " if force else "",
-        name=name
-    )
+    cmd = 'nictagadm delete {0}{1}'.format("-f " if force else "", name)
     res = __salt__['cmd.run_all'](cmd)

     if res['retcode'] == 0:
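The ``'00'`` to ``'0'`` normalization the patch adds is a crude string replace; a per-octet version is less surprising for MACs like ``0a`` (dladm prints each octet without a leading zero). ``normalize_dladm_mac`` is a hypothetical helper sketching that idea, not code from the patch:

```python
def normalize_dladm_mac(mac):
    """dladm show-phys prints each MAC octet without its leading zero
    (e.g. '00' as '0', '0a' as 'a'), so normalize a colon-separated MAC
    the same way before comparing against dladm output."""
    return ':'.join(octet.lstrip('0') or '0' for octet in mac.split(':'))

print(normalize_dladm_mac('de:ad:00:00:be:ef'))  # de:ad:0:0:be:ef
print(normalize_dladm_mac('00:1b:21:0a:ff:00'))  # 0:1b:21:a:ff:0
```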
@@ -57,8 +57,7 @@ def list_domains():
         salt '*' virt.list_domains
     '''
     data = __salt__['vmadm.list'](keyed=True)
-    vms = []
-    vms.append("UUID TYPE RAM STATE ALIAS")
+    vms = ["UUID TYPE RAM STATE ALIAS"]
     for vm in data:
         vms.append("{vmuuid}{vmtype}{vmram}{vmstate}{vmalias}".format(
             vmuuid=vm.ljust(38),
@@ -280,8 +280,7 @@ def list_all(prefix=None, app=None, owner=None, description_contains=None,
             continue
         name = prefix + name
         # put name in the OrderedDict first
-        d = []
-        d.append({"name": name})
+        d = [{"name": name}]
         # add the rest of the splunk settings, ignoring any defaults
         description = ''
         for (k, v) in sorted(search.content.items()):
@@ -338,6 +338,52 @@ def _gen_vol_xml(vmname,
     return template.render(**context)


+def _gen_net_xml(name,
+                 bridge,
+                 forward,
+                 vport,
+                 tag=None):
+    '''
+    Generate the XML string to define a libvirt network
+    '''
+    context = {
+        'name': name,
+        'bridge': bridge,
+        'forward': forward,
+        'vport': vport,
+        'tag': tag,
+    }
+    fn_ = 'libvirt_network.jinja'
+    try:
+        template = JINJA.get_template(fn_)
+    except jinja2.exceptions.TemplateNotFound:
+        log.error('Could not load template %s', fn_)
+        return ''
+    return template.render(**context)
+
+
+def _gen_pool_xml(name,
+                  ptype,
+                  target,
+                  source=None):
+    '''
+    Generate the XML string to define a libvirt storage pool
+    '''
+    context = {
+        'name': name,
+        'ptype': ptype,
+        'target': target,
+        'source': source,
+    }
+    fn_ = 'libvirt_pool.jinja'
+    try:
+        template = JINJA.get_template(fn_)
+    except jinja2.exceptions.TemplateNotFound:
+        log.error('Could not load template %s', fn_)
+        return ''
+    return template.render(**context)
+
+
 def _qemu_image_info(path):
     '''
     Detect information for the image at path
@@ -2195,3 +2241,130 @@ def cpu_baseline(full=False, migratable=False, out='libvirt'):
         'vendor': cpu.getElementsByTagName('vendor')[0].childNodes[0].nodeValue,
         'features': [feature.getAttribute('name') for feature in cpu.getElementsByTagName('feature')]
     }
+
+
+def net_define(name, bridge, forward, **kwargs):
+    '''
+    Create libvirt network.
+
+    :param name: Network name
+    :param bridge: Bridge name
+    :param forward: Forward mode (bridge, router, nat)
+    :param vport: Virtualport type
+    :param tag: Vlan tag
+    :param autostart: Network autostart (default True)
+    :param start: Network start (default True)
+
+    CLI Example:
+
+    .. code-block:: bash
+
+        salt '*' virt.net_define network main bridge openvswitch
+    '''
+    conn = __get_conn()
+    vport = kwargs.get('vport', None)
+    tag = kwargs.get('tag', None)
+    autostart = kwargs.get('autostart', True)
+    starting = kwargs.get('start', True)
+    xml = _gen_net_xml(
+        name,
+        bridge,
+        forward,
+        vport,
+        tag,
+    )
+    try:
+        conn.networkDefineXML(xml)
+    except libvirtError as err:
+        log.warning(err)
+        raise err  # a real error we should report upwards
+
+    try:
+        network = conn.networkLookupByName(name)
+    except libvirtError as err:
+        log.warning(err)
+        raise err  # a real error we should report upwards
+
+    if network is None:
+        return False
+
+    if (starting is True or autostart is True) and network.isActive() != 1:
+        network.create()
+
+    if autostart is True and network.autostart() != 1:
+        network.setAutostart(int(autostart))
+    elif autostart is False and network.autostart() == 1:
+        network.setAutostart(int(autostart))
+
+    return True
+
+
+def pool_define_build(name, **kwargs):
+    '''
+    Create libvirt pool.
+
+    :param name: Pool name
+    :param ptype: Pool type
+    :param target: Pool path target
+    :param source: Pool dev source
+    :param autostart: Pool autostart (default True)
+    :param start: Pool start (default True)
+
+    CLI Example:
+
+    .. code-block:: bash
+
+        salt '*' virt.pool_define_build base logical base
+    '''
+    exist = False
+    update = False
+    conn = __get_conn()
+    ptype = kwargs.pop('ptype', None)
+    target = kwargs.pop('target', None)
+    source = kwargs.pop('source', None)
+    autostart = kwargs.pop('autostart', True)
+    starting = kwargs.pop('start', True)
+    xml = _gen_pool_xml(
+        name,
+        ptype,
+        target,
+        source,
+    )
+    try:
+        conn.storagePoolDefineXML(xml)
+    except libvirtError as err:
+        log.warning(err)
+        if err.get_error_code() in (libvirt.VIR_ERR_STORAGE_POOL_BUILT,
+                                    libvirt.VIR_ERR_OPERATION_FAILED):
+            exist = True
+        else:
+            raise err  # a real error we should report upwards
+    try:
+        pool = conn.storagePoolLookupByName(name)
+    except libvirtError as err:
+        log.warning(err)
+        raise err  # a real error we should report upwards
+
+    if pool is None:
+        return False
+
+    if (starting is True or autostart is True) and pool.isActive() != 1:
+        if exist is True:
+            update = True
+            pool.create()
+        else:
+            pool.create(libvirt.VIR_STORAGE_POOL_CREATE_WITH_BUILD)
+
+    if autostart is True and pool.autostart() != 1:
+        if exist is True:
+            update = True
+        pool.setAutostart(int(autostart))
+    elif autostart is False and pool.autostart() == 1:
+        if exist is True:
+            update = True
+        pool.setAutostart(int(autostart))
+    if exist is True:
+        if update is True:
+            return (True, 'Pool exist', 'Pool update')
+        return (True, 'Pool exist')
+
+    return True
@@ -200,7 +200,7 @@ def create(path,
         for entry in extra_search_dir:
             cmd.append('--extra-search-dir={0}'.format(entry))
     if never_download is True:
-        if virtualenv_version_info >= (1, 10) and virtualenv_version_info < (14, 0, 0):
+        if (1, 10) <= virtualenv_version_info < (14, 0, 0):
             log.info(
                 '--never-download was deprecated in 1.10.0, but reimplemented in 14.0.0. '
                 'If this feature is needed, please install a supported virtualenv version.'
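The refactor in this hunk relies on Python's chained comparisons, which make half-open range checks shorter and evaluate the middle operand only once:

```python
def in_range(version):
    # Equivalent to: version >= (1, 10) and version < (14, 0, 0),
    # but `version` is evaluated only once.  Tuples compare
    # element-by-element, left to right.
    return (1, 10) <= version < (14, 0, 0)

print(in_range((1, 9, 1)))    # False: 9 < 10 in the second element
print(in_range((1, 10)))      # True: lower bound is inclusive
print(in_range((13, 99)))     # True
print(in_range((14, 0, 0)))   # False: upper bound is exclusive
```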
@@ -347,34 +347,20 @@ def iostat(zpool=None, sample_time=5, parsable=True):


 def list_(properties='size,alloc,free,cap,frag,health', zpool=None, parsable=True):
     '''
-<<<<<<< HEAD
-=======
     .. versionadded:: 2015.5.0

->>>>>>> 2018.3
     Return information about (all) storage pools

     zpool : string
         optional name of storage pool

     properties : string
-<<<<<<< HEAD
         comma-separated list of properties to display
     parsable : boolean
         display data in pythonic values (True, False, Bytes, ...)

         .. versionadded:: 2015.5.0
         .. versionchanged:: Fluorine

         Added ``parsable`` parameter that defaults to True
-=======
         comma-separated list of properties to list

     parsable : boolean
         display numbers in parsable (exact) values

         .. versionadded:: 2018.3.0
->>>>>>> 2018.3

     .. note::

@@ -458,11 +444,8 @@ def list_(properties='size,alloc,free,cap,frag,health', zpool=None, parsable=True):

 def get(zpool, prop=None, show_source=False, parsable=True):
     '''
-<<<<<<< HEAD
-=======
     .. versionadded:: 2016.3.0

->>>>>>> 2018.3
     Retrieves the given list of properties

     zpool : string
@@ -475,18 +458,9 @@ def get(zpool, prop=None, show_source=False, parsable=True):
         Show source of property

     parsable : boolean
-<<<<<<< HEAD
         display data in pythonic values (True, False, Bytes, ...)

         .. versionadded:: 2016.3.0
         .. versionchanged:: Fluorine

         Added ``parsable`` parameter that defaults to True
-=======
         Display numbers in parsable (exact) values

         .. versionadded:: 2018.3.0
->>>>>>> 2018.3

     CLI Example:

@@ -654,24 +628,14 @@ def scrub(zpool, stop=False, pause=False):
         If ``True``, cancel ongoing scrub

     pause : boolean
-<<<<<<< HEAD
         if true, pause ongoing scrub
-=======
         If ``True``, pause ongoing scrub

->>>>>>> 2018.3
         .. versionadded:: 2018.3.0

     .. note::

-<<<<<<< HEAD
         If both pause and stop are true, stop will win.
         Pause support was added in this PR:
         https://github.com/openzfs/openzfs/pull/407
-=======
         If both ``pause`` and ``stop`` are ``True``, then ``stop`` will
         win.
->>>>>>> 2018.3

     CLI Example:

@@ -711,11 +675,8 @@ def scrub(zpool, stop=False, pause=False):

 def create(zpool, *vdevs, **kwargs):
     '''
-<<<<<<< HEAD
-=======
     .. versionadded:: 2015.5.0

->>>>>>> 2018.3
     Create a simple zpool, a mirrored zpool, a zpool having nested VDEVs, a hybrid zpool with cache, spare and log drives or a zpool with RAIDZ-1, RAIDZ-2 or RAIDZ-3

     zpool : string
@@ -741,12 +702,6 @@ def create(zpool, *vdevs, **kwargs):
         Additional filesystem properties

     createboot : boolean
-<<<<<<< HEAD
         .. versionadded:: 2018.3.0
         create a boot partition

         .. versionadded:: 2015.5.0
-=======
         create a boot partition

         .. versionadded:: 2018.3.0
@@ -760,7 +715,6 @@ def create(zpool, *vdevs, **kwargs):
         salt '*' zpool.create myzpool raidz1 /path/to/vdev1 /path/to/vdev2 raidz2 /path/to/vdev3 /path/to/vdev4 /path/to/vdev5 [...] [force=True|False]
         salt '*' zpool.create myzpool mirror /path/to/vdev1 [...] mirror /path/to/vdev2 /path/to/vdev3 [...] [force=True|False]
         salt '*' zpool.create myhybridzpool mirror /tmp/file1 [...] log mirror /path/to/vdev1 [...] cache /path/to/vdev2 [...] spare /path/to/vdev3 [...] [force=True|False]
->>>>>>> 2018.3

     .. note::

@@ -994,11 +948,8 @@ def detach(zpool, device):

 def split(zpool, newzpool, **kwargs):
     '''
-<<<<<<< HEAD
-=======
     .. versionadded:: 2018.3.0

->>>>>>> 2018.3
     Splits devices off pool creating newpool.

     .. note::

@@ -1023,16 +974,12 @@ def split(zpool, newzpool, **kwargs):
|
||||
properties : dict
|
||||
Additional pool properties for newzpool
|
||||
|
||||
<<<<<<< HEAD
|
||||
.. versionadded:: 2018.3.0
|
||||
=======
|
||||
CLI Examples:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' zpool.split datamirror databackup
|
||||
salt '*' zpool.split datamirror databackup altroot=/backup
|
||||
>>>>>>> 2018.3
|
||||
|
||||
.. note::
|
||||
|
||||
@ -1086,11 +1033,7 @@ def split(zpool, newzpool, **kwargs):
|
||||
|
||||
def replace(zpool, old_device, new_device=None, force=False):
|
||||
'''
|
||||
<<<<<<< HEAD
|
||||
Replaces old_device with new_device.
|
||||
=======
|
||||
Replaces ``old_device`` with ``new_device``
|
||||
>>>>>>> 2018.3
|
||||
|
||||
.. note::
|
||||
|
||||
@ -1157,13 +1100,7 @@ def replace(zpool, old_device, new_device=None, force=False):
|
||||
@salt.utils.decorators.path.which('mkfile')
|
||||
def create_file_vdev(size, *vdevs):
|
||||
'''
|
||||
<<<<<<< HEAD
|
||||
Creates file based ``virtual devices`` for a zpool
|
||||
|
||||
``*vdevs`` is a list of full paths for mkfile to create
|
||||
=======
|
||||
Creates file based virtual devices for a zpool
|
||||
>>>>>>> 2018.3
|
||||
|
||||
CLI Example:
|
||||
|
||||
@ -1206,11 +1143,8 @@ def create_file_vdev(size, *vdevs):
|
||||
|
||||
def export(*pools, **kwargs):
|
||||
'''
|
||||
<<<<<<< HEAD
|
||||
=======
|
||||
.. versionadded:: 2015.5.0
|
||||
|
||||
>>>>>>> 2018.3
|
||||
Export storage pools
|
||||
|
||||
pools : string
|
||||
@ -1219,8 +1153,6 @@ def export(*pools, **kwargs):
|
||||
force : boolean
|
||||
Force export of storage pools
|
||||
|
||||
.. versionadded:: 2015.5.0
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
@ -1256,11 +1188,8 @@ def export(*pools, **kwargs):
|
||||
|
||||
def import_(zpool=None, new_name=None, **kwargs):
|
||||
'''
|
||||
<<<<<<< HEAD
|
||||
=======
|
||||
.. versionadded:: 2015.5.0
|
||||
|
||||
>>>>>>> 2018.3
|
||||
Import storage pools or list pools available for import
|
||||
|
||||
zpool : string
|
||||
@ -1287,8 +1216,8 @@ def import_(zpool=None, new_name=None, **kwargs):
|
||||
Import the pool without mounting any file systems.
|
||||
|
||||
only_destroyed : boolean
|
||||
<<<<<<< HEAD
|
||||
imports destroyed pools only. this also sets force=True.
|
||||
Imports destroyed pools only. This also sets ``force=True``.
|
||||
|
||||
recovery : bool|str
|
||||
false: do not try to recovery broken pools
|
||||
true: try to recovery the pool by rolling back the latest transactions
|
||||
@ -1301,9 +1230,6 @@ def import_(zpool=None, new_name=None, **kwargs):
|
||||
.. warning::
|
||||
When recovery is set to 'test' the result will be have imported set to True if the pool
|
||||
can be imported. The pool might also be imported if the pool was not broken to begin with.
|
||||
=======
|
||||
Imports destroyed pools only. this also sets force=True.
|
||||
>>>>>>> 2018.3
|
||||
|
||||
properties : dict
|
||||
Additional pool properties
|
||||
@ -1319,8 +1245,6 @@ def import_(zpool=None, new_name=None, **kwargs):
|
||||
|
||||
properties="{'property1': 'value1', 'property2': 'value2'}"
|
||||
|
||||
.. versionadded:: 2015.5.0
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
@ -1387,11 +1311,8 @@ def import_(zpool=None, new_name=None, **kwargs):
|
||||
|
||||
def online(zpool, *vdevs, **kwargs):
|
||||
'''
|
||||
<<<<<<< HEAD
|
||||
=======
|
||||
.. versionadded:: 2015.5.0
|
||||
|
||||
>>>>>>> 2018.3
|
||||
Ensure that the specified devices are online
|
||||
|
||||
zpool : string
|
||||
@ -1408,8 +1329,6 @@ def online(zpool, *vdevs, **kwargs):
|
||||
If the device is part of a mirror or raidz then all devices must be
|
||||
expanded before the new space will become available to the pool.
|
||||
|
||||
.. versionadded:: 2015.5.0
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
@ -1457,11 +1376,8 @@ def online(zpool, *vdevs, **kwargs):
|
||||
|
||||
def offline(zpool, *vdevs, **kwargs):
|
||||
'''
|
||||
<<<<<<< HEAD
|
||||
=======
|
||||
.. versionadded:: 2015.5.0
|
||||
|
||||
>>>>>>> 2018.3
|
||||
Ensure that the specified devices are offline
|
||||
|
||||
.. warning::
|
||||
@ -1479,8 +1395,6 @@ def offline(zpool, *vdevs, **kwargs):
|
||||
temporary : boolean
|
||||
Enable temporarily offline
|
||||
|
||||
.. versionadded:: 2015.5.0
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
@ -1516,25 +1430,16 @@ def offline(zpool, *vdevs, **kwargs):
|
||||
|
||||
def labelclear(device, force=False):
|
||||
'''
|
||||
<<<<<<< HEAD
|
||||
=======
|
||||
.. versionadded:: 2018.3.0
|
||||
|
||||
>>>>>>> 2018.3
|
||||
Removes ZFS label information from the specified device
|
||||
|
||||
device : string
|
||||
<<<<<<< HEAD
|
||||
device, must not be part of an active pool configuration.
|
||||
=======
|
||||
Device name
|
||||
Device name; must not be part of an active pool configuration.
|
||||
|
||||
>>>>>>> 2018.3
|
||||
force : boolean
|
||||
Treat exported or foreign devices as inactive
|
||||
|
||||
.. versionadded:: 2018.3.0
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
@ -1659,25 +1564,16 @@ def reopen(zpool):
|
||||
|
||||
def upgrade(zpool=None, version=None):
|
||||
'''
|
||||
.. versionadded:: 2016.3.0
|
||||
|
||||
Enables all supported features on the given pool
|
||||
|
||||
<<<<<<< HEAD
|
||||
=======
|
||||
.. warning::
|
||||
Once this is done, the pool will no longer be accessible on systems
|
||||
that do not support feature flags. See ``zpool-features(5)`` for
|
||||
details on compatibility with systems that support feature flags, but
|
||||
do not support all features enabled on the pool.
|
||||
|
||||
>>>>>>> 2018.3
|
||||
zpool : string
|
||||
Optional storage pool, applies to all otherwize
|
||||
|
||||
version : int
|
||||
Version to upgrade to, if unspecified upgrade to the highest possible
|
||||
|
||||
.. versionadded:: 2016.3.0
|
||||
|
||||
.. warning::
|
||||
Once this is done, the pool will no longer be accessible on systems that do not
|
||||
support feature flags. See zpool-features(5) for details on compatibility with
|
||||
@ -1717,14 +1613,10 @@ def upgrade(zpool=None, version=None):
|
||||
|
||||
def history(zpool=None, internal=False, verbose=False):
|
||||
'''
|
||||
<<<<<<< HEAD
|
||||
Displays the command history of the specified pools or all pools if no pool is specified
|
||||
=======
|
||||
.. versionadded:: 2016.3.0
|
||||
|
||||
Displays the command history of the specified pools, or all pools if no
|
||||
pool is specified
|
||||
>>>>>>> 2018.3
|
||||
|
||||
zpool : string
|
||||
Optional storage pool
|
||||
@ -1736,8 +1628,6 @@ def history(zpool=None, internal=False, verbose=False):
|
||||
Toggle display of the user name, the hostname, and the zone in which
|
||||
the operation was performed
|
||||
|
||||
.. versionadded:: 2016.3.0
|
||||
|
||||
CLI Example:
|
||||
|
||||
.. code-block:: bash
|
||||
|
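As an aside to the scrub hunk above: the note states that when both ``pause`` and ``stop`` are true, stop wins. A minimal standalone sketch of that documented precedence (a hypothetical helper, not Salt's actual implementation):

```python
def scrub_action(stop=False, pause=False):
    # Documented precedence in zpool.scrub: if both ``pause`` and
    # ``stop`` are True, stop wins; otherwise pause, otherwise scrub.
    if stop:
        return 'stop'
    if pause:
        return 'pause'
    return 'scrub'

print(scrub_action(stop=True, pause=True))  # stop
```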
@@ -135,7 +135,7 @@ A REST API for Salt
    This is useful for bootstrapping a single-page JavaScript app.

    Warning! If you set this option to a custom web application, anything
    that uses cookie-based authentcation is vulnerable to XSRF attacks.
    that uses cookie-based authentication is vulnerable to XSRF attacks.
    Send the custom ``X-Auth-Token`` header instead and consider disabling
    the ``enable_sessions`` setting.

@@ -174,7 +174,7 @@ cookie. The latter is far more convenient for clients that support cookies.
        -H 'Accept: application/x-yaml' \\
        -d username=saltdev \\
        -d password=saltdev \\
        -d eauth=auto
        -d eauth=pam

Copy the ``token`` value from the output and include it in subsequent requests:
@@ -79,16 +79,18 @@ def ext_pillar(minion_id,  # pylint: disable=W0613
        log.error('"%s" is not a valid Vault ext_pillar config', conf)
        return {}

    vault_pillar = {}

    try:
        path = paths[0].replace('path=', '')
        path = path.format(**{'minion': minion_id})
        url = 'v1/{0}'.format(path)
        response = __utils__['vault.make_request']('GET', url)
        if response.status_code != 200:
            response.raise_for_status()
        vault_pillar = response.json()['data']
        if response.status_code == 200:
            vault_pillar = response.json().get('data', {})
        else:
            log.info('Vault secret not found for: %s', path)
    except KeyError:
        log.error('No such path in Vault: %s', path)
        vault_pillar = {}

    return vault_pillar
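The hunk above changes the Vault ext_pillar from raising on any non-200 response to logging and returning an empty pillar, so a missing secret no longer aborts pillar rendering. A standalone sketch of the new control flow, using a stub response object in place of the object returned by the vault utility:

```python
import logging

log = logging.getLogger(__name__)


class FakeResponse:
    # Stand-in for the HTTP response the vault util returns; only the
    # attributes used by the pillar branch are modelled here.
    def __init__(self, status_code, payload=None):
        self.status_code = status_code
        self._payload = payload or {}

    def json(self):
        return self._payload


def vault_pillar_data(response, path):
    # New behavior: a missing secret (non-200) yields an empty pillar
    # instead of an unhandled exception aborting pillar rendering.
    if response.status_code == 200:
        return response.json().get('data', {})
    log.info('Vault secret not found for: %s', path)
    return {}


print(vault_pillar_data(FakeResponse(200, {'data': {'k': 'v'}}), 'secret/x'))  # {'k': 'v'}
print(vault_pillar_data(FakeResponse(404), 'secret/x'))  # {}
```

Note the added ``.get('data', {})`` also tolerates a 200 response whose body lacks a ``data`` key.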
@@ -283,8 +283,7 @@ def find_credentials():
    Cycle through all the possible credentials and return the first one that
    works
    '''
    usernames = []
    usernames.append(__pillar__['proxy'].get('admin_username', 'root'))
    usernames = [__pillar__['proxy'].get('admin_username', 'root')]
    if 'fallback_admin_username' in __pillar__.get('proxy'):
        usernames.append(__pillar__['proxy'].get('fallback_admin_username'))
@@ -189,15 +189,13 @@ def returner(ret):
                jid, minion_id, fun, alter_time, full_ret, return, success
            ) VALUES (?, ?, ?, ?, ?, ?, ?)'''

        statement_arguments = []

        statement_arguments.append('{0}'.format(ret['jid']))
        statement_arguments.append('{0}'.format(ret['id']))
        statement_arguments.append('{0}'.format(ret['fun']))
        statement_arguments.append(int(time.time() * 1000))
        statement_arguments.append(salt.utils.json.dumps(ret).replace("'", "''"))
        statement_arguments.append(salt.utils.json.dumps(ret['return']).replace("'", "''"))
        statement_arguments.append(ret.get('success', False))
        statement_arguments = ['{0}'.format(ret['jid']),
                               '{0}'.format(ret['id']),
                               '{0}'.format(ret['fun']),
                               int(time.time() * 1000),
                               salt.utils.json.dumps(ret).replace("'", "''"),
                               salt.utils.json.dumps(ret['return']).replace("'", "''"),
                               ret.get('success', False)]

        # cassandra_cql.cql_query may raise a CommandExecutionError
        try:
@@ -218,10 +216,7 @@ def returner(ret):
                minion_id, last_fun
            ) VALUES (?, ?)'''

        statement_arguments = []

        statement_arguments.append('{0}'.format(ret['id']))
        statement_arguments.append('{0}'.format(ret['fun']))
        statement_arguments = ['{0}'.format(ret['id']), '{0}'.format(ret['fun'])]

        # cassandra_cql.cql_query may raise a CommandExecutionError
        try:
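The Cassandra returner hunks above only replace repeated ``append`` calls with a single list literal, but they also show the quoting convention the returner relies on: single quotes in the JSON payload are doubled because the value ends up inside a CQL string literal. A standalone sketch of the argument-building step, with stdlib ``json`` standing in for ``salt.utils.json``:

```python
import json
import time


def build_statement_arguments(ret):
    # Mirrors the refactored returner: one list literal instead of
    # seven ``append`` calls. Single quotes are doubled because the
    # JSON blob is embedded in a CQL string literal.
    return ['{0}'.format(ret['jid']),
            '{0}'.format(ret['id']),
            '{0}'.format(ret['fun']),
            int(time.time() * 1000),
            json.dumps(ret).replace("'", "''"),
            json.dumps(ret['return']).replace("'", "''"),
            ret.get('success', False)]


args = build_statement_arguments(
    {'jid': '2018', 'id': 'minion1', 'fun': 'test.ping', 'return': "it's ok"})
print(args[5])  # "it''s ok"
```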
@@ -72,6 +72,10 @@ def zabbix_send(key, host, output):
    cmd = zbx()['sender'] + " -c " + zbx()['config'] + " -s " + host + " -k " + key + " -o \"" + output +"\""


def save_load(jid, load, minions=None):
    pass


def returner(ret):
    changes = False
    errors = False
@@ -2157,7 +2157,7 @@ class State(object):
        tag = _gen_tag(low)
        if self.opts.get('test', False):
            return False
        if (low.get('failhard', False) or self.opts['failhard']) and tag in running:
        if low.get('failhard', self.opts['failhard']) and tag in running:
            if running[tag]['result'] is None:
                return False
            return not running[tag]['result']
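The one-line change in ``State.check_failhard`` above is subtle: with the old ``or`` form, a state-level ``failhard: False`` could never override a global ``failhard: True``, because the global option was OR'd in afterwards; with ``low.get('failhard', self.opts['failhard'])``, the global value is only a default, so an explicit state-level setting wins. A minimal standalone sketch of the two expressions (not Salt's actual classes):

```python
def should_failhard(low, opts):
    # Old behavior: a state-level ``failhard: False`` could never win,
    # because the global option was OR'd in afterwards.
    old = low.get('failhard', False) or opts['failhard']
    # New behavior: the state-level value, when present, overrides the
    # global default entirely.
    new = low.get('failhard', opts['failhard'])
    return old, new


# A state explicitly opting out of a global failhard:
print(should_failhard({'failhard': False}, {'failhard': True}))  # (True, False)
# A state that says nothing inherits the global setting either way:
print(should_failhard({}, {'failhard': True}))  # (True, True)
```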
@@ -384,11 +384,15 @@ def running(name,
    **NETWORK MANAGEMENT**

    .. versionadded:: 2018.3.0
    .. versionchanged:: Fluorine
        If the ``networks`` option is used, any networks (including the default
        ``bridge`` network) which are not specified will be disconnected.

    The ``networks`` argument can be used to ensure that a container is
    attached to one or more networks. Optionally, arguments can be passed to
    the networks. In the example below, ``net1`` is being configured with
    arguments, while ``net2`` is being configured *without* arguments:
    arguments, while ``net2`` and ``bridge`` are being configured *without*
    arguments:

    .. code-block:: yaml

@@ -402,6 +406,7 @@ def running(name,
                  - baz
                - ipv4_address: 10.0.20.50
              - net2
              - bridge
            - require:
              - docker_network: net1
              - docker_network: net2
@@ -418,6 +423,17 @@ def running(name,

    .. _`connect_container_to_network`: https://docker-py.readthedocs.io/en/stable/api.html#docker.api.network.NetworkApiMixin.connect_container_to_network

    To start a container with no network connectivity (only possible in
    Fluorine and later) pass this option as an empty list. For example:

    .. code-block:: yaml

        foo:
          docker_container.running:
            - image: myuser/myimage:foo
            - networks: []

    **CONTAINER CONFIGURATION PARAMETERS**

    auto_remove (or *rm*) : False
@@ -1672,6 +1688,9 @@ def running(name,
        image = six.text_type(image)

    try:
        # Since we're rewriting the "networks" value below, save the original
        # value here.
        configured_networks = networks
        networks = _parse_networks(networks)
        image_id = _resolve_image(ret, image, client_timeout)
    except CommandExecutionError as exc:
@@ -1802,9 +1821,32 @@ def running(name,
            ret['result'] = False
            comments.append(exc.__str__())
            return _format_comments(ret, comments)

    post_net_connect = __salt__['docker.inspect_container'](
        temp_container_name)

    if configured_networks is not None:
        # Use set arithmetic to determine the networks which are connected
        # but not explicitly defined. They will be disconnected below. Note
        # that we check configured_networks because it represents the
        # original (unparsed) network configuration. When no networks
        # argument is used, the parsed networks will be an empty list, so
        # it's not sufficient to do a boolean check on the "networks"
        # variable.
        extra_nets = set(
            post_net_connect.get('NetworkSettings', {}).get('Networks', {})
        ) - set(networks)

        if extra_nets:
            for extra_net in extra_nets:
                __salt__['docker.disconnect_container_from_network'](
                    temp_container_name,
                    extra_net)

            # We've made changes, so we need to inspect the container again
            post_net_connect = __salt__['docker.inspect_container'](
                temp_container_name)

    net_changes = __salt__['docker.compare_container_networks'](
        pre_net_connect, post_net_connect)

@@ -2183,7 +2225,7 @@ def run(name,
        return ret

    try:
        if 'networks' in kwargs:
        if 'networks' in kwargs and kwargs['networks'] is not None:
            kwargs['networks'] = _parse_networks(kwargs['networks'])
        image_id = _resolve_image(ret, image, client_timeout)
    except CommandExecutionError as exc:
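The docker_container hunks above compute, via set difference, which networks a container is connected to but which were not named in the state's ``networks`` argument; those get disconnected, which is also why ``networks: []`` disconnects everything. A standalone sketch of that set arithmetic against a container-inspect structure:

```python
def extra_networks(inspect_result, configured):
    # Networks the container is connected to but which were not named
    # in the state's ``networks`` argument. An empty ``configured``
    # list therefore marks every connected network as extra.
    connected = set(
        inspect_result.get('NetworkSettings', {}).get('Networks', {}))
    return connected - set(configured)


post = {'NetworkSettings': {'Networks': {'bridge': {}, 'net1': {}}}}
print(sorted(extra_networks(post, ['net1'])))  # ['bridge']
print(sorted(extra_networks(post, [])))        # ['bridge', 'net1']
```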
320 salt/states/lxd.py Normal file
@@ -0,0 +1,320 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Manage LXD profiles.
|
||||
|
||||
.. versionadded:: Fluorine
|
||||
|
||||
.. note:
|
||||
|
||||
- `pylxd`_ version 2 is required to let this work,
|
||||
currently only available via pip.
|
||||
|
||||
To install on Ubuntu:
|
||||
|
||||
$ apt-get install libssl-dev python-pip
|
||||
$ pip install -U pylxd
|
||||
|
||||
- you need lxd installed on the minion
|
||||
for the init() and version() methods.
|
||||
|
||||
- for the config_get() and config_get() methods
|
||||
you need to have lxd-client installed.
|
||||
|
||||
.. _pylxd: https://github.com/lxc/pylxd/blob/master/doc/source/installation.rst
|
||||
|
||||
:maintainer: René Jochum <rene@jochums.at>
|
||||
:maturity: new
|
||||
:depends: python-pylxd
|
||||
:platform: Linux
|
||||
'''
|
||||
|
||||
# Import python libs
|
||||
from __future__ import absolute_import, print_function, unicode_literals
|
||||
import os.path
|
||||
|
||||
# Import salt libs
|
||||
from salt.exceptions import CommandExecutionError
|
||||
from salt.exceptions import SaltInvocationError
|
||||
import salt.ext.six as six
|
||||
|
||||
__docformat__ = 'restructuredtext en'
|
||||
|
||||
__virtualname__ = 'lxd'
|
||||
|
||||
_password_config_key = 'core.trust_password'
|
||||
|
||||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only load if the lxd module is available in __salt__
|
||||
'''
|
||||
return __virtualname__ if 'lxd.version' in __salt__ else False
|
||||
|
||||
|
||||
def init(storage_backend='dir', trust_password=None, network_address=None,
|
||||
network_port=None, storage_create_device=None,
|
||||
storage_create_loop=None, storage_pool=None,
|
||||
done_file='%SALT_CONFIG_DIR%/lxd_initialized', name=None):
|
||||
'''
|
||||
Initalizes the LXD Daemon, as LXD doesn't tell if its initialized
|
||||
we touch the the done_file and check if it exist.
|
||||
|
||||
This can only be called once per host unless you remove the done_file.
|
||||
|
||||
storage_backend :
|
||||
Storage backend to use (zfs or dir, default: dir)
|
||||
|
||||
trust_password :
|
||||
Password required to add new clients
|
||||
|
||||
network_address : None
|
||||
Address to bind LXD to (default: none)
|
||||
|
||||
network_port : None
|
||||
Port to bind LXD to (Default: 8443)
|
||||
|
||||
storage_create_device : None
|
||||
Setup device based storage using this DEVICE
|
||||
|
||||
storage_create_loop : None
|
||||
Setup loop based storage with this SIZE in GB
|
||||
|
||||
storage_pool : None
|
||||
Storage pool to use or create
|
||||
|
||||
done_file :
|
||||
Path where we check that this method has been called,
|
||||
as it can run only once and theres currently no way
|
||||
to ask LXD if init has been called.
|
||||
|
||||
name :
|
||||
Ignore this. This is just here for salt.
|
||||
|
||||
CLI Examples:
|
||||
|
||||
To listen on all IPv4/IPv6 Addresses:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' lxd.init dir PaSsW0rD [::]
|
||||
|
||||
To not listen on Network:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
salt '*' lxd.init
|
||||
'''
|
||||
|
||||
ret = {
|
||||
'name': name,
|
||||
'storage_backend': storage_backend,
|
||||
'trust_password': True if trust_password is not None else False,
|
||||
'network_address': network_address,
|
||||
'network_port': network_port,
|
||||
'storage_create_device': storage_create_device,
|
||||
'storage_create_loop': storage_create_loop,
|
||||
'storage_pool': storage_pool,
|
||||
'done_file': done_file,
|
||||
}
|
||||
|
||||
# TODO: Get a better path and don't hardcode '/etc/salt'
|
||||
done_file = done_file.replace('%SALT_CONFIG_DIR%', '/etc/salt')
|
||||
if os.path.exists(done_file):
|
||||
# Success we already did that.
|
||||
return _success(ret, 'LXD is already initialized')
|
||||
|
||||
if __opts__['test']:
|
||||
return _success(ret, 'Would initialize LXD')
|
||||
|
||||
# We always touch the done_file, so when LXD is already initialized
|
||||
# we don't run this over and over.
|
||||
__salt__['file.touch'](done_file)
|
||||
|
||||
try:
|
||||
__salt__['lxd.init'](
|
||||
storage_backend if storage_backend else None,
|
||||
trust_password if trust_password else None,
|
||||
network_address if network_address else None,
|
||||
network_port if network_port else None,
|
||||
storage_create_device if storage_create_device else None,
|
||||
storage_create_loop if storage_create_loop else None,
|
||||
storage_pool if storage_pool else None
|
||||
)
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
|
||||
return _success(ret, 'Initialized the LXD Daemon')
|
||||
|
||||
|
||||
def config_managed(name, value, force_password=False):
|
||||
'''
|
||||
Manage a LXD Server config setting.
|
||||
|
||||
name :
|
||||
The name of the config key.
|
||||
|
||||
value :
|
||||
Its value.
|
||||
|
||||
force_password : False
|
||||
Set this to True if you want to set the password on every run.
|
||||
|
||||
As we can't retrieve the password from LXD we can't check
|
||||
if the current one is the same as the given one.
|
||||
|
||||
'''
|
||||
ret = {
|
||||
'name': name,
|
||||
'value': value if name != 'core.trust_password' else True,
|
||||
'force_password': force_password
|
||||
}
|
||||
|
||||
try:
|
||||
current_value = __salt__['lxd.config_get'](name)
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
|
||||
if (name == _password_config_key and
|
||||
(not force_password or not current_value)):
|
||||
msg = (
|
||||
('"{0}" is already set '
|
||||
'(we don\'t known if the password is correct)').format(name)
|
||||
)
|
||||
return _success(ret, msg)
|
||||
|
||||
elif six.text_type(value) == current_value:
|
||||
msg = ('"{0}" is already set to "{1}"'.format(name, value))
|
||||
return _success(ret, msg)
|
||||
|
||||
if __opts__['test']:
|
||||
if name == _password_config_key:
|
||||
msg = 'Would set the LXD password'
|
||||
ret['changes'] = {'password': msg}
|
||||
return _unchanged(ret, msg)
|
||||
else:
|
||||
msg = 'Would set the "{0}" to "{1}"'.format(name, value)
|
||||
ret['changes'] = {name: msg}
|
||||
return _unchanged(ret, msg)
|
||||
|
||||
result_msg = ''
|
||||
try:
|
||||
result_msg = __salt__['lxd.config_set'](name, value)[0]
|
||||
if name == _password_config_key:
|
||||
ret['changes'] = {
|
||||
name: 'Changed the password'
|
||||
}
|
||||
else:
|
||||
ret['changes'] = {
|
||||
name: 'Changed from "{0}" to {1}"'.format(
|
||||
current_value, value
|
||||
)
|
||||
}
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
|
||||
return _success(ret, result_msg)
|
||||
|
||||
|
||||
def authenticate(remote_addr, password, cert, key, verify_cert=True, name=''):
|
||||
'''
|
||||
Authenticate with a remote peer.
|
||||
|
||||
.. notes:
|
||||
|
||||
This function makes every time you run this a connection
|
||||
to remote_addr, you better call this only once.
|
||||
|
||||
remote_addr :
|
||||
An URL to a remote Server, you also have to give cert and key if you
|
||||
provide remote_addr!
|
||||
|
||||
Examples:
|
||||
https://myserver.lan:8443
|
||||
/var/lib/mysocket.sock
|
||||
|
||||
password :
|
||||
The PaSsW0rD
|
||||
|
||||
cert :
|
||||
PEM Formatted SSL Zertifikate.
|
||||
|
||||
Examples:
|
||||
/root/.config/lxc/client.crt
|
||||
|
||||
key :
|
||||
PEM Formatted SSL Key.
|
||||
|
||||
Examples:
|
||||
/root/.config/lxc/client.key
|
||||
|
||||
verify_cert : True
|
||||
Wherever to verify the cert, this is by default True
|
||||
but in the most cases you want to set it off as LXD
|
||||
normaly uses self-signed certificates.
|
||||
|
||||
name :
|
||||
Ignore this. This is just here for salt.
|
||||
'''
|
||||
ret = {
|
||||
'name': name,
|
||||
'remote_addr': remote_addr,
|
||||
'cert': cert,
|
||||
'key': key,
|
||||
'verify_cert': verify_cert
|
||||
}
|
||||
|
||||
try:
|
||||
client = __salt__['lxd.pylxd_client_get'](
|
||||
remote_addr, cert, key, verify_cert
|
||||
)
|
||||
except SaltInvocationError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
|
||||
if client.trusted:
|
||||
return _success(ret, "Already authenticated.")
|
||||
|
||||
try:
|
||||
result = __salt__['lxd.authenticate'](
|
||||
remote_addr, password, cert, key, verify_cert
|
||||
)
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
|
||||
if result is not True:
|
||||
return _error(
|
||||
ret,
|
||||
"Failed to authenticate with peer: {0}".format(remote_addr)
|
||||
)
|
||||
|
||||
msg = "Successfully authenticated with peer: {0}".format(remote_addr)
|
||||
ret['changes'] = msg
|
||||
return _success(
|
||||
ret,
|
||||
msg
|
||||
)
|
||||
|
||||
|
||||
def _success(ret, success_msg):
|
||||
ret['result'] = True
|
||||
ret['comment'] = success_msg
|
||||
if 'changes' not in ret:
|
||||
ret['changes'] = {}
|
||||
return ret
|
||||
|
||||
|
||||
def _unchanged(ret, msg):
|
||||
ret['result'] = None
|
||||
ret['comment'] = msg
|
||||
if 'changes' not in ret:
|
||||
ret['changes'] = {}
|
||||
return ret
|
||||
|
||||
|
||||
def _error(ret, err_msg):
|
||||
ret['result'] = False
|
||||
ret['comment'] = err_msg
|
||||
if 'changes' not in ret:
|
||||
ret['changes'] = {}
|
||||
return ret
|
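The ``lxd.init`` state above works around the fact that LXD cannot report whether ``lxd init`` has already run: it checks a marker ("done") file and touches it before initializing, so repeated state runs are idempotent. A standalone sketch of that pattern, using a temp directory instead of ``/etc/salt``:

```python
import os
import tempfile


def init_once(done_file, do_init):
    # Pattern used by the lxd state: the daemon cannot be asked whether
    # init has already run, so a marker file is checked instead.
    if os.path.exists(done_file):
        return 'already initialized'
    # Touch the marker before initializing, mirroring the state's
    # "always touch the done_file" behavior, so a repeated run never
    # re-initializes even if the init call itself fails later.
    open(done_file, 'a').close()
    do_init()
    return 'initialized'


marker = os.path.join(tempfile.mkdtemp(), 'lxd_initialized')
print(init_once(marker, lambda: None))  # initialized
print(init_once(marker, lambda: None))  # already initialized
```

The trade-off, visible in the original too, is that a failed init leaves the marker behind and the marker must be removed by hand to retry.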
876 salt/states/lxd_container.py Normal file
@@ -0,0 +1,876 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
'''
|
||||
Manage LXD containers.
|
||||
|
||||
.. versionadded:: Fluorine
|
||||
|
||||
.. note:
|
||||
|
||||
- `pylxd`_ version 2 is required to let this work,
|
||||
currently only available via pip.
|
||||
|
||||
To install on Ubuntu:
|
||||
|
||||
$ apt-get install libssl-dev python-pip
|
||||
$ pip install -U pylxd
|
||||
|
||||
- you need lxd installed on the minion
|
||||
for the init() and version() methods.
|
||||
|
||||
- for the config_get() and config_get() methods
|
||||
you need to have lxd-client installed.
|
||||
|
||||
.. _: https://github.com/lxc/pylxd/blob/master/doc/source/installation.rst
|
||||
|
||||
:maintainer: René Jochum <rene@jochums.at>
|
||||
:maturity: new
|
||||
:depends: python-pylxd
|
||||
:platform: Linux
|
||||
'''
|
||||
|
||||
# Import python libs
|
||||
from __future__ import absolute_import, print_function, unicode_literals
|
||||
|
||||
# Import salt libs
|
||||
from salt.exceptions import CommandExecutionError
|
||||
from salt.exceptions import SaltInvocationError
|
||||
import salt.ext.six as six
|
||||
from salt.ext.six.moves import map
|
||||
|
||||
__docformat__ = 'restructuredtext en'
|
||||
|
||||
__virtualname__ = 'lxd_container'
|
||||
|
||||
# Keep in sync with: https://github.com/lxc/lxd/blob/master/shared/status.go
|
||||
CONTAINER_STATUS_RUNNING = 103
|
||||
CONTAINER_STATUS_FROZEN = 110
|
||||
CONTAINER_STATUS_STOPPED = 102
|
||||
|
||||
|
||||
def __virtual__():
|
||||
'''
|
||||
Only load if the lxd module is available in __salt__
|
||||
'''
|
||||
return __virtualname__ if 'lxd.version' in __salt__ else False
|
||||
|
||||
|
||||
def present(name,
|
||||
running=None,
|
||||
source=None,
|
||||
profiles=None,
|
||||
config=None,
|
||||
devices=None,
|
||||
architecture='x86_64',
|
||||
ephemeral=False,
|
||||
restart_on_change=False,
|
||||
remote_addr=None,
|
||||
cert=None,
|
||||
key=None,
|
||||
verify_cert=True):
|
||||
'''
|
||||
Create the named container if it does not exist
|
||||
|
||||
name
|
||||
The name of the container to be created
|
||||
|
||||
running : None
|
||||
* If ``True``, ensure that the container is running
|
||||
* If ``False``, ensure that the container is stopped
|
||||
* If ``None``, do nothing with regards to the running state of the
|
||||
container
|
||||
|
||||
source : None
|
||||
Can be either a string containing an image alias:
|
||||
"xenial/amd64"
|
||||
or an dict with type "image" with alias:
|
||||
{"type": "image",
|
||||
"alias": "xenial/amd64"}
|
||||
or image with "fingerprint":
|
||||
{"type": "image",
|
||||
"fingerprint": "SHA-256"}
|
||||
or image with "properties":
|
||||
{"type": "image",
|
||||
"properties": {
|
||||
"os": "ubuntu",
|
||||
"release": "14.04",
|
||||
"architecture": "x86_64"
|
||||
}}
|
||||
or none:
|
||||
{"type": "none"}
|
||||
or copy:
|
||||
{"type": "copy",
|
||||
"source": "my-old-container"}
|
||||
|
||||
|
||||
profiles : ['default']
|
||||
List of profiles to apply on this container
|
||||
|
||||
config :
|
||||
A config dict or None (None = unset).
|
||||
|
||||
Can also be a list:
|
||||
[{'key': 'boot.autostart', 'value': 1},
|
||||
{'key': 'security.privileged', 'value': '1'}]
|
||||
|
||||
devices :
|
||||
A device dict or None (None = unset).
|
||||
|
||||
architecture : 'x86_64'
|
||||
Can be one of the following:
|
||||
* unknown
|
||||
* i686
|
||||
* x86_64
|
||||
* armv7l
|
||||
* aarch64
|
||||
* ppc
|
||||
* ppc64
|
||||
* ppc64le
|
||||
* s390x
|
||||
|
||||
ephemeral : False
|
||||
Destroy this container after stop?
|
||||
|
||||
restart_on_change : False
|
||||
Restart the container when we detect changes on the config or
|
||||
its devices?
|
||||
|
||||
remote_addr :
|
||||
An URL to a remote Server, you also have to give cert and key if you
|
||||
provide remote_addr!
|
||||
|
||||
Examples:
|
||||
https://myserver.lan:8443
|
||||
/var/lib/mysocket.sock
|
||||
|
||||
cert :
|
||||
PEM Formatted SSL Zertifikate.
|
||||
|
||||
Examples:
|
||||
~/.config/lxc/client.crt
|
||||
|
||||
key :
|
||||
PEM Formatted SSL Key.
|
||||
|
||||
Examples:
|
||||
~/.config/lxc/client.key
|
||||
|
||||
verify_cert : True
|
||||
Wherever to verify the cert, this is by default True
|
||||
but in the most cases you want to set it off as LXD
|
||||
normaly uses self-signed certificates.
|
||||
'''
|
||||
if profiles is None:
|
||||
profiles = ['default']
|
||||
|
||||
if source is None:
|
||||
source = {}
|
||||
|
||||
ret = {
|
||||
'name': name,
|
||||
'running': running,
|
||||
'profiles': profiles,
|
||||
'source': source,
|
||||
'config': config,
|
||||
'devices': devices,
|
||||
'architecture': architecture,
|
||||
'ephemeral': ephemeral,
|
||||
'restart_on_change': restart_on_change,
|
||||
'remote_addr': remote_addr,
|
||||
'cert': cert,
|
||||
'key': key,
|
||||
'verify_cert': verify_cert,
|
||||
|
||||
'changes': {}
|
||||
}
|
||||
|
||||
container = None
|
||||
try:
|
||||
container = __salt__['lxd.container_get'](
|
||||
name, remote_addr, cert, key, verify_cert, _raw=True
|
||||
)
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
except SaltInvocationError as e:
|
||||
# Profile not found
|
||||
pass
|
||||
|
||||
    if container is None:
        if __opts__['test']:
            # Test mode: just report that we would create the container
            msg = 'Would create the container "{0}"'.format(name)
            ret['changes'] = {'created': msg}
            if running is True:
                msg = msg + ' and start it'
                ret['changes']['started'] = (
                    'Would start the container "{0}"'.format(name)
                )
            return _unchanged(ret, msg)

        # Create the container
        try:
            __salt__['lxd.container_create'](
                name,
                source,
                profiles,
                config,
                devices,
                architecture,
                ephemeral,
                True,  # Wait
                remote_addr,
                cert,
                key,
                verify_cert
            )
        except CommandExecutionError as e:
            return _error(ret, six.text_type(e))

        msg = 'Created the container "{0}"'.format(name)
        ret['changes'] = {'created': msg}

        if running is True:
            try:
                __salt__['lxd.container_start'](
                    name,
                    remote_addr,
                    cert,
                    key,
                    verify_cert
                )
            except CommandExecutionError as e:
                return _error(ret, six.text_type(e))

            msg = msg + ' and started it'
            ret['changes']['started'] = (
                'Started the container "{0}"'.format(name)
            )

        return _success(ret, msg)

    # Container exists, let's check for differences
    new_profiles = set(map(six.text_type, profiles))
    old_profiles = set(map(six.text_type, container.profiles))

    container_changed = False

    profile_changes = []
    # Removed profiles
    for k in old_profiles.difference(new_profiles):
        if not __opts__['test']:
            profile_changes.append('Removed profile "{0}"'.format(k))
            old_profiles.discard(k)
        else:
            profile_changes.append('Would remove profile "{0}"'.format(k))

    # Added profiles
    for k in new_profiles.difference(old_profiles):
        if not __opts__['test']:
            profile_changes.append('Added profile "{0}"'.format(k))
            old_profiles.add(k)
        else:
            profile_changes.append('Would add profile "{0}"'.format(k))

    if profile_changes:
        container_changed = True
        ret['changes']['profiles'] = profile_changes
        container.profiles = list(old_profiles)

    # Config and devices changes
    config, devices = __salt__['lxd.normalize_input_values'](
        config,
        devices
    )
    changes = __salt__['lxd.sync_config_devices'](
        container, config, devices, __opts__['test']
    )
    if changes:
        container_changed = True
        ret['changes'].update(changes)
    is_running = \
        container.status_code == CONTAINER_STATUS_RUNNING

    if not __opts__['test']:
        try:
            __salt__['lxd.pylxd_save_object'](container)
        except CommandExecutionError as e:
            return _error(ret, six.text_type(e))

    if running != is_running:
        if running is True:
            if __opts__['test']:
                changes['running'] = 'Would start the container'
                return _unchanged(
                    ret,
                    ('Container "{0}" would get changed '
                     'and started.').format(name)
                )
            else:
                container.start(wait=True)
                changes['running'] = 'Started the container'

        elif running is False:
            if __opts__['test']:
                changes['stopped'] = 'Would stop the container'
                return _unchanged(
                    ret,
                    ('Container "{0}" would get changed '
                     'and stopped.').format(name)
                )
            else:
                container.stop(wait=True)
                changes['stopped'] = 'Stopped the container'

    if ((running is True or running is None) and
            is_running and
            restart_on_change and
            container_changed):

        if __opts__['test']:
            changes['restarted'] = 'Would restart the container'
            return _unchanged(
                ret,
                'Would restart the container "{0}"'.format(name)
            )
        else:
            container.restart(wait=True)
            changes['restarted'] = (
                'Container "{0}" has been restarted'.format(name)
            )
            return _success(
                ret,
                'Container "{0}" has been restarted'.format(name)
            )

    if not container_changed:
        return _success(ret, 'No changes')

    if __opts__['test']:
        return _unchanged(
            ret,
            'Container "{0}" would get changed.'.format(name)
        )

    return _success(ret, '{0} changes'.format(len(ret['changes'].keys())))


def absent(name,
           stop=False,
           remote_addr=None,
           cert=None,
           key=None,
           verify_cert=True):
    '''
    Ensure an LXD container is not present, destroying it if present

    name :
        The name of the container to destroy

    stop :
        Stop the container before destroying it.
        Default: False

    remote_addr :
        A URL to a remote server; you also have to give cert and key if
        you provide remote_addr!

        Examples:
            https://myserver.lan:8443
            /var/lib/mysocket.sock

    cert :
        PEM formatted SSL certificate.

        Examples:
            ~/.config/lxc/client.crt

    key :
        PEM formatted SSL key.

        Examples:
            ~/.config/lxc/client.key

    verify_cert : True
        Whether to verify the cert. This is True by default, but in most
        cases you want to set it to False, as LXD normally uses
        self-signed certificates.
    '''
    ret = {
        'name': name,
        'stop': stop,

        'remote_addr': remote_addr,
        'cert': cert,
        'key': key,
        'verify_cert': verify_cert,

        'changes': {}
    }

    try:
        container = __salt__['lxd.container_get'](
            name, remote_addr, cert, key, verify_cert, _raw=True
        )
    except CommandExecutionError as e:
        return _error(ret, six.text_type(e))
    except SaltInvocationError:
        # Container not found
        return _success(ret, 'Container "{0}" not found.'.format(name))

    if __opts__['test']:
        ret['changes'] = {
            'removed':
                'Container "{0}" would get deleted.'.format(name)
        }
        return _unchanged(ret, ret['changes']['removed'])

    if stop and container.status_code == CONTAINER_STATUS_RUNNING:
        container.stop(wait=True)

    container.delete(wait=True)

    ret['changes']['deleted'] = \
        'Container "{0}" has been deleted.'.format(name)
    return _success(ret, ret['changes']['deleted'])


def running(name,
            restart=False,
            remote_addr=None,
            cert=None,
            key=None,
            verify_cert=True):
    '''
    Ensure an LXD container is running, restarting it if restart is True

    name :
        The name of the container to start/restart.

    restart :
        Restart the container if it is already started.

    remote_addr :
        A URL to a remote server; you also have to give cert and key if
        you provide remote_addr!

        Examples:
            https://myserver.lan:8443
            /var/lib/mysocket.sock

    cert :
        PEM formatted SSL certificate.

        Examples:
            ~/.config/lxc/client.crt

    key :
        PEM formatted SSL key.

        Examples:
            ~/.config/lxc/client.key

    verify_cert : True
        Whether to verify the cert. This is True by default, but in most
        cases you want to set it to False, as LXD normally uses
        self-signed certificates.
    '''
    ret = {
        'name': name,
        'restart': restart,

        'remote_addr': remote_addr,
        'cert': cert,
        'key': key,
        'verify_cert': verify_cert,

        'changes': {}
    }

    try:
        container = __salt__['lxd.container_get'](
            name, remote_addr, cert, key, verify_cert, _raw=True
        )
    except CommandExecutionError as e:
        return _error(ret, six.text_type(e))
    except SaltInvocationError:
        # Container not found
        return _error(ret, 'Container "{0}" not found'.format(name))

    is_running = container.status_code == CONTAINER_STATUS_RUNNING

    if is_running:
        if not restart:
            return _success(
                ret,
                'The container "{0}" is already running'.format(name)
            )
        else:
            if __opts__['test']:
                ret['changes']['restarted'] = (
                    'Would restart the container "{0}"'.format(name)
                )
                return _unchanged(ret, ret['changes']['restarted'])
            else:
                container.restart(wait=True)
                ret['changes']['restarted'] = (
                    'Restarted the container "{0}"'.format(name)
                )
                return _success(ret, ret['changes']['restarted'])

    if __opts__['test']:
        ret['changes']['started'] = (
            'Would start the container "{0}"'.format(name)
        )
        return _unchanged(ret, ret['changes']['started'])

    container.start(wait=True)
    ret['changes']['started'] = (
        'Started the container "{0}"'.format(name)
    )
    return _success(ret, ret['changes']['started'])


def frozen(name,
           start=True,
           remote_addr=None,
           cert=None,
           key=None,
           verify_cert=True):
    '''
    Ensure an LXD container is frozen, starting and freezing it if start
    is True

    name :
        The name of the container to freeze

    start :
        Start the container and freeze it afterwards if it is not
        running.

    remote_addr :
        A URL to a remote server; you also have to give cert and key if
        you provide remote_addr!

        Examples:
            https://myserver.lan:8443
            /var/lib/mysocket.sock

    cert :
        PEM formatted SSL certificate.

        Examples:
            ~/.config/lxc/client.crt

    key :
        PEM formatted SSL key.

        Examples:
            ~/.config/lxc/client.key

    verify_cert : True
        Whether to verify the cert. This is True by default, but in most
        cases you want to set it to False, as LXD normally uses
        self-signed certificates.
    '''
    ret = {
        'name': name,
        'start': start,

        'remote_addr': remote_addr,
        'cert': cert,
        'key': key,
        'verify_cert': verify_cert,

        'changes': {}
    }

    try:
        container = __salt__['lxd.container_get'](
            name, remote_addr, cert, key, verify_cert, _raw=True
        )
    except CommandExecutionError as e:
        return _error(ret, six.text_type(e))
    except SaltInvocationError:
        # Container not found
        return _error(ret, 'Container "{0}" not found'.format(name))

    if container.status_code == CONTAINER_STATUS_FROZEN:
        return _success(ret, 'Container "{0}" is already frozen'.format(name))

    is_running = container.status_code == CONTAINER_STATUS_RUNNING

    if not is_running and not start:
        return _error(ret, (
            'Container "{0}" is not running and start is False, '
            'cannot freeze it').format(name)
        )

    elif not is_running and start:
        if __opts__['test']:
            ret['changes']['started'] = (
                'Would start the container "{0}" and freeze it after'
                .format(name)
            )
            return _unchanged(ret, ret['changes']['started'])
        else:
            container.start(wait=True)
            ret['changes']['started'] = (
                'Started the container "{0}"'
                .format(name)
            )

    if __opts__['test']:
        ret['changes']['frozen'] = (
            'Would freeze the container "{0}"'.format(name)
        )
        return _unchanged(ret, ret['changes']['frozen'])

    container.freeze(wait=True)
    ret['changes']['frozen'] = (
        'Froze the container "{0}"'.format(name)
    )

    return _success(ret, ret['changes']['frozen'])


def stopped(name,
            kill=False,
            remote_addr=None,
            cert=None,
            key=None,
            verify_cert=True):
    '''
    Ensure an LXD container is stopped, killing it if kill is True,
    stopping it gracefully otherwise

    name :
        The name of the container to stop

    kill :
        Kill the container instead of shutting it down gracefully.

    remote_addr :
        A URL to a remote server; you also have to give cert and key if
        you provide remote_addr!

        Examples:
            https://myserver.lan:8443
            /var/lib/mysocket.sock

    cert :
        PEM formatted SSL certificate.

        Examples:
            ~/.config/lxc/client.crt

    key :
        PEM formatted SSL key.

        Examples:
            ~/.config/lxc/client.key

    verify_cert : True
        Whether to verify the cert. This is True by default, but in most
        cases you want to set it to False, as LXD normally uses
        self-signed certificates.
    '''
    ret = {
        'name': name,
        'kill': kill,

        'remote_addr': remote_addr,
        'cert': cert,
        'key': key,
        'verify_cert': verify_cert,

        'changes': {}
    }

    try:
        container = __salt__['lxd.container_get'](
            name, remote_addr, cert, key, verify_cert, _raw=True
        )
    except CommandExecutionError as e:
        return _error(ret, six.text_type(e))
    except SaltInvocationError:
        # Container not found
        return _error(ret, 'Container "{0}" not found'.format(name))

    if container.status_code == CONTAINER_STATUS_STOPPED:
        return _success(ret, 'Container "{0}" is already stopped'.format(name))

    if __opts__['test']:
        ret['changes']['stopped'] = \
            'Would stop the container "{0}"'.format(name)
        return _unchanged(ret, ret['changes']['stopped'])

    container.stop(force=kill, wait=True)
    ret['changes']['stopped'] = \
        'Stopped the container "{0}"'.format(name)
    return _success(ret, ret['changes']['stopped'])


def migrated(name,
             remote_addr,
             cert,
             key,
             verify_cert,
             src_remote_addr,
             stop_and_start=False,
             src_cert=None,
             src_key=None,
             src_verify_cert=None):
    '''
    Ensure a container is migrated to another host

    If the container is running, it either must be shut down
    first (use stop_and_start=True) or criu must be installed
    on the source and destination machines.

    For this operation both certs need to be authenticated;
    use :mod:`lxd.authenticate <salt.states.lxd.authenticate>`
    to authenticate your cert(s).

    name :
        The container to migrate

    remote_addr :
        A URL to the destination remote server

        Examples:
            https://myserver.lan:8443
            /var/lib/mysocket.sock

    cert :
        PEM formatted SSL certificate.

        Examples:
            ~/.config/lxc/client.crt

    key :
        PEM formatted SSL key.

        Examples:
            ~/.config/lxc/client.key

    verify_cert : True
        Whether to verify the cert. This is True by default, but in most
        cases you want to set it to False, as LXD normally uses
        self-signed certificates.

    src_remote_addr :
        A URL to the source remote server

        Examples:
            https://myserver.lan:8443
            /var/lib/mysocket.sock

    stop_and_start :
        Stop the container before migrating and start it after

    src_cert :
        PEM formatted SSL certificate; if None we copy "cert"

        Examples:
            ~/.config/lxc/client.crt

    src_key :
        PEM formatted SSL key; if None we copy "key"

        Examples:
            ~/.config/lxc/client.key

    src_verify_cert :
        Whether to verify the cert; if None we copy "verify_cert"
    '''
    ret = {
        'name': name,

        'remote_addr': remote_addr,
        'cert': cert,
        'key': key,
        'verify_cert': verify_cert,

        'src_remote_addr': src_remote_addr,
        'stop_and_start': stop_and_start,
        'src_cert': src_cert,
        'src_key': src_key,

        'changes': {}
    }

    dest_container = None
    try:
        dest_container = __salt__['lxd.container_get'](
            name, remote_addr, cert, key,
            verify_cert, _raw=True
        )
    except CommandExecutionError as e:
        return _error(ret, six.text_type(e))
    except SaltInvocationError:
        # Destination container not found
        pass

    if dest_container is not None:
        return _success(
            ret,
            'Container "{0}" exists on the destination'.format(name)
        )

    if src_verify_cert is None:
        src_verify_cert = verify_cert

    try:
        __salt__['lxd.container_get'](
            name, src_remote_addr, src_cert, src_key, src_verify_cert,
            _raw=True
        )
    except CommandExecutionError as e:
        return _error(ret, six.text_type(e))
    except SaltInvocationError:
        # Container not found
        return _error(ret, 'Source container "{0}" not found'.format(name))

    if __opts__['test']:
        ret['changes']['migrated'] = (
            'Would migrate the container "{0}" from "{1}" to "{2}"'
        ).format(name, src_remote_addr, remote_addr)
        return _unchanged(ret, ret['changes']['migrated'])

    try:
        __salt__['lxd.container_migrate'](
            name, stop_and_start, remote_addr, cert, key,
            verify_cert, src_remote_addr, src_cert, src_key, src_verify_cert
        )
    except CommandExecutionError as e:
        return _error(ret, six.text_type(e))

    ret['changes']['migrated'] = (
        'Migrated the container "{0}" from "{1}" to "{2}"'
    ).format(name, src_remote_addr, remote_addr)
    return _success(ret, ret['changes']['migrated'])


def _success(ret, success_msg):
    ret['result'] = True
    ret['comment'] = success_msg
    if 'changes' not in ret:
        ret['changes'] = {}
    return ret


def _unchanged(ret, msg):
    ret['result'] = None
    ret['comment'] = msg
    if 'changes' not in ret:
        ret['changes'] = {}
    return ret


def _error(ret, err_msg):
    ret['result'] = False
    ret['comment'] = err_msg
    if 'changes' not in ret:
        ret['changes'] = {}
    return ret
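To make the container states above concrete, here is a minimal SLS sketch wiring `lxd_container.present` into a state file. The state ID `mycontainer` and the `source` keys are illustrative assumptions (the source dict is passed through to LXD's REST API by `lxd.container_create`), not values taken from this patch:

```yaml
# Hypothetical usage of the states defined in salt/states/lxd_container.py.
# Container name, image alias and image server are illustrative only.
mycontainer:
  lxd_container.present:
    - running: true
    - profiles:
      - default
    - source:
        type: image
        mode: pull
        server: https://images.linuxcontainers.org
        protocol: simplestreams
        alias: ubuntu/xenial/amd64
    - restart_on_change: true
```

Because the test-mode branches return `_unchanged` (result `None`), running `state.apply` with `test=True` reports these as pending changes without touching the container.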
salt/states/lxd_image.py (new file, 401 lines)
@@ -0,0 +1,401 @@
# -*- coding: utf-8 -*-
'''
Manage LXD images.

.. versionadded:: Fluorine

.. note:

    - `pylxd`_ version 2 is required to let this work,
      currently only available via pip.

        To install on Ubuntu:

        $ apt-get install libssl-dev python-pip
        $ pip install -U pylxd

    - you need lxd installed on the minion
      for the init() and version() methods.

    - for the config_get() and config_set() methods
      you need to have lxd-client installed.

.. _pylxd: https://github.com/lxc/pylxd/blob/master/doc/source/installation.rst

:maintainer: René Jochum <rene@jochums.at>
:maturity: new
:depends: python-pylxd
:platform: Linux
'''
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals

# Import salt libs
from salt.exceptions import CommandExecutionError
from salt.exceptions import SaltInvocationError
import salt.ext.six as six
from salt.ext.six.moves import map

__docformat__ = 'restructuredtext en'

__virtualname__ = 'lxd_image'


def __virtual__():
    '''
    Only load if the lxd module is available in __salt__
    '''
    return __virtualname__ if 'lxd.version' in __salt__ else False


def present(name,
            source,
            aliases=None,
            public=None,
            auto_update=None,
            remote_addr=None,
            cert=None,
            key=None,
            verify_cert=True):
    '''
    Ensure an image exists, copying it from source if needed

    name :
        An alias of the image; this is used to check if the image exists
        and it will be added as an alias to the image on copy/create.

    source :
        Source dict.

        For an LXD to LXD copy:

        .. code-block:: yaml

            source:
                type: lxd
                name: ubuntu/xenial/amd64  # This can also be a fingerprint.
                remote_addr: https://images.linuxcontainers.org:8443
                cert: ~/.config/lxd/client.crt
                key: ~/.config/lxd/client.key
                verify_cert: False

        .. attention:

            For this kind of remote you also need to provide:
                - a https:// remote_addr
                - a cert and key
                - verify_cert

        From file:

        .. code-block:: yaml

            source:
                type: file
                filename: salt://lxd/files/busybox.tar.xz
                saltenv: base

        From simplestreams:

        .. code-block:: yaml

            source:
                type: simplestreams
                server: https://cloud-images.ubuntu.com/releases
                name: xenial/amd64

        From a URL:

        .. code-block:: yaml

            source:
                type: url
                url: https://dl.stgraber.org/lxd

    aliases :
        List of aliases to append; can be empty.

    public :
        Make this image publicly available on this instance?
        None on source type lxd means copy from source.
        None on source type file means False.

    auto_update :
        Try to auto-update from the original source?
        None on source type lxd means copy from source.
        Source type file does not support auto-update.

    remote_addr :
        A URL to a remote server; you also have to give cert and key if
        you provide remote_addr!

        Examples:
            https://myserver.lan:8443
            /var/lib/mysocket.sock

    cert :
        PEM formatted SSL certificate.

        Examples:
            ~/.config/lxc/client.crt

    key :
        PEM formatted SSL key.

        Examples:
            ~/.config/lxc/client.key

    verify_cert : True
        Whether to verify the cert. This is True by default, but in most
        cases you want to set it to False, as LXD normally uses
        self-signed certificates.
    '''
    if aliases is None:
        aliases = []

    # Create a copy of aliases, since we're modifying it here
    aliases = aliases[:]
    ret = {
        'name': name,
        'source': source,
        'aliases': aliases,
        'public': public,
        'auto_update': auto_update,

        'remote_addr': remote_addr,
        'cert': cert,
        'key': key,
        'verify_cert': verify_cert,

        'changes': {}
    }

    image = None
    try:
        image = __salt__['lxd.image_get_by_alias'](
            name, remote_addr, cert, key, verify_cert, _raw=True
        )
    except CommandExecutionError as e:
        return _error(ret, six.text_type(e))
    except SaltInvocationError:
        # Image not found
        pass

    if image is None:
        if __opts__['test']:
            # Test mode: just report that we would create the image
            msg = 'Would create the image "{0}"'.format(name)
            ret['changes'] = {'created': msg}
            return _unchanged(ret, msg)

        try:
            if source['type'] == 'lxd':
                image = __salt__['lxd.image_copy_lxd'](
                    source['name'],
                    src_remote_addr=source['remote_addr'],
                    src_cert=source['cert'],
                    src_key=source['key'],
                    src_verify_cert=source['verify_cert'],
                    remote_addr=remote_addr,
                    cert=cert,
                    key=key,
                    verify_cert=verify_cert,
                    aliases=aliases,
                    public=public,
                    auto_update=auto_update,
                    _raw=True
                )

            elif source['type'] == 'file':
                if 'saltenv' not in source:
                    source['saltenv'] = __env__
                image = __salt__['lxd.image_from_file'](
                    source['filename'],
                    remote_addr=remote_addr,
                    cert=cert,
                    key=key,
                    verify_cert=verify_cert,
                    aliases=aliases,
                    public=False if public is None else public,
                    saltenv=source['saltenv'],
                    _raw=True
                )

            elif source['type'] == 'simplestreams':
                image = __salt__['lxd.image_from_simplestreams'](
                    source['server'],
                    source['name'],
                    remote_addr=remote_addr,
                    cert=cert,
                    key=key,
                    verify_cert=verify_cert,
                    aliases=aliases,
                    public=False if public is None else public,
                    auto_update=False if auto_update is None else auto_update,
                    _raw=True
                )

            elif source['type'] == 'url':
                image = __salt__['lxd.image_from_url'](
                    source['url'],
                    remote_addr=remote_addr,
                    cert=cert,
                    key=key,
                    verify_cert=verify_cert,
                    aliases=aliases,
                    public=False if public is None else public,
                    auto_update=False if auto_update is None else auto_update,
                    _raw=True
                )
        except CommandExecutionError as e:
            return _error(ret, six.text_type(e))

    # Sync aliases
    if name not in aliases:
        aliases.append(name)

    old_aliases = set([six.text_type(a['name']) for a in image.aliases])
    new_aliases = set(map(six.text_type, aliases))

    alias_changes = []
    # Removed aliases
    for k in old_aliases.difference(new_aliases):
        if not __opts__['test']:
            __salt__['lxd.image_alias_delete'](image, k)
            alias_changes.append('Removed alias "{0}"'.format(k))
        else:
            alias_changes.append('Would remove alias "{0}"'.format(k))

    # New aliases
    for k in new_aliases.difference(old_aliases):
        if not __opts__['test']:
            __salt__['lxd.image_alias_add'](image, k, '')
            alias_changes.append('Added alias "{0}"'.format(k))
        else:
            alias_changes.append('Would add alias "{0}"'.format(k))

    if alias_changes:
        ret['changes']['aliases'] = alias_changes

    # Set public
    if public is not None and image.public != public:
        if not __opts__['test']:
            ret['changes']['public'] = \
                'Setting the image public to {0!s}'.format(public)
            image.public = public
            __salt__['lxd.pylxd_save_object'](image)
        else:
            ret['changes']['public'] = \
                'Would set public to {0!s}'.format(public)

    if __opts__['test'] and ret['changes']:
        return _unchanged(
            ret,
            'Would do {0} changes'.format(len(ret['changes'].keys()))
        )

    return _success(ret, '{0} changes'.format(len(ret['changes'].keys())))


def absent(name,
           remote_addr=None,
           cert=None,
           key=None,
           verify_cert=True):
    '''
    Ensure an LXD image is absent, deleting it if present

    name :
        An alias or fingerprint of the image to check and delete.

    remote_addr :
        A URL to a remote server; you also have to give cert and key if
        you provide remote_addr!

        Examples:
            https://myserver.lan:8443
            /var/lib/mysocket.sock

    cert :
        PEM formatted SSL certificate.

        Examples:
            ~/.config/lxc/client.crt

    key :
        PEM formatted SSL key.

        Examples:
            ~/.config/lxc/client.key

    verify_cert : True
        Whether to verify the cert. This is True by default, but in most
        cases you want to set it to False, as LXD normally uses
        self-signed certificates.
    '''
    ret = {
        'name': name,

        'remote_addr': remote_addr,
        'cert': cert,
        'key': key,
        'verify_cert': verify_cert,

        'changes': {}
    }
    image = None
    try:
        image = __salt__['lxd.image_get_by_alias'](
            name, remote_addr, cert, key, verify_cert, _raw=True
        )
    except CommandExecutionError as e:
        return _error(ret, six.text_type(e))
    except SaltInvocationError:
        try:
            image = __salt__['lxd.image_get'](
                name, remote_addr, cert, key, verify_cert, _raw=True
            )
        except CommandExecutionError as e:
            return _error(ret, six.text_type(e))
        except SaltInvocationError:
            return _success(ret, 'Image "{0}" not found.'.format(name))

    if __opts__['test']:
        ret['changes'] = {
            'removed':
                'Image "{0}" would get deleted.'.format(name)
        }
        return _unchanged(ret, ret['changes']['removed'])

    __salt__['lxd.image_delete'](
        image
    )

    ret['changes'] = {
        'removed':
            'Image "{0}" has been deleted.'.format(name)
    }
    return _success(ret, ret['changes']['removed'])


def _success(ret, success_msg):
    ret['result'] = True
    ret['comment'] = success_msg
    if 'changes' not in ret:
        ret['changes'] = {}
    return ret


def _unchanged(ret, msg):
    ret['result'] = None
    ret['comment'] = msg
    if 'changes' not in ret:
        ret['changes'] = {}
    return ret


def _error(ret, err_msg):
    ret['result'] = False
    ret['comment'] = err_msg
    if 'changes' not in ret:
        ret['changes'] = {}
    return ret
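As a usage sketch, the simplestreams example from the `present` docstring above can be turned into a small SLS. The state ID and extra alias are hypothetical names, not taken from this patch:

```yaml
# Hypothetical usage of lxd_image.present; mirrors the docstring's
# simplestreams source example.
ubuntu-xenial:
  lxd_image.present:
    - source:
        type: simplestreams
        server: https://cloud-images.ubuntu.com/releases
        name: xenial/amd64
    - aliases:
      - ubuntu-1604
    - public: false
    - auto_update: true
```

Since `present` appends `name` to `aliases` when it is missing, the image ends up carrying both the `ubuntu-xenial` and `ubuntu-1604` aliases.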
salt/states/lxd_profile.py (new file, 297 lines)
@@ -0,0 +1,297 @@
# -*- coding: utf-8 -*-
'''
Manage LXD profiles.

.. versionadded:: Fluorine

.. note:

    - `pylxd`_ version 2 is required to let this work,
      currently only available via pip.

        To install on Ubuntu:

        $ apt-get install libssl-dev python-pip
        $ pip install -U pylxd

    - you need lxd installed on the minion
      for the init() and version() methods.

    - for the config_get() and config_set() methods
      you need to have lxd-client installed.

.. _pylxd: https://github.com/lxc/pylxd/blob/master/doc/source/installation.rst

:maintainer: René Jochum <rene@jochums.at>
:maturity: new
:depends: python-pylxd
:platform: Linux
'''
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals

# Import salt libs
from salt.exceptions import CommandExecutionError
from salt.exceptions import SaltInvocationError
import salt.ext.six as six

__docformat__ = 'restructuredtext en'

__virtualname__ = 'lxd_profile'


def __virtual__():
    '''
    Only load if the lxd module is available in __salt__
    '''
    return __virtualname__ if 'lxd.version' in __salt__ else False

def present(name, description=None, config=None, devices=None,
|
||||
remote_addr=None, cert=None, key=None, verify_cert=True):
|
||||
'''
|
||||
Creates or updates LXD profiles
|
||||
|
||||
name :
|
||||
The name of the profile to create/update
|
||||
|
||||
description :
|
||||
A description string
|
||||
|
||||
config :
|
||||
A config dict or None (None = unset).
|
||||
|
||||
Can also be a list:
|
||||
[{'key': 'boot.autostart', 'value': 1},
|
||||
{'key': 'security.privileged', 'value': '1'}]
|
||||
|
||||
devices :
|
||||
A device dict or None (None = unset).
|
||||
|
||||
remote_addr :
|
||||
An URL to a remote Server, you also have to give cert and key if you
|
||||
provide remote_addr!
|
||||
|
||||
Examples:
|
||||
https://myserver.lan:8443
|
||||
/var/lib/mysocket.sock
|
||||
|
||||
cert :
|
||||
PEM Formatted SSL Zertifikate.
|
||||
|
||||
Examples:
|
||||
~/.config/lxc/client.crt
|
||||
|
||||
key :
|
||||
PEM formatted SSL key.
|
||||
|
||||
Examples:
|
||||
~/.config/lxc/client.key
|
||||
|
||||
verify_cert : True
|
||||
Whether to verify the cert. This defaults to True, but in most
cases you will want to set it to False, as LXD normally uses
self-signed certificates.
|
||||
|
||||
See the `lxd-docs`_ for the details about the config and devices dicts.
|
||||
See the `requests-docs`_ for details on SSL certificate verification.
|
||||
|
||||
.. _lxd-docs: https://github.com/lxc/lxd/blob/master/doc/rest-api.md#post-10
|
||||
.. _requests-docs: http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification # noqa
|
||||
'''
|
||||
ret = {
|
||||
'name': name,
|
||||
'description': description,
|
||||
'config': config,
|
||||
'devices': devices,
|
||||
|
||||
'remote_addr': remote_addr,
|
||||
'cert': cert,
|
||||
'key': key,
|
||||
'verify_cert': verify_cert,
|
||||
|
||||
'changes': {}
|
||||
}
|
||||
|
||||
profile = None
|
||||
try:
|
||||
profile = __salt__['lxd.profile_get'](
|
||||
name, remote_addr, cert, key, verify_cert, _raw=True
|
||||
)
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
except SaltInvocationError as e:
|
||||
# Profile not found
|
||||
pass
|
||||
|
||||
if description is None:
|
||||
description = six.text_type()
|
||||
|
||||
if profile is None:
|
||||
if __opts__['test']:
|
||||
# Test is on, just return that we would create the profile
|
||||
msg = 'Would create the profile "{0}"'.format(name)
|
||||
ret['changes'] = {'created': msg}
|
||||
return _unchanged(ret, msg)
|
||||
|
||||
# Create the profile
|
||||
try:
|
||||
__salt__['lxd.profile_create'](
|
||||
name,
|
||||
config,
|
||||
devices,
|
||||
description,
|
||||
remote_addr,
|
||||
cert,
|
||||
key,
|
||||
verify_cert
|
||||
)
|
||||
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
|
||||
msg = 'Profile "{0}" has been created'.format(name)
|
||||
ret['changes'] = {'created': msg}
|
||||
return _success(ret, msg)
|
||||
|
||||
config, devices = __salt__['lxd.normalize_input_values'](
|
||||
config,
|
||||
devices
|
||||
)
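The docstring above allows ``config`` either as a plain dict or as a list of
``{'key': ..., 'value': ...}`` items, which ``lxd.normalize_input_values``
flattens. A hedged standalone sketch of that normalization (the real helper
lives in the lxd execution module and may differ in detail):

```python
def normalize_config(config):
    # Accept either a plain dict or a list of {'key': ..., 'value': ...}
    # items, as described in the docstring; values are coerced to strings.
    if isinstance(config, list):
        return {item['key']: str(item['value']) for item in config}
    return config or {}
```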
|
||||
|
||||
#
|
||||
# Description change
|
||||
#
|
||||
if six.text_type(profile.description) != six.text_type(description):
|
||||
ret['changes']['description'] = (
|
||||
'Description changed, from "{0}" to "{1}".'
|
||||
).format(profile.description, description)
|
||||
|
||||
profile.description = description
|
||||
|
||||
changes = __salt__['lxd.sync_config_devices'](
|
||||
profile, config, devices, __opts__['test']
|
||||
)
|
||||
ret['changes'].update(changes)
|
||||
|
||||
if not ret['changes']:
|
||||
return _success(ret, 'No changes')
|
||||
|
||||
if __opts__['test']:
|
||||
return _unchanged(
|
||||
ret,
|
||||
'Profile "{0}" would get changed.'.format(name)
|
||||
)
|
||||
|
||||
try:
|
||||
__salt__['lxd.pylxd_save_object'](profile)
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
|
||||
return _success(ret, '{0} changes'.format(len(ret['changes'].keys())))
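The ``present()`` flow above follows the standard Salt state contract: return a
dict with ``name``/``result``/``comment``/``changes``, and in test mode set
``result`` to ``None`` instead of applying anything. A minimal standalone
sketch of that pattern (simplified, not the actual lxd_profile implementation):

```python
def present(name, exists=False, test=False):
    # Every state function returns this four-key dict.
    ret = {'name': name, 'result': True, 'comment': '', 'changes': {}}
    if exists:
        ret['comment'] = 'No changes'
        return ret
    if test:
        # Test mode: report what would happen, result is None.
        ret['result'] = None
        ret['comment'] = 'Would create the profile "{0}"'.format(name)
        return ret
    ret['changes'] = {'created': 'Profile "{0}" has been created'.format(name)}
    ret['comment'] = ret['changes']['created']
    return ret
```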
|
||||
|
||||
|
||||
def absent(name, remote_addr=None, cert=None,
|
||||
key=None, verify_cert=True):
|
||||
'''
|
||||
Ensure a LXD profile is not present, removing it if present.
|
||||
|
||||
name :
|
||||
The name of the profile to remove.
|
||||
|
||||
remote_addr :
|
||||
A URL to a remote server. You must also provide cert and key if
you provide remote_addr!
|
||||
|
||||
Examples:
|
||||
https://myserver.lan:8443
|
||||
/var/lib/mysocket.sock
|
||||
|
||||
cert :
|
||||
PEM formatted SSL certificate.
|
||||
|
||||
Examples:
|
||||
~/.config/lxc/client.crt
|
||||
|
||||
key :
|
||||
PEM formatted SSL key.
|
||||
|
||||
Examples:
|
||||
~/.config/lxc/client.key
|
||||
|
||||
verify_cert : True
|
||||
Whether to verify the cert. This defaults to True, but in most
cases you will want to set it to False, as LXD normally uses
self-signed certificates.
|
||||
|
||||
See the `requests-docs`_ for details on SSL certificate verification.
|
||||
|
||||
.. _requests-docs: http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification # noqa
|
||||
'''
|
||||
ret = {
|
||||
'name': name,
|
||||
|
||||
'remote_addr': remote_addr,
|
||||
'cert': cert,
|
||||
'key': key,
|
||||
'verify_cert': verify_cert,
|
||||
|
||||
'changes': {}
|
||||
}
|
||||
if __opts__['test']:
|
||||
try:
|
||||
__salt__['lxd.profile_get'](
|
||||
name, remote_addr, cert, key, verify_cert
|
||||
)
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
except SaltInvocationError as e:
|
||||
# Profile not found
|
||||
return _success(ret, 'Profile "{0}" not found.'.format(name))
|
||||
|
||||
ret['changes'] = {
|
||||
'removed':
|
||||
'Profile "{0}" would get deleted.'.format(name)
|
||||
}
|
||||
return _success(ret, ret['changes']['removed'])
|
||||
|
||||
try:
|
||||
__salt__['lxd.profile_delete'](
|
||||
name, remote_addr, cert, key, verify_cert
|
||||
)
|
||||
except CommandExecutionError as e:
|
||||
return _error(ret, six.text_type(e))
|
||||
except SaltInvocationError as e:
|
||||
# Profile not found
|
||||
return _success(ret, 'Profile "{0}" not found.'.format(name))
|
||||
|
||||
ret['changes'] = {
|
||||
'removed':
|
||||
'Profile "{0}" has been deleted.'.format(name)
|
||||
}
|
||||
return _success(ret, ret['changes']['removed'])
|
||||
|
||||
|
||||
def _success(ret, success_msg):
|
||||
ret['result'] = True
|
||||
ret['comment'] = success_msg
|
||||
if 'changes' not in ret:
|
||||
ret['changes'] = {}
|
||||
return ret
|
||||
|
||||
|
||||
def _unchanged(ret, msg):
|
||||
ret['result'] = None
|
||||
ret['comment'] = msg
|
||||
if 'changes' not in ret:
|
||||
ret['changes'] = {}
|
||||
return ret
|
||||
|
||||
|
||||
def _error(ret, err_msg):
|
||||
ret['result'] = False
|
||||
ret['comment'] = err_msg
|
||||
if 'changes' not in ret:
|
||||
ret['changes'] = {}
|
||||
return ret
|
@ -186,6 +186,10 @@ def monitored(name, group=None, salt_name=True, salt_params=True, agent_version=
|
||||
agent_key = device['agentKey']
|
||||
ret['comment'] = 'Device created in Server Density db.'
|
||||
ret['changes'] = {'device_created': device}
|
||||
if __opts__['test']:
|
||||
ret['result'] = None
|
||||
ret['comment'] = 'Device set to be created in Server Density db.'
|
||||
return ret
|
||||
elif device_in_sd:
|
||||
device = __salt__['serverdensity_device.ls'](name=name)[0]
|
||||
agent_key = device['agentKey']
|
||||
@ -194,6 +198,14 @@ def monitored(name, group=None, salt_name=True, salt_params=True, agent_version=
|
||||
ret['result'] = False
|
||||
ret['comment'] = 'Failed to create device in Server Density DB and this device does not exist in db either.'
|
||||
ret['changes'] = {}
|
||||
if __opts__['test']:
|
||||
ret['result'] = None
|
||||
ret['comment'] = 'Agent is not installed and device is not in the Server Density DB'
|
||||
return ret
|
||||
|
||||
if __opts__['test']:
|
||||
ret['result'] = None
|
||||
ret['comment'] = 'Server Density agent is set to be installed and device created in the Server Density DB'
|
||||
return ret
|
||||
|
||||
installed_agent = __salt__['serverdensity_device.install_agent'](agent_key, agent_version)
|
||||
|
@ -390,3 +390,109 @@ def reverted(name, snapshot=None, cleanup=False):
|
||||
ret['comment'] = six.text_type(err)
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def network_define(name, bridge, forward, **kwargs):
|
||||
'''
|
||||
Defines and starts a new network with specified arguments.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
domain_name:
|
||||
virt.network_define
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
network_name:
|
||||
virt.network_define:
|
||||
- bridge: main
|
||||
- forward: bridge
|
||||
- vport: openvswitch
|
||||
- tag: 180
|
||||
- autostart: True
|
||||
- start: True
|
||||
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': False,
|
||||
'comment': ''
|
||||
}
|
||||
|
||||
kwargs = salt.utils.args.clean_kwargs(**kwargs)
|
||||
vport = kwargs.pop('vport', None)
|
||||
tag = kwargs.pop('tag', None)
|
||||
autostart = kwargs.pop('autostart', True)
|
||||
start = kwargs.pop('start', True)
|
||||
|
||||
try:
|
||||
result = __salt__['virt.net_define'](name, bridge, forward, vport, tag=tag, autostart=autostart, start=start)
|
||||
if result:
|
||||
ret['changes'][name] = 'Network {0} has been created'.format(name)
|
||||
ret['result'] = True
|
||||
else:
|
||||
ret['comment'] = 'Failed to create network {0}'.format(name)
|
||||
except libvirt.libvirtError as err:
|
||||
if err.get_error_code() in (libvirt.VIR_ERR_NETWORK_EXIST, libvirt.VIR_ERR_OPERATION_FAILED):
ret['result'] = True
ret['comment'] = 'The network already exists'
|
||||
else:
|
||||
ret['comment'] = err.get_error_message()
|
||||
|
||||
return ret
|
||||
|
||||
|
||||
def pool_define(name, **kwargs):
|
||||
'''
|
||||
Defines and starts a new pool with specified arguments.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
pool_name:
|
||||
virt.pool_define
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
pool_name:
|
||||
virt.pool_define:
|
||||
- ptype: logical
|
||||
- target: pool
|
||||
- source: sda1
|
||||
- autostart: True
|
||||
- start: True
|
||||
|
||||
'''
|
||||
ret = {'name': name,
|
||||
'changes': {},
|
||||
'result': False,
|
||||
'comment': ''
|
||||
}
|
||||
|
||||
kwargs = salt.utils.args.clean_kwargs(**kwargs)
|
||||
ptype = kwargs.pop('ptype', None)
|
||||
target = kwargs.pop('target', None)
|
||||
source = kwargs.pop('source', None)
|
||||
autostart = kwargs.pop('autostart', True)
|
||||
start = kwargs.pop('start', True)
|
||||
|
||||
try:
|
||||
result = __salt__['virt.pool_define_build'](name, ptype=ptype, target=target,
|
||||
source=source, autostart=autostart, start=start)
|
||||
if result:
|
||||
if 'Pool exist' in result:
|
||||
if 'Pool update' in result:
|
||||
ret['changes'][name] = 'Pool {0} has been updated'.format(name)
|
||||
else:
|
||||
ret['comment'] = 'Pool {0} already exists'.format(name)
|
||||
else:
|
||||
ret['changes'][name] = 'Pool {0} has been created'.format(name)
|
||||
ret['result'] = True
|
||||
else:
|
||||
ret['comment'] = 'Failed to create pool {0}'.format(name)
|
||||
except libvirt.libvirtError as err:
|
||||
if err.get_error_code() in (libvirt.VIR_ERR_STORAGE_POOL_BUILT, libvirt.VIR_ERR_OPERATION_FAILED):
ret['result'] = True
ret['comment'] = 'The pool already exists'
else:
ret['comment'] = err.get_error_message()
|
||||
|
||||
return ret
|
||||
|
9
salt/templates/virt/libvirt_network.jinja
Normal file
@ -0,0 +1,9 @@
|
||||
<network>
|
||||
<name>{{ name }}</name>
|
||||
<bridge name='{{ bridge }}'/>
|
||||
<forward mode='{{ forward }}'/>{% if vport != None %}
|
||||
<virtualport type='{{ vport }}'/>{% endif %}{% if tag != None %}
|
||||
<vlan>
|
||||
<tag id='{{ tag }}'/>
|
||||
</vlan>{% endif %}
|
||||
</network>
|
9
salt/templates/virt/libvirt_pool.jinja
Normal file
9
salt/templates/virt/libvirt_pool.jinja
Normal file
@ -0,0 +1,9 @@
|
||||
<pool type='{{ ptype }}'>
|
||||
<name>{{ name }}</name>
|
||||
<target>
|
||||
<path>/dev/{{ target }}</path>
|
||||
</target>{% if source != None %}
|
||||
<source>
|
||||
<device path='/dev/{{ source }}'/>
|
||||
</source>{% endif %}
|
||||
</pool>
|
@ -97,7 +97,7 @@ def calc(name, num, oper, minimum=0, maximum=0, ref=None):
|
||||
if minimum > 0 and answer < minimum:
|
||||
ret['result'] = False
|
||||
|
||||
if maximum > 0 and answer > maximum:
|
||||
if 0 < maximum < answer:
|
||||
ret['result'] = False
|
||||
|
||||
ret['changes'] = {
|
||||
|
@ -80,7 +80,7 @@ def top(**kwargs):
|
||||
subprocess.Popen(
|
||||
cmd,
|
||||
shell=True,
|
||||
stdout=subprocess.PIPE.communicate()[0])
|
||||
stdout=subprocess.PIPE).communicate()[0]
|
||||
)
|
||||
if not ndata:
|
||||
log.info('master_tops ext_nodes call did not return any data')
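The hunk above moves ``.communicate()`` onto the ``Popen`` object itself:
``subprocess.PIPE`` is just an integer flag and has no ``communicate()``
method. The corrected call pattern in isolation (a POSIX-style shell is
assumed for the ``echo`` command):

```python
import subprocess

# communicate() belongs to the Popen instance, not to subprocess.PIPE.
out = subprocess.Popen(
    'echo hello',
    shell=True,
    stdout=subprocess.PIPE,
).communicate()[0]
```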
|
||||
|
@ -2290,16 +2290,16 @@ def is_public_ip(ip):
|
||||
return False
|
||||
return True
|
||||
addr = ip_to_int(ip)
|
||||
if addr > 167772160 and addr < 184549375:
|
||||
if 167772160 < addr < 184549375:
|
||||
# 10.0.0.0/8
|
||||
return False
|
||||
elif addr > 3232235520 and addr < 3232301055:
|
||||
elif 3232235520 < addr < 3232301055:
|
||||
# 192.168.0.0/16
|
||||
return False
|
||||
elif addr > 2886729728 and addr < 2887778303:
|
||||
elif 2886729728 < addr < 2887778303:
|
||||
# 172.16.0.0/12
|
||||
return False
|
||||
elif addr > 2130706432 and addr < 2147483647:
|
||||
elif 2130706432 < addr < 2147483647:
|
||||
# 127.0.0.0/8
|
||||
return False
|
||||
return True
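The hunk above rewrites the bounds checks as chained comparisons; the magic
integers are the RFC 1918 private ranges plus the loopback block. An
equivalent check using the Python 3 stdlib ``ipaddress`` module (an
alternative sketch, not what the diff itself does):

```python
import ipaddress

def is_public_ip(ip):
    # is_private covers 10/8, 172.16/12 and 192.168/16;
    # is_loopback covers 127/8.
    addr = ipaddress.ip_address(ip)
    return not (addr.is_private or addr.is_loopback)
```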
|
||||
|
@ -567,13 +567,13 @@ class ConnectedCache(MultiprocessingProcess):
|
||||
|
||||
# the socket for incoming cache-updates from workers
|
||||
cupd_in = context.socket(zmq.SUB)
|
||||
cupd_in.setsockopt(zmq.SUBSCRIBE, '')
|
||||
cupd_in.setsockopt(zmq.SUBSCRIBE, b'')
|
||||
cupd_in.setsockopt(zmq.LINGER, 100)
|
||||
cupd_in.bind('ipc://' + self.update_sock)
|
||||
|
||||
# the socket for the timer-event
|
||||
timer_in = context.socket(zmq.SUB)
|
||||
timer_in.setsockopt(zmq.SUBSCRIBE, '')
|
||||
timer_in.setsockopt(zmq.SUBSCRIBE, b'')
|
||||
timer_in.setsockopt(zmq.LINGER, 100)
|
||||
timer_in.connect('ipc://' + self.upd_t_sock)
|
||||
|
||||
|
@ -193,8 +193,7 @@ def get_fqhostname():
|
||||
'''
|
||||
Returns the fully qualified hostname
|
||||
'''
|
||||
l = []
|
||||
l.append(socket.getfqdn())
|
||||
l = [socket.getfqdn()]
|
||||
|
||||
# try socket.getaddrinfo
|
||||
try:
|
||||
|
@ -68,7 +68,7 @@ def check_nova():
|
||||
novaclient_ver = _LooseVersion(novaclient.__version__)
|
||||
min_ver = _LooseVersion(NOVACLIENT_MINVER)
|
||||
max_ver = _LooseVersion(NOVACLIENT_MAXVER)
|
||||
if novaclient_ver >= min_ver and novaclient_ver <= max_ver:
|
||||
if min_ver <= novaclient_ver <= max_ver:
|
||||
return HAS_NOVA
|
||||
elif novaclient_ver > max_ver:
|
||||
log.debug('Older novaclient version required. Maximum: %s',
|
||||
|
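``check_nova()`` now uses a chained comparison for the version gate. A
self-contained sketch of the same pattern (the version bounds here are
placeholders, not the real ``NOVACLIENT_MINVER``/``NOVACLIENT_MAXVER``):

```python
def _ver(s):
    # Naive dotted-version parse, enough to illustrate the comparison.
    return tuple(int(p) for p in s.split('.'))

def in_supported_range(ver, min_ver='2.6.1', max_ver='6.0.0'):
    # Chained comparison, mirroring `min_ver <= novaclient_ver <= max_ver`.
    return _ver(min_ver) <= _ver(ver) <= _ver(max_ver)
```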
@ -630,8 +630,9 @@ class Schedule(object):
|
||||
# Reconfigure multiprocessing logging after daemonizing
|
||||
log_setup.setup_multiprocessing_logging()
|
||||
|
||||
# Daemonize *before* entering the try block so the finally section is not executed three times.
|
||||
salt.utils.process.daemonize_if(self.opts)
|
||||
if multiprocessing_enabled:
|
||||
# Daemonize *before* entering the try block so the finally section is not executed three times.
|
||||
salt.utils.process.daemonize_if(self.opts)
|
||||
|
||||
# TODO: Make it readable! Split into funcs, remove nested try-except-finally sections.
|
||||
try:
|
||||
@ -855,8 +856,6 @@ class Schedule(object):
|
||||
|
||||
data['_next_scheduled_fire_time'] = now + datetime.timedelta(seconds=data['_seconds'])
|
||||
|
||||
return data
|
||||
|
||||
def _handle_once(job, data, loop_interval):
|
||||
'''
|
||||
Handle schedule item with once
|
||||
@ -869,24 +868,24 @@ class Schedule(object):
|
||||
|
||||
if not data['_next_fire_time'] and \
|
||||
not data['_splay']:
|
||||
once_fmt = data.get('once_fmt', '%Y-%m-%dT%H:%M:%S')
|
||||
try:
|
||||
once = datetime.datetime.strptime(data['once'],
|
||||
once_fmt)
|
||||
except (TypeError, ValueError):
|
||||
data['_error'] = ('Date string could not '
|
||||
'be parsed: {0}, {1}. '
|
||||
'Ignoring job {2}.'.format(
|
||||
data['once'], once_fmt, job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
once = data['once']
|
||||
if not isinstance(once, datetime.datetime):
|
||||
once_fmt = data.get('once_fmt', '%Y-%m-%dT%H:%M:%S')
|
||||
try:
|
||||
once = datetime.datetime.strptime(data['once'],
|
||||
once_fmt)
|
||||
except (TypeError, ValueError):
|
||||
data['_error'] = ('Date string could not '
|
||||
'be parsed: {0}, {1}. '
|
||||
'Ignoring job {2}.'.format(
|
||||
data['once'], once_fmt, job))
|
||||
log.error(data['_error'])
|
||||
return
|
||||
data['_next_fire_time'] = once
|
||||
data['_next_scheduled_fire_time'] = once
|
||||
# If _next_fire_time is less than now, continue
|
||||
if once < now - loop_interval:
|
||||
data['_continue'] = True
|
||||
else:
|
||||
data['_next_fire_time'] = once
|
||||
data['_next_scheduled_fire_time'] = once
|
||||
return data
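``_handle_once()`` above accepts ``once`` either as a ``datetime`` object or
as a string parsed with ``once_fmt``. That branch in isolation:

```python
import datetime

def parse_once(once, once_fmt='%Y-%m-%dT%H:%M:%S'):
    # Accept a datetime as-is; otherwise parse the string with once_fmt.
    # Callers should catch TypeError/ValueError for unparseable input,
    # as the scheduler does.
    if isinstance(once, datetime.datetime):
        return once
    return datetime.datetime.strptime(once, once_fmt)
```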
|
||||
|
||||
def _handle_when(job, data, loop_interval):
|
||||
'''
|
||||
@ -896,7 +895,7 @@ class Schedule(object):
|
||||
data['_error'] = ('Missing python-dateutil. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
|
||||
if isinstance(data['when'], list):
|
||||
_when = []
|
||||
@ -909,7 +908,7 @@ class Schedule(object):
|
||||
'must be a dict. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
when_ = self.opts['pillar']['whens'][i]
|
||||
elif ('whens' in self.opts['grains'] and
|
||||
i in self.opts['grains']['whens']):
|
||||
@ -918,7 +917,7 @@ class Schedule(object):
|
||||
data['_error'] = ('Grain "whens" must be a dict.'
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
when_ = self.opts['grains']['whens'][i]
|
||||
else:
|
||||
when_ = i
|
||||
@ -930,7 +929,7 @@ class Schedule(object):
|
||||
data['_error'] = ('Invalid date string {0}. '
|
||||
'Ignoring job {1}.'.format(i, job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
|
||||
_when.append(when_)
|
||||
|
||||
@ -979,7 +978,7 @@ class Schedule(object):
|
||||
data['_error'] = ('Pillar item "whens" must be dict.'
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
when = self.opts['pillar']['whens'][data['when']]
|
||||
elif ('whens' in self.opts['grains'] and
|
||||
data['when'] in self.opts['grains']['whens']):
|
||||
@ -987,7 +986,7 @@ class Schedule(object):
|
||||
data['_error'] = ('Grain "whens" must be a dict. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
when = self.opts['grains']['whens'][data['when']]
|
||||
else:
|
||||
when = data['when']
|
||||
@ -999,7 +998,7 @@ class Schedule(object):
|
||||
data['_error'] = ('Invalid date string. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
|
||||
if when < now - loop_interval and \
|
||||
not data.get('_run', False) and \
|
||||
@ -1021,8 +1020,6 @@ class Schedule(object):
|
||||
data['_next_fire_time'] = when
|
||||
data['_run'] = True
|
||||
|
||||
return data
|
||||
|
||||
def _handle_cron(job, data, loop_interval):
|
||||
'''
|
||||
Handle schedule item with cron
|
||||
@ -1031,7 +1028,7 @@ class Schedule(object):
|
||||
data['_error'] = ('Missing python-croniter. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
|
||||
if data['_next_fire_time'] is None:
|
||||
# Get next time frame for a "cron" job if it has been never
|
||||
@ -1043,7 +1040,7 @@ class Schedule(object):
|
||||
data['_error'] = ('Invalid cron string. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
|
||||
# If next job run is scheduled more than 1 minute ahead and
|
||||
# configured loop interval is longer than that, we should
|
||||
@ -1052,7 +1049,6 @@ class Schedule(object):
|
||||
interval = (now - data['_next_fire_time']).total_seconds()
|
||||
if interval >= 60 and interval < self.loop_interval:
|
||||
self.loop_interval = interval
|
||||
return data
|
||||
|
||||
def _handle_run_explicit(data, loop_interval):
|
||||
'''
|
||||
@ -1077,7 +1073,6 @@ class Schedule(object):
|
||||
if _run_explicit[0] <= now < _run_explicit[0] + loop_interval:
|
||||
data['run'] = True
|
||||
data['_next_fire_time'] = _run_explicit[0]
|
||||
return data
|
||||
|
||||
def _handle_skip_explicit(data, loop_interval):
|
||||
'''
|
||||
@ -1110,7 +1105,6 @@ class Schedule(object):
|
||||
data['run'] = False
|
||||
else:
|
||||
data['run'] = True
|
||||
return data
|
||||
|
||||
def _handle_skip_during_range(job, data, loop_interval):
|
||||
'''
|
||||
@ -1120,68 +1114,67 @@ class Schedule(object):
|
||||
data['_error'] = ('Missing python-dateutil. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
else:
|
||||
if isinstance(data['skip_during_range'], dict):
|
||||
start = data['skip_during_range']['start']
|
||||
end = data['skip_during_range']['end']
|
||||
if not isinstance(start, datetime.datetime):
|
||||
try:
|
||||
start = dateutil_parser.parse(start)
|
||||
except ValueError:
|
||||
data['_error'] = ('Invalid date string for start in '
|
||||
'skip_during_range. Ignoring '
|
||||
'job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
if not isinstance(end, datetime.datetime):
|
||||
try:
|
||||
end = dateutil_parser.parse(end)
|
||||
except ValueError:
|
||||
data['_error'] = ('Invalid date string for end in '
|
||||
'skip_during_range. Ignoring '
|
||||
'job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
|
||||
# Check to see if we should run the job immediately
|
||||
# after the skip_during_range is over
|
||||
if 'run_after_skip_range' in data and \
|
||||
data['run_after_skip_range']:
|
||||
if 'run_explicit' not in data:
|
||||
data['run_explicit'] = []
|
||||
# Add a run_explicit for immediately after the
|
||||
# skip_during_range ends
|
||||
_run_immediate = (end + loop_interval).strftime('%Y-%m-%dT%H:%M:%S')
|
||||
if _run_immediate not in data['run_explicit']:
|
||||
data['run_explicit'].append({'time': _run_immediate,
|
||||
'time_fmt': '%Y-%m-%dT%H:%M:%S'})
|
||||
if not isinstance(data['skip_during_range'], dict):
|
||||
data['_error'] = ('schedule.handle_func: Invalid, range '
|
||||
'must be specified as a dictionary. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return
|
||||
|
||||
if end > start:
|
||||
if start <= now <= end:
|
||||
if self.skip_function:
|
||||
data['run'] = True
|
||||
data['func'] = self.skip_function
|
||||
else:
|
||||
data['_skip_reason'] = 'in_skip_range'
|
||||
data['_skipped_time'] = now
|
||||
data['_skipped'] = True
|
||||
data['run'] = False
|
||||
else:
|
||||
data['run'] = True
|
||||
else:
|
||||
data['_error'] = ('schedule.handle_func: Invalid '
|
||||
'range, end must be larger than '
|
||||
'start. Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
else:
|
||||
data['_error'] = ('schedule.handle_func: Invalid, range '
|
||||
'must be specified as a dictionary. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
start = data['skip_during_range']['start']
|
||||
end = data['skip_during_range']['end']
|
||||
if not isinstance(start, datetime.datetime):
|
||||
try:
|
||||
start = dateutil_parser.parse(start)
|
||||
except ValueError:
|
||||
data['_error'] = ('Invalid date string for start in '
|
||||
'skip_during_range. Ignoring '
|
||||
'job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return data
|
||||
return
|
||||
|
||||
if not isinstance(end, datetime.datetime):
|
||||
try:
|
||||
end = dateutil_parser.parse(end)
|
||||
except ValueError:
|
||||
data['_error'] = ('Invalid date string for end in '
|
||||
'skip_during_range. Ignoring '
|
||||
'job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return
|
||||
|
||||
# Check to see if we should run the job immediately
|
||||
# after the skip_during_range is over
|
||||
if 'run_after_skip_range' in data and \
|
||||
data['run_after_skip_range']:
|
||||
if 'run_explicit' not in data:
|
||||
data['run_explicit'] = []
|
||||
# Add a run_explicit for immediately after the
|
||||
# skip_during_range ends
|
||||
_run_immediate = (end + loop_interval).strftime('%Y-%m-%dT%H:%M:%S')
|
||||
if _run_immediate not in data['run_explicit']:
|
||||
data['run_explicit'].append({'time': _run_immediate,
|
||||
'time_fmt': '%Y-%m-%dT%H:%M:%S'})
|
||||
|
||||
if end > start:
|
||||
if start <= now <= end:
|
||||
if self.skip_function:
|
||||
data['run'] = True
|
||||
data['func'] = self.skip_function
|
||||
else:
|
||||
data['_skip_reason'] = 'in_skip_range'
|
||||
data['_skipped_time'] = now
|
||||
data['_skipped'] = True
|
||||
data['run'] = False
|
||||
else:
|
||||
data['run'] = True
|
||||
else:
|
||||
data['_error'] = ('schedule.handle_func: Invalid '
|
||||
'range, end must be larger than '
|
||||
'start. Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
|
||||
def _handle_range(job, data):
|
||||
'''
|
||||
@ -1191,56 +1184,57 @@ class Schedule(object):
|
||||
data['_error'] = ('Missing python-dateutil. '
|
||||
'Ignoring job {0}'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
else:
|
||||
if isinstance(data['range'], dict):
|
||||
start = data['range']['start']
|
||||
end = data['range']['end']
|
||||
if not isinstance(start, datetime.datetime):
|
||||
try:
|
||||
start = dateutil_parser.parse(start)
|
||||
except ValueError:
|
||||
data['_error'] = ('Invalid date string for start. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
if not isinstance(end, datetime.datetime):
|
||||
try:
|
||||
end = dateutil_parser.parse(end)
|
||||
except ValueError:
|
||||
data['_error'] = ('Invalid date string for end.'
|
||||
' Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
if end > start:
|
||||
if 'invert' in data['range'] and data['range']['invert']:
|
||||
if now <= start or now >= end:
|
||||
data['run'] = True
|
||||
else:
|
||||
data['_skip_reason'] = 'in_skip_range'
|
||||
data['run'] = False
|
||||
else:
|
||||
if start <= now <= end:
|
||||
data['run'] = True
|
||||
else:
|
||||
if self.skip_function:
|
||||
data['run'] = True
|
||||
data['func'] = self.skip_function
|
||||
else:
|
||||
data['_skip_reason'] = 'not_in_range'
|
||||
data['run'] = False
|
||||
else:
|
||||
data['_error'] = ('schedule.handle_func: Invalid '
|
||||
'range, end must be larger '
|
||||
'than start. Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
else:
|
||||
data['_error'] = ('schedule.handle_func: Invalid, range '
|
||||
'must be specified as a dictionary.'
|
||||
return
|
||||
|
||||
if not isinstance(data['range'], dict):
|
||||
data['_error'] = ('schedule.handle_func: Invalid, range '
|
||||
'must be specified as a dictionary.'
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return
|
||||
|
||||
start = data['range']['start']
|
||||
end = data['range']['end']
|
||||
if not isinstance(start, datetime.datetime):
|
||||
try:
|
||||
start = dateutil_parser.parse(start)
|
||||
except ValueError:
|
||||
data['_error'] = ('Invalid date string for start. '
|
||||
'Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return data
|
||||
return
|
||||
|
||||
if not isinstance(end, datetime.datetime):
|
||||
try:
|
||||
end = dateutil_parser.parse(end)
|
||||
except ValueError:
|
||||
data['_error'] = ('Invalid date string for end.'
|
||||
' Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
|
||||
return
|
||||
|
||||
if end > start:
|
||||
if 'invert' in data['range'] and data['range']['invert']:
|
||||
if now <= start or now >= end:
|
||||
data['run'] = True
|
||||
else:
|
||||
data['_skip_reason'] = 'in_skip_range'
|
||||
data['run'] = False
|
||||
else:
|
||||
if start <= now <= end:
|
||||
data['run'] = True
|
||||
else:
|
||||
if self.skip_function:
|
||||
data['run'] = True
|
||||
data['func'] = self.skip_function
|
||||
else:
|
||||
data['_skip_reason'] = 'not_in_range'
|
||||
data['run'] = False
|
||||
else:
|
||||
data['_error'] = ('schedule.handle_func: Invalid '
|
||||
'range, end must be larger '
|
||||
'than start. Ignoring job {0}.'.format(job))
|
||||
log.error(data['_error'])
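``_handle_range()`` above runs the job when ``now`` falls inside
``[start, end]``, or outside it when ``invert`` is set, and rejects ranges
where ``end`` is not larger than ``start``. A condensed sketch of that
decision, with the skip_function and error bookkeeping omitted:

```python
import datetime

def should_run(now, start, end, invert=False):
    # Mirrors the range check above: error out on a degenerate range,
    # otherwise run inside the window (or outside it when inverted).
    if end <= start:
        raise ValueError('end must be larger than start')
    inside = start <= now <= end
    return not inside if invert else inside
```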
|
||||
|
||||
def _handle_after(job, data):
|
||||
'''
|
||||
@ -1250,23 +1244,23 @@ class Schedule(object):
|
||||
data['_error'] = ('Missing python-dateutil. '
|
||||
'Ignoring job {0}'.format(job))
|
||||
log.error(data['_error'])
|
||||
else:
|
||||
after = data['after']
|
||||
if not isinstance(after, datetime.datetime):
|
||||
after = dateutil_parser.parse(after)
|
||||
return
|
||||
|
||||
if after >= now:
|
||||
log.debug(
|
||||
'After time has not passed skipping job: %s.',
|
||||
data['name']
|
||||
)
|
||||
data['_skip_reason'] = 'after_not_passed'
|
||||
data['_skipped_time'] = now
|
||||
data['_skipped'] = True
|
||||
data['run'] = False
|
||||
else:
|
||||
data['run'] = True
|
||||
return data
|
||||
after = data['after']
|
||||
if not isinstance(after, datetime.datetime):
|
||||
after = dateutil_parser.parse(after)
|
||||
|
||||
if after >= now:
|
||||
log.debug(
|
||||
'After time has not passed skipping job: %s.',
|
||||
data['name']
|
||||
)
|
||||
data['_skip_reason'] = 'after_not_passed'
|
||||
data['_skipped_time'] = now
|
||||
data['_skipped'] = True
|
||||
data['run'] = False
|
||||
else:
|
||||
data['run'] = True
|
||||
|
||||
def _handle_until(job, data):
|
||||
'''
|
||||
@ -1276,23 +1270,23 @@ class Schedule(object):
|
||||
data['_error'] = ('Missing python-dateutil. '
|
||||
'Ignoring job {0}'.format(job))
|
||||
log.error(data['_error'])
|
||||
else:
|
||||
until = data['until']
|
||||
if not isinstance(until, datetime.datetime):
|
||||
until = dateutil_parser.parse(until)
|
||||
return
|
||||
|
||||
if until <= now:
|
||||
log.debug(
|
||||
'Until time has passed skipping job: %s.',
|
||||
data['name']
|
||||
)
|
||||
data['_skip_reason'] = 'until_passed'
|
||||
data['_skipped_time'] = now
|
||||
data['_skipped'] = True
|
||||
data['run'] = False
|
||||
else:
|
||||
data['run'] = True
|
||||
return data
|
||||
until = data['until']
|
||||
if not isinstance(until, datetime.datetime):
|
||||
until = dateutil_parser.parse(until)
|
||||
|
||||
if until <= now:
|
||||
log.debug(
|
||||
                    'Until time has passed skipping job: %s.',
                    data['name']
                )
                data['_skip_reason'] = 'until_passed'
                data['_skipped_time'] = now
                data['_skipped'] = True
                data['run'] = False
            else:
                data['run'] = True

        schedule = self._get_schedule()
        if not isinstance(schedule, dict):
@@ -1364,8 +1358,9 @@ class Schedule(object):
        time_elements = ('seconds', 'minutes', 'hours', 'days')
        scheduling_elements = ('when', 'cron', 'once')

        invalid_sched_combos = [set(i)
                                for i in itertools.combinations(scheduling_elements, 2)]
        invalid_sched_combos = [
            set(i) for i in itertools.combinations(scheduling_elements, 2)
        ]

        if any(i <= schedule_keys for i in invalid_sched_combos):
            log.error(
@@ -1389,17 +1384,17 @@ class Schedule(object):
                continue

            if 'run_explicit' in data:
                data = _handle_run_explicit(data, loop_interval)
                _handle_run_explicit(data, loop_interval)
                run = data['run']

            if True in [True for item in time_elements if item in data]:
                data = _handle_time_elements(data)
                _handle_time_elements(data)
            elif 'once' in data:
                data = _handle_once(job, data, loop_interval)
                _handle_once(job, data, loop_interval)
            elif 'when' in data:
                data = _handle_when(job, data, loop_interval)
                _handle_when(job, data, loop_interval)
            elif 'cron' in data:
                data = _handle_cron(job, data, loop_interval)
                _handle_cron(job, data, loop_interval)
            else:
                continue

@@ -1454,7 +1449,7 @@ class Schedule(object):
                data['_run_on_start'] = False
            elif run:
                if 'range' in data:
                    data = _handle_range(job, data)
                    _handle_range(job, data)

                    # An error occurred so we bail out
                    if '_error' in data and data['_error']:
@@ -1471,7 +1466,7 @@ class Schedule(object):
                        data['skip_during_range'] = self.skip_during_range

                    if 'skip_during_range' in data and data['skip_during_range']:
                        data = _handle_skip_during_range(job, data, loop_interval)
                        _handle_skip_during_range(job, data, loop_interval)

                        # An error occurred so we bail out
                        if '_error' in data and data['_error']:
@@ -1483,7 +1478,7 @@ class Schedule(object):
                        func = data['func']

                    if 'skip_explicit' in data:
                        data = _handle_skip_explicit(data, loop_interval)
                        _handle_skip_explicit(data, loop_interval)

                        # An error occurred so we bail out
                        if '_error' in data and data['_error']:
@@ -1495,7 +1490,7 @@ class Schedule(object):
                        func = data['func']

                    if 'until' in data:
                        data = _handle_until(job, data)
                        _handle_until(job, data)

                        # An error occurred so we bail out
                        if '_error' in data and data['_error']:
@@ -1504,7 +1499,7 @@ class Schedule(object):
                        run = data['run']

                    if 'after' in data:
                        data = _handle_after(job, data)
                        _handle_after(job, data)

                        # An error occurred so we bail out
                        if '_error' in data and data['_error']:
@@ -103,12 +103,12 @@ def thin_path(cachedir):

def get_tops(extra_mods='', so_mods=''):
    tops = [
            os.path.dirname(salt.__file__),
            os.path.dirname(jinja2.__file__),
            os.path.dirname(yaml.__file__),
            os.path.dirname(tornado.__file__),
            os.path.dirname(msgpack.__file__),
            ]
        os.path.dirname(salt.__file__),
        os.path.dirname(jinja2.__file__),
        os.path.dirname(yaml.__file__),
        os.path.dirname(tornado.__file__),
        os.path.dirname(msgpack.__file__),
    ]

    tops.append(_six.__file__.replace('.pyc', '.py'))
    tops.append(backports_abc.__file__.replace('.pyc', '.py'))
@@ -11,7 +11,6 @@ documented in the execution module docs.
from __future__ import absolute_import, print_function, unicode_literals
import base64
import logging
import os
import requests

import salt.crypt
@@ -133,14 +132,10 @@ def _get_vault_connection():
        return _get_token_and_url_from_master()


def make_request(method, resource, profile=None, token=None, vault_url=None, get_token_url=False, **args):
def make_request(method, resource, token=None, vault_url=None, get_token_url=False, **args):
    '''
    Make a request to Vault
    '''
    if profile is not None and profile.keys().remove('driver') is not None:
        # Deprecated code path
        return make_request_with_profile(method, resource, profile, **args)

    if not token or not vault_url:
        connection = _get_vault_connection()
        token, vault_url = connection['token'], connection['url']
@@ -157,34 +152,6 @@ def make_request(method, resource, profile=None, token=None, vault_url=None, get
    return response


def make_request_with_profile(method, resource, profile, **args):
    '''
    DEPRECATED! Make a request to Vault, with a profile including connection
    details.
    '''
    salt.utils.versions.warn_until(
        'Fluorine',
        'Specifying Vault connection data within a \'profile\' has been '
        'deprecated. Please see the documentation for details on the new '
        'configuration schema. Support for this function will be removed '
        'in Salt Fluorine.'
    )
    url = '{0}://{1}:{2}/v1/{3}'.format(
        profile.get('vault.scheme', 'https'),
        profile.get('vault.host'),
        profile.get('vault.port'),
        resource,
    )
    token = os.environ.get('VAULT_TOKEN', profile.get('vault.token'))
    if token is None:
        raise salt.exceptions.CommandExecutionError('A token was not configured')

    headers = {'X-Vault-Token': token, 'Content-Type': 'application/json'}
    response = requests.request(method, url, headers=headers, **args)

    return response


def _selftoken_expired():
    '''
    Validate the current token exists and is still valid
@@ -208,9 +208,7 @@ def _command(source, command, flags=None, opts=None,

    '''
    # NOTE: start with the zfs binary and command
    cmd = []
    cmd.append(_zpool_cmd() if source == 'zpool' else _zfs_cmd())
    cmd.append(command)
    cmd = [_zpool_cmd() if source == 'zpool' else _zfs_cmd(), command]

    # NOTE: append flags if we have any
    if flags is None:
@@ -82,16 +82,19 @@ cp "$busybox" "$rootfsDir/bin/busybox"
unset IFS

for module in "${modules[@]}"; do
	# Don't stomp on the busybox binary (newer busybox releases
	# include busybox in the --list-modules output)
	test "$module" == "bin/busybox" && continue
	mkdir -p "$(dirname "$module")"
	ln -sf /bin/busybox "$module"
done
# Make sure the image has the needed files to make users work
mkdir etc
echo "$etc_passwd" >etc/passwd
echo "$etc_group" >etc/group
echo "$etc_shadow" >etc/shadow
# Import the image
tar --numeric-owner -cf- . | docker import --change "CMD sleep 300" - "$imageName"
docker run --rm -i "$imageName" /bin/true
exit $?
	# Make sure the image has the needed files to make users work
	mkdir etc
	echo "$etc_passwd" >etc/passwd
	echo "$etc_group" >etc/group
	echo "$etc_shadow" >etc/shadow
	# Import the image
	tar --numeric-owner -cf- . | docker import --change "CMD sleep 300" - "$imageName"
	docker run --rm -i "$imageName" /bin/true
	exit $?
)
@@ -683,7 +683,6 @@ class DockerContainerTestCase(ModuleCase, SaltReturnAssertsMixin):
        self.assertEqual(ret['comment'], expected)

        # Update the SLS configuration to remove the last network
        log.critical('networks = %s', kwargs['networks'])
        kwargs['networks'].pop(-1)
        ret = self.run_state('docker_container.running', **kwargs)
        self.assertSaltTrueReturn(ret)
@@ -792,6 +791,65 @@ class DockerContainerTestCase(ModuleCase, SaltReturnAssertsMixin):
    def test_running_mixed_ipv4_and_ipv6(self, container_name, *nets):
        self._test_running(container_name, *nets)

    @with_network(subnet='10.247.197.96/27', create=True)
    @container_name
    def test_running_explicit_networks(self, container_name, net):
        '''
        Ensure that if we use an explicit network configuration, we remove any
        default networks not specified (e.g. the default "bridge" network).
        '''
        # Create a container with no specific network configuration. The only
        # networks connected will be the default ones.
        ret = self.run_state(
            'docker_container.running',
            name=container_name,
            image=self.image,
            shutdown_timeout=1)
        self.assertSaltTrueReturn(ret)

        inspect_result = self.run_function('docker.inspect_container',
                                           [container_name])
        # Get the default network names
        default_networks = list(inspect_result['NetworkSettings']['Networks'])

        # Re-run the state with an explicit network configuration. All of the
        # default networks should be disconnected.
        ret = self.run_state(
            'docker_container.running',
            name=container_name,
            image=self.image,
            networks=[net.name],
            shutdown_timeout=1)
        self.assertSaltTrueReturn(ret)
        ret = ret[next(iter(ret))]
        net_changes = ret['changes']['container']['Networks']

        self.assertIn(
            "Container '{0}' is already configured as specified.".format(
                container_name
            ),
            ret['comment']
        )

        updated_networks = self.run_function(
            'docker.inspect_container',
            [container_name])['NetworkSettings']['Networks']

        for default_network in default_networks:
            self.assertIn(
                "Disconnected from network '{0}'.".format(default_network),
                ret['comment']
            )
            self.assertIn(default_network, net_changes)
            # We've tested that the state return is correct, but let's be extra
            # paranoid and check the actual connected networks.
            self.assertNotIn(default_network, updated_networks)

        self.assertIn(
            "Connected to network '{0}'.".format(net.name),
            ret['comment']
        )

    @container_name
    def test_run_with_onlyif(self, name):
        '''
@@ -872,24 +872,21 @@ SwapTotal: 4789244 kB'''
        self.assertEqual(get_dns, ret)

    @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
    @patch.object(salt.utils, 'is_windows', MagicMock(return_value=False))
    @patch('salt.utils.network.ip_addrs', MagicMock(return_value=['1.2.3.4', '5.6.7.8']))
    @patch('salt.utils.network.ip_addrs6',
           MagicMock(return_value=['fe80::a8b2:93ff:fe00:0', 'fe80::a8b2:93ff:dead:beef']))
    @patch('salt.utils.network.socket.getfqdn', MagicMock(side_effect=lambda v: v))  # Just pass-through
    def test_fqdns_return(self):
        '''
        test the return for a dns grain. test for issue:
        https://github.com/saltstack/salt/issues/41230
        '''
        reverse_resolv_mock = [('foo.bar.baz', [], ['1.2.3.4']),
                               ('rinzler.evil-corp.com', [], ['5.6.7.8']),
                               ('foo.bar.baz', [], ['fe80::a8b2:93ff:fe00:0']),
                               ('bluesniff.foo.bar', [], ['fe80::a8b2:93ff:dead:beef'])]
                               ('rinzler.evil-corp.com', [], ['5.6.7.8']),
                               ('foo.bar.baz', [], ['fe80::a8b2:93ff:fe00:0']),
                               ('bluesniff.foo.bar', [], ['fe80::a8b2:93ff:dead:beef'])]
        ret = {'fqdns': ['bluesniff.foo.bar', 'foo.bar.baz', 'rinzler.evil-corp.com']}
        self._run_fqdns_test(reverse_resolv_mock, ret)

    def _run_fqdns_test(self, reverse_resolv_mock, ret):
        with patch.object(salt.utils, 'is_windows', MagicMock(return_value=False)):
            with patch('salt.utils.network.ip_addrs',
                       MagicMock(return_value=['1.2.3.4', '5.6.7.8'])),\
                    patch('salt.utils.network.ip_addrs6',
                          MagicMock(return_value=['fe80::a8b2:93ff:fe00:0', 'fe80::a8b2:93ff:dead:beef'])):
                with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock):
                    fqdns = core.fqdns()
                    self.assertEqual(fqdns, ret)
        with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock):
            fqdns = core.fqdns()
            self.assertEqual(fqdns, ret)
@@ -789,6 +789,7 @@ class CronTestCase(TestCase, LoaderModuleMockMixin):
            cron.raw_cron(STUB_USER)
            cron.__salt__['cmd.run_stdout'].assert_called_with("crontab -l",
                                                               runas=STUB_USER,
                                                               ignore_retcode=True,
                                                               rstrip=False,
                                                               python_shell=False)

@@ -803,6 +804,7 @@ class CronTestCase(TestCase, LoaderModuleMockMixin):
                           MagicMock(return_value=False)):
            cron.raw_cron(STUB_USER)
            cron.__salt__['cmd.run_stdout'].assert_called_with("crontab -u root -l",
                                                               ignore_retcode=True,
                                                               rstrip=False,
                                                               python_shell=False)

@@ -818,6 +820,7 @@ class CronTestCase(TestCase, LoaderModuleMockMixin):
            cron.raw_cron(STUB_USER)
            cron.__salt__['cmd.run_stdout'].assert_called_with("crontab -l",
                                                               runas=STUB_USER,
                                                               ignore_retcode=True,
                                                               rstrip=False,
                                                               python_shell=False)

@@ -833,6 +836,7 @@ class CronTestCase(TestCase, LoaderModuleMockMixin):
            cron.raw_cron(STUB_USER)
            cron.__salt__['cmd.run_stdout'].assert_called_with("crontab -l",
                                                               runas=STUB_USER,
                                                               ignore_retcode=True,
                                                               rstrip=False,
                                                               python_shell=False)

@@ -848,6 +852,7 @@ class CronTestCase(TestCase, LoaderModuleMockMixin):
            cron.raw_cron(STUB_USER)
            cron.__salt__['cmd.run_stdout'].assert_called_with("crontab -l",
                                                               runas=STUB_USER,
                                                               ignore_retcode=True,
                                                               rstrip=False,
                                                               python_shell=False)

@@ -863,6 +868,7 @@ class CronTestCase(TestCase, LoaderModuleMockMixin):
            cron.raw_cron(STUB_USER)
            cron.__salt__['cmd.run_stdout'].assert_called_with("crontab -l",
                                                               runas=STUB_USER,
                                                               ignore_retcode=True,
                                                               rstrip=False,
                                                               python_shell=False)
@@ -507,3 +507,35 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin):
        nic = nics[list(nics)[0]]
        self.assertEqual('bridge', nic['type'])
        self.assertEqual('ac:de:48:b6:8b:59', nic['mac'])

    def test_network(self):
        xml_data = virt._gen_net_xml('network', 'main', 'bridge', 'openvswitch')
        root = ET.fromstring(xml_data)
        self.assertEqual(root.find('name').text, 'network')
        self.assertEqual(root.find('bridge').attrib['name'], 'main')
        self.assertEqual(root.find('forward').attrib['mode'], 'bridge')
        self.assertEqual(root.find('virtualport').attrib['type'], 'openvswitch')

    def test_network_tag(self):
        xml_data = virt._gen_net_xml('network', 'main', 'bridge', 'openvswitch', 1001)
        root = ET.fromstring(xml_data)
        self.assertEqual(root.find('name').text, 'network')
        self.assertEqual(root.find('bridge').attrib['name'], 'main')
        self.assertEqual(root.find('forward').attrib['mode'], 'bridge')
        self.assertEqual(root.find('virtualport').attrib['type'], 'openvswitch')
        self.assertEqual(root.find('vlan/tag').attrib['id'], '1001')

    def test_pool(self):
        xml_data = virt._gen_pool_xml('pool', 'logical', 'base')
        root = ET.fromstring(xml_data)
        self.assertEqual(root.find('name').text, 'pool')
        self.assertEqual(root.attrib['type'], 'logical')
        self.assertEqual(root.find('target/path').text, '/dev/base')

    def test_pool_with_source(self):
        xml_data = virt._gen_pool_xml('pool', 'logical', 'base', 'sda')
        root = ET.fromstring(xml_data)
        self.assertEqual(root.find('name').text, 'pool')
        self.assertEqual(root.attrib['type'], 'logical')
        self.assertEqual(root.find('target/path').text, '/dev/base')
        self.assertEqual(root.find('source/device').attrib['path'], '/dev/sda')
@@ -44,12 +44,14 @@ class ServerdensityDeviceTestCase(TestCase, LoaderModuleMockMixin):
        mock_t = MagicMock(side_effect=[True, {'agentKey': True},
                                        [{'agentKey': True}]])
        mock_sd = MagicMock(side_effect=[['sd-agent'], [], True])
        with patch.dict(serverdensity_device.__salt__,
                        {'status.all_status': mock_dict,
                         'grains.items': mock_dict,
                         'serverdensity_device.ls': mock_t,
                         'pkg.list_pkgs': mock_sd,
                         'serverdensity_device.install_agent': mock_sd}):
        with patch.multiple(serverdensity_device,
                            __salt__={'status.all_status': mock_dict,
                                      'grains.items': mock_dict,
                                      'serverdensity_device.ls': mock_t,
                                      'pkg.list_pkgs': mock_sd,
                                      'serverdensity_device.install_agent': mock_sd},
                            __opts__={'test': False},
                            ):
            comt = ('Such server name already exists in this'
                    ' Server Density account. And sd-agent is installed')
            ret.update({'comment': comt})