Merge branch 'develop' into separate-key-dir-from-cache-dir

This commit is contained in:
Sol Kim 2017-10-24 11:00:19 +09:00 committed by GitHub
commit af2d6a0441
71 changed files with 1375 additions and 448 deletions


@ -12,4 +12,10 @@ Remove this section if not relevant
Yes/No
### Commits signed with GPG?
Yes/No
Please review [Salt's Contributing Guide](https://docs.saltstack.com/en/latest/topics/development/contributing.html) for best practices.
See GitHub's [page on GPG signing](https://help.github.com/articles/signing-commits-using-gpg/) for more information about signing commits with GPG.

.github/stale.yml

@ -1,8 +1,8 @@
# Probot Stale configuration file
# Number of days of inactivity before an issue becomes stale
# 950 is approximately 2 years and 7 months
daysUntilStale: 950
# 925 is approximately 2 years and 6 months
daysUntilStale: 925
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 7


@ -299,6 +299,7 @@ execution modules
openstack_mng
openvswitch
opkg
opsgenie
oracle
osquery
out


@ -0,0 +1,6 @@
=====================
salt.modules.opsgenie
=====================
.. automodule:: salt.modules.opsgenie
:members:


@ -188,6 +188,7 @@ state modules
openstack_config
openvswitch_bridge
openvswitch_port
opsgenie
pagerduty
pagerduty_escalation_policy
pagerduty_schedule


@ -0,0 +1,6 @@
=====================
salt.states.opsgenie
=====================
.. automodule:: salt.states.opsgenie
:members:


@ -1,3 +1,5 @@
.. _misc-salt-cloud-options:
================================
Miscellaneous Salt Cloud Options
================================


@ -4,7 +4,7 @@
Getting Started With Saltify
============================
The Saltify driver is a new, experimental driver for installing Salt on existing
The Saltify driver is a driver for installing Salt on existing
machines (virtual or bare metal).
@ -37,16 +37,25 @@ the role usually performed by a vendor's cloud management system. The salt master
must be running on the salt-cloud machine, and created nodes must be connected to the
master.
Additional information about which configuration options apply to which actions
can be studied in the
:ref:`Saltify Module documentation <saltify-module>`
and the
:ref:`Miscellaneous Salt Cloud Options <misc-salt-cloud-options>`
document.
Profiles
========
Saltify requires a profile to be configured for each machine that needs Salt
installed. The initial profile can be set up at ``/etc/salt/cloud.profiles``
Saltify requires a separate profile to be configured for each machine that
needs Salt installed [#]_. The initial profile can be set up at
``/etc/salt/cloud.profiles``
or in the ``/etc/salt/cloud.profiles.d/`` directory. Each profile requires
both an ``ssh_host`` and an ``ssh_username`` key parameter as well as either
a ``key_filename`` or a ``password``.
.. [#] Unless you are using a map file to provide the unique parameters.
Profile configuration example:
.. code-block:: yaml
@ -68,40 +77,78 @@ The machine can now be "Salted" with the following command:
This will install salt on the machine specified by the cloud profile,
``salt-this-machine``, and will give the machine the minion id of
``my-machine``. If the command was executed on the salt-master, its Salt
key will automatically be signed on the master.
key will automatically be accepted by the master.
Once a salt-minion has been successfully installed on the instance, connectivity
to it can be verified with Salt:
.. code-block:: bash
salt my-machine test.ping
salt my-machine test.version
Destroy Options
---------------
.. versionadded:: Oxygen
For obvious reasons, the ``destroy`` action does not actually vaporize hardware.
If the salt master is connected using salt-api, it can tear down parts of
the client machines. It will remove the client's key from the salt master,
and will attempt the following options:
If the salt master is connected, it can tear down parts of the client machines.
It will remove the client's key from the salt master,
and can execute the following options:
.. code-block:: yaml
- remove_config_on_destroy: true
# default: true
# Deactivate salt-minion on reboot and
# delete the minion config and key files from its ``/etc/salt`` directory,
# NOTE: If deactivation is unsuccessful (older Ubuntu machines) then when
# delete the minion config and key files from its "/etc/salt" directory,
# NOTE: If deactivation was unsuccessful (older Ubuntu machines) then when
# salt-minion restarts it will automatically create a new, unwanted, set
# of key files. The ``force_minion_config`` option must be used in that case.
# of key files. Use the "force_minion_config" option to replace them.
- shutdown_on_destroy: false
# default: false
# send a ``shutdown`` command to the client.
# last of all, send a "shutdown" command to the client.
Wake On LAN
-----------
.. versionadded:: Oxygen
In addition to connecting a hardware machine to a Salt master,
you have the option of sending a wake-on-LAN
`magic packet`_
to start that machine running.
.. _magic packet: https://en.wikipedia.org/wiki/Wake-on-LAN
The "magic packet" must be sent by an existing salt minion which is on
the same network segment as the target machine. (Or your router
must be set up especially to route WoL packets.) Your target machine
must be set up to listen for WoL and to respond appropriately.
You must provide the Salt node id of the machine which will send
the WoL packet (parameter ``wol_sender_node``), and
the hardware MAC address of the machine you intend to wake
(parameter ``wake_on_lan_mac``). If both parameters are defined,
the WoL will be sent. The cloud master will then sleep a while
(parameter ``wol_boot_wait``) to give the target machine time to
boot up before we start probing its SSH port to begin deploying
Salt to it. The default sleep time is 30 seconds.
.. code-block:: yaml
# /etc/salt/cloud.profiles.d/saltify.conf
salt-this-machine:
ssh_host: 12.34.56.78
ssh_username: root
key_filename: '/etc/salt/mysshkey.pem'
provider: my-saltify-config
wake_on_lan_mac: '00:e0:4c:70:2a:b2' # found with ifconfig
wol_sender_node: bevymaster # it's on this network segment
wol_boot_wait: 45 # seconds to sleep
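As a rough illustration of what the minion's ``network.wol`` function transmits (a sketch, not Salt's actual implementation): the magic packet is six ``0xFF`` bytes followed by the target's 6-byte MAC repeated sixteen times, broadcast over UDP.

```python
import socket

def build_magic_packet(mac):
    # 6 bytes of 0xFF, then the 6-byte MAC repeated 16 times (102 bytes total)
    mac_bytes = bytes.fromhex(mac.replace(':', '').replace('-', ''))
    if len(mac_bytes) != 6:
        raise ValueError('expected a 6-byte MAC address')
    return b'\xff' * 6 + mac_bytes * 16

def send_magic_packet(mac, broadcast='255.255.255.255', port=9):
    # Broadcast the packet; WoL is a link-local mechanism, so the sender
    # must sit on the target's network segment (or the router must forward
    # WoL traffic), which is why wol_sender_node names a minion there.
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
```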
Using Map Files
---------------
The settings explained in the section above may also be set in a map file. An


@ -199,13 +199,42 @@ class Beacon(object):
else:
self.opts['beacons'][name].append({'enabled': enabled_value})
def list_beacons(self):
def _get_beacons(self,
include_opts=True,
include_pillar=True):
'''
Return the beacons data structure
'''
beacons = {}
if include_pillar:
pillar_beacons = self.opts.get('pillar', {}).get('beacons', {})
if not isinstance(pillar_beacons, dict):
raise ValueError('Beacons must be of type dict.')
beacons.update(pillar_beacons)
if include_opts:
opts_beacons = self.opts.get('beacons', {})
if not isinstance(opts_beacons, dict):
raise ValueError('Beacons must be of type dict.')
beacons.update(opts_beacons)
return beacons
def list_beacons(self,
include_pillar=True,
include_opts=True):
'''
List the beacon items
include_pillar: Whether to include beacons that are
configured in pillar, default is True.
include_opts: Whether to include beacons that are
configured in opts, default is True.
'''
beacons = self._get_beacons(include_pillar, include_opts)
# Fire the complete event back along with the list of beacons
evt = salt.utils.event.get_event('minion', opts=self.opts)
evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
evt.fire_event({'complete': True, 'beacons': beacons},
tag='/salt/minion/minion_beacons_list_complete')
return True
@ -236,8 +265,8 @@ class Beacon(object):
del beacon_data['enabled']
valid, vcomment = self.beacons[validate_str](beacon_data)
else:
log.info('Beacon %s does not have a validate'
' function, skipping validation.', name)
vcomment = 'Beacon {0} does not have a validate' \
' function, skipping validation.'.format(name)
valid = True
# Fire the complete event back along with the list of beacons
@ -257,16 +286,23 @@ class Beacon(object):
data = {}
data[name] = beacon_data
if name in self.opts['beacons']:
log.info('Updating settings for beacon '
'item: %s', name)
if name in self._get_beacons(include_opts=False):
comment = 'Cannot update beacon item {0}, ' \
'because it is configured in pillar.'.format(name)
complete = False
else:
log.info('Added new beacon item %s', name)
if name in self.opts['beacons']:
comment = 'Updating settings for beacon ' \
'item: {0}'.format(name)
else:
comment = 'Added new beacon item: {0}'.format(name)
complete = True
self.opts['beacons'].update(data)
# Fire the complete event back along with updated list of beacons
evt = salt.utils.event.get_event('minion', opts=self.opts)
evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
evt.fire_event({'complete': complete, 'comment': comment,
'beacons': self.opts['beacons']},
tag='/salt/minion/minion_beacon_add_complete')
return True
@ -279,13 +315,20 @@ class Beacon(object):
data = {}
data[name] = beacon_data
log.info('Updating settings for beacon '
'item: %s', name)
if name in self._get_beacons(include_opts=False):
comment = 'Cannot modify beacon item {0}, ' \
'it is configured in pillar.'.format(name)
complete = False
else:
comment = 'Updating settings for beacon ' \
'item: {0}'.format(name)
complete = True
self.opts['beacons'].update(data)
# Fire the complete event back along with updated list of beacons
evt = salt.utils.event.get_event('minion', opts=self.opts)
evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
evt.fire_event({'complete': complete, 'comment': comment,
'beacons': self.opts['beacons']},
tag='/salt/minion/minion_beacon_modify_complete')
return True
@ -295,13 +338,22 @@ class Beacon(object):
Delete a beacon item
'''
if name in self._get_beacons(include_opts=False):
comment = 'Cannot delete beacon item {0}, ' \
'it is configured in pillar.'.format(name)
complete = False
else:
if name in self.opts['beacons']:
log.info('Deleting beacon item %s', name)
del self.opts['beacons'][name]
comment = 'Deleting beacon item: {0}'.format(name)
else:
comment = 'Beacon item {0} not found.'.format(name)
complete = True
# Fire the complete event back along with updated list of beacons
evt = salt.utils.event.get_event('minion', opts=self.opts)
evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
evt.fire_event({'complete': complete, 'comment': comment,
'beacons': self.opts['beacons']},
tag='/salt/minion/minion_beacon_delete_complete')
return True
@ -339,11 +391,19 @@ class Beacon(object):
Enable a beacon
'''
if name in self._get_beacons(include_opts=False):
comment = 'Cannot enable beacon item {0}, ' \
'it is configured in pillar.'.format(name)
complete = False
else:
self._update_enabled(name, True)
comment = 'Enabling beacon item {0}'.format(name)
complete = True
# Fire the complete event back along with updated list of beacons
evt = salt.utils.event.get_event('minion', opts=self.opts)
evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
evt.fire_event({'complete': complete, 'comment': comment,
'beacons': self.opts['beacons']},
tag='/salt/minion/minion_beacon_enabled_complete')
return True
@ -353,11 +413,19 @@ class Beacon(object):
Disable a beacon
'''
if name in self._get_beacons(include_opts=False):
comment = 'Cannot disable beacon item {0}, ' \
'it is configured in pillar.'.format(name)
complete = False
else:
self._update_enabled(name, False)
comment = 'Disabling beacon item {0}'.format(name)
complete = True
# Fire the complete event back along with updated list of beacons
evt = salt.utils.event.get_event('minion', opts=self.opts)
evt.fire_event({'complete': True, 'beacons': self.opts['beacons']},
evt.fire_event({'complete': complete, 'comment': comment,
'beacons': self.opts['beacons']},
tag='/salt/minion/minion_beacon_disabled_complete')
return True
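The merge order in ``_get_beacons`` above (pillar applied first, then opts) means a beacon configured in both places takes its opts value. A minimal sketch with hypothetical beacon data:

```python
# Hypothetical beacon configs; names and thresholds are illustrative only.
pillar_beacons = {'load': [{'averages': {'1m': [0.0, 2.0]}}]}
opts_beacons = {'load': [{'averages': {'1m': [0.0, 4.0]}}],
                'diskusage': [{'/': '90%'}]}

beacons = {}
beacons.update(pillar_beacons)   # pillar applied first...
beacons.update(opts_beacons)     # ...then opts, shadowing duplicate names
```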


@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
'''
Send events covering service status
Send events covering process status
'''
# Import Python Libs


@ -3,6 +3,8 @@
Beacon to monitor temperature, humidity and pressure using the SenseHat
of a Raspberry Pi.
.. versionadded:: 2017.7.0
:maintainer: Benedikt Werner <1benediktwerner@gmail.com>
:maturity: new
:depends: sense_hat Python module


@ -1595,7 +1595,10 @@ class LocalClient(object):
timeout=timeout,
tgt=tgt,
tgt_type=tgt_type,
expect_minions=(verbose or show_timeout),
# (gtmanfred) expect_minions is popped here in case it is passed from a client
# call. If it is not popped, it would be passed twice to
# get_iter_returns.
expect_minions=(kwargs.pop('expect_minions', False) or verbose or show_timeout),
**kwargs
):
log.debug(u'return event: %s', ret)
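The ``kwargs.pop`` above avoids a duplicate-keyword ``TypeError``; a minimal reproduction with a hypothetical stand-in for ``get_iter_returns``:

```python
def get_returns(expect_minions=False, **kwargs):
    # stand-in callee; only the signature shape matters here
    return expect_minions

kwargs = {'expect_minions': True, 'timeout': 5}

# Without the pop, expect_minions would be passed both explicitly and
# via **kwargs, raising TypeError. Popping merges the caller's value:
result = get_returns(
    expect_minions=(kwargs.pop('expect_minions', False) or False),
    **kwargs)
```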


@ -4572,6 +4572,7 @@ def _list_nodes(full=False):
pass
vms[name]['id'] = vm.find('ID').text
if vm.find('TEMPLATE').find('TEMPLATE_ID') is not None:
vms[name]['image'] = vm.find('TEMPLATE').find('TEMPLATE_ID').text
vms[name]['name'] = name
vms[name]['size'] = {'cpu': cpu_size, 'memory': memory_size}


@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
'''
.. _`saltify-module`:
Saltify Module
==============
@ -7,6 +9,9 @@ The Saltify module is designed to install Salt on a remote machine, virtual or
bare metal, using SSH. This module is useful for provisioning machines which
are already installed, but not Salted.
.. versionchanged:: Oxygen
Added the wake_on_lan capability and the destroy, reboot, and query actions.
Use of this module requires some configuration in cloud profile and provider
files as described in the
:ref:`Getting Started with Saltify <getting-started-with-saltify>` documentation.
@ -15,6 +20,7 @@ files as described in the
# Import python libs
from __future__ import absolute_import
import logging
import time
# Import salt libs
import salt.utils.cloud
@ -210,12 +216,61 @@ def show_instance(name, call=None):
def create(vm_):
'''
Provision a single machine
if configuration parameter ``deploy`` is ``True``,
provision a single machine, adding its keys to the salt master;
else,
test the ssh connection to the machine.
Configuration parameters:
- deploy: (see above)
- provider: name of entry in ``salt/cloud.providers.d/???`` file
- ssh_host: IP address or DNS name of the new machine
- ssh_username: name used to log in to the new machine
- ssh_password: password to log in (unless key_filename is used)
- key_filename: (optional) SSH private key for passwordless login
- ssh_port: (default=22) TCP port for SSH connection
- wake_on_lan_mac: (optional) hardware (MAC) address for wake on lan
- wol_sender_node: (optional) salt minion to send wake on lan command
- wol_boot_wait: (default=30) seconds to delay while client boots
- force_minion_config: (optional) replace the minion configuration files on the new machine
See also
:ref:`Miscellaneous Salt Cloud Options <misc-salt-cloud-options>`
and
:ref:`Getting Started with Saltify <getting-started-with-saltify>`
CLI Example:
.. code-block:: bash
salt-cloud -p mymachine my_new_id
'''
deploy_config = config.get_cloud_config_value(
'deploy', vm_, __opts__, default=False)
if deploy_config:
wol_mac = config.get_cloud_config_value(
'wake_on_lan_mac', vm_, __opts__, default='')
wol_host = config.get_cloud_config_value(
'wol_sender_node', vm_, __opts__, default='')
if wol_mac and wol_host:
log.info('sending wake-on-lan to %s using node %s',
wol_mac, wol_host)
local = salt.client.LocalClient()
if isinstance(wol_mac, six.string_types):
wol_mac = [wol_mac] # a smart user may have passed more params
ret = local.cmd(wol_host, 'network.wol', wol_mac)
log.info('network.wol returned value %s', ret)
if ret and ret[wol_host]:
sleep_time = config.get_cloud_config_value(
'wol_boot_wait', vm_, __opts__, default=30)
if sleep_time > 0.0:
log.info('delaying %d seconds for boot', sleep_time)
time.sleep(sleep_time)
log.info('Provisioning existing machine %s', vm_['name'])
ret = __utils__['cloud.bootstrap'](vm_, __opts__)
else:


@ -8,7 +8,7 @@ Salt minion.
Use of this module requires some configuration in cloud profile and provider
files as described in the
:ref:`Gettting Started with Vagrant <getting-started-with-vagrant>` documentation.
:ref:`Getting Started with Vagrant <getting-started-with-vagrant>` documentation.
.. versionadded:: Oxygen


@ -704,7 +704,7 @@ def _manage_devices(devices, vm=None, container_ref=None, new_vm_name=None):
network_name = devices['network'][device.deviceInfo.label]['name']
adapter_type = devices['network'][device.deviceInfo.label]['adapter_type'] if 'adapter_type' in devices['network'][device.deviceInfo.label] else ''
switch_type = devices['network'][device.deviceInfo.label]['switch_type'] if 'switch_type' in devices['network'][device.deviceInfo.label] else ''
network_spec = _edit_existing_network_adapter(device, network_name, adapter_type, switch_type)
network_spec = _edit_existing_network_adapter(device, network_name, adapter_type, switch_type, container_ref)
adapter_mapping = _set_network_adapter_mapping(devices['network'][device.deviceInfo.label])
device_specs.append(network_spec)
nics_map.append(adapter_mapping)
@ -2578,7 +2578,7 @@ def create(vm_):
config_spec.memoryMB = memory_mb
if devices:
specs = _manage_devices(devices, vm=object_ref, new_vm_name=vm_name)
specs = _manage_devices(devices, vm=object_ref, container_ref=container_ref, new_vm_name=vm_name)
config_spec.deviceChange = specs['device_specs']
if extra_config:


@ -1410,7 +1410,10 @@ def os_data():
.format(' '.join(init_cmdline))
)
# Add lsb grains on any distro with lsb-release
# Add lsb grains on any distro with lsb-release. Note that this import
# can fail on systems with lsb-release installed if the system package
# does not install the python package for the python interpreter used by
# Salt (i.e. python2 or python3)
try:
import lsb_release # pylint: disable=import-error
release = lsb_release.get_distro_information()
@ -1459,7 +1462,13 @@ def os_data():
if 'VERSION_ID' in os_release:
grains['lsb_distrib_release'] = os_release['VERSION_ID']
if 'PRETTY_NAME' in os_release:
grains['lsb_distrib_codename'] = os_release['PRETTY_NAME']
codename = os_release['PRETTY_NAME']
# https://github.com/saltstack/salt/issues/44108
if os_release.get('ID') == 'debian':
codename_match = re.search(r'\((\w+)\)$', codename)
if codename_match:
codename = codename_match.group(1)
grains['lsb_distrib_codename'] = codename
if 'CPE_NAME' in os_release:
if ":suse:" in os_release['CPE_NAME'] or ":opensuse:" in os_release['CPE_NAME']:
grains['os'] = "SUSE"
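The regex fix above extracts the parenthesized codename that Debian appends to ``PRETTY_NAME``; a quick check against a representative value:

```python
import re

pretty_name = 'Debian GNU/Linux 9 (stretch)'  # representative PRETTY_NAME
codename = pretty_name
# Same pattern as above: capture the word inside trailing parentheses
codename_match = re.search(r'\((\w+)\)$', codename)
if codename_match:
    codename = codename_match.group(1)
```

A ``PRETTY_NAME`` without a trailing parenthesized codename is left unchanged, matching the non-Debian behavior.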


@ -2063,6 +2063,8 @@ class Minion(MinionBase):
func = data.get(u'func', None)
name = data.get(u'name', None)
beacon_data = data.get(u'beacon_data', None)
include_pillar = data.get(u'include_pillar', None)
include_opts = data.get(u'include_opts', None)
if func == u'add':
self.beacons.add_beacon(name, beacon_data)
@ -2079,7 +2081,7 @@ class Minion(MinionBase):
elif func == u'disable_beacon':
self.beacons.disable_beacon(name)
elif func == u'list':
self.beacons.list_beacons()
self.beacons.list_beacons(include_opts, include_pillar)
elif func == u'list_available':
self.beacons.list_available_beacons()
elif func == u'validate_beacon':


@ -29,7 +29,6 @@ import json
import yaml
# pylint: disable=no-name-in-module,import-error,redefined-builtin
from salt.ext import six
from salt.ext.six.moves import range
from salt.ext.six.moves.urllib.error import HTTPError
from salt.ext.six.moves.urllib.request import Request as _Request, urlopen as _urlopen
# pylint: enable=no-name-in-module,import-error,redefined-builtin
@ -1610,7 +1609,7 @@ def _consolidate_repo_sources(sources):
combined_comps = set(repo.comps).union(set(combined.comps))
consolidated[key].comps = list(combined_comps)
else:
consolidated[key] = sourceslist.SourceEntry(_strip_uri(repo.line))
consolidated[key] = sourceslist.SourceEntry(salt.utils.pkg.deb.strip_uri(repo.line))
if repo.file != base_file:
delete_files.add(repo.file)
@ -1718,7 +1717,7 @@ def list_repos():
repo['dist'] = source.dist
repo['type'] = source.type
repo['uri'] = source.uri.rstrip('/')
repo['line'] = _strip_uri(source.line.strip())
repo['line'] = salt.utils.pkg.deb.strip_uri(source.line.strip())
repo['architectures'] = getattr(source, 'architectures', [])
repos.setdefault(source.uri, []).append(repo)
return repos
@ -2477,18 +2476,6 @@ def file_dict(*packages):
return __salt__['lowpkg.file_dict'](*packages)
def _strip_uri(repo):
'''
Remove the trailing slash from the URI in a repo definition
'''
splits = repo.split()
for idx in range(len(splits)):
if any(splits[idx].startswith(x)
for x in ('http://', 'https://', 'ftp://')):
splits[idx] = splits[idx].rstrip('/')
return ' '.join(splits)
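The helper removed here was relocated to ``salt.utils.pkg.deb.strip_uri``; its behavior on a typical sources.list line looks like this (the repo URL is illustrative, and this sketch mirrors the removed code rather than quoting the new module):

```python
def strip_uri(repo):
    # Drop the trailing slash from any URI component of a repo line,
    # leaving every other token untouched.
    splits = repo.split()
    for idx, word in enumerate(splits):
        if word.startswith(('http://', 'https://', 'ftp://')):
            splits[idx] = word.rstrip('/')
    return ' '.join(splits)
```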
def expand_repo_def(**kwargs):
'''
Take a repository definition and expand it to the full pkg repository dict
@ -2504,7 +2491,7 @@ def expand_repo_def(**kwargs):
_check_apt()
sanitized = {}
repo = _strip_uri(kwargs['repo'])
repo = salt.utils.pkg.deb.strip_uri(kwargs['repo'])
if repo.startswith('ppa:') and __grains__['os'] in ('Ubuntu', 'Mint', 'neon'):
dist = __grains__['lsb_distrib_codename']
owner_name, ppa_name = repo[4:].split('/', 1)


@ -28,11 +28,21 @@ __func_alias__ = {
}
def list_(return_yaml=True):
def list_(return_yaml=True,
include_pillar=True,
include_opts=True):
'''
List the beacons currently configured on the minion
:param return_yaml: Whether to return YAML formatted output, default True
:param return_yaml: Whether to return YAML formatted output,
default True
:param include_pillar: Whether to include beacons that are
configured in pillar, default is True.
:param include_opts: Whether to include beacons that are
configured in opts, default is True.
:return: List of currently configured Beacons.
CLI Example:
@ -46,7 +56,10 @@ def list_(return_yaml=True):
try:
eventer = salt.utils.event.get_event('minion', opts=__opts__)
res = __salt__['event.fire']({'func': 'list'}, 'manage_beacons')
res = __salt__['event.fire']({'func': 'list',
'include_pillar': include_pillar,
'include_opts': include_opts},
'manage_beacons')
if res:
event_ret = eventer.get_event(tag='/salt/minion/minion_beacons_list_complete', wait=30)
log.debug('event_ret {0}'.format(event_ret))
@ -133,6 +146,10 @@ def add(name, beacon_data, **kwargs):
ret['comment'] = 'Beacon {0} is already configured.'.format(name)
return ret
if name not in list_available(return_yaml=False):
ret['comment'] = 'Beacon "{0}" is not available.'.format(name)
return ret
if 'test' in kwargs and kwargs['test']:
ret['result'] = True
ret['comment'] = 'Beacon: {0} would be added.'.format(name)
@ -170,6 +187,9 @@ def add(name, beacon_data, **kwargs):
if name in beacons and beacons[name] == beacon_data:
ret['result'] = True
ret['comment'] = 'Added beacon: {0}.'.format(name)
else:
ret['result'] = False
ret['comment'] = event_ret['comment']
return ret
except KeyError:
# Effectively a no-op, since we can't really return without an event system
@ -262,6 +282,9 @@ def modify(name, beacon_data, **kwargs):
if name in beacons and beacons[name] == beacon_data:
ret['result'] = True
ret['comment'] = 'Modified beacon: {0}.'.format(name)
else:
ret['result'] = False
ret['comment'] = event_ret['comment']
return ret
except KeyError:
# Effectively a no-op, since we can't really return without an event system
@ -299,12 +322,14 @@ def delete(name, **kwargs):
if res:
event_ret = eventer.get_event(tag='/salt/minion/minion_beacon_delete_complete', wait=30)
if event_ret and event_ret['complete']:
log.debug('== event_ret {} =='.format(event_ret))
beacons = event_ret['beacons']
if name not in beacons:
ret['result'] = True
ret['comment'] = 'Deleted beacon: {0}.'.format(name)
return ret
else:
ret['result'] = False
ret['comment'] = event_ret['comment']
except KeyError:
# Effectively a no-op, since we can't really return without an event system
ret['comment'] = 'Event module not available. Beacon add failed.'
@ -327,7 +352,7 @@ def save():
ret = {'comment': [],
'result': True}
beacons = list_(return_yaml=False)
beacons = list_(return_yaml=False, include_pillar=False)
# move this file into a configurable opt
sfn = '{0}/{1}/beacons.conf'.format(__opts__['config_dir'],
@ -483,6 +508,9 @@ def enable_beacon(name, **kwargs):
else:
ret['result'] = False
ret['comment'] = 'Failed to enable beacon {0} on minion.'.format(name)
else:
ret['result'] = False
ret['comment'] = event_ret['comment']
return ret
except KeyError:
# Effectively a no-op, since we can't really return without an event system
@ -536,6 +564,9 @@ def disable_beacon(name, **kwargs):
else:
ret['result'] = False
ret['comment'] = 'Failed to disable beacon on minion.'
else:
ret['result'] = False
ret['comment'] = event_ret['comment']
return ret
except KeyError:
# Effectively a no-op, since we can't really return without an event system


@ -51,6 +51,7 @@ import datetime
import logging
import json
import sys
import time
import email.mime.multipart
log = logging.getLogger(__name__)
@ -677,12 +678,24 @@ def get_scaling_policy_arn(as_group, scaling_policy_name, region=None,
salt '*' boto_asg.get_scaling_policy_arn mygroup mypolicy
'''
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
retries = 30
while retries > 0:
retries -= 1
try:
policies = conn.get_all_policies(as_group=as_group)
for policy in policies:
if policy.name == scaling_policy_name:
return policy.policy_arn
log.error('Could not convert: {0}'.format(as_group))
return None
except boto.exception.BotoServerError as e:
if e.error_code != 'Throttling':
raise
log.debug('Throttled by API, will retry in 5 seconds')
time.sleep(5)
log.error('Maximum number of retries exceeded')
return None
def get_all_groups(region=None, key=None, keyid=None, profile=None):
@ -763,11 +776,18 @@ def get_instances(name, lifecycle_state="InService", health_status="Healthy",
# get full instance info, so that we can return the attribute
instances = ec2_conn.get_only_instances(instance_ids=instance_ids)
if attributes:
return [[getattr(instance, attr).encode("ascii") for attr in attributes] for instance in instances]
return [[_convert_attribute(instance, attr) for attr in attributes] for instance in instances]
else:
# properly handle case when not all instances have the requested attribute
return [getattr(instance, attribute).encode("ascii") for instance in instances if getattr(instance, attribute)]
return [getattr(instance, attribute).encode("ascii") for instance in instances]
return [_convert_attribute(instance, attribute) for instance in instances if getattr(instance, attribute)]
def _convert_attribute(instance, attribute):
if attribute == "tags":
tags = dict(getattr(instance, attribute))
return {key.encode("utf-8"): value.encode("utf-8") for key, value in six.iteritems(tags)}
return getattr(instance, attribute).encode("ascii")
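The bounded retry loop added to ``get_scaling_policy_arn`` follows a generic throttle-retry pattern; a sketch with a stand-in exception class (boto's ``BotoServerError`` carries the error code being checked):

```python
import time

class ThrottleError(Exception):
    # stand-in for boto.exception.BotoServerError
    def __init__(self, error_code):
        super().__init__(error_code)
        self.error_code = error_code

def call_with_retries(func, retries=30, delay=5):
    while retries > 0:
        retries -= 1
        try:
            return func()
        except ThrottleError as exc:
            if exc.error_code != 'Throttling':
                raise  # only throttling is retried
            time.sleep(delay)  # back off before the next attempt
    return None  # retries exhausted
```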
def enter_standby(name, instance_ids, should_decrement_desired_capacity=False,


@ -154,7 +154,7 @@ def get_unassociated_eip_address(domain='standard', region=None, key=None,
Return the first unassociated EIP
domain
Indicates whether the address is a EC2 address or a VPC address
Indicates whether the address is an EC2 address or a VPC address
(standard|vpc).
CLI Example:
@ -771,9 +771,9 @@ def get_tags(instance_id=None, keyid=None, key=None, profile=None,
def exists(instance_id=None, name=None, tags=None, region=None, key=None,
keyid=None, profile=None, in_states=None, filters=None):
'''
Given a instance id, check to see if the given instance id exists.
Given an instance id, check to see if the given instance id exists.
Returns True if the given an instance with the given id, name, or tags
Returns True if the given instance with the given id, name, or tags
exists; otherwise, False is returned.
CLI Example:


@ -75,7 +75,7 @@ def __virtual__():
Only load if boto libraries exist.
'''
if not HAS_BOTO:
return (False, 'The modle boto_elasticache could not be loaded: boto libraries not found')
return (False, 'The module boto_elasticache could not be loaded: boto libraries not found')
__utils__['boto.assign_funcs'](__name__, 'elasticache', pack=__salt__)
return True


@ -661,7 +661,9 @@ def get_health_check(name, region=None, key=None, keyid=None, profile=None):
salt myminion boto_elb.get_health_check myelb
'''
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
retries = 30
while True:
try:
lb = conn.get_all_load_balancers(load_balancer_names=[name])
lb = lb[0]
@ -673,9 +675,14 @@ def get_health_check(name, region=None, key=None, keyid=None, profile=None):
ret['timeout'] = hc.timeout
ret['unhealthy_threshold'] = hc.unhealthy_threshold
return ret
except boto.exception.BotoServerError as error:
log.debug(error)
log.error('ELB {0} does not exist: {1}'.format(name, error))
except boto.exception.BotoServerError as e:
if retries and e.code == 'Throttling':
log.debug('Throttled by AWS API, will retry in 5 seconds.')
time.sleep(5)
retries -= 1
continue
log.error(e)
log.error('ELB {0} not found.'.format(name))
return {}
@ -691,16 +698,23 @@ def set_health_check(name, health_check, region=None, key=None, keyid=None,
salt myminion boto_elb.set_health_check myelb '{"target": "HTTP:80/"}'
'''
conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
retries = 30
hc = HealthCheck(**health_check)
while True:
try:
conn.configure_health_check(name, hc)
log.info('Configured health check on ELB {0}'.format(name))
except boto.exception.BotoServerError as error:
log.debug(error)
log.info('Failed to configure health check on ELB {0}: {1}'.format(name, error))
return False
return True
except boto.exception.BotoServerError as error:
if retries and error.code == 'Throttling':
log.debug('Throttled by AWS API, will retry in 5 seconds.')
time.sleep(5)
retries -= 1
continue
log.error(error)
log.error('Failed to configure health check on ELB {0}'.format(name))
return False
def register_instances(name, instances, region=None, key=None, keyid=None,


@ -763,7 +763,7 @@ def describe_vpcs(vpc_id=None, name=None, cidr=None, tags=None,
'''
Describe all VPCs, matching the filter criteria if provided.
Returns a a list of dictionaries with interesting properties.
Returns a list of dictionaries with interesting properties.
.. versionadded:: 2015.8.0


@ -219,7 +219,7 @@ def _connect(contact_points=None, port=None, cql_user=None, cql_pass=None,
# TODO: Call cluster.shutdown() when the module is unloaded on
# master/minion shutdown. Currently, Master.shutdown() and Minion.shutdown()
# do nothing to allow loaded modules to gracefully handle resources stored
# in __context__ (i.e. connection pools). This means that the the connection
# in __context__ (i.e. connection pools). This means that the connection
# pool is orphaned and Salt relies on Cassandra to reclaim connections.
# Perhaps if Master/Minion daemons could be enhanced to call an "__unload__"
# function, or something similar for each loaded module, connection pools
@ -430,7 +430,7 @@ def cql_query_with_prepare(query, statement_name, statement_arguments, async=Fal
values[key] = value
ret.append(values)
# If this was a synchronous call, then we either have a empty list
# If this was a synchronous call, then we either have an empty list
# because there was no return, or we have a return
# If this was an async call we only return the empty list
return ret


@ -2773,8 +2773,8 @@ def shell_info(shell, list_modules=False):
'''
regex_shells = {
'bash': [r'version (\d\S*)', 'bash', '--version'],
'bash-test-error': [r'versioZ ([-\w.]+)', 'bash', '--version'], # used to test a error result
'bash-test-env': [r'(HOME=.*)', 'bash', '-c', 'declare'], # used to test a error result
'bash-test-error': [r'versioZ ([-\w.]+)', 'bash', '--version'], # used to test an error result
'bash-test-env': [r'(HOME=.*)', 'bash', '-c', 'declare'], # used to test an error result
'zsh': [r'^zsh (\d\S*)', 'zsh', '--version'],
'tcsh': [r'^tcsh (\d\S*)', 'tcsh', '--version'],
'cmd': [r'Version ([\d.]+)', 'cmd.exe', '/C', 'ver'],


@ -1953,7 +1953,7 @@ def status_peers(consul_url):
:param consul_url: The Consul server URL.
:return: Retrieves the Raft peers for the
datacenter in which the the agent is running.
datacenter in which the agent is running.
CLI Example:

View File

@ -48,7 +48,7 @@ __virtualname__ = 'pkgbuild'
def __virtual__():
'''
Confirm this module is on a Debian based system, and has required utilities
Confirm this module is on a Debian-based system, and has required utilities
'''
if __grains__.get('os_family', False) in ('Kali', 'Debian'):
missing_util = False
@ -726,7 +726,7 @@ def make_repo(repodir,
if times_looped > number_retries:
raise SaltInvocationError(
'Attemping to sign file {0} failed, timed out after {1} seconds'
'Attempting to sign file {0} failed, timed out after {1} seconds'
.format(abs_file, int(times_looped * interval))
)
time.sleep(interval)
@ -770,7 +770,7 @@ def make_repo(repodir,
if times_looped > number_retries:
raise SaltInvocationError(
'Attemping to reprepro includedsc for file {0} failed, timed out after {1} loops'.format(abs_file, times_looped)
'Attempting to reprepro includedsc for file {0} failed, timed out after {1} loops'.format(abs_file, times_looped)
)
time.sleep(interval)

View File

@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
'''
The networking module for Debian based distros
The networking module for Debian-based distros
References:
@ -46,7 +46,7 @@ __virtualname__ = 'ip'
def __virtual__():
'''
Confine this module to Debian based distros
Confine this module to Debian-based distros
'''
if __grains__['os_family'] == 'Debian':
return __virtualname__
@ -1562,7 +1562,7 @@ def _read_temp_ifaces(iface, data):
return ''
ifcfg = template.render({'name': iface, 'data': data})
# Return as a array so the difflib works
# Return as an array so the difflib works
return [item + '\n' for item in ifcfg.split('\n')]
@ -1616,7 +1616,7 @@ def _write_file_ifaces(iface, data, **settings):
else:
fout.write(ifcfg)
# Return as a array so the difflib works
# Return as an array so the difflib works
return saved_ifcfg.split('\n')
@ -1646,7 +1646,7 @@ def _write_file_ppp_ifaces(iface, data):
with salt.utils.files.fopen(filename, 'w') as fout:
fout.write(ifcfg)
# Return as a array so the difflib works
# Return as an array so the difflib works
return filename
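The repeated "Return as an array so the difflib works" comments refer to the fact that `difflib`'s interfaces compare sequences of lines, not whole strings. A minimal illustration of why the rendered config is split into a line list first:

```python
import difflib

# Two interface configs as lists of newline-terminated lines, the shape
# difflib.unified_diff expects.
old = ['iface eth0 inet dhcp\n']
new = ['iface eth0 inet static\n', '    address 192.0.2.10\n']

# Comparing the line lists yields a readable unified diff.
diff = list(difflib.unified_diff(old, new, fromfile='before', tofile='after'))
```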

View File

@ -686,7 +686,7 @@ def ps(path):
def up(path, service_names=None):
'''
Create and start containers defined in the the docker-compose.yml file
Create and start containers defined in the docker-compose.yml file
located in path, service_names is a python list, if omitted create and
start all containers

View File

@ -465,5 +465,5 @@ def server_pxe():
log.warning('failed to set boot order')
return False
log.warning('failed to to configure PXE boot')
log.warning('failed to configure PXE boot')
return False

View File

@ -923,7 +923,7 @@ def server_pxe(host=None,
log.warning('failed to set boot order')
return False
log.warning('failed to to configure PXE boot')
log.warning('failed to configure PXE boot')
return False

View File

@ -45,7 +45,7 @@ def setval(key, val, false_unsets=False, permanent=False):
permanent
On Windows minions this will set the environment variable in the
registry so that it is always added as a environment variable when
registry so that it is always added as an environment variable when
applications open. If you want to set the variable to HKLM instead of
HKCU just pass in "HKLM" for this parameter. On all other minion types
this will be ignored. Note: This will only take affect on applications
@ -144,7 +144,7 @@ def setenv(environ, false_unsets=False, clear_all=False, update_minion=False, pe
permanent
On Windows minions this will set the environment variable in the
registry so that it is always added as a environment variable when
registry so that it is always added as an environment variable when
applications open. If you want to set the variable to HKLM instead of
HKCU just pass in "HKLM" for this parameter. On all other minion types
this will be ignored. Note: This will only take affect on applications

View File

@ -4700,6 +4700,7 @@ def check_file_meta(
contents
File contents
'''
lsattr_cmd = salt.utils.path.which('lsattr')
changes = {}
if not source_sum:
source_sum = {}
@ -4764,6 +4765,7 @@ def check_file_meta(
if mode is not None and mode != smode:
changes['mode'] = mode
if lsattr_cmd:
diff_attrs = _cmp_attrs(name, attrs)
if (
attrs is not None and

View File

@ -6,7 +6,7 @@ Install software from the FreeBSD ``ports(7)`` system
This module allows you to install ports using ``BATCH=yes`` to bypass
configuration prompts. It is recommended to use the :mod:`ports state
<salt.states.freebsdports>` to install ports, but it it also possible to use
<salt.states.freebsdports>` to install ports, but it is also possible to use
this module exclusively from the command line.
.. code-block:: bash

View File

@ -306,7 +306,7 @@ def _bootstrap_yum(
root
The root of the image to install to. Will be created as a directory if
if does not exist. (e.x.: /root/arch)
it does not exist. (e.g.: /root/arch)
pkg_confs
The location of the conf files to copy into the image, to point yum
@ -374,7 +374,7 @@ def _bootstrap_deb(
root
The root of the image to install to. Will be created as a directory if
if does not exist. (e.x.: /root/wheezy)
it does not exist. (e.g.: /root/wheezy)
arch
Architecture of the target image. (e.g.: amd64)
@ -472,7 +472,7 @@ def _bootstrap_pacman(
root
The root of the image to install to. Will be created as a directory if
if does not exist. (e.x.: /root/arch)
it does not exist. (e.g.: /root/arch)
pkg_confs
The location of the conf files to copy into the image, to point pacman
@ -480,7 +480,7 @@ def _bootstrap_pacman(
img_format
The image format to be used. The ``dir`` type needs no special
treatment, but others need special treatement.
treatment, but others need special treatment.
pkgs
A list of packages to be installed on this image. For Arch Linux, this

View File

@ -65,6 +65,9 @@ try:
import keystoneclient.exceptions
HAS_KEYSTONE = True
from keystoneclient.v3 import client as client3
from keystoneclient import discover
from keystoneauth1 import session
from keystoneauth1.identity import generic
# pylint: enable=import-error
except ImportError:
pass
@ -111,7 +114,8 @@ def _get_kwargs(profile=None, **connection_args):
insecure = get('insecure', False)
token = get('token')
endpoint = get('endpoint', 'http://127.0.0.1:35357/v2.0')
user_domain_name = get('user_domain_name', 'Default')
project_domain_name = get('project_domain_name', 'Default')
if token:
kwargs = {'token': token,
'endpoint': endpoint}
@ -120,7 +124,9 @@ def _get_kwargs(profile=None, **connection_args):
'password': password,
'tenant_name': tenant,
'tenant_id': tenant_id,
'auth_url': auth_url}
'auth_url': auth_url,
'user_domain_name': user_domain_name,
'project_domain_name': project_domain_name}
# 'insecure' keyword not supported by all v2.0 keystone clients
# this ensures it's only passed in when defined
if insecure:
@ -159,14 +165,23 @@ def auth(profile=None, **connection_args):
'''
kwargs = _get_kwargs(profile=profile, **connection_args)
if float(api_version(profile=profile, **connection_args).strip('v')) >= 3:
disc = discover.Discover(auth_url=kwargs['auth_url'])
v2_auth_url = disc.url_for('v2.0')
v3_auth_url = disc.url_for('v3.0')
if v3_auth_url:
global _OS_IDENTITY_API_VERSION
global _TENANTS
_OS_IDENTITY_API_VERSION = 3
_TENANTS = 'projects'
return client3.Client(**kwargs)
kwargs['auth_url'] = v3_auth_url
else:
return client.Client(**kwargs)
kwargs['auth_url'] = v2_auth_url
kwargs.pop('user_domain_name')
kwargs.pop('project_domain_name')
auth = generic.Password(**kwargs)
sess = session.Session(auth=auth)
ks_cl = disc.create_client(session=sess)
return ks_cl
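The hunk above uses keystoneauth1 discovery to pick between the v2.0 and v3 endpoints, keeping the new domain kwargs for v3 and popping them for v2.0 (which does not understand them). A dependency-free sketch of just that kwarg-selection step (the function name and shape are illustrative, not Salt's API):

```python
def select_auth_kwargs(kwargs, v2_auth_url, v3_auth_url):
    """Prefer the discovered v3 endpoint; for v2.0, drop the domain
    kwargs that v2.0 keystone clients reject."""
    kwargs = dict(kwargs)  # do not mutate the caller's dict
    if v3_auth_url:
        kwargs['auth_url'] = v3_auth_url
    else:
        kwargs['auth_url'] = v2_auth_url
        kwargs.pop('user_domain_name', None)
        kwargs.pop('project_domain_name', None)
    return kwargs
```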
def ec2_credentials_create(user_id=None, name=None,

92
salt/modules/opsgenie.py Normal file
View File

@ -0,0 +1,92 @@
# -*- coding: utf-8 -*-
'''
Module for sending data to OpsGenie
.. versionadded:: Oxygen
:configuration: This module can be used in Reactor System for
posting data to OpsGenie as a remote-execution function.
For example:
.. code-block:: yaml
opsgenie_event_poster:
local.opsgenie.post_data:
- tgt: 'salt-minion'
- kwarg:
name: event.reactor
api_key: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
reason: {{ data['data']['reason'] }}
action_type: Create
'''
# Import Python libs
from __future__ import absolute_import
import json
import logging
import requests
# Import Salt libs
import salt.exceptions
API_ENDPOINT = "https://api.opsgenie.com/v1/json/saltstack?apiKey="
log = logging.getLogger(__name__)
def post_data(api_key=None, name='OpsGenie Execution Module', reason=None,
action_type=None):
'''
Post data to OpsGenie. It's designed for Salt's Event Reactor.
After configuring the sls reaction file as shown above, you can trigger the
module with your designated tag (og-tag in this case).
CLI Example:
.. code-block:: bash
salt-call event.send 'og-tag' '{"reason" : "Overheating CPU!"}'
Required parameters:
api_key
It's the API Key you've copied while adding integration in OpsGenie.
reason
It will be used as alert's default message in OpsGenie.
action_type
OpsGenie supports the default values Create/Close for action_type. You
can customize this field with OpsGenie's custom actions for other
purposes like adding notes or acknowledging alerts.
Optional parameters:
name
It will be used as alert's alias. If you want to use the close
functionality you must provide name field for both states like in
this case.
'''
if api_key is None or reason is None or action_type is None:
raise salt.exceptions.SaltInvocationError(
'API Key or Reason or Action Type cannot be None.')
data = dict()
data['name'] = name
data['reason'] = reason
data['actionType'] = action_type
data['cpuModel'] = __grains__['cpu_model']
data['cpuArch'] = __grains__['cpuarch']
data['fqdn'] = __grains__['fqdn']
data['host'] = __grains__['host']
data['id'] = __grains__['id']
data['kernel'] = __grains__['kernel']
data['kernelRelease'] = __grains__['kernelrelease']
data['master'] = __grains__['master']
data['os'] = __grains__['os']
data['saltPath'] = __grains__['saltpath']
data['saltVersion'] = __grains__['saltversion']
data['username'] = __grains__['username']
data['uuid'] = __grains__['uuid']
log.debug('Below data will be posted:\n' + str(data))
log.debug('API Key:' + api_key + '\t API Endpoint:' + API_ENDPOINT)
response = requests.post(url=API_ENDPOINT + api_key, data=json.dumps(data),
headers={'Content-Type': 'application/json'})
return response.status_code, response.text
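The new module assembles a JSON payload from minion grains and POSTs it to the OpsGenie integration endpoint. A rough sketch of the payload-building half, with the grains mocked and the HTTP call omitted (the helper name and trimmed field set are illustrative, not Salt's API):

```python
import json

# Mocked grains standing in for __grains__ on a real minion.
FAKE_GRAINS = {'id': 'salt-minion', 'os': 'Ubuntu', 'fqdn': 'minion.example.com'}

def build_payload(name, reason, action_type, grains):
    """Serialize the fields post_data() sends; the camelCase keys match
    what the OpsGenie saltstack integration expects per the module."""
    data = {
        'name': name,            # used as the alert alias in OpsGenie
        'reason': reason,        # becomes the alert's default message
        'actionType': action_type,
        'host': grains.get('fqdn'),
        'id': grains.get('id'),
    }
    return json.dumps(data)
```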

View File

@ -2,6 +2,8 @@
'''
Module for controlling the LED matrix or reading environment data on the SenseHat of a Raspberry Pi.
.. versionadded:: 2017.7.0
:maintainer: Benedikt Werner <1benediktwerner@gmail.com>, Joachim Werner <joe@suse.com>
:maturity: new
:depends: sense_hat Python module

View File

@ -24,6 +24,7 @@ import salt.utils.decorators.path
import salt.utils.files
import salt.utils.path
import salt.utils.platform
import salt.utils.versions
from salt.exceptions import (
SaltInvocationError,
CommandExecutionError,
@ -794,6 +795,22 @@ def set_auth_key(
return 'new'
def _get_matched_host_line_numbers(lines, enc):
'''
Helper function which parses ssh-keygen -F function output and yield line
number of known_hosts entries with encryption key type matching enc,
one by one.
'''
enc = enc if enc else "rsa"
for i, line in enumerate(lines):
if i % 2 == 0:
line_no = int(line.strip().split()[-1])
line_enc = lines[i + 1].strip().split()[-2]
if line_enc != enc:
continue
yield line_no
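`ssh-keygen -F` prints matches in pairs: a comment line ending in the known_hosts line number, then the key line itself, whose second-to-last field is the key type. A self-contained sketch of the same pairwise parse with sample output (the sample lines are fabricated for illustration):

```python
def matched_line_numbers(lines, enc='ssh-rsa'):
    """Yield known_hosts line numbers from ``ssh-keygen -F`` output
    whose key type matches ``enc``; output alternates comment/key
    lines, so even indexes are comments and odd indexes are keys."""
    for i, line in enumerate(lines):
        if i % 2 == 0:
            # comment line, e.g. '# Host github.com found: line 5'
            line_no = int(line.strip().split()[-1])
            # key line, e.g. 'github.com ssh-rsa AAAA...'
            line_enc = lines[i + 1].strip().split()[-2]
            if line_enc == enc:
                yield line_no

SAMPLE = [
    '# Host github.com found: line 5',
    'github.com ssh-rsa AAAAB3Nza...',
    '# Host github.com found: line 9',
    'github.com ecdsa-sha2-nistp256 AAAAE2Vj...',
]
```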
def _parse_openssh_output(lines, fingerprint_hash_type=None):
'''
Helper function which parses ssh-keygen -F and ssh-keyscan function output
@ -830,12 +847,42 @@ def get_known_host(user,
Return information about known host from the configfile, if any.
If there is no such key, return None.
.. deprecated:: Oxygen
CLI Example:
.. code-block:: bash
salt '*' ssh.get_known_host <user> <hostname>
'''
salt.utils.versions.warn_until(
'Neon',
'\'get_known_host\' has been deprecated in favour of '
'\'get_known_host_entries\'. \'get_known_host\' will be '
'removed in Salt Neon.'
)
known_hosts = get_known_host_entries(user, hostname, config, port, fingerprint_hash_type)
return known_hosts[0] if known_hosts else None
@salt.utils.decorators.path.which('ssh-keygen')
def get_known_host_entries(user,
hostname,
config=None,
port=None,
fingerprint_hash_type=None):
'''
.. versionadded:: Oxygen
Return information about known host entries from the configfile, if any.
If there are no entries for a matching hostname, return None.
CLI Example:
.. code-block:: bash
salt '*' ssh.get_known_host_entries <user> <hostname>
'''
full = _get_known_hosts_file(config=config, user=user)
if isinstance(full, dict):
@ -846,11 +893,11 @@ def get_known_host(user,
lines = __salt__['cmd.run'](cmd,
ignore_retcode=True,
python_shell=False).splitlines()
known_hosts = list(
known_host_entries = list(
_parse_openssh_output(lines,
fingerprint_hash_type=fingerprint_hash_type)
)
return known_hosts[0] if known_hosts else None
return known_host_entries if known_host_entries else None
@salt.utils.decorators.path.which('ssh-keyscan')
@ -863,6 +910,8 @@ def recv_known_host(hostname,
'''
Retrieve information about host public key from remote server
.. deprecated:: Oxygen
hostname
The name of the remote host (e.g. "github.com")
@ -871,9 +920,8 @@ def recv_known_host(hostname,
or ssh-dss
port
optional parameter, denoting the port of the remote host, which will be
used in case, if the public key will be requested from it. By default
the port 22 is used.
Optional parameter, denoting the port of the remote host on which an
SSH daemon is running. By default the port 22 is used.
hash_known_hosts : True
Hash all hostnames and addresses in the known hosts file.
@ -887,8 +935,8 @@ def recv_known_host(hostname,
.. versionadded:: 2016.3.0
fingerprint_hash_type
The public key fingerprint hash type that the public key fingerprint
was originally hashed with. This defaults to ``sha256`` if not specified.
The fingerprint hash type that the public key fingerprints were
originally hashed with. This defaults to ``sha256`` if not specified.
.. versionadded:: 2016.11.4
.. versionchanged:: 2017.7.0: default changed from ``md5`` to ``sha256``
@ -899,6 +947,61 @@ def recv_known_host(hostname,
salt '*' ssh.recv_known_host <hostname> enc=<enc> port=<port>
'''
salt.utils.versions.warn_until(
'Neon',
'\'recv_known_host\' has been deprecated in favour of '
'\'recv_known_host_entries\'. \'recv_known_host\' will be '
'removed in Salt Neon.'
)
known_hosts = recv_known_host_entries(hostname, enc, port, hash_known_hosts, timeout, fingerprint_hash_type)
return known_hosts[0] if known_hosts else None
@salt.utils.decorators.path.which('ssh-keyscan')
def recv_known_host_entries(hostname,
enc=None,
port=None,
hash_known_hosts=True,
timeout=5,
fingerprint_hash_type=None):
'''
.. versionadded:: Oxygen
Retrieve information about host public keys from remote server
hostname
The name of the remote host (e.g. "github.com")
enc
Defines what type of key is being used, can be ed25519, ecdsa, ssh-rsa
or ssh-dss
port
Optional parameter, denoting the port of the remote host on which an
SSH daemon is running. By default the port 22 is used.
hash_known_hosts : True
Hash all hostnames and addresses in the known hosts file.
timeout : int
Set the timeout for connection attempts. If ``timeout`` seconds have
elapsed since a connection was initiated to a host or since the last
time anything was read from that host, then the connection is closed
and the host in question considered unavailable. Default is 5 seconds.
fingerprint_hash_type
The fingerprint hash type that the public key fingerprints were
originally hashed with. This defaults to ``sha256`` if not specified.
.. versionadded:: 2016.11.4
.. versionchanged:: 2017.7.0: default changed from ``md5`` to ``sha256``
CLI Example:
.. code-block:: bash
salt '*' ssh.recv_known_host_entries <hostname> enc=<enc> port=<port>
'''
# The following list of OSes have an old version of openssh-clients
# and thus require the '-t' option for ssh-keyscan
need_dash_t = ('CentOS-5',)
@ -919,9 +1022,9 @@ def recv_known_host(hostname,
while not lines and attempts > 0:
attempts = attempts - 1
lines = __salt__['cmd.run'](cmd, python_shell=False).splitlines()
known_hosts = list(_parse_openssh_output(lines,
known_host_entries = list(_parse_openssh_output(lines,
fingerprint_hash_type=fingerprint_hash_type))
return known_hosts[0] if known_hosts else None
return known_host_entries if known_host_entries else None
def check_known_host(user=None, hostname=None, key=None, fingerprint=None,
@ -952,18 +1055,20 @@ def check_known_host(user=None, hostname=None, key=None, fingerprint=None,
else:
config = config or '.ssh/known_hosts'
known_host = get_known_host(user,
known_host_entries = get_known_host_entries(user,
hostname,
config=config,
port=port,
fingerprint_hash_type=fingerprint_hash_type)
known_keys = [h['key'] for h in known_host_entries] if known_host_entries else []
known_fingerprints = [h['fingerprint'] for h in known_host_entries] if known_host_entries else []
if not known_host or 'fingerprint' not in known_host:
if not known_host_entries:
return 'add'
if key:
return 'exists' if key == known_host['key'] else 'update'
return 'exists' if key in known_keys else 'update'
elif fingerprint:
return ('exists' if fingerprint == known_host['fingerprint']
return ('exists' if fingerprint in known_fingerprints
else 'update')
else:
return 'exists'
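With multiple stored entries per host, the decision above changes from exact equality to membership: a key or fingerprint now only needs to match one of the entries. A condensed sketch of the resulting add/update/exists logic (function name is illustrative, not Salt's API):

```python
def check_decision(entries, key=None, fingerprint=None):
    """Mirror check_known_host's outcome given the stored entries."""
    known_keys = [e['key'] for e in entries]
    known_fps = [e['fingerprint'] for e in entries]
    if not entries:
        return 'add'          # nothing stored for this host yet
    if key:
        return 'exists' if key in known_keys else 'update'
    if fingerprint:
        return 'exists' if fingerprint in known_fps else 'update'
    return 'exists'           # host present, nothing specific to verify
```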
@ -1083,70 +1188,99 @@ def set_known_host(user=None,
update_required = False
check_required = False
stored_host = get_known_host(user,
stored_host_entries = get_known_host_entries(user,
hostname,
config=config,
port=port,
fingerprint_hash_type=fingerprint_hash_type)
stored_keys = [h['key'] for h in stored_host_entries] if stored_host_entries else []
stored_fingerprints = [h['fingerprint'] for h in stored_host_entries] if stored_host_entries else []
if not stored_host:
if not stored_host_entries:
update_required = True
elif fingerprint and fingerprint != stored_host['fingerprint']:
elif fingerprint and fingerprint not in stored_fingerprints:
update_required = True
elif key and key != stored_host['key']:
elif key and key not in stored_keys:
update_required = True
elif key != stored_host['key']:
elif key is None and fingerprint is None:
check_required = True
if not update_required and not check_required:
return {'status': 'exists', 'key': stored_host['key']}
return {'status': 'exists', 'keys': stored_keys}
if not key:
remote_host = recv_known_host(hostname,
remote_host_entries = recv_known_host_entries(hostname,
enc=enc,
port=port,
hash_known_hosts=hash_known_hosts,
timeout=timeout,
fingerprint_hash_type=fingerprint_hash_type)
if not remote_host:
known_keys = [h['key'] for h in remote_host_entries] if remote_host_entries else []
known_fingerprints = [h['fingerprint'] for h in remote_host_entries] if remote_host_entries else []
if not remote_host_entries:
return {'status': 'error',
'error': 'Unable to receive remote host key'}
'error': 'Unable to receive remote host keys'}
if fingerprint and fingerprint != remote_host['fingerprint']:
if fingerprint and fingerprint not in known_fingerprints:
return {'status': 'error',
'error': ('Remote host public key found but its fingerprint '
'does not match one you have provided')}
'error': ('Remote host public keys found but none of their '
'fingerprints match the one you have provided')}
if check_required:
if remote_host['key'] == stored_host['key']:
return {'status': 'exists', 'key': stored_host['key']}
for key in known_keys:
if key in stored_keys:
return {'status': 'exists', 'keys': stored_keys}
full = _get_known_hosts_file(config=config, user=user)
if isinstance(full, dict):
return full
# Get information about the known_hosts file before rm_known_host()
# because it will create a new file with mode 0600
orig_known_hosts_st = None
try:
orig_known_hosts_st = os.stat(full)
except OSError as exc:
if exc.args[1] == 'No such file or directory':
log.debug("{0} doesn't exist. Nothing to preserve.".format(full))
if os.path.isfile(full):
origmode = os.stat(full).st_mode
# remove everything we had in the config so far
rm_known_host(user, hostname, config=config)
# remove existing known_host entry with matching hostname and encryption key type
# use ssh-keygen -F to find the specific line(s) for this host + enc combo
ssh_hostname = _hostname_and_port_to_ssh_hostname(hostname, port)
cmd = ['ssh-keygen', '-F', ssh_hostname, '-f', full]
lines = __salt__['cmd.run'](cmd,
ignore_retcode=True,
python_shell=False).splitlines()
remove_lines = list(
_get_matched_host_line_numbers(lines, enc)
)
if remove_lines:
try:
with salt.utils.files.fopen(full, 'r+') as ofile:
known_hosts_lines = list(ofile)
# Delete from last line to first to avoid invalidating earlier indexes
for line_no in sorted(remove_lines, reverse=True):
del known_hosts_lines[line_no - 1]
# Write out changed known_hosts file
ofile.seek(0)
ofile.truncate()
for line in known_hosts_lines:
ofile.write(line)
except (IOError, OSError) as exception:
raise CommandExecutionError(
"Couldn't remove old entry(ies) from known hosts file: '{0}'".format(exception)
)
else:
origmode = None
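The removal loop above deletes matched lines from the highest-numbered down. That ordering matters: deleting a low line first would shift every later 1-based line number reported by `ssh-keygen -F`. A minimal illustration:

```python
lines = ['a\n', 'b\n', 'c\n', 'd\n', 'e\n']
remove = [2, 4]  # 1-based line numbers, as ssh-keygen -F reports them

# Delete highest-numbered lines first so the earlier indexes stay valid.
for line_no in sorted(remove, reverse=True):
    del lines[line_no - 1]
```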
# set up new value
if key:
remote_host = {'hostname': hostname, 'enc': enc, 'key': key}
remote_host_entries = [{'hostname': hostname, 'enc': enc, 'key': key}]
if hash_known_hosts or port in [DEFAULT_SSH_PORT, None] or ':' in remote_host['hostname']:
line = '{hostname} {enc} {key}\n'.format(**remote_host)
lines = []
for entry in remote_host_entries:
if hash_known_hosts or port in [DEFAULT_SSH_PORT, None] or ':' in entry['hostname']:
line = '{hostname} {enc} {key}\n'.format(**entry)
else:
remote_host['port'] = port
line = '[{hostname}]:{port} {enc} {key}\n'.format(**remote_host)
entry['port'] = port
line = '[{hostname}]:{port} {enc} {key}\n'.format(**entry)
lines.append(line)
# ensure ~/.ssh exists
ssh_dir = os.path.dirname(full)
@ -1172,27 +1306,25 @@ def set_known_host(user=None,
# write line to known_hosts file
try:
with salt.utils.files.fopen(full, 'a') as ofile:
for line in lines:
ofile.write(line)
except (IOError, OSError) as exception:
raise CommandExecutionError(
"Couldn't append to known hosts file: '{0}'".format(exception)
)
if os.geteuid() == 0:
if user:
if os.geteuid() == 0 and user:
os.chown(full, uinfo['uid'], uinfo['gid'])
elif orig_known_hosts_st:
os.chown(full, orig_known_hosts_st.st_uid, orig_known_hosts_st.st_gid)
if orig_known_hosts_st:
os.chmod(full, orig_known_hosts_st.st_mode)
if origmode:
os.chmod(full, origmode)
else:
os.chmod(full, 0o600)
if key and hash_known_hosts:
cmd_result = __salt__['ssh.hash_known_hosts'](user=user, config=full)
return {'status': 'updated', 'old': stored_host, 'new': remote_host}
rval = {'status': 'updated', 'old': stored_host_entries, 'new': remote_host_entries}
return rval
def user_keys(user=None, pubfile=None, prvfile=None):

View File

@ -894,8 +894,8 @@ def highstate(test=None, queue=False, **kwargs):
finally:
st_.pop_active()
if __salt__['config.option']('state_data', '') == 'terse' or \
kwargs.get('terse'):
if isinstance(ret, dict) and (__salt__['config.option']('state_data', '') == 'terse' or
kwargs.get('terse')):
ret = _filter_running(ret)
serial = salt.payload.Serial(__opts__)

View File

@ -596,7 +596,7 @@ def set_computer_name(hostname):
.. code-block:: bash
salt '*' system.set_conputer_name master.saltstack.com
salt '*' system.set_computer_name master.saltstack.com
'''
return __salt__['network.mod_hostname'](hostname)

View File

@ -250,7 +250,7 @@ def adduser(name, username, **kwargs):
'/', '\\').encode('ascii', 'backslashreplace').lower())
try:
if salt.utils.win_functions.get_sam_name(username) not in existingMembers:
if salt.utils.win_functions.get_sam_name(username).lower() not in existingMembers:
if not __opts__['test']:
groupObj.Add('WinNT://' + username.replace('\\', '/'))
@ -309,7 +309,7 @@ def deluser(name, username, **kwargs):
'/', '\\').encode('ascii', 'backslashreplace').lower())
try:
if salt.utils.win_functions.get_sam_name(username) in existingMembers:
if salt.utils.win_functions.get_sam_name(username).lower() in existingMembers:
if not __opts__['test']:
groupObj.Remove('WinNT://' + username.replace('\\', '/'))

View File

@ -34,15 +34,18 @@ Current known limitations
- pywin32 Python module
- lxml
- uuid
- codecs
- struct
- salt.modules.reg
'''
# Import Python libs
from __future__ import absolute_import
import io
import os
import logging
import re
import locale
import ctypes
import time
# Import Salt libs
import salt.utils.files
@ -89,7 +92,6 @@ try:
import win32net
import win32security
import uuid
import codecs
import lxml
import struct
from lxml import etree
@ -116,6 +118,16 @@ try:
ADMX_DISPLAYNAME_SEARCH_XPATH = etree.XPath('//*[local-name() = "policy" and @*[local-name() = "displayName"] = $display_name and (@*[local-name() = "class"] = "Both" or @*[local-name() = "class"] = $registry_class) ]')
PRESENTATION_ANCESTOR_XPATH = etree.XPath('ancestor::*[local-name() = "presentation"]')
TEXT_ELEMENT_XPATH = etree.XPath('.//*[local-name() = "text"]')
# Get the System Install Language
# https://msdn.microsoft.com/en-us/library/dd318123(VS.85).aspx
# local.windows_locale is a dict
# GetSystemDefaultUILanguage() returns a 4 digit language code that
# corresponds to an entry in the dict
# Not available in win32api, so we have to use ctypes
# Default to `en-US` (1033)
windll = ctypes.windll.kernel32
INSTALL_LANGUAGE = locale.windows_locale.get(
windll.GetSystemDefaultUILanguage(), 'en_US').replace('_', '-')
except ImportError:
HAS_WINDOWS_MODULES = False
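Only the `GetSystemDefaultUILanguage()` call itself needs ctypes; the LANGID-to-locale lookup uses the stdlib `locale.windows_locale` mapping, which is available on every platform. A sketch of just the lookup half (helper name is illustrative):

```python
import locale

def langid_to_tag(langid):
    """Map a Windows LANGID (e.g. 1033) to an IETF-style tag like
    'en-US', falling back to 'en_US' for unknown IDs so .replace()
    always gets a string."""
    return locale.windows_locale.get(langid, 'en_US').replace('_', '-')
```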
@ -2708,7 +2720,8 @@ def _processPolicyDefinitions(policy_def_path='c:\\Windows\\PolicyDefinitions',
helper function to process all ADMX files in the specified policy_def_path
and build a single XML doc that we can search/use for ADMX policy processing
'''
display_language_fallback = 'en-US'
# Fallback to the System Install Language
display_language_fallback = INSTALL_LANGUAGE
t_policy_definitions = lxml.etree.Element('policyDefinitions')
t_policy_definitions.append(lxml.etree.Element('categories'))
t_policy_definitions.append(lxml.etree.Element('policies'))
@ -2772,14 +2785,36 @@ def _processPolicyDefinitions(policy_def_path='c:\\Windows\\PolicyDefinitions',
temp_ns = policy_ns
temp_ns = _updateNamespace(temp_ns, this_namespace)
policydefs_policyns_xpath(t_policy_definitions)[0].append(temp_ns)
adml_file = os.path.join(root, display_language, os.path.splitext(t_admfile)[0] + '.adml')
# We need to make sure the adml file exists. First we'll check
# the passed display_language (eg: en-US). Then we'll try the
# abbreviated version (en) to account for alternate locations.
# We'll do the same for the display_language_fallback (en_US).
adml_file = os.path.join(root, display_language,
os.path.splitext(t_admfile)[0] + '.adml')
if not __salt__['file.file_exists'](adml_file):
msg = ('An ADML file in the specified ADML language "{0}" '
'does not exist for the ADMX "{1}", the abbreviated '
'language code will be tried.')
log.info(msg.format(display_language, t_admfile))
adml_file = os.path.join(root, display_language.split('-')[0],
os.path.splitext(t_admfile)[0] + '.adml')
if not __salt__['file.file_exists'](adml_file):
msg = ('An ADML file in the specified ADML language code "{0}" '
'does not exist for the ADMX "{1}", the fallback '
'language will be tried.')
log.info(msg.format(display_language, t_admfile))
adml_file = os.path.join(root,
display_language_fallback,
log.info(msg.format(display_language[:2], t_admfile))
adml_file = os.path.join(root, display_language_fallback,
os.path.splitext(t_admfile)[0] + '.adml')
if not __salt__['file.file_exists'](adml_file):
msg = ('An ADML file in the specified ADML fallback language "{0}" '
'does not exist for the ADMX "{1}", the abbreviated '
'fallback language code will be tried.')
log.info(msg.format(display_language_fallback, t_admfile))
adml_file = os.path.join(root, display_language_fallback.split('-')[0],
os.path.splitext(t_admfile)[0] + '.adml')
if not __salt__['file.file_exists'](adml_file):
msg = ('An ADML file in the specified ADML language '
@ -2796,7 +2831,7 @@ def _processPolicyDefinitions(policy_def_path='c:\\Windows\\PolicyDefinitions',
xmltree = _remove_unicode_encoding(adml_file)
except Exception:
msg = ('An error was found while processing adml file {0}, all policy '
' languange data from this file will be unavailable via this module')
'language data from this file will be unavailable via this module')
log.error(msg.format(adml_file))
continue
if None in namespaces:
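The ADML lookup above walks a four-step fallback chain: the requested language tag, its abbreviated form, the fallback tag, then the abbreviated fallback. A sketch that generates the candidate paths in that assumed order (function name is illustrative, and tags like `en-US` are abbreviated by splitting on the hyphen, as the hunk does):

```python
import os

def adml_candidates(root, admx_file, display_language, fallback='en-US'):
    """Candidate .adml paths in lookup order for one .admx file."""
    base = os.path.splitext(admx_file)[0] + '.adml'
    langs = [
        display_language,                    # e.g. fr-FR
        display_language.split('-')[0],      # e.g. fr
        fallback,                            # e.g. en-US
        fallback.split('-')[0],              # e.g. en
    ]
    return [os.path.join(root, lang, base) for lang in langs]
```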
@ -2827,15 +2862,23 @@ def _findOptionValueInSeceditFile(option):
'''
try:
_d = uuid.uuid4().hex
_tfile = '{0}\\{1}'.format(__salt__['config.get']('cachedir'),
_tfile = '{0}\\{1}'.format(__opts__['cachedir'],
'salt-secedit-dump-{0}.txt'.format(_d))
_ret = __salt__['cmd.run']('secedit /export /cfg {0}'.format(_tfile))
if _ret:
_reader = codecs.open(_tfile, 'r', encoding='utf-16')
with io.open(_tfile, encoding='utf-16') as _reader:
_secdata = _reader.readlines()
_reader.close()
if __salt__['file.file_exists'](_tfile):
_ret = __salt__['file.remove'](_tfile)
for _ in range(5):
try:
__salt__['file.remove'](_tfile)
except CommandExecutionError:
time.sleep(.1)
continue
else:
break
else:
log.error('error occurred removing {0}'.format(_tfile))
for _line in _secdata:
if _line.startswith(option):
return True, _line.split('=')[1].strip()
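The temp-file cleanup above retries `file.remove` up to five times using Python's for/else: the `else` arm runs only when the loop exhausts its attempts without hitting `break`. A minimal standalone illustration of the same pattern with a fake removal that succeeds on the third try:

```python
import time

calls = {'n': 0}

def flaky_remove():
    """Fail twice (file 'still in use'), then succeed."""
    calls['n'] += 1
    if calls['n'] < 3:
        raise OSError('file in use')

for _ in range(5):
    try:
        flaky_remove()
    except OSError:
        time.sleep(0.01)
        continue
    else:
        break           # success: skip the loop's else arm
else:
    print('error occurred removing temp file')  # all attempts failed
```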
@ -2851,9 +2894,9 @@ def _importSeceditConfig(infdata):
'''
try:
_d = uuid.uuid4().hex
_tSdbfile = '{0}\\{1}'.format(__salt__['config.get']('cachedir'),
_tSdbfile = '{0}\\{1}'.format(__opts__['cachedir'],
'salt-secedit-import-{0}.sdb'.format(_d))
_tInfFile = '{0}\\{1}'.format(__salt__['config.get']('cachedir'),
_tInfFile = '{0}\\{1}'.format(__opts__['cachedir'],
'salt-secedit-config-{0}.inf'.format(_d))
# make sure our temp files don't already exist
if __salt__['file.file_exists'](_tSdbfile):

View File

@ -909,10 +909,7 @@ class SaltAPIHandler(BaseSaltAPIHandler, SaltClientsMixIn): # pylint: disable=W
f_call = self._format_call_run_job_async(chunk)
# fire a job off
try:
pub_data = yield self.saltclients['local'](*f_call.get('args', ()), **f_call.get('kwargs', {}))
except EauthAuthenticationError:
raise tornado.gen.Return('Not authorized to run this job')
# if the job didn't publish, lets not wait around for nothing
# TODO: set header??

View File

@ -5,6 +5,8 @@ return data to the console to verify that it is being passed properly
To use the local returner, append '--return local' to the salt command. ex:
.. code-block:: bash
salt '*' test.ping --return local
'''

View File

@ -1853,7 +1853,7 @@ def stopped(name=None,
.. code-block:: yaml
stopped_containers:
docker.stopped:
docker_container.stopped:
- names:
- foo
- bar
@ -1862,7 +1862,7 @@ def stopped(name=None,
.. code-block:: yaml
stopped_containers:
docker.stopped:
docker_container.stopped:
- containers:
- foo
- bar
@ -1998,10 +1998,10 @@ def absent(name, force=False):
.. code-block:: yaml
mycontainer:
docker.absent
docker_container.absent
multiple_containers:
docker.absent:
docker_container.absent:
- names:
- foo
- bar

View File

@ -65,11 +65,11 @@ def _changes(name,
if lgrp['members']:
lgrp['members'] = [user.lower() for user in lgrp['members']]
if members:
members = [salt.utils.win_functions.get_sam_name(user) for user in members]
members = [salt.utils.win_functions.get_sam_name(user).lower() for user in members]
if addusers:
addusers = [salt.utils.win_functions.get_sam_name(user) for user in addusers]
addusers = [salt.utils.win_functions.get_sam_name(user).lower() for user in addusers]
if delusers:
delusers = [salt.utils.win_functions.get_sam_name(user) for user in delusers]
delusers = [salt.utils.win_functions.get_sam_name(user).lower() for user in delusers]
change = {}
if gid:
@ -244,9 +244,7 @@ def present(name,
return ret
# Group is not present, make it.
if __salt__['group.add'](name,
gid,
system=system):
if __salt__['group.add'](name, gid=gid, system=system):
# if members to be added
grp_members = None
if members:
@ -269,7 +267,7 @@ def present(name,
ret['result'] = False
ret['comment'] = (
'Group {0} has been created but, some changes could not'
' be applied')
' be applied'.format(name))
ret['changes'] = {'Failed': changes}
else:
ret['result'] = False

153
salt/states/opsgenie.py Normal file
View File

@ -0,0 +1,153 @@
# -*- coding: utf-8 -*-
'''
Create/Close an alert in OpsGenie
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionadded:: Oxygen
This state is useful for creating or closing alerts in OpsGenie
during state runs.
.. code-block:: yaml
used_space:
disk.status:
- name: /
- maximum: 79%
- minimum: 20%
opsgenie_create_action_sender:
opsgenie.create_alert:
- api_key: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
- reason: 'Disk capacity is out of designated range.'
- name: disk.status
- onfail:
- disk: used_space
opsgenie_close_action_sender:
opsgenie.close_alert:
- api_key: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
- name: disk.status
- require:
- disk: used_space
'''
# Import Python libs
from __future__ import absolute_import
import logging
import inspect
# Import Salt libs
import salt.exceptions
log = logging.getLogger(__name__)
def create_alert(name=None, api_key=None, reason=None, action_type="Create"):
'''
Create an alert in OpsGenie. Example usage with Salt's requisites and other
global state arguments can be found above.
Required Parameters:
api_key
The API key copied while adding the integration in OpsGenie.
reason
Used as the alert's default message in OpsGenie.
Optional Parameters:
name
Used as the alert's alias. To use the close functionality, the
name field must be provided for both states, as in the example
above.
action_type
OpsGenie supports the default values Create/Close for action_type.
You can customize this field with OpsGenie's custom actions for
other purposes, such as adding notes or acknowledging alerts.
'''
_, _, _, values = inspect.getargvalues(inspect.currentframe())
log.info("Argument values: %s", values)
ret = {
'result': '',
'name': '',
'changes': '',
'comment': ''
}
if api_key is None or reason is None:
raise salt.exceptions.SaltInvocationError(
'API Key or Reason cannot be None.')
if __opts__['test'] is True:
ret[
'comment'] = 'Test: {0} alert request will be processed ' \
'using the API Key="{1}".'.format(
action_type,
api_key)
# Return ``None`` when running with ``test=true``.
ret['result'] = None
return ret
response_status_code, response_text = __salt__['opsgenie.post_data'](
api_key=api_key,
name=name,
reason=reason,
action_type=action_type
)
if 200 <= response_status_code < 300:
log.info(
"POST request succeeded with message: %s status code: %s",
response_text, response_status_code)
ret['comment'] = '{0} alert request has been processed' \
' using the API Key="{1}".'.format(action_type, api_key)
ret['result'] = True
else:
log.error(
"POST request failed with error: %s status code: %s",
response_text, response_status_code)
ret['result'] = False
return ret
def close_alert(name=None, api_key=None, reason="Conditions are met.",
action_type="Close"):
'''
Close an alert in OpsGenie. This is a wrapper around create_alert.
Example usage with Salt's requisites and other global state arguments
can be found above.
Required Parameters:
name
Used as the alert's alias. To use the close functionality, the
name field must be provided for both states, as in the example
above.
Optional Parameters:
api_key
The API key copied while adding the integration in OpsGenie.
reason
Used as the alert's default message in OpsGenie.
action_type
OpsGenie supports the default values Create/Close for action_type.
You can customize this field with OpsGenie's custom actions for
other purposes, such as adding notes or acknowledging alerts.
'''
if name is None:
raise salt.exceptions.SaltInvocationError(
'Name cannot be None.')
return create_alert(name, api_key, reason, action_type)
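The state above delegates the HTTP call to the `opsgenie.post_data` execution module. As a rough sketch, the JSON body such a helper might build for a Create request could look like the following; the `alias` and `message` field names and the `build_alert_payload` helper are assumptions based on OpsGenie's Alert API, not taken from this diff:

```python
import json

def build_alert_payload(name, reason):
    # Hypothetical payload builder: 'alias' lets a later Close request
    # target the same alert that Create opened; 'message' is the alert
    # text shown in OpsGenie.
    return {'alias': name, 'message': reason}

payload = build_alert_payload(
    'disk.status', 'Disk capacity is out of designated range.')
body = json.dumps(payload)
```

Providing the same `name` (and therefore the same `alias`) in both states is what allows the close state to find the alert the create state raised.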

View File

@ -91,7 +91,6 @@ import sys
# Import salt libs
from salt.exceptions import CommandExecutionError, SaltInvocationError
from salt.modules.aptpkg import _strip_uri
from salt.state import STATE_INTERNAL_KEYWORDS as _STATE_INTERNAL_KEYWORDS
import salt.utils.data
import salt.utils.files
@ -406,7 +405,7 @@ def managed(name, ppa=None, **kwargs):
sanitizedkwargs = kwargs
if os_family == 'debian':
repo = _strip_uri(repo)
repo = salt.utils.pkg.deb.strip_uri(repo)
if pre:
for kwarg in sanitizedkwargs:

View File

@ -178,13 +178,13 @@ def present(
return dict(ret, result=False, comment=result['error'])
else: # 'updated'
if key:
new_key = result['new']['key']
new_key = result['new'][0]['key']
return dict(ret,
changes={'old': result['old'], 'new': result['new']},
comment='{0}\'s key saved to {1} (key: {2})'.format(
name, config, new_key))
else:
fingerprint = result['new']['fingerprint']
fingerprint = result['new'][0]['fingerprint']
return dict(ret,
changes={'old': result['old'], 'new': result['new']},
comment='{0}\'s key saved to {1} (fingerprint: {2})'.format(
@ -225,7 +225,7 @@ def absent(name, user=None, config=None):
ret['result'] = False
return dict(ret, comment=comment)
known_host = __salt__['ssh.get_known_host'](user=user, hostname=name, config=config)
known_host = __salt__['ssh.get_known_host_entries'](user=user, hostname=name, config=config)
if not known_host:
return dict(ret, comment='Host is already absent')

View File

@ -26,3 +26,15 @@ def combine_comments(comments):
else:
comments = [comments]
return ' '.join(comments).strip()
def strip_uri(repo):
'''
Remove the trailing slash from the URI in a repo definition
'''
splits = repo.split()
for idx in range(len(splits)):
if any(splits[idx].startswith(x)
for x in ('http://', 'https://', 'ftp://')):
splits[idx] = splits[idx].rstrip('/')
return ' '.join(splits)
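A quick check of the `strip_uri` helper added above, reproduced standalone with a made-up repo line:

```python
def strip_uri(repo):
    # Remove the trailing slash from any URI token in a repo definition,
    # mirroring the helper added to salt/utils/pkg/deb.py above.
    splits = repo.split()
    for idx in range(len(splits)):
        if any(splits[idx].startswith(x)
               for x in ('http://', 'https://', 'ftp://')):
            splits[idx] = splits[idx].rstrip('/')
    return ' '.join(splits)

cleaned = strip_uri('deb http://archive.ubuntu.com/ubuntu/ xenial main')
```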

View File

@ -1151,7 +1151,8 @@ class Schedule(object):
# Sort the list of "whens" from earlier to later schedules
_when.sort()
for i in _when:
# Copy the list so we can loop through it
for i in copy.deepcopy(_when):
if i < now and len(_when) > 1:
# Remove all missed schedules except the latest one.
# We need it to detect if it was triggered previously.

View File

@ -111,7 +111,7 @@ def get_sid_from_name(name):
sid = win32security.LookupAccountName(None, name)[0]
except pywintypes.error as exc:
raise CommandExecutionError(
'User {0} found: {1}'.format(name, exc.strerror))
'User {0} not found: {1}'.format(name, exc.strerror))
return win32security.ConvertSidToStringSid(sid)
@ -144,19 +144,21 @@ def get_current_user():
def get_sam_name(username):
'''
r'''
Gets the SAM name for a user. It basically prefixes a username without a
backslash with the computer name. If the username contains a backslash, it
is returned as is.
backslash with the computer name. If the user does not exist, a SAM
compatible name will be returned using the local hostname as the domain.
Everything is returned lower case
i.e. salt.utils.get_same_name('Administrator') would return 'DOMAIN.COM\Administrator'
i.e. salt.utils.fix_local_user('Administrator') would return 'computername\administrator'
.. note:: Long computer names are truncated to 15 characters
'''
if '\\' not in username:
username = '{0}\\{1}'.format(platform.node(), username)
return username.lower()
try:
sid_obj = win32security.LookupAccountName(None, username)[0]
except pywintypes.error:
return '\\'.join([platform.node()[:15].upper(), username])
username, domain, _ = win32security.LookupAccountSid(None, sid_obj)
return '\\'.join([domain, username])
def enable_ctrl_logoff_handler():
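The fallback branch in the new `get_sam_name` builds a SAM-compatible name by hand when the account lookup fails. Its 15-character hostname truncation can be illustrated in isolation (the hostname below is made up, and `fallback_sam_name` is a hypothetical stand-in for the except branch):

```python
def fallback_sam_name(hostname, username):
    # Mirrors the except branch above: NetBIOS computer names are capped
    # at 15 characters, so the hostname is truncated before joining.
    return '\\'.join([hostname[:15].upper(), username])

name = fallback_sam_name('verylongcomputername', 'Administrator')
```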

View File

@ -104,7 +104,7 @@ class SSHModuleTest(ModuleCase):
# user will get an indicator of what went wrong.
self.assertEqual(len(list(ret.items())), 0) # Zero keys found
def test_get_known_host(self):
def test_get_known_host_entries(self):
'''
Check that known host information is returned from ~/.ssh/config
'''
@ -113,7 +113,7 @@ class SSHModuleTest(ModuleCase):
KNOWN_HOSTS)
arg = ['root', 'github.com']
kwargs = {'config': KNOWN_HOSTS}
ret = self.run_function('ssh.get_known_host', arg, **kwargs)
ret = self.run_function('ssh.get_known_host_entries', arg, **kwargs)[0]
try:
self.assertEqual(ret['enc'], 'ssh-rsa')
self.assertEqual(ret['key'], self.key)
@ -125,16 +125,16 @@ class SSHModuleTest(ModuleCase):
)
)
def test_recv_known_host(self):
def test_recv_known_host_entries(self):
'''
Check that known host information is returned from remote host
'''
ret = self.run_function('ssh.recv_known_host', ['github.com'])
ret = self.run_function('ssh.recv_known_host_entries', ['github.com'])
try:
self.assertNotEqual(ret, None)
self.assertEqual(ret['enc'], 'ssh-rsa')
self.assertEqual(ret['key'], self.key)
self.assertEqual(ret['fingerprint'], GITHUB_FINGERPRINT)
self.assertEqual(ret[0]['enc'], 'ssh-rsa')
self.assertEqual(ret[0]['key'], self.key)
self.assertEqual(ret[0]['fingerprint'], GITHUB_FINGERPRINT)
except AssertionError as exc:
raise AssertionError(
'AssertionError: {0}. Function returned: {1}'.format(
@ -215,7 +215,7 @@ class SSHModuleTest(ModuleCase):
try:
self.assertEqual(ret['status'], 'updated')
self.assertEqual(ret['old'], None)
self.assertEqual(ret['new']['fingerprint'], GITHUB_FINGERPRINT)
self.assertEqual(ret['new'][0]['fingerprint'], GITHUB_FINGERPRINT)
except AssertionError as exc:
raise AssertionError(
'AssertionError: {0}. Function returned: {1}'.format(
@ -223,8 +223,8 @@ class SSHModuleTest(ModuleCase):
)
)
# check that item does exist
ret = self.run_function('ssh.get_known_host', ['root', 'github.com'],
config=KNOWN_HOSTS)
ret = self.run_function('ssh.get_known_host_entries', ['root', 'github.com'],
config=KNOWN_HOSTS)[0]
try:
self.assertEqual(ret['fingerprint'], GITHUB_FINGERPRINT)
except AssertionError as exc:

View File

@ -0,0 +1 @@
# -*- coding: utf-8 -*-

View File

@ -0,0 +1,65 @@
# -*- coding: utf-8 -*-
'''
Tests for the spm build utility
'''
# Import python libs
from __future__ import absolute_import
import os
import shutil
import textwrap
# Import Salt Testing libs
from tests.support.case import SPMCase
from tests.support.helpers import destructiveTest
# Import Salt Libraries
import salt.utils.files
@destructiveTest
class SPMBuildTest(SPMCase):
'''
Validate the spm build command
'''
def setUp(self):
self.config = self._spm_config()
self.formula_dir = os.path.join(' '.join(self.config['file_roots']['base']), 'formulas')
self.formula_sls_dir = os.path.join(self.formula_dir, 'apache')
self.formula_sls = os.path.join(self.formula_sls_dir, 'apache.sls')
self.formula_file = os.path.join(self.formula_dir, 'FORMULA')
dirs = [self.formula_dir, self.formula_sls_dir]
for formula_dir in dirs:
os.makedirs(formula_dir)
with salt.utils.files.fopen(self.formula_sls, 'w') as fp:
fp.write(textwrap.dedent('''\
install-apache:
pkg.installed:
- name: apache2
'''))
with salt.utils.files.fopen(self.formula_file, 'w') as fp:
fp.write(textwrap.dedent('''\
name: apache
os: RedHat, Debian, Ubuntu, Suse, FreeBSD
os_family: RedHat, Debian, Suse, FreeBSD
version: 201506
release: 2
summary: Formula for installing Apache
description: Formula for installing Apache
'''))
def test_spm_build(self):
'''
test spm build
'''
build_spm = self.run_spm('build', self.config, self.formula_dir)
spm_file = os.path.join(self.config['spm_build_dir'], 'apache-201506-2.spm')
# Make sure .spm file gets created
self.assertTrue(os.path.exists(spm_file))
# Make sure formula path dir is created
self.assertTrue(os.path.isdir(self.config['formula_path']))
def tearDown(self):
shutil.rmtree(self._tmp_spm)

View File

@ -66,7 +66,7 @@ class SSHKnownHostsStateTest(ModuleCase, SaltReturnAssertsMixin):
raise err
self.assertSaltStateChangesEqual(
ret, GITHUB_FINGERPRINT, keys=('new', 'fingerprint')
ret, GITHUB_FINGERPRINT, keys=('new', 0, 'fingerprint')
)
# save twice, no changes
@ -81,7 +81,7 @@ class SSHKnownHostsStateTest(ModuleCase, SaltReturnAssertsMixin):
**dict(kwargs, name=GITHUB_IP))
try:
self.assertSaltStateChangesEqual(
ret, GITHUB_FINGERPRINT, keys=('new', 'fingerprint')
ret, GITHUB_FINGERPRINT, keys=('new', 0, 'fingerprint')
)
except AssertionError as err:
try:
@ -94,8 +94,8 @@ class SSHKnownHostsStateTest(ModuleCase, SaltReturnAssertsMixin):
# record for every host must be available
ret = self.run_function(
'ssh.get_known_host', ['root', 'github.com'], config=KNOWN_HOSTS
)
'ssh.get_known_host_entries', ['root', 'github.com'], config=KNOWN_HOSTS
)[0]
try:
self.assertNotIn(ret, ('', None))
except AssertionError:
@ -103,8 +103,8 @@ class SSHKnownHostsStateTest(ModuleCase, SaltReturnAssertsMixin):
'Salt return \'{0}\' is in (\'\', None).'.format(ret)
)
ret = self.run_function(
'ssh.get_known_host', ['root', GITHUB_IP], config=KNOWN_HOSTS
)
'ssh.get_known_host_entries', ['root', GITHUB_IP], config=KNOWN_HOSTS
)[0]
try:
self.assertNotIn(ret, ('', None, {}))
except AssertionError:
@ -144,7 +144,7 @@ class SSHKnownHostsStateTest(ModuleCase, SaltReturnAssertsMixin):
# remove once, the key is gone
ret = self.run_state('ssh_known_hosts.absent', **kwargs)
self.assertSaltStateChangesEqual(
ret, GITHUB_FINGERPRINT, keys=('old', 'fingerprint')
ret, GITHUB_FINGERPRINT, keys=('old', 0, 'fingerprint')
)
# remove twice, nothing has changed

View File

@ -564,6 +564,49 @@ class ShellCase(ShellTestCase, AdaptedConfigurationTestCaseMixin, ScriptPathMixi
timeout=timeout)
class SPMCase(TestCase, AdaptedConfigurationTestCaseMixin):
'''
Class for handling spm commands
'''
def _spm_config(self):
self._tmp_spm = tempfile.mkdtemp()
config = self.get_temp_config('minion', **{
'spm_logfile': os.path.join(self._tmp_spm, 'log'),
'spm_repos_config': os.path.join(self._tmp_spm, 'etc', 'spm.repos'),
'spm_cache_dir': os.path.join(self._tmp_spm, 'cache'),
'spm_build_dir': os.path.join(self._tmp_spm, 'build'),
'spm_build_exclude': ['.git'],
'spm_db_provider': 'sqlite3',
'spm_files_provider': 'local',
'spm_db': os.path.join(self._tmp_spm, 'packages.db'),
'extension_modules': os.path.join(self._tmp_spm, 'modules'),
'file_roots': {'base': [self._tmp_spm, ]},
'formula_path': os.path.join(self._tmp_spm, 'spm'),
'pillar_path': os.path.join(self._tmp_spm, 'pillar'),
'reactor_path': os.path.join(self._tmp_spm, 'reactor'),
'assume_yes': True,
'force': False,
'verbose': False,
'cache': 'localfs',
'cachedir': os.path.join(self._tmp_spm, 'cache'),
'spm_repo_dups': 'ignore',
'spm_share_dir': os.path.join(self._tmp_spm, 'share'),
})
return config
def _spm_client(self, config):
import salt.spm
ui = salt.spm.SPMCmdlineInterface()
client = salt.spm.SPMClient(ui, config)
return client
def run_spm(self, cmd, config, arg=()):
client = self._spm_client(config)
spm_cmd = client.run([cmd, arg])
return spm_cmd
class ModuleCase(TestCase, SaltClientTestCaseMixin):
'''
Execute a module function
@ -582,7 +625,7 @@ class ModuleCase(TestCase, SaltClientTestCaseMixin):
behavior of the raw function call
'''
know_to_return_none = (
'file.chown', 'file.chgrp', 'ssh.recv_known_host'
'file.chown', 'file.chgrp', 'ssh.recv_known_host_entries'
)
if 'f_arg' in kwargs:
kwargs['arg'] = kwargs.pop('f_arg')

View File

@ -59,7 +59,9 @@ class BTMPBeaconTestCase(TestCase, LoaderModuleMockMixin):
self.assertEqual(ret, (True, 'Valid beacon configuration'))
with patch('salt.utils.files.fopen', mock_open()) as m_open:
ret = btmp.beacon(config)
m_open.assert_called_with(btmp.BTMP, 'rb')
self.assertEqual(ret, [])
def test_match(self):

View File

@ -60,7 +60,9 @@ class WTMPBeaconTestCase(TestCase, LoaderModuleMockMixin):
self.assertEqual(ret, (True, 'Valid beacon configuration'))
with patch('salt.utils.files.fopen', mock_open()) as m_open:
ret = wtmp.beacon(config)
m_open.assert_called_with(wtmp.WTMP, 'rb')
self.assertEqual(ret, [])
def test_match(self):

View File

@ -22,9 +22,14 @@ TEST_PROFILES = {
'ssh_username': 'fred',
'remove_config_on_destroy': False, # expected for test
'shutdown_on_destroy': True # expected value for test
},
'testprofile3': { # this profile is used in test_create_wake_on_lan()
'wake_on_lan_mac': 'aa-bb-cc-dd-ee-ff',
'wol_sender_node': 'friend1',
'wol_boot_wait': 0.01 # we want the wait to be very short
}
}
TEST_PROFILE_NAMES = ['testprofile1', 'testprofile2']
TEST_PROFILE_NAMES = ['testprofile1', 'testprofile2', 'testprofile3']
@skipIf(NO_MOCK, NO_MOCK_REASON)
@ -78,12 +83,38 @@ class SaltifyTestCase(TestCase, LoaderModuleMockMixin):
{'cloud.bootstrap': mock_cmd}):
vm_ = {'deploy': True,
'driver': 'saltify',
'name': 'testprofile2',
'name': 'new2',
'profile': 'testprofile2',
}
result = saltify.create(vm_)
mock_cmd.assert_called_once_with(vm_, ANY)
self.assertTrue(result)
def test_create_wake_on_lan(self):
'''
Test that wake-on-LAN is invoked when creating the VM
'''
mock_sleep = MagicMock()
mock_cmd = MagicMock(return_value=True)
mm_cmd = MagicMock(return_value={'friend1': True})
lcl = salt.client.LocalClient()
lcl.cmd = mm_cmd
with patch('time.sleep', mock_sleep):
with patch('salt.client.LocalClient', return_value=lcl):
with patch.dict(
'salt.cloud.clouds.saltify.__utils__',
{'cloud.bootstrap': mock_cmd}):
vm_ = {'deploy': True,
'driver': 'saltify',
'name': 'new1',
'profile': 'testprofile3',
}
result = saltify.create(vm_)
mock_cmd.assert_called_once_with(vm_, ANY)
mm_cmd.assert_called_with('friend1', 'network.wol', ['aa-bb-cc-dd-ee-ff'])
mock_sleep.assert_called_with(0.01)
self.assertTrue(result)
def test_avail_locations(self):
'''
Test the avail_locations will always return {}

View File

@ -59,6 +59,9 @@ class BeaconsTestCase(TestCase, LoaderModuleMockMixin):
event_returns = [{'complete': True,
'tag': '/salt/minion/minion_beacons_list_complete',
'beacons': {}},
{'complete': True,
'tag': '/salt/minion/minion_beacons_list_available_complete',
'beacons': ['ps']},
{'complete': True,
'valid': True,
'vcomment': '',

View File

@ -504,6 +504,26 @@ class FileModuleTestCase(TestCase, LoaderModuleMockMixin):
}
}
def test_check_file_meta_no_lsattr(self):
'''
Ensure that we skip attribute comparison if lsattr(1) is not found
'''
source = "salt:///README.md"
name = "/home/git/proj/a/README.md"
source_sum = {}
stats_result = {'size': 22, 'group': 'wheel', 'uid': 0, 'type': 'file',
'mode': '0600', 'gid': 0, 'target': name, 'user':
'root', 'mtime': 1508356390, 'atime': 1508356390,
'inode': 447, 'ctime': 1508356390}
with patch('salt.modules.file.stats') as m_stats:
m_stats.return_value = stats_result
with patch('salt.utils.path.which') as m_which:
m_which.return_value = None
result = filemod.check_file_meta(name, name, source, source_sum,
'root', 'root', '755', None,
'base')
self.assertTrue(result, None)
@skipIf(salt.utils.platform.is_windows(), 'SED is not available on Windows')
def test_sed_limit_escaped(self):
with tempfile.NamedTemporaryFile(mode='w+') as tfile:

View File

@ -13,6 +13,7 @@ from tests.support.unit import TestCase, skipIf
# Import Salt libs
import salt.auth
from salt.ext.six.moves import map # pylint: disable=import-error
try:
import salt.netapi.rest_tornado as rest_tornado
from salt.netapi.rest_tornado import saltnado
@ -619,6 +620,34 @@ class TestSaltAuthHandler(SaltnadoTestCase):
self.assertEqual(response.code, 400)
class TestSaltRunHandler(SaltnadoTestCase):
def get_app(self):
urls = [('/run', saltnado.RunSaltAPIHandler)]
return self.build_tornado_app(urls)
def test_authentication_exception_consistency(self):
'''
Test that the authentication exception is consistent across clients.
'''
valid_response = {'return': ['Failed to authenticate']}
clients = ['local', 'local_async', 'runner', 'runner_async']
request_lowstates = map(lambda client: {"client": client,
"tgt": "*",
"fun": "test.fib",
"arg": ["10"]},
clients)
for request_lowstate in request_lowstates:
response = self.fetch('/run',
method='POST',
body=json.dumps(request_lowstate),
headers={'Content-Type': self.content_type_map['json']})
self.assertEqual(valid_response, json.loads(response.body))
@skipIf(HAS_TORNADO is False, 'The tornado package needs to be installed') # pylint: disable=W0223
class TestWebsocketSaltAPIHandler(SaltnadoTestCase):

View File

@ -33,11 +33,11 @@ class MountTestCase(TestCase, LoaderModuleMockMixin):
'''
Test to verify that a device is mounted.
'''
name = '/mnt/sdb'
device = '/dev/sdb5'
name = os.path.realpath('/mnt/sdb')
device = os.path.realpath('/dev/sdb5')
fstype = 'xfs'
name2 = '/mnt/cifs'
name2 = os.path.realpath('/mnt/cifs')
device2 = '//SERVER/SHARE/'
fstype2 = 'cifs'
opts2 = ['noowners']
@ -64,12 +64,11 @@ class MountTestCase(TestCase, LoaderModuleMockMixin):
mock_group = MagicMock(return_value={'gid': 100})
mock_read_cache = MagicMock(return_value={})
mock_write_cache = MagicMock(return_value=True)
umount1 = ("Forced unmount because devices don't match. "
"Wanted: /dev/sdb6, current: /dev/sdb5, /dev/sdb5")
with patch.dict(mount.__grains__, {'os': 'Darwin'}):
with patch.dict(mount.__salt__, {'mount.active': mock_mnt,
'cmd.run_all': mock_ret,
'mount.umount': mock_f}):
'mount.umount': mock_f}), \
patch('os.path.exists', MagicMock(return_value=True)):
comt = ('Unable to find device with label /dev/sdb5.')
ret.update({'comment': comt})
self.assertDictEqual(mount.mounted(name, 'LABEL=/dev/sdb5',
@ -83,7 +82,7 @@ class MountTestCase(TestCase, LoaderModuleMockMixin):
ret)
with patch.dict(mount.__opts__, {'test': False}):
comt = ('Unable to unmount /mnt/sdb: False.')
comt = ('Unable to unmount {0}: False.'.format(name))
umount = ('Forced unmount and mount because'
' options (noowners) changed')
ret.update({'comment': comt, 'result': False,
@ -91,16 +90,19 @@ class MountTestCase(TestCase, LoaderModuleMockMixin):
self.assertDictEqual(mount.mounted(name, device, 'nfs'),
ret)
umount1 = ("Forced unmount because devices don't match. "
"Wanted: {0}, current: {1}, {1}".format(os.path.realpath('/dev/sdb6'), device))
comt = ('Unable to unmount')
ret.update({'comment': comt, 'result': None,
'changes': {'umount': umount1}})
self.assertDictEqual(mount.mounted(name, '/dev/sdb6',
self.assertDictEqual(mount.mounted(name, os.path.realpath('/dev/sdb6'),
fstype, opts=[]), ret)
with patch.dict(mount.__salt__, {'mount.active': mock_emt,
'mount.mount': mock_str,
'mount.set_automaster': mock}):
with patch.dict(mount.__opts__, {'test': True}):
with patch.dict(mount.__opts__, {'test': True}), \
patch('os.path.exists', MagicMock(return_value=False)):
comt = ('{0} does not exist and would not be created'.format(name))
ret.update({'comment': comt, 'changes': {}})
self.assertDictEqual(mount.mounted(name, device,
@ -119,14 +121,16 @@ class MountTestCase(TestCase, LoaderModuleMockMixin):
self.assertDictEqual(mount.mounted(name, device,
fstype), ret)
with patch.dict(mount.__opts__, {'test': True}):
with patch.dict(mount.__opts__, {'test': True}), \
patch('os.path.exists', MagicMock(return_value=False)):
comt = ('{0} does not exist and would neither be created nor mounted. '
'{0} needs to be written to the fstab in order to be made persistent.'.format(name))
ret.update({'comment': comt, 'result': None})
self.assertDictEqual(mount.mounted(name, device, fstype,
mount=False), ret)
with patch.dict(mount.__opts__, {'test': False}):
with patch.dict(mount.__opts__, {'test': False}), \
patch('os.path.exists', MagicMock(return_value=False)):
comt = ('{0} not present and not mounted. '
'Entry already exists in the fstab.'.format(name))
ret.update({'comment': comt, 'result': True})

View File

@ -96,7 +96,7 @@ class SshKnownHostsTestCase(TestCase, LoaderModuleMockMixin):
self.assertDictEqual(ssh_known_hosts.present(name, user), ret)
result = {'status': 'updated', 'error': '',
'new': {'fingerprint': fingerprint, 'key': key},
'new': [{'fingerprint': fingerprint, 'key': key}],
'old': ''}
mock = MagicMock(return_value=result)
with patch.dict(ssh_known_hosts.__salt__,
@ -104,8 +104,8 @@ class SshKnownHostsTestCase(TestCase, LoaderModuleMockMixin):
comt = ("{0}'s key saved to .ssh/known_hosts (key: {1})"
.format(name, key))
ret.update({'comment': comt, 'result': True,
'changes': {'new': {'fingerprint': fingerprint,
'key': key}, 'old': ''}})
'changes': {'new': [{'fingerprint': fingerprint,
'key': key}], 'old': ''}})
self.assertDictEqual(ssh_known_hosts.present(name, user,
key=key), ret)
@ -136,14 +136,14 @@ class SshKnownHostsTestCase(TestCase, LoaderModuleMockMixin):
mock = MagicMock(return_value=False)
with patch.dict(ssh_known_hosts.__salt__,
{'ssh.get_known_host': mock}):
{'ssh.get_known_host_entries': mock}):
comt = ('Host is already absent')
ret.update({'comment': comt, 'result': True})
self.assertDictEqual(ssh_known_hosts.absent(name, user), ret)
mock = MagicMock(return_value=True)
with patch.dict(ssh_known_hosts.__salt__,
{'ssh.get_known_host': mock}):
{'ssh.get_known_host_entries': mock}):
with patch.dict(ssh_known_hosts.__opts__, {'test': True}):
comt = ('Key for github.com is set to be'
' removed from .ssh/known_hosts')

View File

@ -2,14 +2,16 @@
# Import python libs
from __future__ import absolute_import
import os
from jinja2 import Environment, DictLoader, exceptions
import ast
import copy
import tempfile
import json
import datetime
import json
import os
import pprint
import re
import tempfile
import yaml
# Import Salt Testing libs
from tests.support.unit import skipIf, TestCase
@ -17,27 +19,30 @@ from tests.support.case import ModuleCase
from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch, MagicMock
from tests.support.paths import TMP_CONF_DIR
# Import salt libs
# Import Salt libs
import salt.config
from salt.ext import six
import salt.loader
import salt.utils.files
from salt.utils import get_context
from salt.exceptions import SaltRenderError
from salt.ext import six
from salt.ext.six.moves import builtins
from salt.utils.decorators.jinja import JinjaFilter
from salt.utils.jinja import (
SaltCacheLoader,
SerializerExtension,
ensure_sequence_filter
)
from salt.utils.templates import JINJA, render_jinja_tmpl
from salt.utils.odict import OrderedDict
from salt.utils.templates import (
get_context,
JINJA,
render_jinja_tmpl
)
import salt.utils.files
import salt.utils.stringutils
# Import 3rd party libs
import yaml
from jinja2 import Environment, DictLoader, exceptions
try:
import timelib # pylint: disable=W0611
HAS_TIMELIB = True

View File

@ -14,6 +14,7 @@ from tests.support.mock import patch, call, NO_MOCK, NO_MOCK_REASON, MagicMock
import salt.master
from tests.support.case import ModuleCase
from salt import auth
import salt.utils.platform
@skipIf(NO_MOCK, NO_MOCK_REASON)
@ -150,6 +151,7 @@ class MasterACLTestCase(ModuleCase):
}
self.addCleanup(delattr, self, 'valid_clear_load')
@skipIf(salt.utils.platform.is_windows(), 'PAM eauth not available on Windows')
def test_master_publish_name(self):
'''
Test to ensure a simple name can auth against a given function.
@ -220,6 +222,7 @@ class MasterACLTestCase(ModuleCase):
self.clear.publish(self.valid_clear_load)
self.assertEqual(self.fire_event_mock.mock_calls, [])
@skipIf(salt.utils.platform.is_windows(), 'PAM eauth not available on Windows')
def test_master_minion_glob(self):
'''
Test to ensure we can allow access to a given
@ -257,6 +260,7 @@ class MasterACLTestCase(ModuleCase):
# Unimplemented
pass
@skipIf(salt.utils.platform.is_windows(), 'PAM eauth not available on Windows')
def test_args_empty_spec(self):
'''
Test simple arg restriction allowed.
@ -275,6 +279,7 @@ class MasterACLTestCase(ModuleCase):
self.clear.publish(self.valid_clear_load)
self.assertEqual(self.fire_event_mock.call_args[0][0]['fun'], 'test.empty')
@skipIf(salt.utils.platform.is_windows(), 'PAM eauth not available on Windows')
def test_args_simple_match(self):
'''
Test simple arg restriction allowed.
@ -296,6 +301,7 @@ class MasterACLTestCase(ModuleCase):
self.clear.publish(self.valid_clear_load)
self.assertEqual(self.fire_event_mock.call_args[0][0]['fun'], 'test.echo')
@skipIf(salt.utils.platform.is_windows(), 'PAM eauth not available on Windows')
def test_args_more_args(self):
'''
Test simple arg restriction allowed to pass unlisted args.
@ -356,6 +362,7 @@ class MasterACLTestCase(ModuleCase):
self.clear.publish(self.valid_clear_load)
self.assertEqual(self.fire_event_mock.mock_calls, [])
@skipIf(salt.utils.platform.is_windows(), 'PAM eauth not available on Windows')
def test_args_kwargs_match(self):
'''
Test simple kwargs restriction allowed.
@ -429,6 +436,7 @@ class MasterACLTestCase(ModuleCase):
self.clear.publish(self.valid_clear_load)
self.assertEqual(self.fire_event_mock.mock_calls, [])
@skipIf(salt.utils.platform.is_windows(), 'PAM eauth not available on Windows')
def test_args_mixed_match(self):
'''
Test mixed args and kwargs restriction allowed.
@ -574,6 +582,7 @@ class AuthACLTestCase(ModuleCase):
}
self.addCleanup(delattr, self, 'valid_clear_load')
@skipIf(salt.utils.platform.is_windows(), 'PAM eauth not available on Windows')
def test_acl_simple_allow(self):
self.clear.publish(self.valid_clear_load)
self.assertEqual(self.auth_check_mock.call_args[0][0],

View File

@ -68,26 +68,7 @@ class LoggerMock(object):
return False
@skipIf(NO_MOCK, NO_MOCK_REASON)
class DaemonsStarterTestCase(TestCase, SaltClientTestCaseMixin):
'''
Unit test for the daemons starter classes.
'''
def _multiproc_exec_test(self, exec_test):
m_parent, m_child = multiprocessing.Pipe()
p_ = multiprocessing.Process(target=exec_test, args=(m_child,))
p_.start()
self.assertTrue(m_parent.recv())
p_.join()
def test_master_daemon_hash_type_verified(self):
'''
Verify if Master is verifying hash_type config option.
:return:
'''
def exec_test(child_pipe):
def _master_exec_test(child_pipe):
def _create_master():
'''
Create master instance
@ -118,16 +99,9 @@ class DaemonsStarterTestCase(TestCase, SaltClientTestCaseMixin):
and not _logger.has_message('Do not use ')
child_pipe.send(ret)
child_pipe.close()
self._multiproc_exec_test(exec_test)
def test_minion_daemon_hash_type_verified(self):
'''
Verify if Minion is verifying hash_type config option.
:return:
'''
def exec_test(child_pipe):
def _minion_exec_test(child_pipe):
def _create_minion():
'''
Create minion instance
@ -160,16 +134,8 @@ class DaemonsStarterTestCase(TestCase, SaltClientTestCaseMixin):
child_pipe.send(ret)
child_pipe.close()
self._multiproc_exec_test(exec_test)
def test_proxy_minion_daemon_hash_type_verified(self):
'''
Verify if ProxyMinion is verifying hash_type config option.
:return:
'''
def exec_test(child_pipe):
def _proxy_exec_test(child_pipe):
def _create_proxy_minion():
'''
Create proxy minion instance
@ -202,16 +168,8 @@ class DaemonsStarterTestCase(TestCase, SaltClientTestCaseMixin):
child_pipe.send(ret)
child_pipe.close()
self._multiproc_exec_test(exec_test)
def test_syndic_daemon_hash_type_verified(self):
'''
Verify if Syndic is verifying hash_type config option.
:return:
'''
def exec_test(child_pipe):
def _syndic_exec_test(child_pipe):
def _create_syndic():
'''
Create syndic instance
@ -244,4 +202,48 @@ class DaemonsStarterTestCase(TestCase, SaltClientTestCaseMixin):
child_pipe.send(ret)
child_pipe.close()
self._multiproc_exec_test(exec_test)
@skipIf(NO_MOCK, NO_MOCK_REASON)
class DaemonsStarterTestCase(TestCase, SaltClientTestCaseMixin):
'''
Unit test for the daemons starter classes.
'''
def _multiproc_exec_test(self, exec_test):
m_parent, m_child = multiprocessing.Pipe()
p_ = multiprocessing.Process(target=exec_test, args=(m_child,))
p_.start()
self.assertTrue(m_parent.recv())
p_.join()
def test_master_daemon_hash_type_verified(self):
'''
Verify if Master is verifying hash_type config option.
:return:
'''
self._multiproc_exec_test(_master_exec_test)
def test_minion_daemon_hash_type_verified(self):
'''
Verify if Minion is verifying hash_type config option.
:return:
'''
self._multiproc_exec_test(_minion_exec_test)
def test_proxy_minion_daemon_hash_type_verified(self):
'''
Verify if ProxyMinion is verifying hash_type config option.
:return:
'''
self._multiproc_exec_test(_proxy_exec_test)
def test_syndic_daemon_hash_type_verified(self):
'''
Verify if Syndic is verifying hash_type config option.
:return:
'''
self._multiproc_exec_test(_syndic_exec_test)