Merge branch 'develop' into svn-bug-41022

This commit is contained in:
Sébastien Wains 2017-06-08 22:11:30 +02:00 committed by GitHub
commit 9ba37fdb37
28 changed files with 1827 additions and 640 deletions

View File

@ -194,6 +194,7 @@ Set up an initial profile at ``/etc/salt/cloud.profiles`` or
guestinfo.foo: bar
guestinfo.domain: foobar.com
guestinfo.customVariable: customValue
annotation: Created by Salt-Cloud
deploy: True
customization: True
@ -451,6 +452,11 @@ Set up an initial profile at ``/etc/salt/cloud.profiles`` or
present, it will be reset with the new value provided. Otherwise, a new option is
added. Keys with empty values will be removed.
``annotation``
User-provided description of the virtual machine. This will store a message in the
vSphere interface, under the annotations section in the Summary view of the virtual
machine.
``deploy``
Specifies if salt should be installed on the newly created VM. Default is ``True``
so salt will be installed using the bootstrap script. If ``template: True`` or

File diff suppressed because it is too large.

View File

@ -25,7 +25,7 @@ The methodologies for network automation have been introduced in
minions:
- :mod:`NAPALM proxy <salt.proxy.napalm>`
- :mod:`Junos <salt.proxy.junos>`
- :mod:`Junos proxy<salt.proxy.junos>`
- :mod:`Cisco NXOS <salt.proxy.nxos>`
- :mod:`Cisco NOS <salt.proxy.cisconso>`
@ -40,7 +40,7 @@ and the interaction with the network device does not rely on a particular vendor
.. image:: /_static/napalm_logo.png
Bgeginning with Nitrogen, the NAPALM modules have been transformed so they can
Beginning with Nitrogen, the NAPALM modules have been transformed so they can
run in both proxy and regular minions. That means, if the operating system
allows, the salt-minion package can be installed directly on the network gear.
The interface between the network operating system and Salt in that case would
@ -411,6 +411,224 @@ multi-vendor network:
Besides CLI, the state can be scheduled or executed when triggered by a certain
event.
JUNOS
-----
Juniper has developed a Junos specific proxy infrastructure which allows
remote execution and configuration management of Junos devices without
having to install SaltStack on the device. The infrastructure includes:
- :mod:`Junos proxy <salt.proxy.junos>`
- :mod:`Junos execution module <salt.modules.junos>`
- :mod:`Junos state module <salt.states.junos>`
- :mod:`Junos syslog engine <salt.engines.junos_syslog>`
The execution and state modules are implemented using junos-eznc (PyEZ).
Junos PyEZ is a microframework for Python that enables you to remotely manage
and automate devices running the Junos operating system.
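Outside of Salt, junos-eznc can also be used on its own; a minimal PyEZ sketch
(host and credentials below are placeholders), assuming NETCONF over SSH is
enabled on the device:

.. code-block:: python

    # Minimal junos-eznc (PyEZ) sketch; host and credentials are placeholders.
    from jnpr.junos import Device

    with Device(host='192.0.2.1', user='user', passwd='secret123') as dev:
        # dev.facts exposes the gathered device facts once the session is open.
        print(dev.facts['hostname'])
        print(dev.facts['version'])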
Getting started
###############
Install PyEZ on the system that will run the Junos proxy minion.
It is required to run the Junos-specific modules.
.. code-block:: shell
pip install junos-eznc
Next, set the master of the proxy minions.
``/etc/salt/proxy``
.. code-block:: yaml
master: <master_ip>
Add the details of the Junos device. Device details are usually stored in
salt pillars. If you do not wish to store credentials in the pillar, you can
set up passwordless SSH.
``/srv/pillar/vmx_details.sls``
.. code-block:: yaml
proxy:
proxytype: junos
host: <hostip>
username: user
passwd: secret123
Map the pillar file to the proxy minion. This is done in the top file.
``/srv/pillar/top.sls``
.. code-block:: yaml
base:
vmx:
- vmx_details
.. note::
Before starting the Junos proxy, make sure that NETCONF is enabled on the
Junos device. This can be done by adding the following configuration on
the Junos device.
.. code-block:: shell
set system services netconf ssh
Start the salt master.
.. code-block:: bash
salt-master -l debug
Then start the salt proxy.
.. code-block:: bash
salt-proxy --proxyid=vmx -l debug
Once the master and the Junos proxy minion have started, we can run execution
and state modules on the proxy minion. Below are a few examples.
CLI examples
############
For detailed documentation of all the Junos execution modules, refer to:
:mod:`Junos execution module <salt.modules.junos>`
Display device facts.
.. code-block:: bash
$ sudo salt 'vmx' junos.facts
Refresh the Junos facts. This function will also refresh the facts which are
stored in salt grains. (Junos proxy stores Junos facts in the salt grains)
.. code-block:: bash
$ sudo salt 'vmx' junos.facts_refresh
Call an RPC.
.. code-block:: bash
$ sudo salt 'vmx' junos.rpc 'get-interface-information' '/var/log/interface-info.txt' terse=True
Install config on the device.
.. code-block:: bash
$ sudo salt 'vmx' junos.install_config 'salt://my_config.set'
Shut down the Junos device.
.. code-block:: bash
$ sudo salt 'vmx' junos.shutdown shutdown=True in_min=10
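The same functions can also be called from Python through Salt's client API;
a minimal sketch (assuming a running master and the ``vmx`` proxy minion
configured above, executed by a user allowed to publish commands):

.. code-block:: python

    # Minimal sketch using Salt's Python client API on the master.
    import salt.client

    local = salt.client.LocalClient()

    # Equivalent of: salt 'vmx' junos.facts
    facts = local.cmd('vmx', 'junos.facts')
    print(facts['vmx'])

    # Equivalent of: salt 'vmx' junos.install_config 'salt://my_config.set'
    result = local.cmd('vmx', 'junos.install_config', ['salt://my_config.set'])
    print(result['vmx'])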
State file examples
###################
For detailed documentation of all the Junos state modules, refer to:
:mod:`Junos state module <salt.states.junos>`
Execute an RPC on the Junos device and store the output in a file.
``/srv/salt/rpc.sls``
.. code-block:: yaml
get-interface-information:
junos:
- rpc
- dest: /home/user/rpc.log
- interface_name: lo0
Lock the Junos device, load the configuration, commit it, and unlock
the device.
``/srv/salt/load.sls``
.. code-block:: yaml
lock the config:
junos.lock
salt://configs/my_config.set:
junos:
- install_config
- timeout: 100
- diffs_file: 'var/log/diff'
commit the changes:
junos:
- commit
unlock the config:
junos.unlock
Install the appropriate image on the device according to the device personality.
``/srv/salt/image_install.sls``
.. code-block:: jinja
{% if grains['junos_facts']['personality'] == 'MX' %}
salt://images/mx_junos_image.tgz:
junos:
- install_os
- timeout: 100
- reboot: True
{% elif grains['junos_facts']['personality'] == 'EX' %}
salt://images/ex_junos_image.tgz:
junos:
- install_os
- timeout: 150
{% elif grains['junos_facts']['personality'] == 'SRX' %}
salt://images/srx_junos_image.tgz:
junos:
- install_os
- timeout: 150
{% endif %}
Junos Syslog Engine
###################
:mod:`Junos Syslog Engine <salt.engines.junos_syslog>` is a Salt engine
which receives data from various Junos devices, extracts event information and
forwards it on the master/minion event bus. To start the engine on the salt
master, add the following configuration in the master config file.
The engine can also run on the salt minion.
``/etc/salt/master``
.. code-block:: yaml
engines:
- junos_syslog:
port: xxx
For the junos_syslog engine to receive events, syslog must be configured on the
Junos device. This can be done via the following configuration:
.. code-block:: shell
set system syslog host <ip-of-the-salt-device> port xxx any any
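Events forwarded by the engine can then be consumed from the master event bus;
a minimal sketch using ``salt.utils.event`` (run on the master; the
``jnpr/syslog`` tag prefix below is an assumption, check the engine
documentation for the exact tag format):

.. code-block:: python

    # Minimal sketch: listen on the master event bus for events forwarded by
    # the junos_syslog engine. The 'jnpr/syslog' tag prefix is an assumption.
    import salt.config
    import salt.utils.event

    opts = salt.config.client_config('/etc/salt/master')
    event_bus = salt.utils.event.get_event(
        'master',
        sock_dir=opts['sock_dir'],
        transport=opts['transport'],
        opts=opts,
        listen=True)

    while True:
        event = event_bus.get_event(tag='jnpr/syslog', wait=30, full=True)
        if event:
            print(event['tag'], event['data'])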
.. toctree::
:maxdepth: 2
:glob:

View File

@ -266,6 +266,82 @@ As well as from salt-api:
{"return": [{"jerry": {"jid": "20170520151531477653", "retcode": 1, "ret": ""}}]}
Jinja
=====
Filters
-------
New filters in Nitrogen:
- :jinja_ref:`to_bool`
- :jinja_ref:`exactly_n_true`
- :jinja_ref:`exactly_one_true`
- :jinja_ref:`quote`
- :jinja_ref:`regex_search`
- :jinja_ref:`regex_match`
- :jinja_ref:`uuid`
- :jinja_ref:`is_list`
- :jinja_ref:`is_iter`
- :jinja_ref:`min`
- :jinja_ref:`max`
- :jinja_ref:`avg`
- :jinja_ref:`union`
- :jinja_ref:`intersect`
- :jinja_ref:`difference`
- :jinja_ref:`symmetric_difference`
- :jinja_ref:`is_sorted`
- :jinja_ref:`compare_lists`
- :jinja_ref:`compare_dicts`
- :jinja_ref:`is_hex`
- :jinja_ref:`contains_whitespace`
- :jinja_ref:`substring_in_list`
- :jinja_ref:`check_whitelist_blacklist`
- :jinja_ref:`date_format`
- :jinja_ref:`str_to_num`
- :jinja_ref:`to_bytes`
- :jinja_ref:`json_decode_list`
- :jinja_ref:`json_decode_dict`
- :jinja_ref:`rand_str`
- :jinja_ref:`md5`
- :jinja_ref:`sha256`
- :jinja_ref:`sha512`
- :jinja_ref:`base64_encode`
- :jinja_ref:`base64_decode`
- :jinja_ref:`hmac`
- :jinja_ref:`http_query`
- :jinja_ref:`is_ip`
- :jinja_ref:`is_ipv4`
- :jinja_ref:`is_ipv6`
- :jinja_ref:`ipaddr`
- :jinja_ref:`ipv4`
- :jinja_ref:`ipv6`
- :jinja_ref:`network_hosts`
- :jinja_ref:`network_size`
- :jinja_ref:`gen_mac`
- :jinja_ref:`mac_str_to_bytes`
- :jinja_ref:`dns_check`
- :jinja_ref:`is_text_file`
- :jinja_ref:`is_binary_file`
- :jinja_ref:`is_empty_file`
- :jinja_ref:`file_hashsum`
- :jinja_ref:`list_files`
- :jinja_ref:`path_join`
- :jinja_ref:`which`
Logs
----
Another new feature, although not limited to Jinja, is the ability to
log debug messages directly from the template:
.. code-block:: jinja
{%- do salt.log.error('logging from jinja') -%}
See the :jinja_ref:`logs` paragraph.
Network Automation
==================
@ -334,6 +410,7 @@ New functions:
(in dBm).
New grains: :mod:`Host <salt.grains.napalm.host>`,
:mod:`Host DNS<salt.grains.napalm.host_dns>`,
:mod:`Username <salt.grains.napalm.username>` and
:mod:`Optional args <salt.grains.napalm.optional_args>`.
@ -465,14 +542,155 @@ Using the new ``roster_order`` configuration syntax it's now possible to compose
of grains, pillar and mine data and even Salt SDB URLs.
The new release is also fully IPv4 and IPv6 enabled and even has support for CIDR ranges.
Additional Features
===================
- The :mod:`mine.update <salt.modules.mine.update>` function
has a new optional argument ``mine_functions`` that can be used
to refresh mine functions at a more specific interval
than scheduled using the ``mine_interval`` option.
However, this argument can only be used through an explicitly defined schedule.
For example, if we need the mines for ``net.lldp`` to be refreshed
every 12 hours:
.. code-block:: yaml
schedule:
lldp_mine_update:
function: mine.update
kwargs:
mine_functions:
net.lldp: []
hours: 12
- The ``salt`` runner has a new function: :mod:`salt.execute <salt.runners.salt.execute>`.
It is mainly a shortcut to facilitate the execution of various functions
from other runners, e.g.:
.. code-block:: python
ret1 = __salt__['salt.execute']('*', 'mod.fun')
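The new runner can also be invoked outside of other runners through Salt's
Python API; a minimal sketch using ``salt.runner.RunnerClient`` (run on the
master; ``test.ping`` is only an illustrative target function):

.. code-block:: python

    # Minimal sketch: call the new salt.execute runner through the Python API.
    import salt.config
    import salt.runner

    opts = salt.config.master_config('/etc/salt/master')
    runner = salt.runner.RunnerClient(opts)

    # Equivalent of: salt-run salt.execute '*' test.ping
    ret = runner.cmd('salt.execute', ['*', 'test.ping'])
    print(ret)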
New Modules
===========
Beacons
-------
- :mod:`salt.beacons.log <salt.beacons.log>`
Engines
-------
- :mod:`salt.engines.stalekey <salt.engines.stalekey>`
- :mod:`salt.engines.junos_syslog <salt.engines.junos_syslog>`
- :mod:`salt.engines.napalm_syslog <salt.engines.napalm_syslog>`
Execution modules
-----------------
- :mod:`salt.modules.apk <salt.modules.apk>`
- :mod:`salt.modules.at_solaris <salt.modules.at_solaris>`
- :mod:`salt.modules.boto_kinesis <salt.modules.boto_kinesis>`
- :mod:`salt.modules.boto3_elasticache <salt.modules.boto3_elasticache>`
- :mod:`salt.modules.boto3_route53 <salt.modules.boto3_route53>`
- :mod:`salt.modules.capirca_acl <salt.modules.capirca_acl>`
- :mod:`salt.modules.freebsd_update <salt.modules.freebsd_update>`
- :mod:`salt.modules.grafana4 <salt.modules.grafana4>`
- :mod:`salt.modules.heat <salt.modules.heat>`
- :mod:`salt.modules.icinga2 <salt.modules.icinga2>`
- :mod:`salt.modules.logmod <salt.modules.logmod>`
- :mod:`salt.modules.mattermost <salt.modules.mattermost>`
- :mod:`salt.modules.namecheap_dns <salt.modules.namecheap_dns>`
- :mod:`salt.modules.namecheap_domains <salt.modules.namecheap_domains>`
- :mod:`salt.modules.namecheap_ns <salt.modules.namecheap_ns>`
- :mod:`salt.modules.namecheap_users <salt.modules.namecheap_users>`
- :mod:`salt.modules.namecheap_ssl <salt.modules.namecheap_ssl>`
- :mod:`salt.modules.napalm <salt.modules.napalm>`
- :mod:`salt.modules.napalm_acl <salt.modules.napalm_acl>`
- :mod:`salt.modules.napalm_yang_mod <salt.modules.napalm_yang_mod>`
- :mod:`salt.modules.pdbedit <salt.modules.pdbedit>`
- :mod:`salt.modules.solrcloud <salt.modules.solrcloud>`
- :mod:`salt.modules.statuspage <salt.modules.statuspage>`
- :mod:`salt.modules.zonecfg <salt.modules.zonecfg>`
- :mod:`salt.modules.zoneadm <salt.modules.zoneadm>`
Grains
------
- :mod:`salt.grains.metadata <salt.grains.metadata>`
- :mod:`salt.grains.mdata <salt.grains.mdata>`
Outputters
----------
- :mod:`table <salt.output.table_out>`
- :mod:`profile <salt.output.profile>`
Pillar
------
- :mod:`salt.pillar.postgres <salt.pillar.postgres>`
- :mod:`salt.pillar.vmware_pillar <salt.pillar.vmware_pillar>`
Returners
---------
- :mod:`salt.returners.mattermost_returner <salt.returners.mattermost_returner>`
- :mod:`salt.returners.highstate_return <salt.returners.highstate_return>`
Roster
------
- :mod:`salt.roster.cache <salt.roster.cache>`
Runners
-------
- :mod:`salt.runners.bgp <salt.runners.bgp>`
- :mod:`salt.runners.mattermost <salt.runners.mattermost>`
- :mod:`salt.runners.net <salt.runners.net>`
SDB
---
- :mod:`salt.sdb.yaml <salt.sdb.yaml>`
- :mod:`salt.sdb.tism <salt.sdb.tism>`
- :mod:`salt.sdb.cache <salt.sdb.cache>`
States
------
- :mod:`salt.states.boto_kinesis <salt.states.boto_kinesis>`
- :mod:`salt.states.boto_efs <salt.states.boto_efs>`
- :mod:`salt.states.boto3_elasticache <salt.states.boto3_elasticache>`
- :mod:`salt.states.boto3_route53 <salt.states.boto3_route53>`
- :mod:`salt.states.docker_container <salt.states.docker_container>`
- :mod:`salt.states.docker_image <salt.states.docker_image>`
- :mod:`salt.states.docker_network <salt.states.docker_network>`
- :mod:`salt.states.docker_volume <salt.states.docker_volume>`
- :mod:`salt.states.elasticsearch <salt.states.elasticsearch>`
- :mod:`salt.states.grafana4_dashboard <salt.states.grafana4_dashboard>`
- :mod:`salt.states.grafana4_datasource <salt.states.grafana4_datasource>`
- :mod:`salt.states.grafana4_org <salt.states.grafana4_org>`
- :mod:`salt.states.grafana4_user <salt.states.grafana4_user>`
- :mod:`salt.states.heat <salt.states.heat>`
- :mod:`salt.states.icinga2 <salt.states.icinga2>`
- :mod:`salt.states.influxdb_continuous_query <salt.states.influxdb_continuous_query>`
- :mod:`salt.states.influxdb_retention_policy <salt.states.influxdb_retention_policy>`
- :mod:`salt.states.logadm <salt.states.logadm>`
- :mod:`salt.states.logrotate <salt.states.logrotate>`
- :mod:`salt.states.msteams <salt.states.msteams>`
- :mod:`salt.states.netacl <salt.states.netacl>`
- :mod:`salt.states.netconfig <salt.states.netconfig>`
- :mod:`salt.states.netyang <salt.states.netyang>`
- :mod:`salt.states.nix <salt.states.nix>`
- :mod:`salt.states.pdbedit <salt.states.pdbedit>`
- :mod:`salt.states.solrcloud <salt.states.solrcloud>`
- :mod:`salt.states.statuspage <salt.states.statuspage>`
- :mod:`salt.states.vault <salt.states.vault>`
- :mod:`salt.states.win_wua <salt.states.win_wua>`
- :mod:`salt.states.zone <salt.states.zone>`
Deprecations
============
@ -611,41 +829,41 @@ State Deprecations
The ``apache_conf`` state had the following functions removed:
- ``disable``: Please use ``disabled`` instead.
- ``enable``: Please use ``enabled`` instead.
The ``apache_module`` state had the following functions removed:
- ``disable``: Please use ``disabled`` instead.
- ``enable``: Please use ``enabled`` instead.
The ``apache_site`` state had the following functions removed:
- ``disable``: Please use ``disabled`` instead.
- ``enable``: Please use ``enabled`` instead.
The ``chocolatey`` state had the following functions removed:
- ``install``: Please use ``installed`` instead.
- ``uninstall``: Please use ``uninstalled`` instead.
The ``git`` state had the following changes:
- The ``config`` function was removed. Please use ``config_set`` instead.
- The ``is_global`` option was removed from the ``config_set`` function.
  Please use ``global`` instead.
- The ``always_fetch`` option was removed from the ``latest`` function, as
  it no longer has any effect. Please see the :ref:`2015.8.0<release-2015-8-0>`
  release notes for more information.
- The ``force`` option was removed from the ``latest`` function. Please
  use ``force_clone`` instead.
- The ``remote_name`` option was removed from the ``latest`` function.
  Please use ``remote`` instead.
The ``glusterfs`` state had the following function removed:
- ``created``: Please use ``volume_present`` instead.
The ``openvswitch_port`` state had the following change:
- The ``type`` option was removed from the ``present`` function. Please use ``tunnel_type`` instead.

pkg/rpm/salt-proxy@.service (symbolic link, 1 line)
View File

@ -0,0 +1 @@
../salt-proxy@.service

View File

@ -2863,6 +2863,8 @@ def create_attach_volumes(name, kwargs, call=None, wait_to_finish=True):
volume_dict['iops'] = volume['iops']
if 'encrypted' in volume:
volume_dict['encrypted'] = volume['encrypted']
if 'kmskeyid' in volume:
volume_dict['kmskeyid'] = volume['kmskeyid']
if 'volume_id' not in volume_dict:
created_volume = create_volume(volume_dict, call='function', wait_to_finish=wait_to_finish)
@ -4059,6 +4061,13 @@ def create_volume(kwargs=None, call=None, wait_to_finish=False):
# You can't set `encrypted` if you pass a snapshot
if 'encrypted' in kwargs and 'snapshot' not in kwargs:
params['Encrypted'] = kwargs['encrypted']
if 'kmskeyid' in kwargs:
params['KmsKeyId'] = kwargs['kmskeyid']
if 'kmskeyid' in kwargs and 'encrypted' not in kwargs:
log.error(
'If a KMS Key ID is specified, encryption must be enabled'
)
return False
log.debug(params)

View File

@ -2359,6 +2359,9 @@ def create(vm_):
extra_config = config.get_cloud_config_value(
'extra_config', vm_, __opts__, default=None
)
annotation = config.get_cloud_config_value(
'annotation', vm_, __opts__, default=None
)
power = config.get_cloud_config_value(
'power_on', vm_, __opts__, default=True
)
@ -2569,6 +2572,9 @@ def create(vm_):
option = vim.option.OptionValue(key=key, value=value)
config_spec.extraConfig.append(option)
if annotation:
config_spec.annotation = str(annotation)
if 'clonefrom' in vm_:
clone_spec = handle_snapshot(
config_spec,

View File

@ -14,7 +14,7 @@ Set up the cloud configuration at ``/etc/salt/cloud.providers`` or
.. code-block:: yaml
my-vultr-config:
# Vultr account api key
api_key: <supersecretapi_key>
driver: vultr
@ -38,11 +38,11 @@ from __future__ import absolute_import
import pprint
import logging
import time
import urllib
# Import salt cloud libs
# Import salt libs
import salt.config as config
import salt.ext.six as six
from salt.ext.six.moves.urllib.parse import urlencode as _urlencode # pylint: disable=E0611
from salt.exceptions import (
SaltCloudConfigError,
SaltCloudSystemExit
@ -173,7 +173,7 @@ def destroy(name):
'''
node = show_instance(name, call='action')
params = {'SUBID': node['SUBID']}
result = _query('server/destroy', method='POST', decode=False, data=urllib.urlencode(params))
result = _query('server/destroy', method='POST', decode=False, data=_urlencode(params))
# The return of a destroy call is empty in the case of a success.
# Errors are only indicated via HTTP status code. Status code 200
@ -291,7 +291,7 @@ def create(vm_):
)
try:
data = _query('server/create', method='POST', data=urllib.urlencode(kwargs))
data = _query('server/create', method='POST', data=_urlencode(kwargs))
if int(data.get('status', '200')) >= 300:
log.error('Error creating {0} on Vultr\n\n'
'Vultr API returned {1}\n'.format(vm_['name'], data))

View File

@ -2058,7 +2058,9 @@ def include_config(include, orig_path, verbose, exit_on_config_errors=False):
else:
# Initialize default config if we wish to skip config errors
opts = {}
schedule = opts.get('schedule', {})
if schedule and 'schedule' in configuration:
configuration['schedule'].update(schedule)
include = opts.get('include', [])
if include:
opts.update(include_config(include, fn_, verbose))

View File

@ -123,7 +123,8 @@ changes on the device(s) firing the event, one is able to
identify the minion ID, using one of the following alternatives, but not limited to:
- :mod:`Host grains <salt.grains.napalm.host>` to match the event tag
- :mod:`Hostname grains <salt.grains.napalm.hostname>` to match the IP address in the event data
- :mod:`Host DNS grain <salt.grains.napalm.host_dns>` to match the IP address in the event data
- :mod:`Hostname grains <salt.grains.napalm.hostname>` to match the event tag
- :ref:`Define static grains <static-custom-grains>`
- :ref:`Write a grains module <writing-grains>`
- :ref:`Targeting minions using pillar data <targeting-pillar>` -- the user

View File

@ -22,6 +22,7 @@ import logging
log = logging.getLogger(__name__)
# Salt lib
import salt.utils.dns
import salt.utils.napalm
# ----------------------------------------------------------------------------------------------------------------------
@ -355,6 +356,72 @@ def host(proxy=None):
return {'host': _get_device_grain('hostname', proxy=proxy)}
def host_dns(proxy=None):
'''
Return the DNS information of the host.
This grain is a dictionary having two keys:
- ``A``
- ``AAAA``
.. note::
This grain is disabled by default, as the proxy startup may be slower
when the lookup fails.
The user can enable it using the ``napalm_host_dns_grain`` option (in
the pillar or proxy configuration file):
.. code-block:: yaml
napalm_host_dns_grain: true
.. versionadded:: Nitrogen
CLI Example:
.. code-block:: bash
salt 'device*' grains.get host_dns
Output:
.. code-block:: yaml
device1:
A:
- 172.31.9.153
AAAA:
- fd52:188c:c068::1
device2:
A:
- 172.31.46.249
AAAA:
- fdca:3b17:31ab::17
device3:
A:
- 172.31.8.167
AAAA:
- fd0f:9fd6:5fab::1
'''
if not __opts__.get('napalm_host_dns_grain', False):
return
device_host = host(proxy=proxy)
if device_host:
device_host_value = device_host['host']
host_dns_ret = {
'host_dns': {
'A': [],
'AAAA': []
}
}
dns_a = salt.utils.dns.query(device_host_value, 'A')
if dns_a:
host_dns_ret['host_dns']['A'] = dns_a
dns_aaaa = salt.utils.dns.query(device_host_value, 'AAAA')
if dns_aaaa:
host_dns_ret['host_dns']['AAAA'] = dns_aaaa
return host_dns_ret
def optional_args(proxy=None):
'''
Return the connection optional args.

View File

@ -53,7 +53,7 @@ if six.PY3:
for suffix in importlib.machinery.BYTECODE_SUFFIXES:
SUFFIXES.append((suffix, 'rb', 2))
for suffix in importlib.machinery.SOURCE_SUFFIXES:
SUFFIXES.append((suffix, 'r', 1))
SUFFIXES.append((suffix, 'rb', 1))
# pylint: enable=no-member,no-name-in-module,import-error
else:
SUFFIXES = imp.get_suffixes()

View File

@ -6,6 +6,7 @@ A module to wrap (non-Windows) archive calls
'''
from __future__ import absolute_import
import contextlib # For < 2.7 compat
import copy
import errno
import glob
import logging
@ -249,13 +250,16 @@ def list_(name,
else:
files.append(path)
for path in files:
_files = copy.deepcopy(files)
for path in _files:
# ZIP files created on Windows do not add entries
# to the archive for directories. So, we'll need to
# manually add them.
dirname = ''.join(path.rpartition('/')[:2])
if dirname:
dirs.add(dirname)
if dirname in files:
files.remove(dirname)
return list(dirs), files, links
except zipfile.BadZipfile:
raise CommandExecutionError('{0} is not a ZIP file'.format(name))
@ -1055,7 +1059,15 @@ def unzip(zip_file,
continue
zfile.extract(target, dest, password)
if extract_perms:
os.chmod(os.path.join(dest, target), zfile.getinfo(target).external_attr >> 16)
perm = zfile.getinfo(target).external_attr >> 16
if perm == 0:
umask_ = os.umask(0)
os.umask(umask_)
if target.endswith('/'):
perm = 0o777 & ~umask_
else:
perm = 0o666 & ~umask_
os.chmod(os.path.join(dest, target), perm)
except Exception as exc:
if runas:
os.seteuid(euid)

View File

@ -816,7 +816,7 @@ def create_floatingip(floating_network, port=None, profile=None):
return conn.create_floatingip(floating_network, port)
def update_floatingip(floatingip_id, port, profile=None):
def update_floatingip(floatingip_id, port=None, profile=None):
'''
Updates a floatingIP
@ -827,7 +827,8 @@ def update_floatingip(floatingip_id, port, profile=None):
salt '*' neutron.update_floatingip network-name port-name
:param floatingip_id: ID of floatingIP
:param port: ID or name of port
:param port: ID or name of port, to associate floatingip to
`None` or do not specify to disassociate the floatingip (Optional)
:param profile: Profile to build on (Optional)
:return: Value of updated floating IP information
'''

View File

@ -6,6 +6,7 @@ Render the pillar data
# Import python libs
from __future__ import absolute_import
import copy
import fnmatch
import os
import collections
import logging
@ -289,6 +290,7 @@ class Pillar(object):
self.opts = self.__gen_opts(opts, grains, saltenv=saltenv, pillarenv=pillarenv)
self.saltenv = saltenv
self.client = salt.fileclient.get_file_client(self.opts, True)
self.avail = self.__gather_avail()
if opts.get('file_client', '') == 'local':
opts['grains'] = grains
@ -359,6 +361,15 @@ class Pillar(object):
return False
return True
def __gather_avail(self):
'''
Gather the lists of available sls data from the master
'''
avail = {}
for saltenv in self._get_envs():
avail[saltenv] = self.client.list_states(saltenv)
return avail
def __gen_opts(self, opts_in, grains, saltenv=None, ext=None, pillarenv=None):
'''
The options need to be altered to conform to the file client
@ -722,8 +733,23 @@ class Pillar(object):
if errors is None:
errors = []
for saltenv, pstates in six.iteritems(matches):
pstatefiles = []
mods = set()
for sls in pstates:
for sls_match in pstates:
matched_pstates = []
try:
matched_pstates = fnmatch.filter(self.avail[saltenv], sls_match)
except KeyError:
errors.extend(
['No matching pillar environment for environment '
'\'{0}\' found'.format(saltenv)]
)
if matched_pstates:
pstatefiles.extend(matched_pstates)
else:
pstatefiles.append(sls_match)
for sls in pstatefiles:
pstate, mods, err = self.render_pstate(sls, saltenv, mods)
if err:

View File

@ -6,7 +6,7 @@ A module that adds data to the Pillar structure retrieved by an http request
Configuring the HTTP_JSON ext_pillar
====================================
Set the following Salt config to setup Foreman as external pillar source:
Set the following Salt config to set up an HTTP JSON response as an external pillar source:
.. code-block:: yaml
@ -17,6 +17,16 @@ Set the following Salt config to setup Foreman as external pillar source:
username: username
password: password
If the ``with_grains`` parameter is set, grain keys wrapped in angle brackets
(``<>``) can be provided in the URL in order to populate pillar data based on
the grain value.
.. code-block:: yaml
ext_pillar:
- http_json:
url: http://example.com/api/<nodename>
with_grains: True
Module Documentation
====================
'''
@ -24,32 +34,61 @@ Module Documentation
# Import python libs
from __future__ import absolute_import
import logging
import re
# Import Salt libs
import salt.ext.six as six
try:
from salt.ext.six.moves.urllib.parse import quote as _quote
_HAS_DEPENDENCIES = True
except ImportError:
_HAS_DEPENDENCIES = False
# Set up logging
_LOG = logging.getLogger(__name__)
def __virtual__():
return _HAS_DEPENDENCIES
def ext_pillar(minion_id,
pillar, # pylint: disable=W0613
url=None):
url,
with_grains=False):
'''
Read pillar data from HTTP response.
:param url String to make request
:returns dict with pillar data to add
:returns empty if error
'''
# Set up logging
log = logging.getLogger(__name__)
:param str url: Url to request.
:param bool with_grains: Whether to substitute strings in the url with their grain values.
:return: A dictionary of the pillar data to add.
:rtype: dict
'''
grain_pattern = r'<(?P<grain_name>.*?)>'
if with_grains:
# Get the value of the grain and substitute each grain
# name for the url-encoded version of its grain value.
for match in re.finditer(grain_pattern, url):
grain_name = match.group('grain_name')
grain_value = __salt__['grains.get'](grain_name, None)
if not grain_value:
_LOG.error("Unable to get minion '%s' grain: %s", minion_id, grain_name)
return {}
grain_value = _quote(str(grain_value))
url = re.sub('<{0}>'.format(grain_name), grain_value, url)
_LOG.debug('Getting url: %s', url)
data = __salt__['http.query'](url=url, decode=True, decode_type='json')
if 'dict' in data:
return data['dict']
log.error('Error caught on query to' + url + '\nMore Info:\n')
_LOG.error("Error on minion '%s' http query: %s\nMore Info:\n", minion_id, url)
for k, v in six.iteritems(data):
log.error(k + ' : ' + v)
for key in data:
_LOG.error('%s: %s', key, data[key])
return {}

View File

@ -17,6 +17,16 @@ Set the following Salt config to setup an http endpoint as the external pillar s
username: username
password: password
If the ``with_grains`` parameter is set, grain keys wrapped in angle brackets
(``<>``) can be provided in the URL in order to populate pillar data based on
the grain value.
.. code-block:: yaml
ext_pillar:
- http_yaml:
url: http://example.com/api/<nodename>
with_grains: True
Module Documentation
====================
'''
@ -24,32 +34,62 @@ Module Documentation
# Import python libs
from __future__ import absolute_import
import logging
import re
# Import Salt libs
import salt.ext.six as six
try:
from salt.ext.six.moves.urllib.parse import quote as _quote
_HAS_DEPENDENCIES = True
except ImportError:
_HAS_DEPENDENCIES = False
# Set up logging
_LOG = logging.getLogger(__name__)
def __virtual__():
return _HAS_DEPENDENCIES
def ext_pillar(minion_id,
pillar, # pylint: disable=W0613
url):
"""
url,
with_grains=False):
'''
Read pillar data from HTTP response.
:param url String to make request
:returns dict with pillar data to add
:returns empty if error
"""
# Set up logging
log = logging.getLogger(__name__)
:param str url: Url to request.
:param bool with_grains: Whether to substitute strings in the url with their grain values.
:return: A dictionary of the pillar data to add.
:rtype: dict
'''
grain_pattern = r'<(?P<grain_name>.*?)>'
if with_grains:
# Get the value of the grain and substitute each grain
# name for the url-encoded version of its grain value.
for match in re.finditer(grain_pattern, url):
grain_name = match.group('grain_name')
grain_value = __salt__['grains.get'](grain_name, None)
if not grain_value:
_LOG.error("Unable to get minion '%s' grain: %s", minion_id, grain_name)
return {}
grain_value = _quote(str(grain_value))
url = re.sub('<{0}>'.format(grain_name), grain_value, url)
_LOG.debug('Getting url: %s', url)
data = __salt__['http.query'](url=url, decode=True, decode_type='yaml')
if 'dict' in data:
return data['dict']
log.error('Error caught on query to' + url + '\nMore Info:\n')
_LOG.error("Error on minion '%s' http query: %s\nMore Info:\n", minion_id, url)
for k, v in six.iteritems(data):
log.error(k + ' : ' + v)
for key in data:
_LOG.error('%s: %s', key, data[key])
return {}

View File

@ -72,7 +72,8 @@ def event_return(events):
try:
with salt.utils.flopen(opts['filename'], 'a') as logfile:
for event in events:
logfile.write(str(json.dumps(event))+'\n')
json.dump(event, logfile)
logfile.write('\n')
except:
log.error('Could not write to rawdata_json file {0}'.format(opts['filename']))
raise

View File

@ -34,6 +34,8 @@ def update(tgt,
clear=False,
mine_functions=None):
'''
.. versionadded:: Nitrogen
Update the mine data on a certain group of minions.
tgt

View File

@ -68,7 +68,10 @@ def import_cert(name, cert_format=_DEFAULT_FORMAT, context=_DEFAULT_CONTEXT, sto
cached_source_path = __salt__['cp.cache_file'](name, saltenv)
current_certs = __salt__['win_pki.get_certs'](context=context, store=store)
cert_props = __salt__['win_pki.get_cert_file'](name=cached_source_path)
if password:
cert_props = __salt__['win_pki.get_cert_file'](name=cached_source_path, cert_format=cert_format, password=password)
else:
cert_props = __salt__['win_pki.get_cert_file'](name=cached_source_path, cert_format=cert_format)
if cert_props['thumbprint'] in current_certs:
ret['comment'] = ("Certificate '{0}' already contained in store:"

View File

@ -537,10 +537,14 @@ class SaltNeutron(NeutronShell):
return self.network_conn.create_floatingip(body={'floatingip': body})
def update_floatingip(self, floatingip_id, port):
def update_floatingip(self, floatingip_id, port=None):
'''
Updates a floatingip
Updates a floatingip, disassociates the floating ip if
port is set to `None`
'''
if port is None:
body = {'floatingip': {}}
else:
port_id = self._find_port_id(port)
body = {'floatingip': {'port_id': port_id}}
return self.network_conn.update_floatingip(

View File

@ -9,7 +9,7 @@ from __future__ import absolute_import, print_function
# Import Salt Testing libs
from tests.support.case import ModuleCase
from tests.support.unit import skipIf
from tests.support.helpers import destructiveTest, skip_if_not_root
from tests.support.helpers import destructiveTest, skip_if_not_root, flaky
# Import salt libs
import salt.utils
@ -40,6 +40,7 @@ class MacPowerModuleTest(ModuleCase):
self.run_function('power.set_harddisk_sleep', [self.HARD_DISK_SLEEP])
@destructiveTest
@flaky
def test_computer_sleep(self):
'''
Test power.get_computer_sleep

View File

@ -728,40 +728,6 @@ class StateModuleTest(ModuleCase, SaltReturnAssertsMixin):
#ret = self.run_function('state.sls', mods='requisites.fullsls_prereq')
#self.assertEqual(['sls command can only be used with require requisite'], ret)
def test_requisites_full_sls_require_in(self):
'''
Test require_in when including an entire sls
'''
expected_result = {
'cmd_|-A_|-echo A_|-run': {
'__run_num__': 0,
'comment': 'Command "echo A" run',
'result': True,
'changes': True},
'cmd_|-B_|-echo B_|-run': {
'__run_num__': 1,
'comment': 'Command "echo B" run',
'result': True,
'changes': True},
'cmd_|-C_|-echo C_|-run': {
'__run_num__': 2,
'comment': 'Command "echo C" run',
'result': True,
'changes': True},
}
ret = self.run_function('state.sls',
mods='requisites.fullsls_require_in')
self.assertReturnNonEmptySaltType(ret)
result = self.normalize_ret(ret)
self.assertEqual(expected_result, result)
def test_requisites_full_sls_import(self):
'''
Test full sls requisite with nothing but an import
'''
ret = self.run_function('state.sls', mods='requisites.fullsls_require_import')
self.assertSaltTrueReturn(ret)
def test_requisites_prereq_simple_ordering_and_errors(self):
'''
Call sls file containing several prereq_in and prereq.

View File

@ -117,6 +117,10 @@ def _rand_key_name(length):
)
def _windows_or_mac():
return salt.utils.is_windows() or salt.utils.is_darwin()
class GitPythonMixin(object):
'''
GitPython doesn't support anything fancy in terms of authentication
@ -127,6 +131,8 @@ class GitPythonMixin(object):
Test using a single ext_pillar repo
'''
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: gitpython
cachedir: {cachedir}
extension_modules: {extmods}
@ -152,6 +158,8 @@ class GitPythonMixin(object):
pillar_merge_lists disabled.
'''
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: gitpython
cachedir: {cachedir}
extension_modules: {extmods}
@ -180,6 +188,8 @@ class GitPythonMixin(object):
pillar_merge_lists disabled.
'''
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: gitpython
cachedir: {cachedir}
extension_modules: {extmods}
@ -208,6 +218,8 @@ class GitPythonMixin(object):
pillar_merge_lists enabled.
'''
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: gitpython
cachedir: {cachedir}
extension_modules: {extmods}
@ -236,6 +248,8 @@ class GitPythonMixin(object):
pillar_merge_lists enabled.
'''
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: gitpython
cachedir: {cachedir}
extension_modules: {extmods}
@ -260,6 +274,8 @@ class GitPythonMixin(object):
Test using pillarenv to restrict results to those from a single branch
'''
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: gitpython
cachedir: {cachedir}
extension_modules: {extmods}
@ -285,6 +301,8 @@ class GitPythonMixin(object):
SLS file (included_pillar) in the compiled pillar data.
'''
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: gitpython
cachedir: {cachedir}
extension_modules: {extmods}
@ -313,6 +331,8 @@ class GitPythonMixin(object):
message in the compiled data.
'''
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: gitpython
git_pillar_includes: False
cachedir: {cachedir}
@ -337,7 +357,7 @@ class GitPythonMixin(object):
@destructiveTest
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(salt.utils.is_windows(), 'minion is windows')
@skipIf(_windows_or_mac(), 'minion is windows or mac')
@skip_if_not_root
@skipIf(not HAS_GITPYTHON, 'GitPython >= {0} required'.format(GITPYTHON_MINVER))
@skipIf(not HAS_SSHD, 'sshd not present')
@ -352,7 +372,7 @@ class TestGitPythonSSH(GitPillarSSHTestBase, GitPythonMixin):
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(salt.utils.is_windows(), 'minion is windows')
@skipIf(_windows_or_mac(), 'minion is windows or mac')
@skip_if_not_root
@skipIf(not HAS_GITPYTHON, 'GitPython >= {0} required'.format(GITPYTHON_MINVER))
@skipIf(not HAS_NGINX, 'nginx not present')
@ -365,7 +385,7 @@ class TestGitPythonHTTP(GitPillarHTTPTestBase, GitPythonMixin):
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(salt.utils.is_windows(), 'minion is windows')
@skipIf(_windows_or_mac(), 'minion is windows or mac')
@skip_if_not_root
@skipIf(not HAS_GITPYTHON, 'GitPython >= {0} required'.format(GITPYTHON_MINVER))
@skipIf(not HAS_NGINX, 'nginx not present')
@ -396,7 +416,7 @@ class TestGitPythonAuthenticatedHTTP(TestGitPythonHTTP, GitPythonMixin):
@destructiveTest
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(salt.utils.is_windows(), 'minion is windows')
@skipIf(_windows_or_mac(), 'minion is windows or mac')
@skip_if_not_root
@skipIf(not HAS_PYGIT2, 'pygit2 >= {0} required'.format(PYGIT2_MINVER))
@skipIf(not HAS_SSHD, 'sshd not present')
@ -433,6 +453,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_nopass}
git_pillar_privkey: {privkey_nopass}
@ -446,6 +468,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -463,6 +487,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_withpass}
git_pillar_privkey: {privkey_withpass}
@ -477,6 +503,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -509,6 +537,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_nopass}
git_pillar_privkey: {privkey_nopass}
@ -524,6 +554,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -545,6 +577,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_withpass}
git_pillar_privkey: {privkey_withpass}
@ -561,6 +595,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -598,6 +634,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_nopass}
git_pillar_privkey: {privkey_nopass}
@ -613,6 +651,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -634,6 +674,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_withpass}
git_pillar_privkey: {privkey_withpass}
@ -650,6 +692,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -687,6 +731,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_nopass}
git_pillar_privkey: {privkey_nopass}
@ -702,6 +748,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -723,6 +771,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_withpass}
git_pillar_privkey: {privkey_withpass}
@ -739,6 +789,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -776,6 +828,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_nopass}
git_pillar_privkey: {privkey_nopass}
@ -791,6 +845,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -812,6 +868,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_withpass}
git_pillar_privkey: {privkey_withpass}
@ -828,6 +886,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -860,6 +920,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_nopass}
git_pillar_privkey: {privkey_nopass}
@ -875,6 +937,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -896,6 +960,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_withpass}
git_pillar_privkey: {privkey_withpass}
@ -912,6 +978,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -947,6 +1015,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_nopass}
git_pillar_privkey: {privkey_nopass}
@ -962,6 +1032,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -983,6 +1055,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_pubkey: {pubkey_withpass}
git_pillar_privkey: {privkey_withpass}
@ -999,6 +1073,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1037,6 +1113,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_includes: False
git_pillar_pubkey: {pubkey_nopass}
@ -1053,6 +1131,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphraseless key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_includes: False
cachedir: {cachedir}
@ -1075,6 +1155,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_includes: False
git_pillar_pubkey: {pubkey_withpass}
@ -1092,6 +1174,8 @@ class TestPygit2SSH(GitPillarSSHTestBase):
# Test with passphrase-protected key and per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_includes: False
cachedir: {cachedir}
@ -1112,7 +1196,7 @@ class TestPygit2SSH(GitPillarSSHTestBase):
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(salt.utils.is_windows(), 'minion is windows')
@skipIf(_windows_or_mac(), 'minion is windows or mac')
@skip_if_not_root
@skipIf(not HAS_PYGIT2, 'pygit2 >= {0} required'.format(PYGIT2_MINVER))
@skipIf(not HAS_NGINX, 'nginx not present')
@ -1140,6 +1224,8 @@ class TestPygit2HTTP(GitPillarHTTPTestBase):
}
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1167,6 +1253,8 @@ class TestPygit2HTTP(GitPillarHTTPTestBase):
}
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1196,6 +1284,8 @@ class TestPygit2HTTP(GitPillarHTTPTestBase):
}
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1225,6 +1315,8 @@ class TestPygit2HTTP(GitPillarHTTPTestBase):
}
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1254,6 +1346,8 @@ class TestPygit2HTTP(GitPillarHTTPTestBase):
}
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1278,6 +1372,8 @@ class TestPygit2HTTP(GitPillarHTTPTestBase):
}
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1305,6 +1401,8 @@ class TestPygit2HTTP(GitPillarHTTPTestBase):
}
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1335,6 +1433,8 @@ class TestPygit2HTTP(GitPillarHTTPTestBase):
}
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_includes: False
cachedir: {cachedir}
@ -1349,7 +1449,7 @@ class TestPygit2HTTP(GitPillarHTTPTestBase):
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(salt.utils.is_windows(), 'minion is windows')
@skipIf(_windows_or_mac(), 'minion is windows or mac')
@skip_if_not_root
@skipIf(not HAS_PYGIT2, 'pygit2 >= {0} required'.format(PYGIT2_MINVER))
@skipIf(not HAS_NGINX, 'nginx not present')
@ -1384,6 +1484,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_user: {user}
git_pillar_password: {password}
@ -1398,6 +1500,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1429,6 +1533,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_user: {user}
git_pillar_password: {password}
@ -1445,6 +1551,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1481,6 +1589,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_user: {user}
git_pillar_password: {password}
@ -1497,6 +1607,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1533,6 +1645,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_user: {user}
git_pillar_password: {password}
@ -1549,6 +1663,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1585,6 +1701,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_user: {user}
git_pillar_password: {password}
@ -1601,6 +1719,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1632,6 +1752,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_user: {user}
git_pillar_password: {password}
@ -1648,6 +1770,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1682,6 +1806,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_user: {user}
git_pillar_password: {password}
@ -1698,6 +1824,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
cachedir: {cachedir}
extension_modules: {extmods}
@ -1735,6 +1863,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with global credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_includes: False
git_pillar_user: {user}
@ -1752,6 +1882,8 @@ class TestPygit2AuthenticatedHTTP(GitPillarHTTPTestBase):
# Test with per-repo credential options
ret = self.get_pillar('''\
file_ignore_regex: []
file_ignore_glob: []
git_pillar_provider: pygit2
git_pillar_includes: False
cachedir: {cachedir}

View File

@ -34,6 +34,10 @@ class VirtualenvTest(ModuleCase, SaltReturnAssertsMixin):
uinfo = self.run_function('user.info', [user])
if salt.utils.is_darwin():
# MacOS does not support createhome with user.present
self.assertSaltTrueReturn(self.run_state('file.directory', name=uinfo['home'], user=user, group=uinfo['groups'][0], dir_mode=755))
venv_dir = os.path.join(
RUNTIME_VARS.SYS_TMP_DIR, 'issue-1959-virtualenv-runas'
)

View File

@ -309,8 +309,11 @@ class TimezoneModuleTestCase(TestCase, LoaderModuleMockMixin):
:return:
'''
# Incomplete
hwclock = 'localtime'
if not os.path.isfile('/etc/environment'):
hwclock = 'UTC'
with patch.dict(timezone.__grains__, {'os_family': ['AIX']}):
assert timezone.get_hwclock() == 'localtime'
assert timezone.get_hwclock() == hwclock
@patch('salt.utils.which', MagicMock(return_value=False))
@patch('os.path.exists', MagicMock(return_value=True))

View File

@ -56,6 +56,8 @@ class GitPillarTestCase(TestCase, AdaptedConfigurationTestCaseMixin, LoaderModul
'cachedir': cachedir,
'pillar_roots': {},
'hash_type': 'sha256',
'file_ignore_regex': [],
'file_ignore_glob': [],
'file_roots': {},
'state_top': 'top.sls',
'extension_modules': '',

View File

@ -37,8 +37,14 @@ class PillarTestCase(TestCase):
'renderer_blacklist': [],
'renderer_whitelist': [],
'state_top': '',
'pillar_roots': ['dev', 'base'],
'file_roots': ['dev', 'base'],
'pillar_roots': {
'dev': [],
'base': []
},
'file_roots': {
'dev': [],
'base': []
},
'extension_modules': '',
'pillarenv_from_saltenv': True
}