Merge remote-tracking branch 'upstream/develop' into develop

This commit is contained in:
Jerzy Drozdz 2015-06-08 22:15:40 +02:00
commit fbb6f88b28
138 changed files with 7707 additions and 1969 deletions

View File

@ -21,6 +21,11 @@ Description
Salt copy copies a local file out to all of the Salt minions matched by the
given target.
Note: salt-cp uses Salt's publishing mechanism. This means the privacy of the
contents of the file on the wire is completely dependent upon the transport
in use. In addition, if the salt-master is running with debug logging, it is
possible that the contents of the file will be logged to disk.
Options
=======

View File

@ -35,6 +35,7 @@ Full list of builtin execution modules
boto_elasticache
boto_elb
boto_iam
boto_kms
boto_rds
boto_route53
boto_secgroup

View File

@ -0,0 +1,6 @@
=====================
salt.modules.bamboohr
=====================
.. automodule:: salt.modules.bamboohr
:members:

View File

@ -0,0 +1,6 @@
====================
salt.modules.beacons
====================
.. automodule:: salt.modules.beacons
:members:

View File

@ -0,0 +1,6 @@
==================
salt.modules.bigip
==================
.. automodule:: salt.modules.bigip
:members:

View File

@ -0,0 +1,6 @@
=====================
salt.modules.boto_kms
=====================
.. automodule:: salt.modules.boto_kms
:members:

View File

@ -0,0 +1,6 @@
===================
salt.modules.consul
===================
.. automodule:: salt.modules.consul
:members:

View File

@ -0,0 +1,6 @@
=================
salt.modules.node
=================
.. automodule:: salt.modules.node
:members:

View File

@ -0,0 +1,6 @@
===========================
salt.modules.pagerduty_util
===========================
.. automodule:: salt.modules.pagerduty_util
:members:

View File

@ -0,0 +1,6 @@
=====================
salt.modules.rallydev
=====================
.. automodule:: salt.modules.rallydev
:members:

View File

@ -0,0 +1,6 @@
==================
salt.modules.splay
==================
.. automodule:: salt.modules.splay
:members:

View File

@ -0,0 +1,6 @@
======================
salt.modules.stormpath
======================
.. automodule:: salt.modules.stormpath
:members:

View File

@ -0,0 +1,6 @@
============================
salt.modules.system_profiler
============================
.. automodule:: salt.modules.system_profiler
:members:

View File

@ -0,0 +1,6 @@
==========================
salt.modules.trafficserver
==========================
.. automodule:: salt.modules.trafficserver
:members:

View File

@ -0,0 +1,6 @@
====================
salt.modules.win_wua
====================
.. automodule:: salt.modules.win_wua
:members:

View File

@ -0,0 +1,6 @@
==============================
salt.returners.influxdb_return
==============================
.. automodule:: salt.returners.influxdb_return
:members:

View File

@ -0,0 +1,6 @@
======================
salt.returners.pgjsonb
======================
.. automodule:: salt.returners.pgjsonb
:members:

View File

@ -0,0 +1,6 @@
================
salt.runners.ssh
================
.. automodule:: salt.runners.ssh
:members:

View File

@ -30,6 +30,7 @@ Full list of builtin state modules
boto_elb
boto_iam
boto_iam_role
boto_kms
boto_lc
boto_rds
boto_route53

View File

@ -0,0 +1,6 @@
==================
salt.states.beacon
==================
.. automodule:: salt.states.beacon
:members:

View File

@ -0,0 +1,6 @@
=================
salt.states.bigip
=================
.. automodule:: salt.states.bigip
:members:

View File

@ -0,0 +1,6 @@
====================
salt.states.boto_kms
====================
.. automodule:: salt.states.boto_kms
:members:

View File

@ -0,0 +1,6 @@
=====================
salt.states.firewalld
=====================
.. automodule:: salt.states.firewalld
:members:

View File

@ -0,0 +1,6 @@
===============================
salt.states.postgres_tablespace
===============================
.. automodule:: salt.states.postgres_tablespace
:members:

View File

@ -0,0 +1,6 @@
=============================
salt.states.stormpath_account
=============================
.. automodule:: salt.states.stormpath_account
:members:

View File

@ -0,0 +1,6 @@
===============
salt.states.tls
===============
.. automodule:: salt.states.tls
:members:

View File

@ -0,0 +1,6 @@
=========================
salt.states.trafficserver
=========================
.. automodule:: salt.states.trafficserver
:members:

View File

@ -459,7 +459,7 @@ each cloud profile. Note that the number of instance stores varies by instance
type. If more mappings are provided than are supported by the instance type,
mappings will be created in the order provided and additional mappings will be
ignored. Consult the `AWS documentation`_ for a listing of the available
instance stores, device names, and mount points.
instance stores, and device names.
.. code-block:: yaml
@ -490,7 +490,6 @@ existing volume use the ``volume_id`` parameter.
.. code-block:: yaml
device: /dev/xvdj
mount_point: /mnt/my_ebs
volume_id: vol-12345abcd
Or, to create a volume from an EBS snapshot, use the ``snapshot`` parameter.
@ -498,7 +497,6 @@ Or, to create a volume from an EBS snapshot, use the ``snapshot`` parameter.
.. code-block:: yaml
device: /dev/xvdj
mount_point: /mnt/my_ebs
snapshot: snap-abcd12345
Note that ``volume_id`` will take precedence over the ``snapshot`` parameter.

View File

@ -186,6 +186,27 @@ minion. In your pillar file, you would use something like this:
Cloud Configurations
====================
Scaleway
--------
To use Salt Cloud with Scaleway, you need to get an ``access key`` and an ``API token``. ``API tokens`` are unique identifiers associated with your Scaleway account.
To retrieve your ``access key`` and ``API token``, log in to the Scaleway control panel, open the pull-down menu on your account name, and click on the "My Credentials" link.
If you do not have an ``API token``, you can create one by clicking the "Create New Token" button in the right corner.
.. code-block:: yaml
my-scaleway-config:
access_key: 15cf404d-4560-41b1-9a0c-21c3d5c4ff1f
token: a7347ec8-5de1-4024-a5e3-24b77d1ba91d
provider: scaleway
.. note::
In the cloud profile that uses this provider configuration, the syntax for the
``provider`` required field would be ``provider: my-scaleway-config``.
Rackspace
---------

View File

@ -58,9 +58,11 @@ Cloud Provider Specifics
Getting Started With Parallels <parallels>
Getting Started With Proxmox <proxmox>
Getting Started With Rackspace <rackspace>
Getting Started With Scaleway <scaleway>
Getting Started With SoftLayer <softlayer>
Getting Started With Vexxhost <vexxhost>
Getting Started With VMware <vmware>
Getting Started With vSphere <vsphere>
Miscellaneous Options
=====================

View File

@ -97,6 +97,15 @@ A map file may also be used with the various query options:
Proceed? [N/y]
.. warning:: Specifying Nodes with Maps on the Command Line
Specifying the name of a node or nodes with the maps options on the command
line is *not* supported. This is especially important to remember when
using ``--destroy`` with maps; ``salt-cloud`` will ignore any arguments
passed in which are not directly relevant to the map file. *When using
``--destroy`` with a map, every node in the map file will be deleted!*
Maps don't provide any useful information for destroying individual nodes,
and should not be used to destroy a subset of a map.
Setting up New Salt Masters
===========================

View File

@ -0,0 +1,95 @@
=============================
Getting Started With Scaleway
=============================
Scaleway is the first IaaS provider worldwide to offer an ARM-based cloud. It's the ideal platform for horizontal scaling with BareMetal SSD servers. The solution provides on-demand resources: it comes with on-demand SSD storage, movable IPs, images, security groups, and an Object Storage solution. https://scaleway.com
Configuration
=============
Using Salt for Scaleway requires an ``access key`` and an ``API token``. ``API tokens`` are unique identifiers associated with your Scaleway account.
To retrieve your ``access key`` and ``API token``, log in to the Scaleway control panel, open the pull-down menu on your account name, and click on the "My Credentials" link.
If you do not have an API token, you can create one by clicking the "Create New Token" button in the right corner.
.. code-block:: yaml
# Note: This example is for /etc/salt/cloud.providers or any file in the
# /etc/salt/cloud.providers.d/ directory.
my-scaleway-config:
access_key: 15cf404d-4560-41b1-9a0c-21c3d5c4ff1f
token: a7347ec8-5de1-4024-a5e3-24b77d1ba91d
provider: scaleway
Profiles
========
Cloud Profiles
~~~~~~~~~~~~~~
Set up an initial profile at ``/etc/salt/cloud.profiles`` or in the ``/etc/salt/cloud.profiles.d/`` directory:
.. code-block:: yaml
scaleway-ubuntu:
provider: my-scaleway-config
image: Ubuntu Trusty (14.04 LTS)
Images can be obtained using the ``--list-images`` option for the ``salt-cloud`` command:
.. code-block:: bash
# salt-cloud --list-images my-scaleway-config
my-scaleway-config:
----------
scaleway:
----------
069fd876-eb04-44ab-a9cd-47e2fa3e5309:
----------
arch:
arm
creation_date:
2015-03-12T09:35:45.764477+00:00
default_bootscript:
{u'kernel': {u'dtb': u'', u'title': u'Pimouss 3.2.34-30-std', u'id': u'cfda4308-cd6f-4e51-9744-905fc0da370f', u'path': u'kernel/pimouss-uImage-3.2.34-30-std'}, u'title': u'3.2.34-std #30 (stable)', u'id': u'c5af0215-2516-4316-befc-5da1cfad609c', u'initrd': {u'path': u'initrd/c1-uInitrd', u'id': u'1be14b1b-e24c-48e5-b0b6-7ba452e42b92', u'title': u'C1 initrd'}, u'bootcmdargs': {u'id': u'd22c4dde-e5a4-47ad-abb9-d23b54d542ff', u'value': u'ip=dhcp boot=local root=/dev/nbd0 USE_XNBD=1 nbd.max_parts=8'}, u'organization': u'11111111-1111-4111-8111-111111111111', u'public': True}
extra_volumes:
[]
id:
069fd876-eb04-44ab-a9cd-47e2fa3e5309
modification_date:
2015-04-24T12:02:16.820256+00:00
name:
Ubuntu Vivid (15.04)
organization:
a283af0b-d13e-42e1-a43f-855ffbf281ab
public:
True
root_volume:
{u'name': u'distrib-ubuntu-vivid-2015-03-12_10:32-snapshot', u'id': u'a6d02e63-8dee-4bce-b627-b21730f35a05', u'volume_type': u'l_ssd', u'size': 50000000000L}
...
Execute a query and return all information about the nodes running on configured cloud providers using the ``-F`` option for the ``salt-cloud`` command:
.. code-block:: bash
# salt-cloud -F
[INFO ] salt-cloud starting
[INFO ] Starting new HTTPS connection (1): api.scaleway.com
my-scaleway-config:
----------
scaleway:
----------
salt-manager:
----------
creation_date:
2015-06-03T08:17:38.818068+00:00
hostname:
salt-manager
...
.. note::
Additional documentation about Scaleway can be found at `<https://www.scaleway.com/docs>`_.

View File

@ -73,7 +73,7 @@ Set up an initial profile at ``/etc/salt/cloud.profiles`` or
## Optional arguments
num_cpus: 4
memory: 8192
memory: 8GB
devices:
cd:
CD/DVD drive 1:
@ -164,12 +164,13 @@ Set up an initial profile at ``/etc/salt/cloud.profiles`` or
Enter the name of the VM/template to clone from.
``num_cpus``
Enter the number of vCPUS you want the VM/template to have. If not specified,
Enter the number of vCPUS that you want the VM/template to have. If not specified,
the current VM/template\'s vCPU count is used.
``memory``
Enter memory (in MB) you want the VM/template to have. If not specified, the
current VM/template\'s memory size is used.
Enter the memory size (in MB or GB) that you want the VM/template to have. If
not specified, the current VM/template\'s memory size is used. Example
``memory: 8GB`` or ``memory: 8192MB``.
``devices``
Enter the device specifications here. Currently, the following devices can be
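The accepted ``memory`` forms above (plain MB, ``8192MB``, or ``8GB``) imply a normalization step. A minimal sketch of such a conversion, under the assumption that bare numbers are megabytes; ``memory_to_mb`` is a hypothetical helper, not the vmware driver's actual implementation:

```python
def memory_to_mb(value):
    # Hypothetical helper: normalize "8GB" -> 8192, "8192MB" -> 8192,
    # and a bare number (taken as MB) -> itself.
    s = str(value).strip().upper()
    if s.endswith('GB'):
        return int(float(s[:-2]) * 1024)
    if s.endswith('MB'):
        return int(float(s[:-2]))
    return int(s)

print(memory_to_mb('8GB'))    # 8192
print(memory_to_mb('8192MB')) # 8192
print(memory_to_mb(8192))     # 8192
```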

View File

@ -2,6 +2,16 @@
Getting Started With vSphere
============================
.. note::
.. deprecated:: Carbon
The :py:func:`vsphere <salt.cloud.clouds.vsphere>` cloud driver has been
deprecated in favor of the :py:func:`vmware <salt.cloud.clouds.vmware>`
cloud driver and will be removed in Salt Carbon. Please refer to
:doc:`Getting started with VMware </topics/cloud/vmware>` instead to get
started with the configuration.
VMware vSphere is a management platform for virtual infrastructure and cloud
computing.

View File

@ -680,10 +680,11 @@ example, the following macro could be used to write a php.ini config file:
.. code-block:: yaml
PHP:
engine: 'On'
short_open_tag: 'Off'
error_reporting: 'E_ALL & ~E_DEPRECATED & ~E_STRICT'
php_ini:
PHP:
engine: 'On'
short_open_tag: 'Off'
error_reporting: 'E_ALL & ~E_DEPRECATED & ~E_STRICT'
``/srv/salt/php.ini.tmpl``:
@ -691,8 +692,8 @@ example, the following macro could be used to write a php.ini config file:
{% macro php_ini_serializer(data) %}
{% for section_name, name_val_pairs in data.items() %}
[{{ section }}]
{% for name, val in name_val_pairs.items() %}
[{{ section_name }}]
{% for name, val in name_val_pairs.items() -%}
{{ name }} = "{{ val }}"
{% endfor %}
{% endfor %}

View File

@ -204,15 +204,15 @@ membership. Then the following LDAP query is executed:
external_auth:
ldap:
test_ldap_user:
- '*':
- test.ping
- '*':
- test.ping
To configure an LDAP group, append a ``%`` to the ID:
.. code-block:: yaml
external_auth:
ldap:
test_ldap_group%:
- '*':
- test.echo
ldap:
test_ldap_group%:
- '*':
- test.echo

File diff suppressed because it is too large

View File

@ -44,3 +44,5 @@ The ``digital_ocean.py`` Salt Cloud driver was removed in favor of the
``digital_ocean_v2.py`` driver as DigitalOcean has removed support for APIv1.
The ``digital_ocean_v2.py`` was renamed to ``digital_ocean.py`` and supports
DigitalOcean's APIv2.
The ``vsphere.py`` Salt Cloud driver has been deprecated in favor of the
``vmware.py`` driver.

View File

@ -236,6 +236,17 @@ container-by-container basis, for instance using the ``nic_opts`` argument to
for instance, typically are configured for eth0 to use DHCP, which will
conflict with static IP addresses set at the container level.
.. note::
For LXC < 1.0.7 and DHCP support, set ``ipv4.gateway: 'auto'`` in your
network profile, e.g.::
lxc.network_profile.nic:
debian:
eth0:
link: lxcbr0
ipv4.gateway: 'auto'
Old lxc support (<1.0.7)
---------------------------

View File

@ -143,9 +143,10 @@ class Batch(object):
for ping_ret in self.ping_gen:
if ping_ret is None:
break
if ping_ret not in self.minions:
self.minions.append(ping_ret)
to_run.append(ping_ret)
m = next(ping_ret.iterkeys())
if m not in self.minions:
self.minions.append(m)
to_run.append(m)
for queue in iters:
try:

View File

@ -225,6 +225,10 @@ class SSH(object):
'ssh_sudo',
salt.config.DEFAULT_MASTER_OPTS['ssh_sudo']
),
'identities_only': self.opts.get(
'ssh_identities_only',
salt.config.DEFAULT_MASTER_OPTS['ssh_identities_only']
),
}
if self.opts.get('rand_thin_dir'):
self.defaults['thin_dir'] = os.path.join(
@ -582,6 +586,7 @@ class Single(object):
thin=None,
mine=False,
minion_opts=None,
identities_only=False,
**kwargs):
# Get mine setting and mine_functions if defined in kwargs (from roster)
self.mine = mine
@ -627,7 +632,8 @@ class Single(object):
'timeout': timeout,
'sudo': sudo,
'tty': tty,
'mods': self.mods}
'mods': self.mods,
'identities_only': identities_only}
self.minion_opts = opts.get('ssh_minion_opts', {})
if minion_opts is not None:
self.minion_opts.update(minion_opts)
@ -1209,6 +1215,6 @@ def ssh_version():
stdout=subprocess.PIPE,
stderr=subprocess.PIPE).communicate()
try:
return ret[1].split(',')[0].split('_')[1]
return ret[1].split(b',')[0].split(b'_')[1]
except IndexError:
return '2.0'
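The ``bytes``-splitting change above parses the banner that ``ssh -V`` writes to stderr, which is ``bytes`` on Python 3. A minimal sketch of that parse in isolation (the sample banner string is illustrative):

```python
def parse_ssh_version(stderr_bytes):
    # ssh -V prints something like b"OpenSSH_6.9p1, OpenSSL 1.0.2d ...";
    # take the text before the first comma, then after the underscore.
    try:
        return stderr_bytes.split(b',')[0].split(b'_')[1]
    except IndexError:
        return b'2.0'   # fall back when the banner is unrecognized

print(parse_ssh_version(b'OpenSSH_6.9p1, OpenSSL 1.0.2d 9 Jul 2015'))  # b'6.9p1'
```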

View File

@ -57,7 +57,8 @@ class Shell(object):
timeout=None,
sudo=False,
tty=False,
mods=None):
mods=None,
identities_only=False):
self.opts = opts
self.host = host
self.user = user
@ -68,6 +69,7 @@ class Shell(object):
self.sudo = sudo
self.tty = tty
self.mods = mods
self.identities_only = identities_only
def get_error(self, errstr):
'''
@ -108,6 +110,8 @@ class Shell(object):
options.append('IdentityFile={0}'.format(self.priv))
if self.user:
options.append('User={0}'.format(self.user))
if self.identities_only:
options.append('IdentitiesOnly=yes')
ret = []
for option in options:
@ -143,6 +147,8 @@ class Shell(object):
options.append('Port={0}'.format(self.port))
if self.user:
options.append('User={0}'.format(self.user))
if self.identities_only:
options.append('IdentitiesOnly=yes')
ret = []
for option in options:
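The new ``IdentitiesOnly=yes`` option tells ssh to offer only the explicitly configured identity file, rather than every key an agent holds. A hypothetical standalone helper mirroring how the option list above is assembled into ``-o`` arguments:

```python
def ssh_option_args(user=None, priv=None, identities_only=False):
    # Build the subset of -o options relevant to identity handling;
    # IdentitiesOnly=yes restricts ssh to the configured IdentityFile.
    options = []
    if priv:
        options.append('IdentityFile={0}'.format(priv))
    if user:
        options.append('User={0}'.format(user))
    if identities_only:
        options.append('IdentitiesOnly=yes')
    return ['-o {0}'.format(opt) for opt in options]

print(ssh_option_args(user='root',
                      priv='/etc/salt/pki/ssh/salt-ssh.rsa',
                      identities_only=True))
```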

View File

@ -2114,7 +2114,7 @@ class Map(Cloud):
output[name] = self.create(
profile, local_master=local_master
)
if self.opts.get('show_deploy_args', False) is False:
if self.opts.get('show_deploy_args', False) is False and 'deploy_kwargs' in output:
output[name].pop('deploy_kwargs', None)
except SaltCloudException as exc:
log.error(

View File

@ -265,6 +265,12 @@ def create(vm_):
transport=__opts__['transport']
)
displayname = cloudstack_displayname(vm_)
if displayname:
kwargs['ex_displayname'] = displayname
else:
kwargs['ex_displayname'] = kwargs['name']
volumes = {}
ex_blockdevicemappings = block_device_mappings(vm_)
if ex_blockdevicemappings:
@ -561,3 +567,15 @@ def block_device_mappings(vm_):
return config.get_cloud_config_value(
'block_device_mappings', vm_, __opts__, search_global=True
)
def cloudstack_displayname(vm_):
'''
Return display name of VM:
::
"minion1"
'''
return config.get_cloud_config_value(
'cloudstack_displayname', vm_, __opts__, search_global=True
)

View File

@ -83,6 +83,10 @@ import hashlib
import binascii
import datetime
import base64
import msgpack
import json
import re
import decimal
# Import 3rd-party libs
# pylint: disable=import-error,no-name-in-module,redefined-builtin
@ -109,9 +113,11 @@ except ImportError:
# Import salt libs
import salt.utils
from salt import syspaths
from salt.utils import namespaced_function
from salt.cloud.libcloudfuncs import get_salt_interface
from salt._compat import ElementTree as ET
import salt.utils.http as http
import salt.utils.aws as aws
# Import salt.cloud libs
@ -171,6 +177,8 @@ EC2_RETRY_CODES = [
'InsufficientReservedInstanceCapacity',
]
JS_COMMENT_RE = re.compile(r'/\*.*?\*/', re.S)
# Only load in this module if the EC2 configurations are in place
def __virtual__():
@ -4008,3 +4016,185 @@ def get_password_data(
ret['password'] = key_obj.decrypt(pwdata, sentinel)
return ret
def update_pricing(kwargs=None, call=None):
'''
Download most recent pricing information from AWS and convert to a local
JSON file.
CLI Examples:
.. code-block:: bash
salt-cloud -f update_pricing my-ec2-config
salt-cloud -f update_pricing my-ec2-config type=linux
.. versionadded:: Beryllium
'''
sources = {
'linux': 'https://a0.awsstatic.com/pricing/1/ec2/linux-od.min.js',
'rhel': 'https://a0.awsstatic.com/pricing/1/ec2/rhel-od.min.js',
'sles': 'https://a0.awsstatic.com/pricing/1/ec2/sles-od.min.js',
'mswin': 'https://a0.awsstatic.com/pricing/1/ec2/mswin-od.min.js',
'mswinsql': 'https://a0.awsstatic.com/pricing/1/ec2/mswinSQL-od.min.js',
'mswinsqlweb': 'https://a0.awsstatic.com/pricing/1/ec2/mswinSQLWeb-od.min.js',
}
if kwargs is None:
kwargs = {}
if 'type' not in kwargs:
for source in sources:
_parse_pricing(sources[source], source)
else:
_parse_pricing(sources[kwargs['type']], kwargs['type'])
def _parse_pricing(url, name):
'''
Download and parse an individual pricing file from AWS
.. versionadded:: Beryllium
'''
price_js = http.query(url, text=True)
items = []
current_item = ''
price_js = re.sub(JS_COMMENT_RE, '', price_js['text'])
price_js = price_js.strip().rstrip(');').lstrip('callback(')
for keyword in (
'vers',
'config',
'rate',
'valueColumns',
'currencies',
'instanceTypes',
'type',
'ECU',
'storageGB',
'name',
'vCPU',
'memoryGiB',
'storageGiB',
'USD',
):
price_js = price_js.replace(keyword, '"{0}"'.format(keyword))
for keyword in ('region', 'price', 'size'):
price_js = price_js.replace(keyword, '"{0}"'.format(keyword))
price_js = price_js.replace('"{0}"s'.format(keyword), '"{0}s"'.format(keyword))
price_js = price_js.replace('""', '"')
# Turn the data into something that's easier/faster to process
regions = {}
price_json = json.loads(price_js)
for region in price_json['config']['regions']:
sizes = {}
for itype in region['instanceTypes']:
for size in itype['sizes']:
sizes[size['size']] = size
regions[region['region']] = sizes
outfile = os.path.join(
syspaths.CACHE_DIR, 'cloud', 'ec2-pricing-{0}.p'.format(name)
)
with salt.utils.fopen(outfile, 'w') as fho:
msgpack.dump(regions, fho)
return True
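The transformation ``_parse_pricing`` applies can be hard to follow: the AWS pricing files are JSONP with unquoted keys, so the code strips the callback wrapper and comments, then quotes known bare keywords until ``json.loads`` accepts the payload. A simplified, runnable sketch of that idea on a toy payload (the keyword list here is illustrative, not the full set above):

```python
import json
import re

JS_COMMENT_RE = re.compile(r'/\*.*?\*/', re.S)

def jsonp_to_dict(payload):
    # Strip /* ... */ comments and the callback(...); wrapper.
    payload = JS_COMMENT_RE.sub('', payload).strip()
    payload = payload.lstrip('callback(').rstrip(');')
    # Quote bare keys so the result is valid JSON.
    for keyword in ('vers', 'config', 'regions'):
        payload = payload.replace(keyword, '"{0}"'.format(keyword))
    payload = payload.replace('""', '"')
    return json.loads(payload)

print(jsonp_to_dict('callback({vers: 0.01, config: {regions: []}});'))
```

Note that the real code also has to handle keywords that are prefixes of other keywords (``region`` vs ``regions``), which is why it patches ``"region"s`` back to ``"regions"`` in a second pass.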
def show_pricing(kwargs=None, call=None):
'''
Show pricing for a particular profile. This is only an estimate, based on
unofficial pricing sources.
CLI Examples:
.. code-block:: bash
salt-cloud -f show_pricing my-ec2-config my-profile
If pricing sources have not been cached, they will be downloaded. Once they
have been cached, they will not be updated automatically. To manually update
all prices, use the following command:
.. code-block:: bash
salt-cloud -f update_pricing <provider>
.. versionadded:: Beryllium
'''
profile = __opts__['profiles'].get(kwargs['profile'], {})
if not profile:
return {'Error': 'The requested profile was not found'}
# Make sure the profile belongs to ec2
provider = profile.get('provider', '0:0')
comps = provider.split(':')
if len(comps) < 2 or comps[1] != 'ec2':
return {'Error': 'The requested profile does not belong to EC2'}
image_id = profile.get('image', None)
image_dict = show_image({'image': image_id}, 'function')
image_info = image_dict[0]
# Find out what platform it is
if image_info.get('imageOwnerAlias', '') == 'amazon':
if image_info.get('platform', '') == 'windows':
image_description = image_info.get('description', '')
if 'sql' in image_description.lower():
if 'web' in image_description.lower():
name = 'mswinsqlweb'
else:
name = 'mswinsql'
else:
name = 'mswin'
elif image_info.get('imageLocation', '').strip().startswith('amazon/suse'):
name = 'sles'
else:
name = 'linux'
elif image_info.get('imageOwnerId', '') == '309956199498':
name = 'rhel'
else:
name = 'linux'
pricefile = os.path.join(
syspaths.CACHE_DIR, 'cloud', 'ec2-pricing-{0}.p'.format(name)
)
if not os.path.isfile(pricefile):
update_pricing({'type': name}, 'function')
with salt.utils.fopen(pricefile, 'r') as fhi:
ec2_price = msgpack.load(fhi)
region = get_location(profile)
size = profile.get('size', None)
if size is None:
return {'Error': 'The requested profile does not contain a size'}
try:
raw = ec2_price[region][size]
except KeyError:
return {'Error': 'The size ({0}) in the requested profile does not have '
'a price associated with it for the {1} region'.format(size, region)}
ret = {}
if kwargs.get('raw', False):
ret['_raw'] = raw
ret['per_hour'] = 0
for col in raw.get('valueColumns', []):
ret['per_hour'] += decimal.Decimal(col['prices'].get('USD', 0))
ret['per_hour'] = decimal.Decimal(ret['per_hour'])
ret['per_day'] = ret['per_hour'] * 24
ret['per_week'] = ret['per_day'] * 7
ret['per_month'] = ret['per_day'] * 30
ret['per_year'] = ret['per_week'] * 52
return {profile['profile']: ret}
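The per-period arithmetic at the end of ``show_pricing`` sums the hourly USD columns with ``decimal.Decimal`` to avoid float rounding, then derives the longer periods. A minimal sketch of that roll-up in isolation:

```python
from decimal import Decimal

def price_periods(value_columns):
    # Sum hourly USD prices exactly, then estimate longer periods
    # (30-day month, 52-week year, as in show_pricing above).
    per_hour = Decimal(0)
    for col in value_columns:
        per_hour += Decimal(col['prices'].get('USD', 0))
    per_day = per_hour * 24
    return {
        'per_hour': per_hour,
        'per_day': per_day,
        'per_week': per_day * 7,
        'per_month': per_day * 30,
        'per_year': per_day * 7 * 52,
    }

print(price_periods([{'prices': {'USD': '0.1'}}]))
```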

View File

@ -25,10 +25,11 @@ Setting up Service Account Authentication:
- Create or navigate to your desired Project.
- Make sure Google Compute Engine service is enabled under the Services
section.
- Go to "APIs and auth" and then the "Registered apps" section.
- Click the "REGISTER APP" button and give it a meaningful name.
- Select "Web Application" and click "Register".
- Select Certificate, then "Generate Certificate"
- Go to "APIs and auth" section, and then the "Credentials" link.
- Click the "CREATE NEW CLIENT ID" button.
- Select "Service Account" and click "Create Client ID" button.
- This will automatically download a .json file; ignore it.
- Look for a new "Service Account" section in the page, click on the "Generate New P12 key" button
- Copy the Email Address for inclusion in your /etc/salt/cloud file
in the 'service_account_email_address' setting.
- Download the Private Key
@ -45,7 +46,7 @@ Setting up Service Account Authentication:
my-gce-config:
# The Google Cloud Platform Project ID
project: google.com:erjohnso
project: "my-project-id"
# The Service Account client ID
service_account_email_address: 1234567890@developer.gserviceaccount.com
# The location of the private key (PEM format)

View File

@ -278,7 +278,11 @@ def list_nodes(conn=None, call=None):
hide = True
if not get_configured_provider():
return
lxclist = _salt('lxc.list', extra=True)
path = None
if profile and profile in profiles:
path = profiles[profile].get('path', None)
lxclist = _salt('lxc.list', extra=True, path=path)
nodes = {}
for state, lxcs in six.iteritems(lxclist):
for lxcc, linfos in six.iteritems(lxcs):
@ -295,10 +299,9 @@ def list_nodes(conn=None, call=None):
# do not also mask half configured nodes which are explicitly asked
# to be acted on, on the command line
if (
(call in ['full'] or not hide)
and (
(lxcc in names and call in ['action'])
or (call in ['full'])
(call in ['full'] or not hide) and (
(lxcc in names and call in ['action']) or (
call in ['full'])
)
):
nodes[lxcc] = info
@ -383,7 +386,7 @@ def destroy(vm_, call=None):
return
ret = {'comment': '{0} was not found'.format(vm_),
'result': False}
if _salt('lxc.info', vm_):
if _salt('lxc.info', vm_, path=path):
salt.utils.cloud.fire_event(
'event',
'destroying instance',
@ -391,7 +394,7 @@ def destroy(vm_, call=None):
{'name': vm_, 'instance_id': vm_},
transport=__opts__['transport']
)
cret = _salt('lxc.destroy', vm_, stop=True)
cret = _salt('lxc.destroy', vm_, stop=True, path=path)
ret['result'] = cret['result']
if ret['result']:
ret['comment'] = '{0} was destroyed'.format(vm_)
@ -507,18 +510,21 @@ def get_configured_provider(vm_=None):
profs = __opts__['profiles']
tgt = 'profile: {0}'.format(curprof)
if (
curprof in profs
and profs[curprof]['provider'] == __active_provider_name__
curprof in profs and
profs[curprof]['provider'] == __active_provider_name__
):
prov, cdriver = profs[curprof]['provider'].split(':')
tgt += ' provider: {0}'.format(prov)
data = get_provider(prov)
matched = True
# fallback if we have only __active_provider_name__
if ((__opts__.get('destroy', False) and not data)
or (not matched and __active_provider_name__)):
if (
(__opts__.get('destroy', False) and not data) or (
not matched and __active_provider_name__
)
):
data = __opts__.get('providers',
{}).get(dalias, {}).get(driver, {})
{}).get(dalias, {}).get(driver, {})
# in all cases, verify that the linked saltmaster is alive.
if data:
try:

View File

@ -0,0 +1,515 @@
# -*- coding: utf-8 -*-
'''
Scaleway Cloud Module
==========================
.. versionadded:: Beryllium
The Scaleway cloud module is used to interact with your Scaleway BareMetal
Servers.
Use of this module only requires the ``api_key`` parameter to be set. Set up
the cloud configuration at ``/etc/salt/cloud.providers`` or
``/etc/salt/cloud.providers.d/scaleway.conf``:
.. code-block:: yaml
scaleway-config:
# Scaleway organization and token
access_key: 0e604a2c-aea6-4081-acb2-e1d1258ef95c
token: be8fd96b-04eb-4d39-b6ba-a9edbcf17f12
provider: scaleway
:depends: requests
'''
from __future__ import absolute_import
import copy
import json
import logging
import pprint
import time
try:
import requests
except ImportError:
requests = None
import salt.config as config
from salt.exceptions import (
SaltCloudNotFound,
SaltCloudSystemExit,
SaltCloudExecutionFailure,
SaltCloudExecutionTimeout
)
from salt.ext.six.moves import range
import salt.utils.cloud
log = logging.getLogger(__name__)
# Only load in this module if the Scaleway configurations are in place
def __virtual__():
''' Check for Scaleway configurations.
'''
if requests is None:
return False
if get_configured_provider() is False:
return False
return True
def get_configured_provider():
''' Return the first configured instance.
'''
return config.is_provider_configured(
__opts__,
__active_provider_name__ or 'scaleway',
('token',)
)
def avail_images(call=None):
''' Return a list of the images that are on the provider.
'''
if call == 'action':
raise SaltCloudSystemExit(
'The avail_images function must be called with '
'-f or --function, or with the --list-images option'
)
items = query(method='images')
ret = {}
for image in items['images']:
ret[image['id']] = {}
for item in image:
ret[image['id']][item] = str(image[item])
return ret
def list_nodes(call=None):
''' Return a list of the BareMetal servers that are on the provider.
'''
if call == 'action':
raise SaltCloudSystemExit(
'The list_nodes function must be called with -f or --function.'
)
items = query(method='servers')
ret = {}
for node in items['servers']:
public_ips = []
private_ips = []
image_id = ''
if node.get('public_ip'):
public_ips = [node['public_ip']['address']]
if node.get('private_ip'):
private_ips = [node['private_ip']]
if node.get('image'):
image_id = node['image']['id']
ret[node['name']] = {
'id': node['id'],
'image_id': image_id,
'public_ips': public_ips,
'private_ips': private_ips,
'size': node['volumes']['0']['size'],
'state': node['state'],
}
return ret
def list_nodes_full(call=None):
''' Return a list of the BareMetal servers that are on the provider.
'''
if call == 'action':
raise SaltCloudSystemExit(
'list_nodes_full must be called with -f or --function'
)
items = query(method='servers')
# For each server, iterate over its parameters.
ret = {}
for node in items['servers']:
ret[node['name']] = {}
for item in node:
value = node[item]
ret[node['name']][item] = value
return ret
def list_nodes_select(call=None):
''' Return a list of the BareMetal servers that are on the provider, with
select fields.
'''
return salt.utils.cloud.list_nodes_select(
list_nodes_full('function'), __opts__['query.selection'], call,
)
def get_image(server_):
''' Return the image object to use.
'''
images = avail_images()
server_image = str(config.get_cloud_config_value(
'image', server_, __opts__, search_global=False
))
for image in images:
if server_image in (images[image]['name'], images[image]['id']):
return images[image]['id']
raise SaltCloudNotFound(
'The specified image, {0!r}, could not be found.'.format(server_image)
)
def create_node(args):
''' Create a node.
'''
node = query(method='servers', args=args, http_method='post')
action = query(
method='servers',
server_id=node['server']['id'],
command='action',
args={'action': 'poweron'},
http_method='post'
)
return node
def create(server_):
''' Create a single BareMetal server from a data dict.
'''
salt.utils.cloud.fire_event(
'event',
'starting create',
'salt/cloud/{0}/creating'.format(server_['name']),
{
'name': server_['name'],
'profile': server_['profile'],
'provider': server_['provider'],
},
transport=__opts__['transport']
)
log.info('Creating a BareMetal server {0}'.format(server_['name']))
access_key = config.get_cloud_config_value(
'access_key', get_configured_provider(), __opts__, search_global=False
)
kwargs = {
'name': server_['name'],
'organization': access_key,
'image': get_image(server_),
}
salt.utils.cloud.fire_event(
'event',
'requesting instance',
'salt/cloud/{0}/requesting'.format(server_['name']),
{'kwargs': kwargs},
transport=__opts__['transport']
)
try:
ret = create_node(kwargs)
except Exception as exc:
log.error(
'Error creating {0} on Scaleway\n\n'
'The following exception was thrown when trying to '
'run the initial deployment: {1}'.format(
server_['name'],
str(exc)
),
# Show the traceback if the debug logging level is enabled
exc_info_on_loglevel=logging.DEBUG
)
return False
def __query_node_data(server_name):
''' Called to check if the server has a public IP address.
'''
data = show_instance(server_name, 'action')
if data and data.get('public_ip'):
return data
return False
try:
data = salt.utils.cloud.wait_for_ip(
__query_node_data,
update_args=(server_['name'],),
timeout=config.get_cloud_config_value(
'wait_for_ip_timeout', server_, __opts__, default=10 * 60),
interval=config.get_cloud_config_value(
'wait_for_ip_interval', server_, __opts__, default=10),
)
except (SaltCloudExecutionTimeout, SaltCloudExecutionFailure) as exc:
try:
# It might be already up, let's destroy it!
destroy(server_['name'])
except SaltCloudSystemExit:
pass
finally:
raise SaltCloudSystemExit(str(exc))
ssh_username = config.get_cloud_config_value(
'ssh_username', server_, __opts__, default='root'
)
if config.get_cloud_config_value('deploy', server_, __opts__) is True:
deploy_script = script(server_)
if data.get('public_ip') is not None:
ip_address = data['public_ip']['address']
deploy_kwargs = {
'opts': __opts__,
'host': ip_address,
'username': ssh_username,
'script': deploy_script,
'name': server_['name'],
'tmp_dir': config.get_cloud_config_value(
'tmp_dir', server_, __opts__, default='/tmp/.saltcloud'
),
'deploy_command': config.get_cloud_config_value(
'deploy_command', server_, __opts__,
default='/tmp/.saltcloud/deploy.sh',
),
'start_action': __opts__['start_action'],
'parallel': __opts__['parallel'],
'sock_dir': __opts__['sock_dir'],
'conf_file': __opts__['conf_file'],
'minion_pem': server_['priv_key'],
'minion_pub': server_['pub_key'],
'keep_tmp': __opts__['keep_tmp'],
'preseed_minion_keys': server_.get('preseed_minion_keys', None),
'display_ssh_output': config.get_cloud_config_value(
'display_ssh_output', server_, __opts__, default=True
),
'sudo': config.get_cloud_config_value(
'sudo', server_, __opts__, default=(ssh_username != 'root')
),
'sudo_password': config.get_cloud_config_value(
'sudo_password', server_, __opts__, default=None
),
'tty': config.get_cloud_config_value(
'tty', server_, __opts__, default=False
),
'script_args': config.get_cloud_config_value(
'script_args', server_, __opts__
),
'script_env': config.get_cloud_config_value('script_env', server_,
__opts__),
'minion_conf': salt.utils.cloud.minion_config(__opts__, server_)
}
# Deploy salt-master files, if necessary
if config.get_cloud_config_value('make_master', server_, __opts__) is True:
deploy_kwargs['make_master'] = True
deploy_kwargs['master_pub'] = server_['master_pub']
deploy_kwargs['master_pem'] = server_['master_pem']
master_conf = salt.utils.cloud.master_config(__opts__, server_)
deploy_kwargs['master_conf'] = master_conf
if master_conf.get('syndic_master', None):
deploy_kwargs['make_syndic'] = True
deploy_kwargs['make_minion'] = config.get_cloud_config_value(
'make_minion', server_, __opts__, default=True
)
# Store what was used to deploy the BareMetal server
event_kwargs = copy.deepcopy(deploy_kwargs)
del event_kwargs['minion_pem']
del event_kwargs['minion_pub']
del event_kwargs['sudo_password']
if 'password' in event_kwargs:
del event_kwargs['password']
ret['deploy_kwargs'] = event_kwargs
salt.utils.cloud.fire_event(
'event',
'executing deploy script',
'salt/cloud/{0}/deploying'.format(server_['name']),
{'kwargs': event_kwargs},
transport=__opts__['transport']
)
deployed = salt.utils.cloud.deploy_script(**deploy_kwargs)
if deployed:
log.info('Salt installed on {0}'.format(server_['name']))
else:
log.error(
'Failed to start Salt on BareMetal server {0}'.format(
server_['name']
)
)
ret.update(data)
log.info('Created BareMetal server {0[name]!r}'.format(server_))
log.debug(
'{0[name]!r} BareMetal server creation details:\n{1}'.format(
server_, pprint.pformat(data)
)
)
salt.utils.cloud.fire_event(
'event',
'created instance',
'salt/cloud/{0}/created'.format(server_['name']),
{
'name': server_['name'],
'profile': server_['profile'],
'provider': server_['provider'],
},
transport=__opts__['transport']
)
return ret
def query(method='servers', server_id=None, command=None, args=None,
http_method='get'):
''' Make a call to the Scaleway API.
'''
base_path = str(config.get_cloud_config_value(
'api_root',
get_configured_provider(),
__opts__,
search_global=False,
default='https://api.scaleway.com'
))
path = '{0}/{1}/'.format(base_path, method)
if server_id:
path += '{0}/'.format(server_id)
if command:
path += command
if not isinstance(args, dict):
args = {}
token = config.get_cloud_config_value(
'token', get_configured_provider(), __opts__, search_global=False
)
data = json.dumps(args)
requester = getattr(requests, http_method)
request = requester(
path, data=data,
headers={'X-Auth-Token': token, 'Content-Type': 'application/json'}
)
if request.status_code > 299:
raise SaltCloudSystemExit(
'An error occurred while querying Scaleway. HTTP Code: {0} '
'Error: {1!r}'.format(
request.status_code,
request.text
)
)
log.debug(request.url)
# success without data
if request.status_code == 204:
return True
return request.json()
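The URL construction in ``query()`` above composes the base path, the API method, and optional server id and command segments. A minimal standalone sketch of just that path-building step (the helper name ``build_path`` is illustrative, not part of the driver):

```python
def build_path(base, method, server_id=None, command=None):
    # Mirrors the URL construction in query(): base/method/, then the
    # optional server id segment, then the optional action command.
    path = '{0}/{1}/'.format(base, method)
    if server_id:
        path += '{0}/'.format(server_id)
    if command:
        path += command
    return path
```

For example, a server action request ends up at ``<api_root>/servers/<id>/action``.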
def script(server_):
''' Return the script deployment object.
'''
return salt.utils.cloud.os_script(
config.get_cloud_config_value('script', server_, __opts__),
server_,
__opts__,
salt.utils.cloud.salt_config_to_yaml(
salt.utils.cloud.minion_config(__opts__, server_)
)
)
def show_instance(name, call=None):
''' Show the details from a Scaleway BareMetal server.
'''
if call != 'action':
raise SaltCloudSystemExit(
'The show_instance action must be called with -a or --action.'
)
node = _get_node(name)
salt.utils.cloud.cache_node(node, __active_provider_name__, __opts__)
return node
def _get_node(name):
for attempt in reversed(list(range(10))):
try:
return list_nodes_full()[name]
except KeyError:
log.debug(
'Failed to get the data for the node {0!r}. Remaining '
'attempts {1}'.format(
name, attempt
)
)
# Just a little delay between attempts...
time.sleep(0.5)
return {}
def destroy(name, call=None):
''' Destroy a node. Will check termination protection and warn if enabled.
CLI Example:
.. code-block:: bash
salt-cloud --destroy mymachine
'''
if call == 'function':
raise SaltCloudSystemExit(
'The destroy action must be called with -d, --destroy, '
'-a or --action.'
)
salt.utils.cloud.fire_event(
'event',
'destroying instance',
'salt/cloud/{0}/destroying'.format(name),
{'name': name},
transport=__opts__['transport']
)
data = show_instance(name, call='action')
node = query(
method='servers', server_id=data['id'], command='action',
args={'action': 'terminate'}, http_method='post'
)
salt.utils.cloud.fire_event(
'event',
'destroyed instance',
'salt/cloud/{0}/destroyed'.format(name),
{'name': name},
transport=__opts__['transport']
)
if __opts__.get('update_cachedir', False) is True:
salt.utils.cloud.delete_minion_cachedir(
name, __active_provider_name__.split(':')[0], __opts__
)
return node

View File

@ -33,6 +33,7 @@ import copy
import pprint
import logging
import time
import decimal
# Import salt cloud libs
import salt.utils.cloud
@ -128,6 +129,8 @@ def avail_locations(call=None):
available = conn.getAvailableLocations(id=50)
for location in available:
if location.get('isAvailable', 0) is 0:
continue
ret[location['locationId']]['available'] = True
return ret
@ -629,3 +632,56 @@ def list_vlans(call=None):
conn = get_conn(service='Account')
return conn.getNetworkVlans()
def show_pricing(kwargs=None, call=None):
'''
Show pricing for a particular profile. This is only an estimate, based on
unofficial pricing sources.
CLI Examples:
.. code-block:: bash
salt-cloud -f show_pricing my-softlayerhw-config my-profile
If pricing sources have not been cached, they will be downloaded. Once they
have been cached, they will not be updated automatically. To manually update
all prices, use the following command:
.. code-block:: bash
salt-cloud -f update_pricing <provider>
.. versionadded:: Beryllium
'''
profile = __opts__['profiles'].get(kwargs['profile'], {})
if not profile:
return {'Error': 'The requested profile was not found'}
# Make sure the profile belongs to Softlayer HW
provider = profile.get('provider', '0:0')
comps = provider.split(':')
if len(comps) < 2 or comps[1] != 'softlayer_hw':
return {'Error': 'The requested profile does not belong to Softlayer HW'}
raw = {}
ret = {}
ret['per_hour'] = 0
conn = get_conn(service='SoftLayer_Product_Item_Price')
for item in profile:
if item in ('profile', 'provider', 'location'):
continue
price = conn.getObject(id=profile[item])
raw[item] = price
ret['per_hour'] += decimal.Decimal(price.get('hourlyRecurringFee', 0))
ret['per_day'] = ret['per_hour'] * 24
ret['per_week'] = ret['per_day'] * 7
ret['per_month'] = ret['per_day'] * 30
ret['per_year'] = ret['per_week'] * 52
if kwargs.get('raw', False):
ret['_raw'] = raw
return {profile['profile']: ret}
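The ``show_pricing`` function above sums per-item ``hourlyRecurringFee`` values with ``decimal.Decimal`` (avoiding float rounding) and then derives the longer billing periods. A minimal sketch of that aggregation, with ``estimate_pricing`` as an illustrative helper name:

```python
import decimal

def estimate_pricing(hourly_fees):
    # Sum hourly fees as Decimals, as show_pricing() does, then derive
    # the other periods using the same 24/7/30/52 multipliers.
    ret = {'per_hour': sum(decimal.Decimal(fee) for fee in hourly_fees)}
    ret['per_day'] = ret['per_hour'] * 24
    ret['per_week'] = ret['per_day'] * 7
    ret['per_month'] = ret['per_day'] * 30
    ret['per_year'] = ret['per_week'] * 52
    return ret
```

Note that ``per_month`` (30 days) and ``per_year`` (52 weeks) are rough calendar approximations, consistent with the "only an estimate" caveat in the docstring.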

View File

@ -59,7 +59,7 @@ configuration, run :py:func:`test_vcenter_connection`
# Import python libs
from __future__ import absolute_import
from random import randint
from re import match
from re import match, findall
import atexit
import pprint
import logging
@ -68,6 +68,7 @@ import os.path
import subprocess
# Import salt libs
import salt.utils
import salt.utils.cloud
import salt.utils.xmlutil
from salt.exceptions import SaltCloudSystemExit
@ -312,7 +313,7 @@ def _edit_existing_hard_disk_helper(disk, size_kb):
def _add_new_hard_disk_helper(disk_label, size_gb, unit_number):
random_key = randint(-2099, -2000)
size_kb = int(size_gb) * 1024 * 1024
size_kb = int(size_gb * 1024.0 * 1024.0)
disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.fileOperation = 'create'
@ -354,11 +355,11 @@ def _edit_existing_network_adapter_helper(network_adapter, new_network_name, ada
if isinstance(network_adapter, type(edited_network_adapter)):
edited_network_adapter = network_adapter
else:
log.debug("Changing type of {0} from {1} to {2}".format(network_adapter.deviceInfo.label, type(network_adapter).__name__.rsplit(".", 1)[1][7:].lower(), adapter_type))
log.debug("Changing type of '{0}' from '{1}' to '{2}'".format(network_adapter.deviceInfo.label, type(network_adapter).__name__.rsplit(".", 1)[1][7:].lower(), adapter_type))
else:
# If type not specified or does not match, don't change adapter type
if adapter_type:
log.error("Cannot change type of {0} to {1}. Not changing type".format(network_adapter.deviceInfo.label, adapter_type))
log.error("Cannot change type of '{0}' to '{1}'. Not changing type".format(network_adapter.deviceInfo.label, adapter_type))
edited_network_adapter = network_adapter
if switch_type == 'standard':
@ -377,9 +378,9 @@ def _edit_existing_network_adapter_helper(network_adapter, new_network_name, ada
else:
# If switch type not specified or does not match, show error and return
if not switch_type:
err_msg = "The switch type to be used by {0} has not been specified".format(network_adapter.deviceInfo.label)
err_msg = "The switch type to be used by '{0}' has not been specified".format(network_adapter.deviceInfo.label)
else:
err_msg = "Cannot create {0}. Invalid/unsupported switch type {1}".format(network_adapter.deviceInfo.label, switch_type)
err_msg = "Cannot create '{0}'. Invalid/unsupported switch type '{1}'".format(network_adapter.deviceInfo.label, switch_type)
raise SaltCloudSystemExit(err_msg)
edited_network_adapter.key = network_adapter.key
@ -411,9 +412,9 @@ def _add_new_network_adapter_helper(network_adapter_label, network_name, adapter
else:
# If type not specified or does not match, create adapter of type vmxnet3
if not adapter_type:
log.debug("The type of {0} has not been specified. Creating of default type vmxnet3".format(network_adapter_label))
log.debug("The type of '{0}' has not been specified. Creating of default type 'vmxnet3'".format(network_adapter_label))
else:
log.error("Cannot create network adapter of type {0}. Creating {1} of default type vmxnet3".format(adapter_type, network_adapter_label))
log.error("Cannot create network adapter of type '{0}'. Creating '{1}' of default type 'vmxnet3'".format(adapter_type, network_adapter_label))
network_spec.device = vim.vm.device.VirtualVmxnet3()
network_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
@ -433,9 +434,9 @@ def _add_new_network_adapter_helper(network_adapter_label, network_name, adapter
else:
# If switch type not specified or does not match, show error and return
if not switch_type:
err_msg = "The switch type to be used by {0} has not been specified".format(network_adapter_label)
err_msg = "The switch type to be used by '{0}' has not been specified".format(network_adapter_label)
else:
err_msg = "Cannot create {0}. Invalid/unsupported switch type {1}".format(network_adapter_label, switch_type)
err_msg = "Cannot create '{0}'. Invalid/unsupported switch type '{1}'".format(network_adapter_label, switch_type)
raise SaltCloudSystemExit(err_msg)
network_spec.device.key = random_key
@ -478,9 +479,9 @@ def _add_new_scsi_adapter_helper(scsi_adapter_label, properties, bus_number):
else:
# If type not specified or does not match, show error and return
if not adapter_type:
err_msg = "The type of {0} has not been specified".format(scsi_adapter_label)
err_msg = "The type of '{0}' has not been specified".format(scsi_adapter_label)
else:
err_msg = "Cannot create {0} of invalid/unsupported type {1}".format(scsi_adapter_label, adapter_type)
err_msg = "Cannot create '{0}'. Invalid/unsupported type '{1}'".format(scsi_adapter_label, adapter_type)
raise SaltCloudSystemExit(err_msg)
scsi_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
@ -556,9 +557,9 @@ def _add_new_cd_or_dvd_drive_helper(drive_label, controller_key, device_type, mo
else:
# If device_type not specified or does not match, create drive of Client type with Passthough mode
if not device_type:
log.debug("The device_type of {0} has not been specified. Creating of default type client_device".format(drive_label))
log.debug("The 'device_type' of '{0}' has not been specified. Creating of default type 'client_device'".format(drive_label))
else:
log.error("Cannot create CD/DVD drive of type {0}. Creating {1} of default type client_device".format(device_type, drive_label))
log.error("Cannot create CD/DVD drive of type '{0}'. Creating '{1}' of default type 'client_device'".format(device_type, drive_label))
drive_spec.device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
drive_spec.device.deviceInfo.summary = 'Remote Device'
@ -614,8 +615,8 @@ def _manage_devices(devices, vm):
unit_number += 1
existing_disks_label.append(device.deviceInfo.label)
if device.deviceInfo.label in list(devices['disk'].keys()):
size_gb = devices['disk'][device.deviceInfo.label]['size']
size_kb = int(size_gb) * 1024 * 1024
size_gb = float(devices['disk'][device.deviceInfo.label]['size'])
size_kb = int(size_gb * 1024.0 * 1024.0)
if device.capacityInKB < size_kb:
# expand the disk
disk_spec = _edit_existing_hard_disk_helper(device, size_kb)
@ -674,7 +675,7 @@ def _manage_devices(devices, vm):
log.debug("Hard disks to create: {0}".format(disks_to_create))
for disk_label in disks_to_create:
# create the disk
size_gb = devices['disk'][disk_label]['size']
size_gb = float(devices['disk'][disk_label]['size'])
disk_spec = _add_new_hard_disk_helper(disk_label, size_gb, unit_number)
device_specs.append(disk_spec)
unit_number += 1
@ -720,7 +721,7 @@ def _manage_devices(devices, vm):
else:
controller_key = None
if not controller_key:
log.error('No more available controllers for {0}. All IDE controllers are currently in use'.format(cd_drive_label))
log.error("No more available controllers for '{0}'. All IDE controllers are currently in use".format(cd_drive_label))
else:
cd_drive_spec = _add_new_cd_or_dvd_drive_helper(cd_drive_label, controller_key, device_type, mode, iso_path)
device_specs.append(cd_drive_spec)
@ -734,27 +735,51 @@ def _manage_devices(devices, vm):
return ret
def _wait_for_ip(vm_ref, max_wait_minute):
def _wait_for_vmware_tools(vm_ref, max_wait_minute):
time_counter = 0
starttime = time.time()
max_wait_second = int(max_wait_minute * 60)
while time_counter < max_wait_second:
if time_counter % 5 == 0:
log.info("[ {0} ] Waiting to get IP information [{1} s]".format(vm_ref.name, time_counter))
log.info("[ {0} ] Waiting for VMware tools to be running [{1} s]".format(vm_ref.name, time_counter))
if str(vm_ref.summary.guest.toolsRunningStatus) == "guestToolsRunning":
log.info("[ {0} ] Successfully got VMware tools running on the guest in {1} seconds".format(vm_ref.name, time_counter))
return True
time.sleep(1.0 - ((time.time() - starttime) % 1.0))
time_counter += 1
log.warning("[ {0} ] Timeout Reached. VMware tools still not running after waiting for {1} minutes".format(vm_ref.name, max_wait_minute))
return False
def _wait_for_ip(vm_ref, max_wait_minute):
max_wait_minute_vmware_tools = max_wait_minute - 5
max_wait_minute_ip = max_wait_minute - max_wait_minute_vmware_tools
vmware_tools_status = _wait_for_vmware_tools(vm_ref, max_wait_minute_vmware_tools)
if not vmware_tools_status:
return False
time_counter = 0
starttime = time.time()
max_wait_second = int(max_wait_minute_ip * 60)
while time_counter < max_wait_second:
if time_counter % 5 == 0:
log.info("[ {0} ] Waiting to retrieve IPv4 information [{1} s]".format(vm_ref.name, time_counter))
if vm_ref.summary.guest.ipAddress:
if match(r'^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$', vm_ref.summary.guest.ipAddress) and vm_ref.summary.guest.ipAddress != '127.0.0.1':
log.info("[ {0} ] Successfully got IP information in {1} seconds".format(vm_ref.name, time_counter))
log.info("[ {0} ] Successfully retrieved IPv4 information in {1} seconds".format(vm_ref.name, time_counter))
return vm_ref.summary.guest.ipAddress
for net in vm_ref.guest.net:
if net.ipConfig.ipAddress:
for current_ip in net.ipConfig.ipAddress:
if match(r'^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$', current_ip.ipAddress) and current_ip.ipAddress != '127.0.0.1':
log.info("[ {0} ] Successfully got IP information in {1} seconds".format(vm_ref.name, time_counter))
log.info("[ {0} ] Successfully retrieved IPv4 information in {1} seconds".format(vm_ref.name, time_counter))
return current_ip.ipAddress
time.sleep(1.0 - ((time.time() - starttime) % 1.0))
time_counter += 1
log.warning("[ {0} ] Timeout Reached. Unable to retrieve IPv4 information after waiting for {1} minutes".format(vm_ref.name, max_wait_minute_ip))
return False
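Both ``_wait_for_vmware_tools`` and ``_wait_for_ip`` use the same one-second polling loop, where ``time.sleep(1.0 - ((time.time() - starttime) % 1.0))`` compensates for drift so the counter tracks wall-clock seconds. A minimal sketch of that loop in isolation (``poll_until`` is an illustrative name, not a function in the driver):

```python
import time

def poll_until(check, max_wait_second):
    # Tick roughly once per wall-clock second; the modulo in the sleep
    # subtracts whatever time the check itself consumed, preventing the
    # loop from drifting away from real elapsed seconds.
    time_counter = 0
    starttime = time.time()
    while time_counter < max_wait_second:
        result = check()
        if result:
            return result
        time.sleep(1.0 - ((time.time() - starttime) % 1.0))
        time_counter += 1
    return False
```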
@ -836,7 +861,7 @@ def _format_instance_info_select(vm, selection):
vm_select_info['path'] = vm["config.files.vmPathName"]
if 'tools_status' in selection:
vm_select_info['tools_status'] = str(vm["guest.toolsStatus"])
vm_select_info['tools_status'] = str(vm["guest.toolsStatus"]) if "guest.toolsStatus" in vm else "N/A"
if ('private_ips' or 'mac_address' or 'networks') in selection:
network_full_info = {}
@ -1973,6 +1998,7 @@ def destroy(name, call=None):
)
return 'failed to destroy'
try:
log.info('Destroying VM {0}'.format(name))
task = vm["object"].Destroy_Task()
_wait_for_task(task, name, "destroy")
except Exception as exc:
@ -2061,7 +2087,7 @@ def create(vm_):
'extra_config', vm_, __opts__, default=None
)
power = config.get_cloud_config_value(
'power_on', vm_, __opts__, default=False
'power_on', vm_, __opts__, default=True
)
key_filename = config.get_cloud_config_value(
'private_key', vm_, __opts__, search_global=False, default=None
@ -2087,13 +2113,13 @@ def create(vm_):
if resourcepool:
resourcepool_ref = _get_mor_by_property(vim.ResourcePool, resourcepool)
if not resourcepool_ref:
log.error('Specified resource pool: {0} does not exist'.format(resourcepool))
log.error("Specified resource pool: '{0}' does not exist".format(resourcepool))
if clone_type == "template":
raise SaltCloudSystemExit('You must specify a resource pool that exists.')
elif cluster:
cluster_ref = _get_mor_by_property(vim.ClusterComputeResource, cluster)
if not cluster_ref:
log.error('Specified cluster: {0} does not exist'.format(cluster))
log.error("Specified cluster: '{0}' does not exist".format(cluster))
if clone_type == "template":
raise SaltCloudSystemExit('You must specify a cluster that exists.')
else:
@ -2103,26 +2129,26 @@ def create(vm_):
'You must either specify a cluster or a resource pool when cloning from a template.'
)
else:
log.debug('Using resource pool used by the {0} {1}'.format(clone_type, vm_['clonefrom']))
log.debug("Using resource pool used by the {0} {1}".format(clone_type, vm_['clonefrom']))
# Either a datacenter or a folder can be optionally specified
# If not specified, the existing VM/template\'s parent folder is used.
if folder:
folder_ref = _get_mor_by_property(vim.Folder, folder)
if not folder_ref:
log.error('Specified folder: {0} does not exist'.format(folder))
log.debug('Using folder in which {0} {1} is present'.format(clone_type, vm_['clonefrom']))
log.error("Specified folder: '{0}' does not exist".format(folder))
log.debug("Using folder in which {0} {1} is present".format(clone_type, vm_['clonefrom']))
folder_ref = object_ref.parent
elif datacenter:
datacenter_ref = _get_mor_by_property(vim.Datacenter, datacenter)
if not datacenter_ref:
log.error('Specified datacenter: {0} does not exist'.format(datacenter))
log.debug('Using datacenter folder in which {0} {1} is present'.format(clone_type, vm_['clonefrom']))
log.error("Specified datacenter: '{0}' does not exist".format(datacenter))
log.debug("Using datacenter folder in which {0} {1} is present".format(clone_type, vm_['clonefrom']))
folder_ref = object_ref.parent
else:
folder_ref = datacenter_ref.vmFolder
else:
log.debug('Using folder in which {0} {1} is present'.format(clone_type, vm_['clonefrom']))
log.debug("Using folder in which {0} {1} is present".format(clone_type, vm_['clonefrom']))
folder_ref = object_ref.parent
# Create the relocation specs
@ -2141,27 +2167,41 @@ def create(vm_):
else:
datastore_cluster_ref = _get_mor_by_property(vim.StoragePod, datastore)
if not datastore_cluster_ref:
log.error('Specified datastore/datastore cluster: {0} does not exist'.format(datastore))
log.debug('Using datastore used by the {0} {1}'.format(clone_type, vm_['clonefrom']))
log.error("Specified datastore/datastore cluster: '{0}' does not exist".format(datastore))
log.debug("Using datastore used by the {0} {1}".format(clone_type, vm_['clonefrom']))
else:
log.debug('No datastore/datastore cluster specified')
log.debug('Using datastore used by the {0} {1}'.format(clone_type, vm_['clonefrom']))
log.debug("No datastore/datastore cluster specified")
log.debug("Using datastore used by the {0} {1}".format(clone_type, vm_['clonefrom']))
if host:
host_ref = _get_mor_by_property(vim.HostSystem, host)
if host_ref:
reloc_spec.host = host_ref
else:
log.error('Specified host: {0} does not exist'.format(host))
log.error("Specified host: '{0}' does not exist".format(host))
# Create the config specs
config_spec = vim.vm.ConfigSpec()
if num_cpus:
config_spec.numCPUs = num_cpus
log.debug("Setting cpu to: {0}".format(num_cpus))
config_spec.numCPUs = int(num_cpus)
if memory:
config_spec.memoryMB = memory
try:
memory_num, memory_unit = findall(r"[^\W\d_]+|\d+\.\d+|\d+", memory)
if memory_unit.lower() == "mb":
memory_mb = int(memory_num)
elif memory_unit.lower() == "gb":
memory_mb = int(float(memory_num)*1024.0)
else:
err_msg = "Invalid memory type specified: '{0}'".format(memory_unit)
log.error(err_msg)
return {'Error': err_msg}
except ValueError:
memory_mb = int(memory)
log.debug("Setting memory to: {0} MB".format(memory_mb))
config_spec.memoryMB = memory_mb
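The memory-parsing block above accepts either a plain megabyte count or an ``MB``/``GB`` suffixed string. A self-contained sketch of that logic (``parse_memory_mb`` is an illustrative name; the driver does this inline), restructured so a bad unit is reported rather than silently retried as an integer:

```python
from re import findall

def parse_memory_mb(memory):
    # memory: a string such as '1024', '512MB', or '8GB'.
    try:
        # The regex splits the value into a number and a unit token;
        # a bare number yields only one token and fails the unpack.
        memory_num, memory_unit = findall(r"[^\W\d_]+|\d+\.\d+|\d+", memory)
    except ValueError:
        # No unit given: treat the whole value as megabytes.
        return int(memory)
    if memory_unit.lower() == 'mb':
        return int(memory_num)
    if memory_unit.lower() == 'gb':
        return int(float(memory_num) * 1024.0)
    return None  # invalid/unsupported unit
```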
if devices:
specs = _manage_devices(devices, object_ref)
@ -2201,12 +2241,12 @@ def create(vm_):
if not template:
clone_spec.powerOn = power
log.debug('clone_spec set to {0}\n'.format(
log.debug('clone_spec set to:\n{0}'.format(
pprint.pformat(clone_spec))
)
try:
log.info("Creating {0} from {1}({2})\n".format(vm_['name'], clone_type, vm_['clonefrom']))
log.info("Creating {0} from {1}({2})".format(vm_['name'], clone_type, vm_['clonefrom']))
salt.utils.cloud.fire_event(
'event',
'requesting instance',
@ -2242,31 +2282,27 @@ def create(vm_):
task = object_ref.Clone(folder_ref, vm_name, clone_spec)
_wait_for_task(task, vm_name, "clone", 5, 'info')
except Exception as exc:
err_msg = 'Error creating {0}: {1}'.format(vm_['name'], exc)
log.error(
'Error creating {0}: {1}'.format(
vm_['name'],
exc
),
err_msg,
# Show the traceback if the debug logging level is enabled
exc_info_on_loglevel=logging.DEBUG
)
return False
return {'Error': err_msg}
new_vm_ref = _get_mor_by_property(vim.VirtualMachine, vm_name)
# If it a template or if it does not need to be powered on, or if deploy is False then do not wait for ip
# If it is a template, or if it does not need to be powered on, then do not wait for the IP
if not template and power:
ip = _wait_for_ip(new_vm_ref, 20)
if ip:
log.debug("IP is: {0}".format(ip))
# ssh or smb using ip and install salt
log.info("[ {0} ] IPv4 is: {1}".format(vm_name, ip))
# ssh or smb using ip and install salt only if deploy is True
if deploy:
vm_['key_filename'] = key_filename
vm_['ssh_host'] = ip
salt.utils.cloud.bootstrap(vm_, __opts__)
else:
log.warning("Could not get IP information for {0}".format(vm_name))
data = show_instance(vm_name, call='action')
@ -2281,11 +2317,12 @@ def create(vm_):
},
transport=__opts__['transport']
)
else:
log.error("clonefrom option hasn\'t been specified. Exiting.")
return False
return data
return {vm_name: data}
else:
err_msg = "clonefrom option hasn\'t been specified. Exiting."
log.error(err_msg)
return {'Error': err_msg}
def create_datacenter(kwargs=None, call=None):
@ -3252,7 +3289,8 @@ def add_host(kwargs=None, call=None):
p1 = subprocess.Popen(('echo', '-n'), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p2 = subprocess.Popen(('openssl', 's_client', '-connect', '{0}:443'.format(host_name)), stdin=p1.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p3 = subprocess.Popen(('openssl', 'x509', '-noout', '-fingerprint', '-sha1'), stdin=p2.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
ssl_thumbprint = p3.stdout.read().split('=')[-1].strip()
out = salt.utils.to_str(p3.stdout.read())
ssl_thumbprint = out.split('=')[-1].strip()
log.debug('SSL thumbprint received from the host system: {0}'.format(ssl_thumbprint))
spec.sslThumbprint = ssl_thumbprint
except Exception as exc:

View File

@ -3,7 +3,15 @@
vSphere Cloud Module
====================
.. versionadded:: 2014.7.0
.. note::
.. deprecated:: Carbon
The :py:func:`vsphere <salt.cloud.clouds.vsphere>` cloud driver has been
deprecated in favor of the :py:func:`vmware <salt.cloud.clouds.vmware>`
cloud driver and will be removed in Salt Carbon. Please refer to
:doc:`Getting started with VMware </topics/cloud/vmware>` to get started
and convert your vsphere provider configurations to use the vmware driver.
The vSphere cloud module is used to control access to VMWare vSphere.
@ -76,6 +84,7 @@ import time
import salt.utils.cloud
import salt.utils.xmlutil
from salt.exceptions import SaltCloudSystemExit
from salt.utils import warn_until
# Import salt cloud libs
import salt.config as config
@ -110,6 +119,11 @@ def get_configured_provider():
'''
Return the first configured instance.
'''
warn_until(
'Carbon',
'The vsphere driver is deprecated in favor of the vmware driver and will be removed '
'in Salt Carbon. Please convert your vsphere provider configs to use the vmware driver.'
)
return config.is_provider_configured(
__opts__,
__active_provider_name__ or 'vsphere',

View File

@ -616,6 +616,7 @@ VALID_OPTS = {
'ssh_user': str,
'ssh_scan_ports': str,
'ssh_scan_timeout': float,
'ssh_identities_only': bool,
# Enable ioflo verbose logging. Warning! Very verbose!
'ioflo_verbose': int,
@ -999,6 +1000,7 @@ DEFAULT_MASTER_OPTS = {
'ssh_user': 'root',
'ssh_scan_ports': '22',
'ssh_scan_timeout': 0.01,
'ssh_identities_only': False,
'master_floscript': os.path.join(FLO_DIR, 'master.flo'),
'worker_floscript': os.path.join(FLO_DIR, 'worker.flo'),
'maintenance_floscript': os.path.join(FLO_DIR, 'maint.flo'),
@ -2344,8 +2346,9 @@ def get_id(opts, cache_minion_id=False):
try:
with salt.utils.fopen(id_cache) as idf:
name = idf.readline().strip()
if name.startswith(codecs.BOM): # Remove BOM if exists
name = name.replace(codecs.BOM, '', 1)
bname = salt.utils.to_bytes(name)
if bname.startswith(codecs.BOM): # Remove BOM if exists
name = salt.utils.to_str(bname.replace(codecs.BOM, '', 1))
if name:
log.debug('Using cached minion ID from {0}: {1}'.format(id_cache, name))
return name, False
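The ``get_id`` change above moves the BOM comparison to the bytes level, since ``codecs.BOM`` is a bytes constant under Python 3. A minimal sketch of that comparison on its own (``strip_bom`` is an illustrative helper name):

```python
import codecs

def strip_bom(bname):
    # bname: raw bytes read from the minion id cache file. Comparing
    # bytes against bytes works on both Python 2 and 3, whereas a str
    # startswith(codecs.BOM) fails on Python 3.
    if bname.startswith(codecs.BOM):
        bname = bname.replace(codecs.BOM, b'', 1)
    return bname.decode('utf-8')
```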

View File

@ -56,7 +56,7 @@ def jobber_check(self):
rms.append(jid)
data = self.shells.value[jid]
stdout, stderr = data['proc'].communicate()
ret = json.loads(stdout, object_hook=salt.utils.decode_dict)['local']
ret = json.loads(salt.utils.to_str(stdout), object_hook=salt.utils.decode_dict)['local']
route = {'src': (self.stack.value.local.name, 'manor', 'jid_ret'),
'dst': (data['msg']['route']['src'][0], None, 'remote_cmd')}
ret['cmd'] = '_return'

View File

@ -199,7 +199,7 @@ def diff_mtime_map(map1, map2):
Is there a change to the mtime map? return a boolean
'''
# check if the mtimes are the same
if cmp(sorted(map1), sorted(map2)) != 0:
if sorted(map1) != sorted(map2):
#log.debug('diff_mtime_map: the maps are different')
return True
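The ``diff_mtime_map`` change above replaces ``cmp()``, which no longer exists in Python 3, with a direct inequality on the sorted maps. A tiny sketch showing the two forms are equivalent for this "did anything change?" test (``maps_differ`` is an illustrative name):

```python
def maps_differ(map1, map2):
    # sorted() on a dict yields its sorted keys, so this compares the
    # sets of tracked paths; cmp(a, b) != 0 gave the same boolean answer.
    return sorted(map1) != sorted(map2)
```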

View File

@ -548,8 +548,8 @@ def _clean_stale(repo, local_refs=None):
# Rename heads to match the ref names from
# pygit2.Repository.listall_references()
remote_refs.append(
line.split()[-1].replace('refs/heads/',
'refs/remotes/origin/')
line.split()[-1].replace(b'refs/heads/',
b'refs/remotes/origin/')
)
except IndexError:
continue

View File

@ -165,7 +165,7 @@ def _linux_gpu_data():
devs = []
try:
lspci_out = __salt__['cmd.run']('lspci -vmm')
lspci_out = __salt__['cmd.run']('{0} -vmm'.format(lspci))
cur_dev = {}
error = False
@ -510,7 +510,7 @@ def _virtual(osdata):
if not cmd:
continue
cmd = '{0} {1}'.format(command, ' '.join(args))
cmd = '{0} {1}'.format(cmd, ' '.join(args))
ret = __salt__['cmd.run_all'](cmd)

View File

@ -781,19 +781,21 @@ def __global_logging_exception_handler(exc_type, exc_value, exc_traceback):
'''
This function will log all python exceptions.
'''
# Log the exception
logging.getLogger(__name__).error(
'An un-handled exception was caught by salt\'s global exception '
'handler:\n{0}: {1}\n{2}'.format(
exc_type.__name__,
exc_value,
''.join(traceback.format_exception(
exc_type, exc_value, exc_traceback
)).strip()
# Do not log the exception or display the traceback on Keyboard Interrupt
if exc_type.__name__ != "KeyboardInterrupt":
# Log the exception
logging.getLogger(__name__).error(
'An un-handled exception was caught by salt\'s global exception '
'handler:\n{0}: {1}\n{2}'.format(
exc_type.__name__,
exc_value,
''.join(traceback.format_exception(
exc_type, exc_value, exc_traceback
)).strip()
)
)
)
# Call the original sys.excepthook
sys.__excepthook__(exc_type, exc_value, exc_traceback)
# Call the original sys.excepthook
sys.__excepthook__(exc_type, exc_value, exc_traceback)
# Set our own exception handler as the one to use

View File

@ -22,6 +22,10 @@ from stat import S_IMODE
# Import Salt Libs
# pylint: disable=import-error,no-name-in-module,redefined-builtin
import salt.ext.six as six
if six.PY3:
import ipaddress
else:
import salt.ext.ipaddress as ipaddress
from salt.ext.six.moves import range
from salt.utils import reinit_crypto
# pylint: enable=no-name-in-module,redefined-builtin
@ -2296,28 +2300,28 @@ class Matcher(object):
def ipcidr_match(self, tgt):
'''
Matches based on ip address or CIDR notation
Matches based on IP address or CIDR notation
'''
num_parts = len(tgt.split('/'))
if num_parts > 2:
# Target is not valid CIDR
return False
elif num_parts == 2:
# Target is CIDR
return salt.utils.network.in_subnet(
tgt,
addrs=self.opts['grains'].get('ipv4', [])
)
else:
# Target is an IPv4 address
import socket
try:
socket.inet_aton(tgt)
except socket.error:
# Not a valid IPv4 address
try:
tgt = ipaddress.ip_network(tgt)
# Target is a network
proto = 'ipv{0}'.format(tgt.version)
if proto not in self.opts['grains']:
return False
else:
return tgt in self.opts['grains'].get('ipv4', [])
return salt.utils.network.in_subnet(tgt, self.opts['grains'][proto])
except: # pylint: disable=bare-except
try:
# Target should be an address
proto = 'ipv{0}'.format(ipaddress.ip_address(tgt).version)
if proto not in self.opts['grains']:
return False
else:
return tgt in self.opts['grains'][proto]
except: # pylint: disable=bare-except
log.error('Invalid IP/CIDR target: {0}'.format(tgt))
return False
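The rewritten ``ipcidr_match`` above tries the target as a network first and falls back to treating it as a single address. A simplified, self-contained sketch of that flow using only the stdlib ``ipaddress`` module (the real method consults minion grains via ``self.opts`` and ``salt.utils.network.in_subnet``; the function and ``grains`` parameter here are illustrative):

```python
import ipaddress

def ipcidr_match(tgt, grains):
    # grains: dict with 'ipv4'/'ipv6' lists of address strings, shaped
    # like the minion grains the Matcher consults.
    try:
        # A bare address also parses as a /32 (or /128) network, so one
        # membership test covers both the CIDR and single-address cases.
        net = ipaddress.ip_network(tgt)
    except ValueError:
        return False
    proto = 'ipv{0}'.format(net.version)
    if proto not in grains:
        return False
    return any(ipaddress.ip_address(addr) in net for addr in grains[proto])
```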
def range_match(self, tgt):
'''

salt/modules/bamboohr.py Normal file
View File

@ -0,0 +1,292 @@
# -*- coding: utf-8 -*-
'''
Support for BambooHR
.. versionadded:: Beryllium
Requires a ``subdomain`` and an ``apikey`` in ``/etc/salt/minion``:
.. code-block:: yaml
bamboohr:
apikey: 012345678901234567890
subdomain: mycompany
'''
# Import python libs
from __future__ import absolute_import, print_function
import yaml
import logging
# Import salt libs
import salt.utils.http
import salt.ext.six as six
from salt._compat import ElementTree as ET
log = logging.getLogger(__name__)
def __virtual__():
'''
Only load the module if a BambooHR apikey is configured
'''
if _apikey():
return True
return False
def _apikey():
'''
Get the API key
'''
return __opts__.get('bamboohr', {}).get('apikey', None)
def list_employees(order_by='id'):
'''
Show all employees for this company.
CLI Example:
salt myminion bamboohr.list_employees
By default, the return data will be keyed by ID. However, it can be ordered
by any other field. Keep in mind that if the field that is chosen contains
duplicate values (e.g., location, for a company which only has one
location), then entries sharing that value will overwrite one another,
leaving only the last one.
Therefore, it is advisable to only sort by fields that are guaranteed to be
unique.
CLI Examples:
salt myminion bamboohr.list_employees order_by=id
salt myminion bamboohr.list_employees order_by=displayName
salt myminion bamboohr.list_employees order_by=workEmail
'''
ret = {}
status, result = _query(action='employees', command='directory')
root = ET.fromstring(result)
directory = root.getchildren()
for cat in directory:
if cat.tag != 'employees':
continue
for item in cat:
emp_id = item.items()[0][1]
emp_ret = {'id': emp_id}
for details in item.getchildren():
emp_ret[details.items()[0][1]] = details.text
ret[emp_ret[order_by]] = emp_ret
return ret
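The directory parsing above walks a two-level XML tree; against a hypothetical response payload (the element names follow the code, not a verified BambooHR sample) it behaves like:

```python
import xml.etree.ElementTree as ET

# Hypothetical response body; real data comes from the
# employees/directory endpoint.
sample = ('<directory><employees>'
          '<employee id="1138"><field id="displayName">J. Doe</field></employee>'
          '</employees></directory>')

root = ET.fromstring(sample)
ret = {}
for cat in root:                # iterating the Element replaces
    if cat.tag != 'employees':  # getchildren(), removed in Python 3.9
        continue
    for item in cat:
        emp_ret = {'id': item.items()[0][1]}
        for details in item:
            emp_ret[details.items()[0][1]] = details.text
        ret[emp_ret['id']] = emp_ret

print(ret)  # {'1138': {'id': '1138', 'displayName': 'J. Doe'}}
```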
def show_employee(emp_id, fields=None):
'''
Show details for a single employee, by ID.
CLI Example:
salt myminion bamboohr.show_employee 1138
By default, the fields normally returned from bamboohr.list_employees are
returned. These fields are:
- canUploadPhoto
- department
- displayName
- firstName
- id
- jobTitle
- lastName
- location
- mobilePhone
- nickname
- photoUploaded
- photoUrl
- workEmail
- workPhone
- workPhoneExtension
If needed, a different set of fields may be specified, separated by commas:
CLI Example:
salt myminion bamboohr.show_employee 1138 displayName,dateOfBirth
A list of available fields can be found at
http://www.bamboohr.com/api/documentation/employees.php
'''
ret = {}
if fields is None:
fields = ','.join((
'canUploadPhoto',
'department',
'displayName',
'firstName',
'id',
'jobTitle',
'lastName',
'location',
'mobilePhone',
'nickname',
'photoUploaded',
'photoUrl',
'workEmail',
'workPhone',
'workPhoneExtension',
))
status, result = _query(
action='employees',
command=emp_id,
args={'fields': fields}
)
root = ET.fromstring(result)
items = root.getchildren()
ret = {'id': emp_id}
for item in items:
ret[item.items()[0][1]] = item.text
return ret
def update_employee(emp_id, key=None, value=None, items=None):
'''
Update one or more items for this employee. Specifying an empty value will
clear it for that employee.
CLI Examples:
salt myminion bamboohr.update_employee 1138 nickname Curly
salt myminion bamboohr.update_employee 1138 nickname ''
salt myminion bamboohr.update_employee 1138 items='{"nickname": "Curly"}'
salt myminion bamboohr.update_employee 1138 items='{"nickname": ""}'
'''
if items is None:
if key is None or value is None:
return {'Error': 'At least one key/value pair is required'}
items = {key: value}
elif isinstance(items, six.string_types):
items = yaml.safe_load(items)
xml_items = ''
for pair in items.keys():
xml_items += '<field id="{0}">{1}</field>'.format(pair, items[pair])
xml_items = '<employee>{0}</employee>'.format(xml_items)
status, result = _query(
action='employees',
command=emp_id,
data=xml_items,
method='POST',
)
return show_employee(emp_id, ','.join(items.keys()))
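The update path turns the item dict into a small XML payload before POSTing it; pulled out as a standalone helper (the function name is illustrative), the assembly looks like:

```python
def build_employee_xml(items):
    # One <field> element per key/value pair, wrapped in an <employee>
    # element. As in the code above, values are not XML-escaped here,
    # so the input is assumed to be XML-safe.
    xml_items = ''
    for key, value in items.items():
        xml_items += '<field id="{0}">{1}</field>'.format(key, value)
    return '<employee>{0}</employee>'.format(xml_items)

print(build_employee_xml({'nickname': 'Curly'}))
# <employee><field id="nickname">Curly</field></employee>
```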
def list_users(order_by='id'):
'''
Show all users for this company.
CLI Example:
salt myminion bamboohr.list_users
By default, the return data will be keyed by ID. However, it can be ordered
by any other field. Keep in mind that if the field that is chosen contains
duplicate values (e.g., location, for a company which only has one
location), then entries sharing that value will overwrite one another,
leaving only the last one.
Therefore, it is advisable to only sort by fields that are guaranteed to be
unique.
CLI Examples:
salt myminion bamboohr.list_users order_by=id
salt myminion bamboohr.list_users order_by=email
'''
ret = {}
status, result = _query(action='meta', command='users')
root = ET.fromstring(result)
users = root.getchildren()
for user in users:
user_id = None
user_ret = {}
for item in user.items():
user_ret[item[0]] = item[1]
if item[0] == 'id':
user_id = item[1]
for item in user.getchildren():
user_ret[item.tag] = item.text
ret[user_ret[order_by]] = user_ret
return ret
def list_meta_fields():
'''
Show all meta data fields for this company.
CLI Example:
salt myminion bamboohr.list_meta_fields
'''
ret = {}
status, result = _query(action='meta', command='fields')
root = ET.fromstring(result)
fields = root.getchildren()
for field in fields:
field_id = None
field_ret = {'name': field.text}
for item in field.items():
field_ret[item[0]] = item[1]
if item[0] == 'id':
field_id = item[1]
ret[field_id] = field_ret
return ret
def _query(action=None,
command=None,
args=None,
method='GET',
data=None):
'''
Make a web call to BambooHR
The password can be any random text, so we chose Salty text.
'''
subdomain = __opts__.get('bamboohr', {}).get('subdomain', None)
path = 'https://api.bamboohr.com/api/gateway.php/{0}/v1/'.format(
subdomain
)
if action:
path += action
if command:
path += '/{0}'.format(command)
log.debug('BambooHR URL: {0}'.format(path))
if not isinstance(args, dict):
args = {}
return_content = None
result = salt.utils.http.query(
path,
method,
username=_apikey(),
password='saltypork',
params=args,
data=data,
decode=False,
text=True,
status=True,
opts=__opts__,
)
log.debug(
'BambooHR Response Status Code: {0}'.format(
result['status']
)
)
return [result['status'], result['text']]


@ -294,8 +294,8 @@ def _run(cmd,
# Getting the environment for the runas user
# There must be a better way to do this.
py_code = (
'import os, itertools; '
'print \"\\0\".join(itertools.chain(*os.environ.items()))'
'import sys, os, itertools; '
'sys.stdout.write(\"\\0\".join(itertools.chain(*os.environ.items())))'
)
if __grains__['os'] in ['MacOS', 'Darwin']:
env_cmd = ('sudo', '-i', '-u', runas, '--',
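The switch from ``print`` to ``sys.stdout.write`` matters because the NUL-joined stream must not gain a trailing newline; a round-trip sketch of that encoding (the helper names are illustrative) is:

```python
import itertools

def encode_env(env):
    # What the spawned interpreter writes: keys and values flattened
    # into one NUL-separated stream, with no trailing newline.
    return '\0'.join(itertools.chain(*env.items()))

def decode_env(blob):
    # How the caller can rebuild the mapping from that stream:
    # even-indexed parts are keys, odd-indexed parts are values.
    parts = blob.split('\0')
    return dict(zip(parts[::2], parts[1::2]))

env = {'HOME': '/root', 'LANG': 'C'}
assert decode_env(encode_env(env)) == env
```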
@ -434,9 +434,9 @@ def _run(cmd,
if rstrip:
if out is not None:
out = out.rstrip()
out = salt.utils.to_str(out).rstrip()
if err is not None:
err = err.rstrip()
err = salt.utils.to_str(err).rstrip()
ret['pid'] = proc.process.pid
ret['retcode'] = proc.process.returncode
ret['stdout'] = out


@ -15,25 +15,24 @@ from salt.ext.six.moves.urllib.parse import urljoin as _urljoin
import salt.ext.six.moves.http_client
# pylint: enable=import-error,no-name-in-module
import base64
# Import salt libs
import salt.utils.http
try:
import requests
from requests.exceptions import ConnectionError
ENABLED = True
except ImportError:
ENABLED = False
import base64
import json
import logging
log = logging.getLogger(__name__)
from salt.exceptions import SaltInvocationError
__virtualname__ = 'consul'
def _query(function,
consul_url,
api_version='v1',
method='GET',
api_version='v1',
data=None,
query_params=None):
'''
@ -50,39 +49,29 @@ def _query(function,
if not query_params:
query_params = {}
if data is None:
data = {}
ret = {'data': '',
'res': True}
base_url = _urljoin(consul_url, '{0}/'.format(api_version))
url = _urljoin(base_url, function, False)
try:
result = requests.request(
method=method,
url=url,
headers=headers,
params=query_params,
data=data,
verify=True,
)
except ConnectionError as e:
ret['data'] = e
ret['res'] = False
return ret
result = salt.utils.http.query(
url,
method=method,
params=query_params,
data=data,
decode=True,
status=True,
header_dict=headers,
opts=__opts__,
)
if result.status_code == salt.ext.six.moves.http_client.OK:
result = result.json()
if result:
ret['data'] = result
ret['res'] = True
else:
ret['res'] = False
elif result.status_code == salt.ext.six.moves.http_client.NO_CONTENT:
if result.get('status', None) == salt.ext.six.moves.http_client.OK:
ret['data'] = result['dict']
ret['res'] = True
elif result.get('status', None) == salt.ext.six.moves.http_client.NO_CONTENT:
ret['res'] = False
elif result.status_code == salt.ext.six.moves.http_client.NOT_FOUND:
elif result.get('status', None) == salt.ext.six.moves.http_client.NOT_FOUND:
ret['data'] = 'Key not found.'
ret['res'] = False
else:
@ -95,7 +84,7 @@ def _query(function,
return ret
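After the switch to ``salt.utils.http.query``, the response is a plain dict rather than a requests object; the status handling above can be sketched standalone (the function name is illustrative), using the same stdlib status constants:

```python
import http.client

def interpret_result(result):
    # salt.utils.http.query with status=True returns a dict carrying a
    # 'status' key and (when decode=True) a 'dict' key; map the common
    # Consul status codes onto the {'data': ..., 'res': ...} shape the
    # callers expect.
    ret = {'data': '', 'res': True}
    status = result.get('status')
    if status == http.client.OK:
        ret['data'] = result.get('dict')
        ret['res'] = True
    elif status == http.client.NO_CONTENT:
        ret['res'] = False
    elif status == http.client.NOT_FOUND:
        ret['data'] = 'Key not found.'
        ret['res'] = False
    else:
        ret['res'] = False
    return ret

print(interpret_result({'status': 404}))  # {'data': 'Key not found.', 'res': False}
```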
def list(consul_url, key=None, **kwargs):
def list(consul_url=None, key=None, **kwargs):
'''
List keys in Consul
@ -144,7 +133,7 @@ def list(consul_url, key=None, **kwargs):
return ret
def get(consul_url, key, recurse=False, decode=False, raw=False):
def get(consul_url=None, key=None, recurse=False, decode=False, raw=False):
'''
Get key from Consul
@ -187,6 +176,9 @@ def get(consul_url, key, recurse=False, decode=False, raw=False):
ret['res'] = False
return ret
if not key:
raise SaltInvocationError('Required argument "key" is missing.')
query_params = {}
function = 'kv/{0}'.format(key)
if recurse:
@ -204,7 +196,7 @@ def get(consul_url, key, recurse=False, decode=False, raw=False):
return ret
def put(consul_url, key, value, **kwargs):
def put(consul_url=None, key=None, value=None, **kwargs):
'''
Put values into Consul
@ -247,6 +239,9 @@ def put(consul_url, key, value, **kwargs):
ret['res'] = False
return ret
if not key:
raise SaltInvocationError('Required argument "key" is missing.')
query_params = {}
available_sessions = session_list(consul_url=consul_url, return_list=True)
@ -304,11 +299,13 @@ def put(consul_url, key, value, **kwargs):
data = value
function = 'kv/{0}'.format(key)
method = 'PUT'
ret = _query(consul_url=consul_url,
function=function,
method='PUT',
data=data,
method=method,
data=json.dumps(data),
query_params=query_params)
if ret['res']:
ret['res'] = True
ret['data'] = 'Added key {0} with value {1}.'.format(key, value)
@ -318,7 +315,7 @@ def put(consul_url, key, value, **kwargs):
return ret
def delete(consul_url, key, **kwargs):
def delete(consul_url=None, key=None, **kwargs):
'''
Delete values from Consul
@ -350,6 +347,9 @@ def delete(consul_url, key, **kwargs):
ret['res'] = False
return ret
if not key:
raise SaltInvocationError('Required argument "key" is missing.')
query_params = {}
if 'recurse' in kwargs:
@ -379,7 +379,7 @@ def delete(consul_url, key, **kwargs):
return ret
def agent_checks(consul_url):
def agent_checks(consul_url=None):
'''
Returns the checks the local agent is managing
@ -412,7 +412,7 @@ def agent_checks(consul_url):
return ret
def agent_services(consul_url):
def agent_services(consul_url=None):
'''
Returns the services the local agent is managing
@ -445,7 +445,7 @@ def agent_services(consul_url):
return ret
def agent_members(consul_url, **kwargs):
def agent_members(consul_url=None, **kwargs):
'''
Returns the members as seen by the local serf agent
@ -483,7 +483,7 @@ def agent_members(consul_url, **kwargs):
return ret
def agent_self(consul_url):
def agent_self(consul_url=None):
'''
Returns the local node configuration
@ -518,7 +518,7 @@ def agent_self(consul_url):
return ret
def agent_maintenance(consul_url, **kwargs):
def agent_maintenance(consul_url=None, **kwargs):
'''
Manages node maintenance mode
@ -577,7 +577,7 @@ def agent_maintenance(consul_url, **kwargs):
return ret
def agent_join(consul_url, address, **kwargs):
def agent_join(consul_url=None, address=None, **kwargs):
'''
Triggers the local agent to join a node
@ -606,6 +606,9 @@ def agent_join(consul_url, address, **kwargs):
ret['res'] = False
return ret
if not address:
raise SaltInvocationError('Required argument "address" is missing.')
if 'wan' in kwargs:
query_params['wan'] = kwargs['wan']
@ -624,7 +627,7 @@ def agent_join(consul_url, address, **kwargs):
return ret
def agent_leave(consul_url, node):
def agent_leave(consul_url=None, node=None):
'''
Used to instruct the agent to force a node into the left state.
@ -652,6 +655,9 @@ def agent_leave(consul_url, node):
ret['res'] = False
return ret
if not node:
raise SaltInvocationError('Required argument "node" is missing.')
function = 'agent/force-leave/{0}'.format(node)
res = _query(consul_url=consul_url,
function=function,
@ -666,7 +672,7 @@ def agent_leave(consul_url, node):
return ret
def agent_check_register(consul_url, **kwargs):
def agent_check_register(consul_url=None, **kwargs):
'''
The register endpoint is used to add a new check to the local agent.
@ -755,7 +761,7 @@ def agent_check_register(consul_url, **kwargs):
return ret
def agent_check_deregister(consul_url, checkid):
def agent_check_deregister(consul_url=None, checkid=None):
'''
The agent will take care of deregistering the check from the Catalog.
@ -767,8 +773,7 @@ def agent_check_deregister(consul_url, checkid):
.. code-block:: bash
salt '*' consul.agent_check_register name='Memory Utilization'
script='/usr/local/bin/check_mem.py' interval='15s'
salt '*' consul.agent_check_deregister checkid='Memory Utilization'
'''
ret = {}
@ -783,6 +788,9 @@ def agent_check_deregister(consul_url, checkid):
ret['res'] = False
return ret
if not checkid:
raise SaltInvocationError('Required argument "checkid" is missing.')
function = 'agent/check/deregister/{0}'.format(checkid)
res = _query(consul_url=consul_url,
function=function,
@ -796,7 +804,7 @@ def agent_check_deregister(consul_url, checkid):
return ret
def agent_check_pass(consul_url, checkid, **kwargs):
def agent_check_pass(consul_url=None, checkid=None, **kwargs):
'''
This endpoint is used with a check that is of the TTL type. When this
is called, the status of the check is set to passing and the TTL
@ -828,6 +836,9 @@ def agent_check_pass(consul_url, checkid, **kwargs):
ret['res'] = False
return ret
if not checkid:
raise SaltInvocationError('Required argument "checkid" is missing.')
if 'note' in kwargs:
query_params['note'] = kwargs['note']
@ -845,7 +856,7 @@ def agent_check_pass(consul_url, checkid, **kwargs):
return ret
def agent_check_warn(consul_url, checkid, **kwargs):
def agent_check_warn(consul_url=None, checkid=None, **kwargs):
'''
This endpoint is used with a check that is of the TTL type. When this
is called, the status of the check is set to warning and the TTL
@ -877,6 +888,9 @@ def agent_check_warn(consul_url, checkid, **kwargs):
ret['res'] = False
return ret
if not checkid:
raise SaltInvocationError('Required argument "checkid" is missing.')
if 'note' in kwargs:
query_params['note'] = kwargs['note']
@ -894,7 +908,7 @@ def agent_check_warn(consul_url, checkid, **kwargs):
return ret
def agent_check_fail(consul_url, checkid, **kwargs):
def agent_check_fail(consul_url=None, checkid=None, **kwargs):
'''
This endpoint is used with a check that is of the TTL type. When this
is called, the status of the check is set to critical and the
@ -926,6 +940,9 @@ def agent_check_fail(consul_url, checkid, **kwargs):
ret['res'] = False
return ret
if not checkid:
raise SaltInvocationError('Required argument "checkid" is missing.')
if 'note' in kwargs:
query_params['note'] = kwargs['note']
@ -943,7 +960,7 @@ def agent_check_fail(consul_url, checkid, **kwargs):
return ret
def agent_service_register(consul_url, **kwargs):
def agent_service_register(consul_url=None, **kwargs):
'''
Used to add a new service, with an optional
health check, to the local agent.
@ -1045,7 +1062,7 @@ def agent_service_register(consul_url, **kwargs):
return ret
def agent_service_deregister(consul_url, serviceid):
def agent_service_deregister(consul_url=None, serviceid=None):
'''
Used to remove a service.
@ -1073,6 +1090,9 @@ def agent_service_deregister(consul_url, serviceid):
ret['res'] = False
return ret
if not serviceid:
raise SaltInvocationError('Required argument "serviceid" is missing.')
function = 'agent/service/deregister/{0}'.format(serviceid)
res = _query(consul_url=consul_url,
function=function,
@ -1087,7 +1107,7 @@ def agent_service_deregister(consul_url, serviceid):
return ret
def agent_service_maintenance(consul_url, serviceid, **kwargs):
def agent_service_maintenance(consul_url=None, serviceid=None, **kwargs):
'''
Used to place a service into maintenance mode.
@ -1119,6 +1139,9 @@ def agent_service_maintenance(consul_url, serviceid, **kwargs):
ret['res'] = False
return ret
if not serviceid:
raise SaltInvocationError('Required argument "serviceid" is missing.')
if 'enable' in kwargs:
query_params['enable'] = kwargs['enable']
else:
@ -1145,7 +1168,7 @@ def agent_service_maintenance(consul_url, serviceid, **kwargs):
return ret
def session_create(consul_url, **kwargs):
def session_create(consul_url=None, **kwargs):
'''
Used to create a session.
@ -1237,7 +1260,7 @@ def session_create(consul_url, **kwargs):
return ret
def session_list(consul_url, return_list=False, **kwargs):
def session_list(consul_url=None, return_list=False, **kwargs):
'''
Used to list sessions.
@ -1286,7 +1309,7 @@ def session_list(consul_url, return_list=False, **kwargs):
return ret
def session_destroy(consul_url, session, **kwargs):
def session_destroy(consul_url=None, session=None, **kwargs):
'''
Destroy session
@ -1315,6 +1338,9 @@ def session_destroy(consul_url, session, **kwargs):
ret['res'] = False
return ret
if not session:
raise SaltInvocationError('Required argument "session" is missing.')
query_params = {}
if 'dc' in kwargs:
@ -1333,7 +1359,7 @@ def session_destroy(consul_url, session, **kwargs):
return ret
def session_info(consul_url, session, **kwargs):
def session_info(consul_url=None, session=None, **kwargs):
'''
Information about a session
@ -1362,6 +1388,9 @@ def session_info(consul_url, session, **kwargs):
ret['res'] = False
return ret
if not session:
raise SaltInvocationError('Required argument "session" is missing.')
query_params = {}
if 'dc' in kwargs:
@ -1374,7 +1403,7 @@ def session_info(consul_url, session, **kwargs):
return ret
def catalog_register(consul_url, **kwargs):
def catalog_register(consul_url=None, **kwargs):
'''
Registers a new node, service, or check
@ -1491,7 +1520,7 @@ def catalog_register(consul_url, **kwargs):
return ret
def catalog_deregister(consul_url, **kwargs):
def catalog_deregister(consul_url=None, **kwargs):
'''
Deregisters a node, service, or check
@ -1555,7 +1584,7 @@ def catalog_deregister(consul_url, **kwargs):
return ret
def catalog_datacenters(consul_url):
def catalog_datacenters(consul_url=None):
'''
Return list of available datacenters from catalog.
@ -1587,7 +1616,7 @@ def catalog_datacenters(consul_url):
return ret
def catalog_nodes(consul_url, **kwargs):
def catalog_nodes(consul_url=None, **kwargs):
'''
Return list of available nodes from catalog.
@ -1626,7 +1655,7 @@ def catalog_nodes(consul_url, **kwargs):
return ret
def catalog_services(consul_url, **kwargs):
def catalog_services(consul_url=None, **kwargs):
'''
Return list of available services from catalog.
@ -1665,7 +1694,7 @@ def catalog_services(consul_url, **kwargs):
return ret
def catalog_service(consul_url, service, **kwargs):
def catalog_service(consul_url=None, service=None, **kwargs):
'''
Information about the registered service.
@ -1695,6 +1724,9 @@ def catalog_service(consul_url, service, **kwargs):
ret['res'] = False
return ret
if not service:
raise SaltInvocationError('Required argument "service" is missing.')
if 'dc' in kwargs:
query_params['dc'] = kwargs['dc']
@ -1708,7 +1740,7 @@ def catalog_service(consul_url, service, **kwargs):
return ret
def catalog_node(consul_url, node, **kwargs):
def catalog_node(consul_url=None, node=None, **kwargs):
'''
Information about the registered node.
@ -1738,6 +1770,9 @@ def catalog_node(consul_url, node, **kwargs):
ret['res'] = False
return ret
if not node:
raise SaltInvocationError('Required argument "node" is missing.')
if 'dc' in kwargs:
query_params['dc'] = kwargs['dc']
@ -1748,7 +1783,7 @@ def catalog_node(consul_url, node, **kwargs):
return ret
def health_node(consul_url, node, **kwargs):
def health_node(consul_url=None, node=None, **kwargs):
'''
Health information about the registered node.
@ -1778,6 +1813,9 @@ def health_node(consul_url, node, **kwargs):
ret['res'] = False
return ret
if not node:
raise SaltInvocationError('Required argument "node" is missing.')
if 'dc' in kwargs:
query_params['dc'] = kwargs['dc']
@ -1788,7 +1826,7 @@ def health_node(consul_url, node, **kwargs):
return ret
def health_checks(consul_url, service, **kwargs):
def health_checks(consul_url=None, service=None, **kwargs):
'''
Health information about the registered service.
@ -1818,6 +1856,9 @@ def health_checks(consul_url, service, **kwargs):
ret['res'] = False
return ret
if not service:
raise SaltInvocationError('Required argument "service" is missing.')
if 'dc' in kwargs:
query_params['dc'] = kwargs['dc']
@ -1828,7 +1869,7 @@ def health_checks(consul_url, service, **kwargs):
return ret
def health_service(consul_url, service, **kwargs):
def health_service(consul_url=None, service=None, **kwargs):
'''
Health information about the registered service.
@ -1863,6 +1904,9 @@ def health_service(consul_url, service, **kwargs):
ret['res'] = False
return ret
if not service:
raise SaltInvocationError('Required argument "service" is missing.')
if 'dc' in kwargs:
query_params['dc'] = kwargs['dc']
@ -1879,7 +1923,7 @@ def health_service(consul_url, service, **kwargs):
return ret
def health_state(consul_url, state, **kwargs):
def health_state(consul_url=None, state=None, **kwargs):
'''
Returns the checks in the state provided on the path.
@ -1896,9 +1940,9 @@ def health_state(consul_url, state, **kwargs):
.. code-block:: bash
salt '*' consul.health_service service='redis1'
salt '*' consul.health_state state='redis1'
salt '*' consul.health_service service='redis1' passing='True'
salt '*' consul.health_state service='redis1' passing='True'
'''
ret = {}
@ -1914,6 +1958,9 @@ def health_state(consul_url, state, **kwargs):
ret['res'] = False
return ret
if not state:
raise SaltInvocationError('Required argument "state" is missing.')
if 'dc' in kwargs:
query_params['dc'] = kwargs['dc']
@ -1929,7 +1976,7 @@ def health_state(consul_url, state, **kwargs):
return ret
def status_leader(consul_url):
def status_leader(consul_url=None):
'''
Returns the current Raft leader
@ -1994,7 +2041,7 @@ def status_peers(consul_url):
return ret
def acl_create(consul_url, **kwargs):
def acl_create(consul_url=None, **kwargs):
'''
Create a new ACL token.
@ -2052,7 +2099,7 @@ def acl_create(consul_url, **kwargs):
return ret
def acl_update(consul_url, **kwargs):
def acl_update(consul_url=None, **kwargs):
'''
Update an ACL token.
@ -2119,7 +2166,7 @@ def acl_update(consul_url, **kwargs):
return ret
def acl_delete(consul_url, **kwargs):
def acl_delete(consul_url=None, **kwargs):
'''
Delete an ACL token.
@ -2169,7 +2216,7 @@ def acl_delete(consul_url, **kwargs):
return ret
def acl_info(consul_url, **kwargs):
def acl_info(consul_url=None, **kwargs):
'''
Information about an ACL token.
@ -2210,7 +2257,7 @@ def acl_info(consul_url, **kwargs):
return ret
def acl_clone(consul_url, **kwargs):
def acl_clone(consul_url=None, **kwargs):
'''
Clone an ACL token.
@ -2260,7 +2307,7 @@ def acl_clone(consul_url, **kwargs):
return ret
def acl_list(consul_url, **kwargs):
def acl_list(consul_url=None, **kwargs):
'''
List the ACL tokens.
@ -2300,7 +2347,7 @@ def acl_list(consul_url, **kwargs):
return ret
def event_fire(consul_url, name, **kwargs):
def event_fire(consul_url=None, name=None, **kwargs):
'''
Fire an event.
@ -2333,10 +2380,8 @@ def event_fire(consul_url, name, **kwargs):
ret['res'] = False
return ret
if not 'name':
ret['message'] = 'Required paramter "name" is missing.'
ret['res'] = False
return ret
if not name:
raise SaltInvocationError('Required argument "name" is missing.')
if 'dc' in kwargs:
query_params = kwargs['dc']
@ -2367,7 +2412,7 @@ def event_fire(consul_url, name, **kwargs):
return ret
def event_list(consul_url, **kwargs):
def event_list(consul_url=None, **kwargs):
'''
List the recent events.


@ -6,13 +6,14 @@ Common resources for LXC and systemd-nspawn containers
These functions are not designed to be called directly, but instead from the
:mod:`lxc <salt.modules.lxc>`, :mod:`nspawn <salt.modules.nspawn>`, and
:mod:`docker-ng <salt.modules.dockerng>` execution modules. They provide for
:mod:`dockerng <salt.modules.dockerng>` execution modules. They provide for
common logic to be re-used for common actions.
'''
# Import python libs
from __future__ import absolute_import
import functools
import copy
import logging
import os
import pipes
@ -39,7 +40,7 @@ def _validate(wrapped):
container_type = kwargs.get('container_type')
exec_driver = kwargs.get('exec_driver')
valid_driver = {
'docker-ng': ('lxc-attach', 'nsenter', 'docker-exec'),
'dockerng': ('lxc-attach', 'nsenter', 'docker-exec'),
'lxc': ('lxc-attach',),
'nspawn': ('nsenter',),
}
@ -123,11 +124,18 @@ def run(name,
python_shell=True,
output_loglevel='debug',
ignore_retcode=False,
path=None,
use_vt=False,
keep_env=None):
'''
Common logic for running shell commands in containers
path
path to the container parent (for LXC only)
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
CLI Example:
.. code-block:: bash
@ -158,6 +166,8 @@ def run(name,
if exec_driver == 'lxc-attach':
full_cmd = 'lxc-attach '
if path:
full_cmd += '-P {0} '.format(pipes.quote(path))
if keep_env is not True:
full_cmd += '--clear-env '
if 'PATH' not in to_keep:
@ -262,12 +272,19 @@ def copy_to(name,
source,
dest,
container_type=None,
path=None,
exec_driver=None,
overwrite=False,
makedirs=False):
'''
Common logic for copying files to containers
path
path to the container parent (for LXC only)
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
CLI Example:
.. code-block:: bash
@ -276,9 +293,27 @@ def copy_to(name,
'''
# Get the appropriate functions
state = __salt__['{0}.state'.format(container_type)]
run_all = __salt__['{0}.run_all'.format(container_type)]
c_state = state(name)
def run_all(*args, **akwargs):
akwargs = copy.deepcopy(akwargs)
if container_type in ['lxc'] and 'path' not in akwargs:
akwargs['path'] = path
return __salt__['{0}.run_all'.format(container_type)](
*args, **akwargs)
state_kwargs = {}
cmd_kwargs = {'ignore_retcode': True}
if container_type in ['lxc']:
cmd_kwargs['path'] = path
state_kwargs['path'] = path
def _state(name):
if state_kwargs:
return state(name, **state_kwargs)
else:
return state(name)
c_state = _state(name)
if c_state != 'running':
raise CommandExecutionError(
'Container \'{0}\' is not running'.format(name)
@ -302,7 +337,7 @@ def copy_to(name,
raise SaltInvocationError('Destination path must be absolute')
if run_all(name,
'test -d {0}'.format(pipes.quote(dest)),
ignore_retcode=True)['retcode'] == 0:
**cmd_kwargs)['retcode'] == 0:
# Destination is a directory, full path to dest file will include the
# basename of the source file.
dest = os.path.join(dest, source_name)
@ -313,10 +348,11 @@ def copy_to(name,
dest_dir, dest_name = os.path.split(dest)
if run_all(name,
'test -d {0}'.format(pipes.quote(dest_dir)),
ignore_retcode=True)['retcode'] != 0:
**cmd_kwargs)['retcode'] != 0:
if makedirs:
result = run_all(name,
'mkdir -p {0}'.format(pipes.quote(dest_dir)))
'mkdir -p {0}'.format(pipes.quote(dest_dir)),
**cmd_kwargs)
if result['retcode'] != 0:
error = ('Unable to create destination directory {0} in '
'container \'{1}\''.format(dest_dir, name))
@ -330,7 +366,7 @@ def copy_to(name,
)
if not overwrite and run_all(name,
'test -e {0}'.format(pipes.quote(dest)),
ignore_retcode=True)['retcode'] == 0:
**cmd_kwargs)['retcode'] == 0:
raise CommandExecutionError(
'Destination path {0} already exists. Use overwrite=True to '
'overwrite it'.format(dest)
@ -350,9 +386,12 @@ def copy_to(name,
# and passing it as stdin to run(). This will keep down memory
# usage for the minion and make the operation run quicker.
if exec_driver == 'lxc-attach':
lxcattach = 'lxc-attach'
if path:
lxcattach += ' -P {0}'.format(pipes.quote(path))
copy_cmd = (
'cat "{0}" | lxc-attach --clear-env --set-var {1} -n {2} -- '
'tee "{3}"'.format(local_file, PATH, name, dest)
'cat "{0}" | {4} --clear-env --set-var {1} -n {2} -- '
'tee "{3}"'.format(local_file, PATH, name, dest, lxcattach)
)
elif exec_driver == 'nsenter':
pid = __salt__['{0}.pid'.format(container_type)](name)
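The ``lxc-attach`` branch streams the file through a shell pipeline rather than buffering it in memory; a standalone rendering of how that command string is assembled (the function name and PATH default are illustrative) is:

```python
import shlex

def build_lxc_copy_cmd(local_file, dest, name, path=None,
                       PATH='PATH=/usr/local/bin:/usr/bin:/bin'):
    # Stream the local file into lxc-attach and tee it to the
    # destination inside the container, honoring the optional -P
    # container-parent path added in this change.
    lxcattach = 'lxc-attach'
    if path:
        lxcattach += ' -P {0}'.format(shlex.quote(path))
    return ('cat "{0}" | {4} --clear-env --set-var {1} -n {2} -- '
            'tee "{3}"'.format(local_file, PATH, name, dest, lxcattach))

print(build_lxc_copy_cmd('/tmp/motd', '/etc/motd', 'web1', path='/srv/lxc'))
```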


@ -10,7 +10,6 @@ import random
# Import salt libs
import salt.utils
import salt.ext.six as six
from salt.ext.six.moves import range
@ -27,10 +26,11 @@ def __virtual__():
def _encode(string):
if isinstance(string, six.text_type):
string = string.encode('utf-8')
elif not string:
string = ''
try:
string = salt.utils.to_str(string)
except TypeError:
if not string:
string = ''
return "{0}".format(string)


@ -122,7 +122,7 @@ def show_config(jail):
ret = {}
if subprocess.call(["jls", "-nq", "-j", jail]) == 0:
jls = subprocess.check_output(["jls", "-nq", "-j", jail]) # pylint: disable=minimum-python-version
jailopts = shlex.split(jls)
jailopts = shlex.split(salt.utils.to_str(jls))
for jailopt in jailopts:
if '=' not in jailopt:
ret[jailopt.strip().rstrip(";")] = '1'
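The ``to_str`` fix above is needed because ``check_output`` returns bytes on Python 3 while ``shlex.split`` wants text; a standalone sketch of the tokenizing (the ``=``-bearing branch is assumed from context, since the hunk is truncated) is:

```python
import shlex

def parse_jail_opts(jls_output):
    # jls -nq prints name=value tokens; bare tokens (no '=') are
    # boolean flags, stored as '1' as in the code above.
    ret = {}
    for jailopt in shlex.split(jls_output):
        if '=' not in jailopt:
            ret[jailopt.strip().rstrip(';')] = '1'
        else:
            key, value = jailopt.strip().rstrip(';').split('=', 1)
            ret[key] = value
    return ret

print(parse_jail_opts('allow.mount host.hostname=www1.example.net;'))
```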


@ -40,6 +40,8 @@ __outputter__ = {
# http://stackoverflow.com/a/12414913/127816
_infinitedict = lambda: collections.defaultdict(_infinitedict)
_non_existent_key = 'NonExistentValueMagicNumberSpK3hnufdHfeBUXCfqVK'
log = logging.getLogger(__name__)
@ -167,7 +169,7 @@ def item(*args, **kwargs):
'''
ret = {}
default = kwargs.get('default', '')
delimiter = kwargs.get('delimiter', ':')
delimiter = kwargs.get('delimiter', DEFAULT_TARGET_DELIM)
try:
for arg in args:
@ -274,7 +276,7 @@ def setval(key, val, destructive=False):
return setvals({key: val}, destructive)
def append(key, val, convert=False, delimiter=':'):
def append(key, val, convert=False, delimiter=DEFAULT_TARGET_DELIM):
'''
.. versionadded:: 0.17.0
@ -545,8 +547,8 @@ def get_or_set_hash(name,
if ret is None:
val = ''.join([random.SystemRandom().choice(chars) for _ in range(length)])
if ':' in name:
root, rest = name.split(':', 1)
if DEFAULT_TARGET_DELIM in name:
root, rest = name.split(DEFAULT_TARGET_DELIM, 1)
curr = get(root, _infinitedict())
val = _dict_from_path(rest, val)
curr.update(val)
@ -555,3 +557,115 @@ def get_or_set_hash(name,
setval(name, val)
return get(name)
def set(key,
val='',
force=False,
destructive=False,
delimiter=DEFAULT_TARGET_DELIM):
'''
Set a key to an arbitrary value. It is used like setval but works
with nested keys.
This function is conservative. It will only overwrite an entry if
its value and the given one are not a list or a dict. The `force`
parameter is used to allow overwriting in all cases.
.. versionadded:: FIXME
:param force: Force writing over existing entry if given or existing
values are list or dict. Defaults to False.
:param destructive: If an operation results in a key being removed,
delete the key, too. Defaults to False.
:param delimiter:
Specify an alternate delimiter to use when traversing a nested dict
CLI Example:
.. code-block:: bash
salt '*' grains.set 'apps:myApp:port' 2209
salt '*' grains.set 'apps:myApp' '{port: 2209}'
'''
ret = {'comment': [],
'changes': {},
'result': True}
# Get val type
_new_value_type = 'simple'
if isinstance(val, dict):
_new_value_type = 'complex'
elif isinstance(val, list):
_new_value_type = 'complex'
_existing_value = get(key, _non_existent_key, delimiter)
_value = _existing_value
_existing_value_type = 'simple'
if _existing_value == _non_existent_key:
_existing_value_type = None
elif isinstance(_existing_value, dict):
_existing_value_type = 'complex'
elif isinstance(_existing_value, list):
_existing_value_type = 'complex'
if _existing_value_type is not None and _existing_value == val:
ret['comment'] = 'The value \'{0}\' was already set for key \'{1}\''.format(val, key)
return ret
if _existing_value is not None and not force:
if _existing_value_type == 'complex':
ret['comment'] = 'The key \'{0}\' exists but is a dict or a list. '.format(key) \
+ 'Use \'force=True\' to overwrite.'
ret['result'] = False
return ret
elif _new_value_type == 'complex' and _existing_value_type is not None:
ret['comment'] = 'The key \'{0}\' exists and the given value is a '.format(key) \
+ 'dict or a list. Use \'force=True\' to overwrite.'
ret['result'] = False
return ret
else:
_value = val
else:
_value = val
# Process nested grains
while delimiter in key:
key, rest = key.rsplit(delimiter, 1)
_existing_value = get(key, {}, delimiter)
if isinstance(_existing_value, dict):
if _value is None and destructive:
if rest in _existing_value.keys():
_existing_value.pop(rest)
else:
_existing_value.update({rest: _value})
elif isinstance(_existing_value, list):
_list_updated = False
for _index, _item in enumerate(_existing_value):
if _item == rest:
_existing_value[_index] = {rest: _value}
_list_updated = True
elif isinstance(_item, dict) and rest in _item:
_item.update({rest: _value})
_list_updated = True
if not _list_updated:
_existing_value.append({rest: _value})
elif _existing_value == rest or force:
_existing_value = {rest: _value}
else:
ret['comment'] = 'The key \'{0}\' value is \'{1}\', '.format(key, _existing_value) \
+ 'which is different from the provided key \'{0}\'. '.format(rest) \
+ 'Use \'force=True\' to overwrite.'
ret['result'] = False
return ret
_value = _existing_value
_setval_ret = setval(key, _value, destructive=destructive)
if isinstance(_setval_ret, dict):
ret['changes'] = _setval_ret
else:
ret['comment'] = _setval_ret
ret['result'] = False
return ret
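The nested-key handling above builds the new value from the right with ``rsplit``. Stripped of the force/destructive/list handling, the same end result can be sketched top-down with plain dicts (``set_nested`` is a hypothetical helper, not part of Salt):

```python
def set_nested(data, key, value, delimiter=':'):
    # Walk the delimited path left to right, creating intermediate
    # dicts as needed, then assign the value at the final segment.
    # grains.set builds the equivalent from the right via rsplit so
    # that it can also merge into lists and honor force/destructive.
    node = data
    parts = key.split(delimiter)
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value
    return data

grains = {}
set_nested(grains, 'apps:myApp:port', 2209)
# grains is now {'apps': {'myApp': {'port': 2209}}}
```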

View File

@ -3,10 +3,16 @@
A collection of hashing and encoding functions
'''
from __future__ import absolute_import
# Import python libs
import base64
import hashlib
import hmac
# Import third-party libs
import salt.utils
import salt.ext.six as six
def base64_encodestring(instr):
'''
@ -20,6 +26,10 @@ def base64_encodestring(instr):
salt '*' hashutil.base64_encodestring 'get salted'
'''
if six.PY3:
b = salt.utils.to_bytes(instr)
b64 = base64.encodebytes(b)
return salt.utils.to_str(b64)
return base64.encodestring(instr)
@ -35,6 +45,13 @@ def base64_decodestring(instr):
salt '*' hashutil.base64_decodestring 'Z2V0IHNhbHRlZA==\\n'
'''
if six.PY3:
b = salt.utils.to_bytes(instr)
data = base64.decodebytes(b)
try:
return salt.utils.to_str(data)
except UnicodeDecodeError:
return data
return base64.decodestring(instr)
@ -50,6 +67,9 @@ def md5_digest(instr):
salt '*' hashutil.md5_digest 'get salted'
'''
if six.PY3:
b = salt.utils.to_bytes(instr)
return hashlib.md5(b).hexdigest()
return hashlib.md5(instr).hexdigest()
@ -65,6 +85,9 @@ def sha256_digest(instr):
salt '*' hashutil.sha256_digest 'get salted'
'''
if six.PY3:
b = salt.utils.to_bytes(instr)
return hashlib.sha256(b).hexdigest()
return hashlib.sha256(instr).hexdigest()
@ -80,6 +103,9 @@ def sha512_digest(instr):
salt '*' hashutil.sha512_digest 'get salted'
'''
if six.PY3:
b = salt.utils.to_bytes(instr)
return hashlib.sha512(b).hexdigest()
return hashlib.sha512(instr).hexdigest()
@ -95,8 +121,16 @@ def hmac_signature(string, shared_secret, challenge_hmac):
.. code-block:: bash
salt '*' hashutil.hmac_signature 'get salted' 'shared secret' 'NS2BvKxFRk+rndAlFbCYIFNVkPtI/3KiIYQw4okNKU8='
salt '*' hashutil.hmac_signature 'get salted' 'shared secret' 'eBWf9bstXg+NiP5AOwppB5HMvZiYMPzEM9W5YMm/AmQ='
'''
hmac_hash = hmac.new(string, shared_secret, hashlib.sha256)
if six.PY3:
msg = salt.utils.to_bytes(string)
key = salt.utils.to_bytes(shared_secret)
challenge = salt.utils.to_bytes(challenge_hmac)
else:
msg = string
key = shared_secret
challenge = challenge_hmac
hmac_hash = hmac.new(key, msg, hashlib.sha256)
valid_hmac = base64.b64encode(hmac_hash.digest())
return valid_hmac == challenge_hmac
return valid_hmac == challenge
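The PY3 branch above exists because ``hmac.new`` and ``hashlib`` require bytes on Python 3. A self-contained Python 3 sketch of the same verification, using ``str.encode`` in place of ``salt.utils.to_bytes`` and the timing-safe ``hmac.compare_digest`` rather than ``==``:

```python
import base64
import hashlib
import hmac

def hmac_signature(string, shared_secret, challenge_hmac):
    # Encode all three inputs to bytes, mirroring the PY3 branch above.
    msg = string.encode('utf-8')
    key = shared_secret.encode('utf-8')
    challenge = challenge_hmac.encode('utf-8')
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    valid = base64.b64encode(digest)
    # compare_digest avoids leaking where the comparison diverges
    return hmac.compare_digest(valid, challenge)
```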

View File

@ -24,6 +24,7 @@ from subprocess import Popen, PIPE, STDOUT
# Import Salt Libs
from salt.modules.inspectlib.dbhandle import DBHandle
from salt.modules.inspectlib.exceptions import (InspectorSnapshotException)
import salt.utils
from salt.utils import fsutils
from salt.utils import reinit_crypto
@ -73,6 +74,7 @@ class Inspector(object):
pkg_name = None
pkg_configs = []
out = salt.utils.to_str(out)
for line in out.split(os.linesep):
line = line.strip()
if not line:
@ -99,6 +101,7 @@ class Inspector(object):
cfgs = list()
out, err = self._syscall("rpm", None, None, '-V', '--nodeps', '--nodigest',
'--nosignature', '--nomtime', '--nolinkto', pkg_name)
out = salt.utils.to_str(out)
for line in out.split(os.linesep):
line = line.strip()
if not line or line.find(" c ") < 0 or line.split(" ")[0].find("5") < 0:
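The filter above keeps only ``rpm -V`` lines flagged as config files (the `` c `` marker) whose digest changed (a ``5`` in the attribute column). A standalone sketch of that filter, with hypothetical sample output:

```python
def changed_config_files(rpm_verify_output):
    # Collect paths of config files whose checksum changed, using the
    # same line test as the Salt inspector code above.
    changed = []
    for line in rpm_verify_output.splitlines():
        line = line.strip()
        # Skip blanks, non-config entries, and entries whose attribute
        # column lacks the '5' (digest changed) flag.
        if not line or line.find(" c ") < 0 or line.split(" ")[0].find("5") < 0:
            continue
        changed.append(line.split()[-1])
    return changed

# Hypothetical `rpm -V` output for illustration
sample = (
    "S.5....T.  c /etc/ssh/sshd_config\n"
    "..?......    /usr/bin/ssh\n"
    ".......T.  c /etc/ssh/ssh_config\n"
)
# Only sshd_config carries both the 'c' marker and the '5' flag
```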

View File

@ -1,30 +1,34 @@
# -*- coding: utf-8 -*-
'''
Support IPMI commands over LAN
Support IPMI commands over LAN. This module does not talk to the local
system's hardware through IPMI drivers; it uses the Python module `pyghmi`.
:depends: Python module pyghmi
:depends: Python module pyghmi.
You can install pyghmi using pip:
:warning: pyghmi version >= 0.6.21
.. code-block:: bash
you can install pyghmi using something like:
pip install pyghmi
git clone https://github.com/stackforge/pyghmi.git
sudo mv pyghmi/pyghmi /usr/lib/python2.6/site-packages/pyghmi
:configuration: The following configuration defaults can be
define (pillar or config files):
The following configuration defaults can be defined in the pillar:
.. code-block:: python
ipmi.config:
api_host: 127.0.0.1
api_user: admin
api_pass: apassword
api_port: 623
api_kg: None
ipmi.config:
api_host: 127.0.0.1
api_user: admin
api_pass: apassword
api_port: 623
api_kg: None
most calls can override the api connection config defaults:
Usage can override the config defaults:
salt-call ipmi.get_user api_host=myipmienabled.system
api_user=admin api_pass=pass
uid=1
.. code-block:: bash
salt-call ipmi.get_user api_host=myipmienabled.system
api_user=admin api_pass=pass
uid=1
'''
# Import Python Libs
@ -54,7 +58,7 @@ def _get_config(**kwargs):
'api_port': 623,
'api_user': 'admin',
'api_pass': '',
'api_key': None,
'api_kg': None,
'api_login_timeout': 2,
}
if '__salt__' in globals():
@ -65,7 +69,7 @@ def _get_config(**kwargs):
return config
class IpmiCommand(object):
class _IpmiCommand(object):
o = None
def __init__(self, **kwargs):
@ -73,7 +77,7 @@ class IpmiCommand(object):
config = _get_config(**kwargs)
self.o = command.Command(bmc=config['api_host'], userid=config['api_user'],
password=config['api_pass'], port=config['api_port'],
kg=config['api_key'])
kg=config['api_kg'])
def __enter__(self):
return self.o
@ -83,7 +87,7 @@ class IpmiCommand(object):
self.o.ipmi_session.logout()
class IpmiSession(object):
class _IpmiSession(object):
o = None
def _onlogon(self, response):
@ -97,7 +101,7 @@ class IpmiSession(object):
userid=config['api_user'],
password=config['api_pass'],
port=config['api_port'],
kg=config['api_key'],
kg=config['api_kg'],
onlogon=self._onlogon)
while not self.o.logged:
# override timeout
@ -126,10 +130,11 @@ def raw_command(netfn, command, bridge_request=None, data=(), retry=True, delay_
the bridge request.
:param data: Command data as a tuple or list
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:returns: dict -- The response from IPMI device
@ -140,7 +145,7 @@ def raw_command(netfn, command, bridge_request=None, data=(), retry=True, delay_
salt-call ipmi.raw_command netfn=0x06 command=0x46 data=[0x02]
# this will return the name of the user with id 2 in bytes
'''
with IpmiSession(**kwargs) as s:
with _IpmiSession(**kwargs) as s:
r = s.raw_command(netfn=int(netfn),
command=int(command),
bridge_request=bridge_request,
@ -156,10 +161,11 @@ def fast_connect_test(**kwargs):
This uses an aggressive timeout value!
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Examples:
@ -170,7 +176,7 @@ def fast_connect_test(**kwargs):
try:
if 'api_login_timeout' not in kwargs:
kwargs['api_login_timeout'] = 0
with IpmiSession(**kwargs) as s:
with _IpmiSession(**kwargs) as s:
# TODO: should a test command be fired?
#s.raw_command(netfn=6, command=1, retry=False)
return True
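Renaming ``IpmiSession``/``IpmiCommand`` with a leading underscore hides them from the Salt loader while keeping the context-manager pattern: the ``with`` block hands out the raw pyghmi object and guarantees logout on exit. A minimal stand-in of that pattern (``FakeSession`` is invented for illustration; no IPMI hardware is involved):

```python
class FakeSession:
    """Hypothetical stand-in for a pyghmi session object."""
    def __init__(self):
        self.logged = True

    def logout(self):
        self.logged = False

class ManagedSession:
    """Mirror of the _IpmiSession shape: expose the raw session inside
    `with`, and log out on exit even if the body raised."""
    def __init__(self):
        self.o = FakeSession()

    def __enter__(self):
        return self.o

    def __exit__(self, exc_type, exc, tb):
        if self.o and self.o.logged:
            self.o.logout()

with ManagedSession() as s:
    assert s.logged  # session usable inside the block
```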
@ -187,51 +193,50 @@ def set_channel_access(channel=14, access_update_mode='non_volatile',
Set channel access
:param channel: number [1:7]
:param access_update_mode:
dont_change = don't set or change Channel Access
non_volatile = set non-volatile Channel Access
volatile = set volatile (active) setting of Channel Access
:param access_update_mode: one of
- 'dont_change' = don't set or change Channel Access
- 'non_volatile' = set non-volatile Channel Access
- 'volatile' = set volatile (active) setting of Channel Access
:param alerting: PEF Alerting Enable/Disable
True = enable PEF Alerting
False = disable PEF Alerting on this channel
(Alert Immediate command can still be used to generate alerts)
- True = enable PEF Alerting
- False = disable PEF Alerting on this channel
(Alert Immediate command can still be used to generate alerts)
:param per_msg_auth: Per-message Authentication
True = enable
False = disable Per-message Authentication. [Authentication required to
activate any session on this channel, but authentication not
used on subsequent packets for the session.]
- True = enable
- False = disable Per-message Authentication. [Authentication required to
activate any session on this channel, but authentication not
used on subsequent packets for the session.]
:param user_level_auth: User Level Authentication Enable/Disable.
True = enable User Level Authentication. All User Level commands are
to be authenticated per the Authentication Type that was
negotiated when the session was activated.
False = disable User Level Authentication. Allow User Level commands to
be executed without being authenticated.
If the option to disable User Level Command authentication is
accepted, the BMC will accept packets with Authentication Type
set to None if they contain user level commands.
For outgoing packets, the BMC returns responses with the same
Authentication Type that was used for the request.
- True = enable User Level Authentication. All User Level commands are
to be authenticated per the Authentication Type that was
negotiated when the session was activated.
- False = disable User Level Authentication. Allow User Level commands to
be executed without being authenticated.
If the option to disable User Level Command authentication is
accepted, the BMC will accept packets with Authentication Type
set to None if they contain user level commands.
For outgoing packets, the BMC returns responses with the same
Authentication Type that was used for the request.
:param access_mode: Access Mode for IPMI messaging
(PEF Alerting is enabled/disabled separately from IPMI messaging)
disabled = disabled for IPMI messaging
pre_boot = pre-boot only channel only available when system is in a
powered down state or in BIOS prior to start of boot.
always = channel always available regardless of system mode.
BIOS typically dedicates the serial connection to the BMC.
shared = same as always available, but BIOS typically leaves the
serial port available for software use.
(PEF Alerting is enabled/disabled separately from IPMI messaging)
* disabled = disabled for IPMI messaging
* pre_boot = pre-boot only; channel available only when the system is in a
powered down state or in BIOS prior to start of boot.
* always = channel always available regardless of system mode.
BIOS typically dedicates the serial connection to the BMC.
* shared = same as always available, but BIOS typically leaves the
serial port available for software use.
:param privilege_update_mode: Channel Privilege Level Limit.
This value sets the maximum privilege level
that can be accepted on the specified channel.
dont_change = don't set or change channel Privilege Level Limit
non_volatile = non-volatile Privilege Level Limit according
volatile = volatile setting of Privilege Level Limit
* dont_change = don't set or change channel Privilege Level Limit
* non_volatile = non-volatile Privilege Level Limit according
* volatile = volatile setting of Privilege Level Limit
:param privilege_level: Channel Privilege Level Limit
* reserved = unused
@ -242,10 +247,11 @@ def set_channel_access(channel=14, access_update_mode='non_volatile',
* proprietary = used by OEM
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Examples:
@ -253,7 +259,7 @@ def set_channel_access(channel=14, access_update_mode='non_volatile',
salt-call ipmi.set_channel_access privilege_level='administrator'
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.set_channel_access(channel, access_update_mode, alerting, per_msg_auth, user_level_auth,
access_mode, privilege_update_mode, privilege_level)
@ -264,33 +270,35 @@ def get_channel_access(channel=14, read_mode='non_volatile', **kwargs):
:param channel: number [1:7]
:param read_mode:
non_volatile = get non-volatile Channel Access
volatile = get present volatile (active) setting of Channel Access
- non_volatile = get non-volatile Channel Access
- volatile = get present volatile (active) setting of Channel Access
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:return: A Python dict with the following keys/values::
{
- alerting:
- per_msg_auth:
- user_level_auth:
- access_mode:{
0: 'disabled',
1: 'pre_boot',
2: 'always',
3: 'shared'
:return: A Python dict with the following keys/values:
.. code-block:: python
{
alerting:
per_msg_auth:
user_level_auth:
access_mode:{ (ONE OF)
0: 'disabled',
1: 'pre_boot',
2: 'always',
3: 'shared'
}
privilege_level: { (ONE OF)
1: 'callback',
2: 'user',
3: 'operator',
4: 'administrator',
5: 'proprietary',
}
}
- privilege_level: {
1: 'callback',
2: 'user',
3: 'operator',
4: 'administrator',
5: 'proprietary'
}
}
CLI Examples:
@ -298,7 +306,7 @@ def get_channel_access(channel=14, read_mode='non_volatile', **kwargs):
salt-call ipmi.get_channel_access channel=1
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.get_channel_access(channel)
@ -308,10 +316,11 @@ def get_channel_info(channel=14, **kwargs):
:param channel: number [1:7]
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:return:
session_support:
@ -328,7 +337,7 @@ def get_channel_info(channel=14, **kwargs):
salt-call ipmi.get_channel_info
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.get_channel_info(channel)
@ -373,18 +382,19 @@ def set_user_access(uid, channel=14, callback=True, link_auth=True, ipmi_msg=Tru
:param privilege_level:
User Privilege Limit. (Determines the maximum privilege level that the
user is allowed to switch to on the specified channel.)
* callback
* user
* operator
* administrator
* proprietary
* no_access
- callback
- user
- operator
- administrator
- proprietary
- no_access
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Examples:
@ -392,7 +402,7 @@ def set_user_access(uid, channel=14, callback=True, link_auth=True, ipmi_msg=Tru
salt-call ipmi.set_user_access uid=2 privilege_level='operator'
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.set_user_access(uid, channel, callback, link_auth, ipmi_msg, privilege_level)
@ -403,22 +413,23 @@ def get_user_access(uid, channel=14, **kwargs):
:param uid: user number [1:16]
:param channel: number [1:7]
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:return:
channel_info:
max_user_count = maximum number of user IDs on this channel
enabled_users = count of User ID slots presently in use
users_with_fixed_names = count of user IDs with fixed names
- max_user_count = maximum number of user IDs on this channel
- enabled_users = count of User ID slots presently in use
- users_with_fixed_names = count of user IDs with fixed names
access:
callback
link_auth
ipmi_msg
privilege_level: [reserved, callback, user, operator
administrator, proprietary, no_access]
- callback
- link_auth
- ipmi_msg
- privilege_level: [reserved, callback, user, operator
administrator, proprietary, no_access]
@ -429,7 +440,7 @@ def get_user_access(uid, channel=14, **kwargs):
salt-call ipmi.get_user_access uid=2
'''
## user access available during call-in or callback direct connection
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.get_user_access(uid, channel=channel)
@ -440,10 +451,11 @@ def set_user_name(uid, name, **kwargs):
:param uid: user number [1:16]
:param name: username (limit of 16 bytes)
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Examples:
@ -451,7 +463,7 @@ def set_user_name(uid, name, **kwargs):
salt-call ipmi.set_user_name uid=2 name='steverweber'
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.set_user_name(uid, name)
@ -462,10 +474,11 @@ def get_user_name(uid, return_none_on_error=True, **kwargs):
:param uid: user number [1:16]
:param return_none_on_error: return None on error
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Examples:
@ -473,7 +486,7 @@ def get_user_name(uid, return_none_on_error=True, **kwargs):
salt-call ipmi.get_user_name uid=2
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.get_user_name(uid, return_none_on_error=True)
@ -484,17 +497,18 @@ def set_user_password(uid, mode='set_password', password=None, **kwargs):
:param uid: id number of user. see: get_names_uid()['name']
:param mode:
disable = disable user connections
enable = enable user connections
set_password = set or ensure password
test_password = test password is correct
- disable = disable user connections
- enable = enable user connections
- set_password = set or ensure password
- test_password = test password is correct
:param password: max 16 char string
(optional when mode is [disable or enable])
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:return:
True on success
@ -508,7 +522,7 @@ def set_user_password(uid, mode='set_password', password=None, **kwargs):
uid=1 password=newPass
salt-call ipmi.set_user_password uid=1 mode=enable
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
s.set_user_password(uid, mode='set_password', password=password)
return True
@ -524,10 +538,11 @@ def get_health(**kwargs):
good health: {'badreadings': [], 'health': 0}
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Example:
@ -535,7 +550,7 @@ def get_health(**kwargs):
salt-call ipmi.get_health api_host=127.0.0.1 api_user=admin api_pass=pass
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.get_health()
@ -547,10 +562,11 @@ def get_power(**kwargs):
either 'on' or 'off' to indicate current state.
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Example:
@ -558,7 +574,7 @@ def get_power(**kwargs):
salt-call ipmi.get_power api_host=127.0.0.1 api_user=admin api_pass=pass
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.get_power()['powerstate']
@ -569,10 +585,11 @@ def get_sensor_data(**kwargs):
Iterates sensor reading objects
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Example:
@ -580,7 +597,7 @@ def get_sensor_data(**kwargs):
salt-call ipmi.get_sensor_data api_host=127.0.0.1 api_user=admin api_pass=pass
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
data = {}
for reading in s.get_sensor_data():
data[reading['name']] = reading
@ -597,10 +614,11 @@ def get_bootdev(**kwargs):
next reboot.
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Example:
@ -608,7 +626,7 @@ def get_bootdev(**kwargs):
salt-call ipmi.get_bootdev api_host=127.0.0.1 api_user=admin api_pass=pass
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.get_bootdev()
@ -619,19 +637,20 @@ def set_power(state='power_on', wait=True, **kwargs):
:param state:
* power_on -- system turn on
* power_off -- system turn off (without waiting for OS)
* shutdown' -- request OS proper shutdown
* reset' -- reset (without waiting for OS)
* boot' -- If system is off, then 'on', else 'reset'
* shutdown -- request OS proper shutdown
* reset -- reset (without waiting for OS)
* boot -- If system is off, then 'on', else 'reset'
:param wait: If True, do not return until the system actually completes the
requested state change (up to 300 seconds).
If a non-zero int, use that value as the wait time in
seconds instead.
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:returns: dict -- A dict describing the response retrieved
@ -645,7 +664,7 @@ def set_power(state='power_on', wait=True, **kwargs):
state = 'on'
if state is False or state == 'power_off':
state = 'off'
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.set_power(state, wait=wait)
@ -670,10 +689,11 @@ def set_bootdev(bootdev='default', persist=False, uefiboot=False, **kwargs):
In practice, this flag not being set does not preclude
UEFI boot on any system I've encountered.
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:returns: dict or True -- If callback is not provided, the response
@ -683,7 +703,7 @@ def set_bootdev(bootdev='default', persist=False, uefiboot=False, **kwargs):
salt-call ipmi.set_bootdev bootdev=network persist=True
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.set_bootdev(bootdev)
@ -698,10 +718,11 @@ def set_identify(on=True, duration=600, **kwargs):
:param duration: Set if wanting to request turn on for a duration
in seconds, None = indefinitely.
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Examples:
@ -709,7 +730,7 @@ def set_identify(on=True, duration=600, **kwargs):
salt-call ipmi.set_identify
'''
with IpmiCommand(**kwargs) as s:
with _IpmiCommand(**kwargs) as s:
return s.set_identify(on=on, duration=duration)
@ -718,6 +739,12 @@ def get_channel_max_user_count(channel=14, **kwargs):
Get max users in channel
:param channel: number [1:7]
:param kwargs:
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:return: int -- often 16
CLI Examples:
@ -734,22 +761,25 @@ def get_user(uid, channel=14, **kwargs):
'''
Get user from uid and access on channel
:param uid: user number [1:16]
:param channel: number [1:7]
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:return:
name: (str)
uid: (int)
channel: (int)
access:
callback (bool)
link_auth (bool)
ipmi_msg (bool)
privilege_level: (str)[callback, user, operator, administrator, proprietary, no_access]
- callback (bool)
- link_auth (bool)
- ipmi_msg (bool)
- privilege_level: (str)[callback, user, operator, administrator,
proprietary, no_access]
CLI Examples:
.. code-block:: bash
@ -766,23 +796,23 @@ def get_users(channel=14, **kwargs):
'''
get list of users and access information
:param channel: number [1:7]
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
:return:
name: (str)
uid: (int)
channel: (int)
access:
callback (bool)
link_auth (bool)
ipmi_msg (bool)
privilege_level: (str)[callback, user, operatorm administrator,
- callback (bool)
- link_auth (bool)
- ipmi_msg (bool)
- privilege_level: (str)[callback, user, operator, administrator,
proprietary, no_access]
CLI Examples:
@ -791,7 +821,7 @@ def get_users(channel=14, **kwargs):
salt-call ipmi.get_users api_host=172.168.0.7
'''
with IpmiCommand(**kwargs) as c:
with _IpmiCommand(**kwargs) as c:
return c.get_users(channel)
@ -810,10 +840,11 @@ def create_user(uid, name, password, channel=14, callback=False,
* proprietary
* no_access
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Examples:
@ -821,7 +852,7 @@ def create_user(uid, name, password, channel=14, callback=False,
salt-call ipmi.create_user uid=2 name=steverweber api_host=172.168.0.7 api_pass=nevertell
'''
with IpmiCommand(**kwargs) as c:
with _IpmiCommand(**kwargs) as c:
return c.create_user(uid, name, password, channel, callback,
link_auth, ipmi_msg, privilege_level)
@ -833,10 +864,11 @@ def user_delete(uid, channel=14, **kwargs):
:param uid: user number [1:16]
:param channel: number [1:7]
:param kwargs:
api_host=127.0.0.1
api_user=admin
api_pass=example
api_port=623
- api_host=127.0.0.1
- api_user=admin
- api_pass=example
- api_port=623
- api_kg=None
CLI Examples:
@ -844,5 +876,5 @@ def user_delete(uid, channel=14, **kwargs):
salt-call ipmi.user_delete uid=2
'''
with IpmiCommand(**kwargs) as c:
with _IpmiCommand(**kwargs) as c:
return c.user_delete(uid, channel)

View File

@ -66,7 +66,10 @@ def _available_services():
# the system provided plutil program to do the conversion
cmd = '/usr/bin/plutil -convert xml1 -o - -- "{0}"'.format(true_path)
plist_xml = __salt__['cmd.run_all'](cmd, python_shell=False)['stdout']
plist = plistlib.readPlistFromString(plist_xml)
if six.PY2:
plist = plistlib.readPlistFromString(plist_xml)
else:
plist = plistlib.readPlistFromBytes(salt.utils.to_bytes(plist_xml))
available_services[plist.Label.lower()] = {
'filename': filename,
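The PY2/PY3 branch is needed because ``plistlib.readPlistFromString`` was removed in Python 3; on 3.4+ the ``plistlib.dumps``/``loads`` pair covers both directions and accepts XML as well as binary plists. A small round-trip sketch:

```python
import plistlib

data = {'Label': 'com.example.demo', 'RunAtLoad': True}
# dumps/loads replace the readPlistFrom* helpers used in the
# PY2 branch above; loads autodetects XML vs binary input.
xml_bytes = plistlib.dumps(data, fmt=plistlib.FMT_XML)
parsed = plistlib.loads(xml_bytes)
# parsed == data, with the bool preserved through the round trip
```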

File diff suppressed because it is too large

View File

@ -7,6 +7,7 @@ Provides access to randomness generators.
from __future__ import absolute_import
# Import python libs
import hashlib
import random
# Import salt libs
import salt.utils.pycrypto
@ -122,3 +123,24 @@ def shadow_hash(crypt_salt=None, password=None, algorithm='sha512'):
salt '*' random.shadow_hash 'My5alT' 'MyP@asswd' md5
'''
return salt.utils.pycrypto.gen_hash(crypt_salt, password, algorithm)
def rand_int(start=1, end=10):
'''
Returns a random integer between the start and end numbers, inclusive.
.. versionadded:: 2015.5.3
start : 1
Any valid integer number
end : 10
Any valid integer number
CLI Example:
.. code-block:: bash
salt '*' random.rand_int 1 10
'''
return random.randint(start, end)
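Note that ``random.randint`` (unlike ``range`` or ``random.randrange``) is inclusive on both ends, so ``rand_int 1 10`` can return 10. A quick seeded check:

```python
import random

random.seed(0)  # fixed seed so the sampling below is repeatable
samples = {random.randint(1, 3) for _ in range(200)}
# Both endpoints are reachable: 1 and 3 appear alongside 2
```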

View File

@ -678,7 +678,7 @@ def interface_ip(iface):
def subnets():
'''
Returns a list of subnets to which the host belongs
Returns a list of IPv4 subnets to which the host belongs
CLI Example:
@ -689,6 +689,19 @@ def subnets():
return salt.utils.network.subnets()
def subnets6():
'''
Returns a list of IPv6 subnets to which the host belongs
CLI Example:
.. code-block:: bash
salt '*' network.subnets6
'''
return salt.utils.network.subnets6()
def in_subnet(cidr):
'''
Returns True if host is within specified subnet, otherwise False.
@ -712,22 +725,25 @@ def ip_in_subnet(ip_addr, cidr):
salt '*' network.ip_in_subnet 172.17.0.4 172.16.0.0/12
'''
return salt.utils.network.ip_in_subnet(ip_addr, cidr)
return salt.utils.network.in_subnet(cidr, ip_addr)
def calculate_subnet(ip_addr, netmask):
def calc_net(ip_addr, netmask=None):
'''
Returns the CIDR of a subnet based on an IP address and network.
Returns the CIDR of a subnet based on
an IP address (CIDR notation supported)
and optional netmask.
CLI Example:
.. code-block:: bash
salt '*' network.calculate_subnet 172.17.0.5 255.255.255.240
salt '*' network.calc_net 172.17.0.5 255.255.255.240
salt '*' network.calc_net 2a02:f6e:a000:80:84d8:8332:7866:4e07/64
.. versionadded:: Beryllium
'''
return salt.utils.network.calculate_subnet(ip_addr, netmask)
return salt.utils.network.calc_net(ip_addr, netmask)
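The new ``calc_net`` semantics (a CIDR computed from an address, with the mask either embedded in CIDR notation or passed separately) map directly onto the stdlib ``ipaddress`` module. A Python 3 sketch of the behavior, not Salt's actual implementation:

```python
import ipaddress

def calc_net(ip_addr, netmask=None):
    # Accept either '1.2.3.4/24'-style input or a separate netmask.
    if netmask:
        ip_addr = '{0}/{1}'.format(ip_addr, netmask)
    # strict=False allows host bits to be set in the input address
    return str(ipaddress.ip_network(ip_addr, strict=False))

calc_net('172.17.0.5', '255.255.255.240')  # → '172.17.0.0/28'
```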
def ip_addrs(interface=None, include_loopback=False, cidr=None):

View File

@ -7,6 +7,7 @@ Resources needed by pkg providers
from __future__ import absolute_import
import fnmatch
import logging
import os
import pprint
# Import third party libs
@ -15,17 +16,21 @@ import salt.ext.six as six
# Import salt libs
import salt.utils
from salt.exceptions import SaltInvocationError
log = logging.getLogger(__name__)
__SUFFIX_NOT_NEEDED = ('x86_64', 'noarch')
def _repack_pkgs(pkgs):
def _repack_pkgs(pkgs, normalize=True):
'''
Repack packages specified using "pkgs" argument to pkg states into a single
dictionary
'''
_normalize_name = __salt__.get('pkg.normalize_name', lambda pkgname: pkgname)
if normalize and 'pkg.normalize_name' in __salt__:
_normalize_name = __salt__['pkg.normalize_name']
else:
_normalize_name = lambda pkgname: pkgname
return dict(
[
(_normalize_name(str(x)), str(y) if y is not None else y)
@ -34,7 +39,7 @@ def _repack_pkgs(pkgs):
)
def pack_sources(sources):
def pack_sources(sources, normalize=True):
'''
Accepts list of dicts (or a string representing a list of dicts) and packs
the key/value pairs into a single dict.
@ -42,13 +47,27 @@ def pack_sources(sources):
``'[{"foo": "salt://foo.rpm"}, {"bar": "salt://bar.rpm"}]'`` would become
``{"foo": "salt://foo.rpm", "bar": "salt://bar.rpm"}``
normalize : True
Normalize the package name by removing the architecture, if the
architecture of the package is different from the architecture of the
operating system. The ability to disable this behavior is useful for
poorly-created packages which include the architecture as an actual
part of the name, such as kernel modules which match a specific kernel
version.
.. versionadded:: Beryllium
CLI Example:
.. code-block:: bash
salt '*' pkg_resource.pack_sources '[{"foo": "salt://foo.rpm"}, {"bar": "salt://bar.rpm"}]'
'''
_normalize_name = __salt__.get('pkg.normalize_name', lambda pkgname: pkgname)
if normalize and 'pkg.normalize_name' in __salt__:
_normalize_name = __salt__['pkg.normalize_name']
else:
_normalize_name = lambda pkgname: pkgname
if isinstance(sources, six.string_types):
try:
sources = yaml.safe_load(sources)
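Ignoring normalization and the YAML-string form, the core of ``pack_sources`` is flattening a list of single-key dicts into one dict, as its docstring example shows. A minimal sketch:

```python
def pack_sources(sources):
    # Flatten [{'foo': uri1}, {'bar': uri2}] into {'foo': uri1, 'bar': uri2};
    # name normalization and YAML parsing are omitted here.
    packed = {}
    for entry in sources:
        # Each entry is expected to be a single {name: uri} mapping
        for name, uri in entry.items():
            packed[name] = uri
    return packed

pack_sources([{'foo': 'salt://foo.rpm'}, {'bar': 'salt://bar.rpm'}])
# → {'foo': 'salt://foo.rpm', 'bar': 'salt://bar.rpm'}
```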
@ -102,32 +121,38 @@ def parse_targets(name=None,
return None, None
elif pkgs:
pkgs = _repack_pkgs(pkgs)
pkgs = _repack_pkgs(pkgs, normalize=normalize)
if not pkgs:
return None, None
else:
return pkgs, 'repository'
elif sources and __grains__['os'] != 'MacOS':
sources = pack_sources(sources)
sources = pack_sources(sources, normalize=normalize)
if not sources:
return None, None
srcinfo = []
for pkg_name, pkg_src in six.iteritems(sources):
if __salt__['config.valid_fileproto'](pkg_src):
# Cache package from remote source (salt master, HTTP, FTP)
srcinfo.append((pkg_name,
pkg_src,
__salt__['cp.cache_file'](pkg_src, saltenv),
'remote'))
# Cache package from remote source (salt master, HTTP, FTP) and
# append a tuple containing the cached path along with the
# specified version.
srcinfo.append((
__salt__['cp.cache_file'](pkg_src[0], saltenv),
pkg_src[1]
))
else:
# Package file local to the minion
srcinfo.append((pkg_name, pkg_src, pkg_src, 'local'))
# Package file local to the minion, just append the tuple from
# the pack_sources() return data.
if not os.path.isabs(pkg_src[0]):
raise SaltInvocationError(
'Path {0} for package {1} is either not absolute or '
'an invalid protocol'.format(pkg_src[0], pkg_name)
)
srcinfo.append(pkg_src)
# srcinfo is a 4-tuple (pkg_name,pkg_uri,pkg_path,pkg_type), so grab
# the package path (3rd element of tuple).
return [x[2] for x in srcinfo], 'file'
return srcinfo, 'file'
elif name:
if normalize:
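The hunk above swaps the old 4-tuple srcinfo entries for the 2-tuples now returned by pack_sources(). A minimal sketch of the shape change, with made-up paths and versions:

```python
# Illustrative only: the old parse_targets() built 4-tuples and callers
# took element 2; the new code returns (cached_path, version) pairs that
# callers unpack directly. All values below are hypothetical.
old_entry = ('bar', 'salt://bar.rpm',
             '/var/cache/salt/minion/files/base/bar.rpm', 'remote')
old_path = old_entry[2]            # old callers: grab the 3rd element

new_entry = ('/var/cache/salt/minion/files/base/bar.rpm', '1.2.3')
new_path, new_version = new_entry  # new callers: unpack the pair

print(old_path == new_path)  # True
```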
@ -180,7 +205,7 @@ def version(*names, **kwargs):
return ret
def add_pkg(pkgs, name, version):
def add_pkg(pkgs, name, pkgver):
'''
Add a package to a dict of installed packages.
@ -191,7 +216,7 @@ def add_pkg(pkgs, name, version):
salt '*' pkg_resource.add_pkg '{}' bind 9
'''
try:
pkgs.setdefault(name, []).append(version)
pkgs.setdefault(name, []).append(pkgver)
except AttributeError as exc:
log.exception(exc)
@ -237,7 +262,7 @@ def stringify(pkgs):
log.exception(exc)
def version_clean(version):
def version_clean(verstr):
'''
Clean the version string removing extra data.
This function will simply try to call ``pkg.version_clean``.
@ -248,16 +273,16 @@ def version_clean(version):
salt '*' pkg_resource.version_clean <version_string>
'''
if version and 'pkg.version_clean' in __salt__:
return __salt__['pkg.version_clean'](version)
return version
if verstr and 'pkg.version_clean' in __salt__:
return __salt__['pkg.version_clean'](verstr)
return verstr
def check_extra_requirements(pkgname, pkgver):
'''
Check if the installed package already has the given requirements.
This function will simply try to call "pkg.check_extra_requirements".
This function will return the result of ``pkg.check_extra_requirements`` if
this function exists for the minion, otherwise it will return True.
CLI Example:

274
salt/modules/rallydev.py Normal file
@ -0,0 +1,274 @@
# -*- coding: utf-8 -*-
'''
Support for RallyDev
.. versionadded:: Beryllium
Requires a ``username`` and a ``password`` in ``/etc/salt/minion``:
.. code-block:: yaml
rallydev:
username: myuser@example.com
password: 123pass
'''
# Import python libs
from __future__ import absolute_import, print_function
import json
import logging
# Import salt libs
from salt.exceptions import SaltInvocationError
import salt.utils.http
log = logging.getLogger(__name__)
def __virtual__():
'''
Only load the module if the rallydev username and password are configured
'''
if not __opts__.get('rallydev', {}).get('username', None):
return False
if not __opts__.get('rallydev', {}).get('password', None):
return False
return True
def _get_token():
'''
Get an auth token
'''
username = __opts__.get('rallydev', {}).get('username', None)
password = __opts__.get('rallydev', {}).get('password', None)
path = 'https://rally1.rallydev.com/slm/webservice/v2.0/security/authorize'
result = salt.utils.http.query(
path,
decode=True,
decode_type='json',
text=True,
status=True,
username=username,
password=password,
cookies=True,
persist_session=True,
opts=__opts__,
)
if 'dict' not in result:
return None
return result['dict']['OperationResult']['SecurityToken']
def _query(action=None,
command=None,
args=None,
method='GET',
header_dict=None,
data=None):
'''
Make a web call to RallyDev
.. versionadded:: Beryllium
'''
token = _get_token()
username = __opts__.get('rallydev', {}).get('username', None)
password = __opts__.get('rallydev', {}).get('password', None)
path = 'https://rally1.rallydev.com/slm/webservice/v2.0/'
if action:
path += action
if command:
path += '/{0}'.format(command)
log.debug('RallyDev URL: {0}'.format(path))
if not isinstance(args, dict):
args = {}
args['key'] = token
if header_dict is None:
header_dict = {'Content-type': 'application/json'}
if method != 'POST':
header_dict['Accept'] = 'application/json'
decode = True
if method == 'DELETE':
decode = False
return_content = None
result = salt.utils.http.query(
path,
method,
params=args,
data=data,
header_dict=header_dict,
decode=decode,
decode_type='json',
text=True,
status=True,
username=username,
password=password,
cookies=True,
persist_session=True,
opts=__opts__,
)
log.debug('RallyDev Response Status Code: {0}'.format(result['status']))
if 'error' in result:
log.error(result['error'])
return [result['status'], result['error']]
return [result['status'], result.get('dict', {})]
def list_items(name):
'''
List items of a particular type
CLI Examples:
.. code-block:: bash
salt myminion rallydev.list_<item name>s
salt myminion rallydev.list_users
salt myminion rallydev.list_artifacts
'''
status, result = _query(action=name)
return result
def query_item(name, query_string, order='Rank'):
'''
Query a type of record for one or more items. Requires a valid query string.
See https://rally1.rallydev.com/slm/doc/webservice/introduction.jsp for
information on query syntax.
CLI Example:
.. code-block:: bash
salt myminion rallydev.query_<item name> <query string> [<order>]
salt myminion rallydev.query_task '(Name contains github)'
salt myminion rallydev.query_task '(Name contains reactor)' Rank
'''
status, result = _query(
action=name,
args={'query': query_string,
'order': order}
)
return result
def show_item(name, id_):
'''
Show an item
CLI Example:
.. code-block:: bash
salt myminion rallydev.show_<item name> <item id>
'''
status, result = _query(action=name, command=id_)
return result
def update_item(name, id_, field=None, value=None, postdata=None):
'''
Update an item. Either a field and a value, or a chunk of POST data, may be
used, but not both.
CLI Example:
.. code-block:: bash
salt myminion rallydev.update_<item name> <item id> field=<field> value=<value>
salt myminion rallydev.update_<item name> <item id> postdata=<post data>
'''
if field and value:
if postdata:
raise SaltInvocationError('Either a field and a value, or a chunk '
'of POST data, may be specified, but not both.')
postdata = {name.title(): {field: value}}
if postdata is None:
raise SaltInvocationError('Either a field and a value, or a chunk of '
'POST data must be specified.')
status, result = _query(
action=name,
command=id_,
method='POST',
data=json.dumps(postdata),
)
return result
def show_artifact(id_):
'''
Show an artifact
CLI Example:
.. code-block:: bash
salt myminion rallydev.show_artifact <artifact id>
'''
return show_item('artifact', id_)
def list_users():
'''
List the users
CLI Example:
.. code-block:: bash
salt myminion rallydev.list_users
'''
return list_items('user')
def show_user(id_):
'''
Show a user
CLI Example:
.. code-block:: bash
salt myminion rallydev.show_user <user id>
'''
return show_item('user', id_)
def update_user(id_, field, value):
'''
Update a user
CLI Example:
.. code-block:: bash
salt myminion rallydev.update_user <user id> <field> <new value>
'''
return update_item('user', id_, field, value)
def query_user(query_string, order='UserName'):
'''
Query a user
CLI Example:
.. code-block:: bash
salt myminion rallydev.query_user '(Name contains Jo)'
'''
return query_item('user', query_string, order)
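Every public function in this module unpacks the ``[status, data]`` pair returned by ``_query()``. A hedged sketch of consuming that convention (the ``unpack`` helper and sample payload are illustrative, not part of the Salt API):

```python
# Hypothetical consumer of the [status, data] return convention used by
# _query() above; the function name and payload are made up.
def unpack(result):
    status, data = result
    if status != 200:
        raise RuntimeError('RallyDev call failed: HTTP {0}'.format(status))
    return data

payload = unpack([200, {'User': {'UserName': 'myuser@example.com'}}])
print(payload['User']['UserName'])  # myuser@example.com
```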


@ -27,12 +27,6 @@ import json
from salt.ext.six.moves.urllib.parse import urljoin as _urljoin
import salt.ext.six.moves.http_client
try:
import requests
from requests.exceptions import ConnectionError
ENABLED = True
except ImportError:
ENABLED = False
# pylint: enable=import-error,no-name-in-module,redefined-builtin
log = logging.getLogger(__name__)
@ -71,8 +65,6 @@ def __virtual__():
:return: The virtual name of the module.
'''
if not ENABLED:
return False
return __virtualname__
@ -96,29 +88,26 @@ def _query(api_version=None, data=None):
data = json.dumps(data)
try:
result = requests.request(
method='POST',
url=base_url,
headers={},
params={},
data=data,
verify=True,
)
except ConnectionError as e:
ret['message'] = e
ret['res'] = False
return ret
result = salt.utils.http.query(
base_url,
method='POST',
params={},
data=data,
decode=True,
status=True,
header_dict={},
opts=__opts__,
)
if result.status_code == salt.ext.six.moves.http_client.OK:
result = result.json()
if result.get('result'):
return result.get('result')
if result.get('error'):
return result.get('error')
return ret
elif result.status_code == salt.ext.six.moves.http_client.NO_CONTENT:
return True
if result.get('status', None) == salt.ext.six.moves.http_client.OK:
_result = result['dict']
if _result.get('result'):
return _result.get('result')
if _result.get('error'):
return _result.get('error')
return False
elif result.get('status', None) == salt.ext.six.moves.http_client.NO_CONTENT:
return False
else:
log.debug('base_url {0}'.format(base_url))
log.debug('data {0}'.format(data))
@ -284,8 +273,8 @@ def generateIntegers(api_key=None,
}
result = _query(api_version=api_version, data=data)
log.debug('result {0}'.format(result))
if result:
log.debug('result {0}'.format(result))
if 'random' in result:
random_data = result.get('random').get('data')
ret['data'] = random_data


@ -6,12 +6,18 @@ Support for rpm
# Import python libs
from __future__ import absolute_import
import logging
import os
import re
# Import Salt libs
import salt.utils
import salt.utils.decorators as decorators
from salt.ext.six.moves import zip # pylint: disable=import-error,redefined-builtin
import salt.utils.pkg.rpm
# pylint: disable=import-error,redefined-builtin
from salt.ext.six.moves import shlex_quote as _cmd_quote
from salt.ext.six.moves import zip
# pylint: enable=import-error,redefined-builtin
from salt.exceptions import CommandExecutionError, SaltInvocationError
log = logging.getLogger(__name__)
@ -38,6 +44,67 @@ def __virtual__():
return False
def bin_pkg_info(path, saltenv='base'):
'''
.. versionadded:: Beryllium
Parses RPM metadata and returns a dictionary of information about the
package (name, version, etc.).
path
Path to the file. Can either be an absolute path to a file on the
minion, or a salt fileserver URL (e.g. ``salt://path/to/file.rpm``).
If a salt fileserver URL is passed, the file will be cached to the
minion so that it can be examined.
saltenv : base
Salt fileserver environment from which to retrieve the package. Ignored
if ``path`` is a local file path on the minion.
CLI Example:
.. code-block:: bash
salt '*' lowpkg.bin_pkg_info /root/salt-2015.5.1-2.el7.noarch.rpm
salt '*' lowpkg.bin_pkg_info salt://salt-2015.5.1-2.el7.noarch.rpm
'''
# If the path is a valid protocol, pull it down using cp.cache_file
if __salt__['config.valid_fileproto'](path):
newpath = __salt__['cp.cache_file'](path, saltenv)
if not newpath:
raise CommandExecutionError(
'Unable to retrieve {0} from saltenv \'{1}\''
.format(path, saltenv)
)
path = newpath
else:
if not os.path.exists(path):
raise CommandExecutionError(
'{0} does not exist on minion'.format(path)
)
elif not os.path.isabs(path):
raise SaltInvocationError(
'{0} does not exist on minion'.format(path)
)
# REPOID is not a valid tag for the rpm command. Remove it and replace it
# with 'none'
queryformat = salt.utils.pkg.rpm.QUERYFORMAT.replace('%{REPOID}', 'none')
output = __salt__['cmd.run_stdout'](
'rpm -qp --queryformat {0} {1}'.format(_cmd_quote(queryformat), path),
output_loglevel='trace',
ignore_retcode=True
)
ret = {}
pkginfo = salt.utils.pkg.rpm.parse_pkginfo(
output,
osarch=__grains__['osarch']
)
for field in pkginfo._fields:
ret[field] = getattr(pkginfo, field)
return ret
def list_pkgs(*packages):
'''
List the packages currently installed in a dict::


@ -421,6 +421,27 @@ def sync_utils(saltenv=None, refresh=True):
return ret
def sync_log_handlers(saltenv=None, refresh=True):
'''
.. versionadded:: Beryllium
Sync log handler source files from the _log_handlers directory on the salt
master file server. This function is environment aware: pass the desired
environment to grab the contents of the _log_handlers directory. The default
environment is ``base``.
CLI Example:
.. code-block:: bash
salt '*' saltutil.sync_log_handlers
'''
ret = _sync('log_handlers', saltenv)
if refresh:
refresh_modules()
return ret
def sync_all(saltenv=None, refresh=True):
'''
Sync down all of the dynamic modules from the file server for a specific
@ -463,6 +484,7 @@ def sync_all(saltenv=None, refresh=True):
ret['returners'] = sync_returners(saltenv, False)
ret['outputters'] = sync_outputters(saltenv, False)
ret['utils'] = sync_utils(saltenv, False)
ret['log_handlers'] = sync_log_handlers(saltenv, False)
if refresh:
refresh_modules()
return ret
@ -510,7 +532,7 @@ def refresh_modules(async=True):
'''
Signal the minion to refresh the module and grain data
The default is to refresh module asyncrhonously. To block
The default is to refresh module asynchronously. To block
until the module refresh is complete, set the 'async' flag
to False.


@ -19,6 +19,7 @@ Module for sending messages to Slack
# Import Python libs
from __future__ import absolute_import
import logging
import urllib
# Import 3rd-party libs
# pylint: disable=import-error,no-name-in-module,redefined-builtin
@ -27,13 +28,6 @@ from salt.ext.six.moves import range
import salt.ext.six.moves.http_client
# pylint: enable=import-error,no-name-in-module
try:
import requests
from requests.exceptions import ConnectionError
ENABLED = True
except ImportError:
ENABLED = False
log = logging.getLogger(__name__)
__virtualname__ = 'slack'
@ -44,12 +38,15 @@ def __virtual__():
:return: The virtual name of the module.
'''
if not ENABLED:
return False
return __virtualname__
def _query(function, api_key=None, method='GET', data=None):
def _query(function,
api_key=None,
args=None,
method='GET',
header_dict=None,
data=None):
'''
Slack object method function to construct and execute on the API URL.
@ -59,12 +56,8 @@ def _query(function, api_key=None, method='GET', data=None):
:param data: The data to be sent for POST method.
:return: The json response from the API call or False.
'''
headers = {}
query_params = {}
if data is None:
data = {}
ret = {'message': '',
'res': True}
@ -98,43 +91,50 @@ def _query(function, api_key=None, method='GET', data=None):
base_url = _urljoin(api_url, '/api/')
path = slack_functions.get(function).get('request')
url = _urljoin(base_url, path, False)
if not isinstance(args, dict):
query_params = {}
query_params['token'] = api_key
try:
result = requests.request(
method=method,
url=url,
headers=headers,
params=query_params,
data=data,
verify=True,
)
except ConnectionError as e:
ret['message'] = e
ret['res'] = False
return ret
if header_dict is None:
header_dict = {}
if result.status_code == salt.ext.six.moves.http_client.OK:
result = result.json()
if method != 'POST':
header_dict['Accept'] = 'application/json'
result = salt.utils.http.query(
url,
method,
params=query_params,
data=data,
decode=True,
status=True,
header_dict=header_dict,
opts=__opts__,
)
if result.get('status', None) == salt.ext.six.moves.http_client.OK:
_result = result['dict']
response = slack_functions.get(function).get('response')
if 'error' in result:
ret['message'] = result['error']
if 'error' in _result:
ret['message'] = _result['error']
ret['res'] = False
return ret
ret['message'] = result.get(response)
ret['message'] = _result.get(response)
return ret
elif result.status_code == salt.ext.six.moves.http_client.NO_CONTENT:
elif result.get('status', None) == salt.ext.six.moves.http_client.NO_CONTENT:
return True
else:
log.debug(url)
log.debug(query_params)
log.debug(data)
log.debug(result)
if 'error' in result:
_result = result['dict']
if 'error' in _result:
ret['message'] = _result['error']
ret['res'] = False
return ret
ret['message'] = result
ret['message'] = _result.get(response)
return ret
@ -265,15 +265,18 @@ def post_message(channel,
if not from_name:
log.error('from_name is a required option.')
parameters = dict()
parameters['channel'] = channel
parameters['username'] = from_name
parameters['text'] = message
parameters = {
'channel': channel,
'username': from_name,
'text': message
}
# Slack wants the body on POST to be urlencoded.
result = _query(function='message',
api_key=api_key,
method='POST',
data=parameters)
header_dict={'Content-Type': 'application/x-www-form-urlencoded'},
data=urllib.urlencode(parameters))
if result['res']:
return True


@ -14,6 +14,7 @@ from __future__ import absolute_import
import plistlib
import subprocess
import salt.utils
from salt.ext import six
PROFILER_BINARY = '/usr/sbin/system_profiler'
@ -41,7 +42,10 @@ def _call_system_profiler(datatype):
'-xml', datatype], stdout=subprocess.PIPE)
(sysprofresults, sysprof_stderr) = p.communicate(input=None)
plist = plistlib.readPlistFromString(sysprofresults)
if six.PY2:
plist = plistlib.readPlistFromString(sysprofresults)
else:
plist = plistlib.readPlistFromBytes(sysprofresults)
try:
apps = plist[0]['_items']
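The change above branches on ``six.PY2`` because system_profiler output is bytes and plistlib's parser entry point differs across major versions. On modern Python 3 the equivalent step (``readPlistFromBytes`` was later deprecated in favor of ``loads``) looks roughly like this; the sample plist is made up:

```python
# Sketch of the Py3 side of the split: sysprofresults is bytes, so it
# must go through the bytes-aware plist parser. plistlib.loads() is the
# current Python 3 API; the payload below is an illustrative stand-in.
import plistlib

data = plistlib.dumps([{'_items': ['Safari']}])  # bytes, like sysprofresults
plist = plistlib.loads(data)                     # parses bytes into objects
print(plist[0]['_items'])  # ['Safari']
```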


@ -33,7 +33,7 @@ def _subprocess(cmd):
try:
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
ret = proc.communicate()[0].strip()
ret = utils.to_str(proc.communicate()[0]).strip()
retcode = proc.wait()
if ret:
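The wrap in ``utils.to_str`` exists because ``Popen.communicate()`` returns bytes on Python 3, so chaining str operations on the raw output either fails or mixes types. A standalone sketch of the same decode-then-strip step (POSIX ``echo`` used purely for illustration):

```python
# communicate()[0] is bytes on Python 3; decode before str methods.
# salt.utils.to_str does roughly this decode step.
import subprocess

out = subprocess.Popen(
    ['echo', 'hello'], stdout=subprocess.PIPE
).communicate()[0]                      # b'hello\n'
text = out.decode('utf-8', 'replace').strip()
print(text)  # hello
```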

206
salt/modules/victorops.py Normal file
@ -0,0 +1,206 @@
# -*- coding: utf-8 -*-
'''
Support for VictorOps
.. versionadded:: Beryllium
Requires an ``api_key`` in ``/etc/salt/minion``:
.. code-block:: yaml
victorops:
api_key: '280d4699-a817-4719-ba6f-ca56e573e44f'
'''
# Import python libs
from __future__ import absolute_import, print_function
import datetime
import json
import logging
import time
# Import salt libs
from salt.exceptions import SaltInvocationError
import salt.utils.http
log = logging.getLogger(__name__)
def __virtual__():
'''
Only load the module if the victorops api_key is configured
'''
if not __opts__.get('victorops', {}).get('api_key', None):
return False
return True
def _query(action=None,
routing_key=None,
args=None,
method='GET',
header_dict=None,
data=None):
'''
Make a web call to VictorOps
.. versionadded:: Beryllium
'''
api_key = __opts__.get('victorops', {}).get('api_key', None)
path = 'https://alert.victorops.com/integrations/generic/20131114/'
if action:
path += '{0}/'.format(action)
if api_key:
path += '{0}/'.format(api_key)
if routing_key:
path += routing_key
log.debug('VictorOps URL: {0}'.format(path))
if not isinstance(args, dict):
args = {}
if header_dict is None:
header_dict = {'Content-type': 'application/json'}
if method != 'POST':
header_dict['Accept'] = 'application/json'
decode = True
if method == 'DELETE':
decode = False
result = salt.utils.http.query(
path,
method,
params=args,
data=data,
header_dict=header_dict,
decode=decode,
decode_type='json',
text=True,
status=True,
cookies=True,
persist_session=True,
opts=__opts__,
)
if 'error' in result:
log.error(result['error'])
return [result['status'], result['error']]
return [result['status'], result.get('dict', {})]
def create_event(message_type=None, routing_key='everybody', **kwargs):
'''
Create an event in VictorOps. Designed for use in states.
The following parameters are required:
:param message_type: One of the following values: INFO, WARNING, ACKNOWLEDGEMENT, CRITICAL, RECOVERY.
The following parameters are optional:
:param routing_key: The key for where messages should be routed. By default, sent to
the 'everybody' route.
:param entity_id: The name of alerting entity. If not provided, a random name will be assigned.
:param timestamp: Timestamp of the alert in seconds since epoch. Defaults to the
time the alert is received at VictorOps.
:param timestamp_fmt: The date format for the timestamp parameter.
:param state_start_time: The time this entity entered its current state
(seconds since epoch). Defaults to the time alert is received.
:param state_start_time_fmt: The date format for the state_start_time parameter.
:param state_message: Any additional status information from the alert item.
:param entity_is_host: Used within VictorOps to select the appropriate
display format for the incident.
:param entity_display_name: Used within VictorOps to display a human-readable name for the entity.
:param ack_message: A user entered comment for the acknowledgment.
:param ack_author: The user that acknowledged the incident.
:return: A dictionary with result, entity_id, and message if result was failure.
CLI Example:
.. code-block:: bash
salt myminion victorops.create_event message_type='CRITICAL' routing_key='everyone' \
entity_id='hostname/diskspace'
salt myminion victorops.create_event message_type='ACKNOWLEDGEMENT' routing_key='everyone' \
entity_id='hostname/diskspace' ack_message='Acknowledged' ack_author='username'
salt myminion victorops.create_event message_type='RECOVERY' routing_key='everyone' \
entity_id='hostname/diskspace'
'''
keyword_args = {'entity_id': str,
'state_message': str,
'entity_is_host': bool,
'entity_display_name': str,
'ack_message': str,
'ack_author': str
}
data = {}
if not message_type:
raise SaltInvocationError('Required argument "message_type" is missing.')
if message_type.upper() not in ['INFO', 'WARNING', 'ACKNOWLEDGEMENT', 'CRITICAL', 'RECOVERY']:
raise SaltInvocationError('"message_type" must be INFO, WARNING, ACKNOWLEDGEMENT, CRITICAL, or RECOVERY.')
data['message_type'] = message_type
data['monitoring_tool'] = 'SaltStack'
if 'timestamp' in kwargs:
timestamp_fmt = kwargs.get('timestamp_fmt', '%Y-%m-%dT%H:%M:%S')
try:
timestamp = datetime.datetime.strptime(kwargs['timestamp'], timestamp_fmt)
data['timestamp'] = int(time.mktime(timestamp.timetuple()))
except (TypeError, ValueError):
raise SaltInvocationError('Date string could not be parsed: {0}, {1}'.format(
kwargs['timestamp'], timestamp_fmt))
if 'state_start_time' in kwargs:
state_start_time_fmt = kwargs.get('state_start_time_fmt', '%Y-%m-%dT%H:%M:%S')
try:
state_start_time = datetime.datetime.strptime(kwargs['state_start_time'], state_start_time_fmt)
data['state_start_time'] = int(time.mktime(state_start_time.timetuple()))
except (TypeError, ValueError):
raise SaltInvocationError('Date string could not be parsed: {0}, {1}'.format(
kwargs['state_start_time'], state_start_time_fmt))
for kwarg in keyword_args:
if kwarg in kwargs:
if isinstance(kwargs[kwarg], keyword_args[kwarg]):
data[kwarg] = kwargs[kwarg]
else:
# Should this fail on the wrong type?
log.error('Wrong type, skipping {0}'.format(kwarg))
status, result = _query(action='alert',
routing_key=routing_key,
data=json.dumps(data),
method='POST'
)
return result
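The timestamp handling in ``create_event()`` parses a formatted date string and converts it to epoch seconds for the payload. A self-contained sketch of that conversion (the sample date is arbitrary, and ``mktime`` interprets it in the local timezone):

```python
# Same strptime -> mktime conversion create_event() performs on the
# 'timestamp' kwarg; the input string here is an arbitrary example.
import datetime
import time

timestamp_fmt = '%Y-%m-%dT%H:%M:%S'
timestamp = datetime.datetime.strptime('2015-06-08T22:15:40', timestamp_fmt)
epoch = int(time.mktime(timestamp.timetuple()))  # seconds since epoch
print(epoch > 0)  # True
```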


@ -173,15 +173,17 @@ def _libvirt_creds():
g_cmd = 'grep ^\\s*group /etc/libvirt/qemu.conf'
u_cmd = 'grep ^\\s*user /etc/libvirt/qemu.conf'
try:
group = subprocess.Popen(g_cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0].split('"')[1]
stdout = subprocess.Popen(g_cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0]
group = salt.utils.to_str(stdout).split('"')[1]
except IndexError:
group = 'root'
try:
user = subprocess.Popen(u_cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0].split('"')[1]
stdout = subprocess.Popen(u_cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0]
user = salt.utils.to_str(stdout).split('"')[1]
except IndexError:
user = 'root'
return {'user': user, 'group': group}
@ -908,10 +910,11 @@ def get_disks(vm_):
break
output = []
qemu_output = subprocess.Popen(['qemu-img', 'info',
disks[dev]['file']],
shell=False,
stdout=subprocess.PIPE).communicate()[0]
stdout = subprocess.Popen(
['qemu-img', 'info', disks[dev]['file']],
shell=False,
stdout=subprocess.PIPE).communicate()[0]
qemu_output = salt.utils.to_str(stdout)
snapshots = False
columns = None
lines = qemu_output.strip().split('\n')
@ -1359,9 +1362,10 @@ def migrate_non_shared(vm_, target, ssh=False):
cmd = _get_migrate_command() + ' --copy-storage-all ' + vm_\
+ _get_target(target, ssh)
return subprocess.Popen(cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0]
stdout = subprocess.Popen(cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0]
return salt.utils.to_str(stdout)
def migrate_non_shared_inc(vm_, target, ssh=False):
@ -1377,9 +1381,10 @@ def migrate_non_shared_inc(vm_, target, ssh=False):
cmd = _get_migrate_command() + ' --copy-storage-inc ' + vm_\
+ _get_target(target, ssh)
return subprocess.Popen(cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0]
stdout = subprocess.Popen(cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0]
return salt.utils.to_str(stdout)
def migrate(vm_, target, ssh=False):
@ -1395,9 +1400,10 @@ def migrate(vm_, target, ssh=False):
cmd = _get_migrate_command() + ' ' + vm_\
+ _get_target(target, ssh)
return subprocess.Popen(cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0]
stdout = subprocess.Popen(cmd,
shell=True,
stdout=subprocess.PIPE).communicate()[0]
return salt.utils.to_str(stdout)
def seed_non_shared_migrate(disks, force=False):


@ -344,7 +344,7 @@ def master(master=None, connected=True):
log.error('Failed netstat')
raise
lines = data.split('\n')
lines = salt.utils.to_str(data).split('\n')
for line in lines:
if 'ESTABLISHED' not in line:
continue


@ -12,7 +12,6 @@ import os
import logging
import hashlib
import glob
import M2Crypto
import random
import ctypes
import tempfile
@ -29,6 +28,15 @@ from salt.utils.odict import OrderedDict
from salt.ext.six.moves import range # pylint: disable=import-error,redefined-builtin
from salt.state import STATE_INTERNAL_KEYWORDS as _STATE_INTERNAL_KEYWORDS
# Import 3rd Party Libs
try:
import M2Crypto
HAS_M2 = True
except ImportError:
HAS_M2 = False
__virtualname__ = 'x509'
log = logging.getLogger(__name__)
EXT_NAME_MAPPINGS = OrderedDict([
@ -54,6 +62,13 @@ EXT_NAME_MAPPINGS = OrderedDict([
CERT_DEFAULTS = {'days_valid': 365, 'version': 3, 'serial_bits': 64, 'algorithm': 'sha256'}
def __virtual__():
if HAS_M2:
return __virtualname__
else:
return False
class _Ctx(ctypes.Structure):
'''
This is part of an ugly hack to fix an ancient bug in M2Crypto


@ -38,32 +38,13 @@ except ImportError:
# Import salt libs
import salt.utils
import salt.utils.decorators as decorators
import salt.utils.pkg.rpm
from salt.exceptions import (
CommandExecutionError, MinionError, SaltInvocationError
)
log = logging.getLogger(__name__)
__QUERYFORMAT = '%{NAME}_|-%{VERSION}_|-%{RELEASE}_|-%{ARCH}_|-%{REPOID}'
# These arches compiled from the rpmUtils.arch python module source
__ARCHES_64 = ('x86_64', 'athlon', 'amd64', 'ia32e', 'ia64', 'geode')
__ARCHES_32 = ('i386', 'i486', 'i586', 'i686')
__ARCHES_PPC = ('ppc', 'ppc64', 'ppc64iseries', 'ppc64pseries')
__ARCHES_S390 = ('s390', 's390x')
__ARCHES_SPARC = (
'sparc', 'sparcv8', 'sparcv9', 'sparcv9v', 'sparc64', 'sparc64v'
)
__ARCHES_ALPHA = (
'alpha', 'alphaev4', 'alphaev45', 'alphaev5', 'alphaev56',
'alphapca56', 'alphaev6', 'alphaev67', 'alphaev68', 'alphaev7'
)
__ARCHES_ARM = ('armv5tel', 'armv5tejl', 'armv6l', 'armv7l')
__ARCHES_SH = ('sh3', 'sh4', 'sh4a')
__ARCHES = __ARCHES_64 + __ARCHES_32 + __ARCHES_PPC + __ARCHES_S390 + \
__ARCHES_ALPHA + __ARCHES_ARM + __ARCHES_SH
# Define the module's virtual name
__virtualname__ = 'pkg'
@ -87,41 +68,16 @@ def __virtual__():
return False
def _parse_pkginfo(line):
'''
A small helper to parse a repoquery; returns a namedtuple
'''
# Importing `collections` here since this function is re-namespaced into
# another module
import collections
pkginfo = collections.namedtuple(
'PkgInfo',
('name', 'version', 'arch', 'repoid')
)
try:
name, pkg_version, release, arch, repoid = line.split('_|-')
# Handle unpack errors (should never happen with the queryformat we are
# using, but can't hurt to be careful).
except ValueError:
return None
if not _check_32(arch):
if arch not in (__grains__['osarch'], 'noarch'):
name += '.{0}'.format(arch)
if release:
pkg_version += '-{0}'.format(release)
return pkginfo(name, pkg_version, arch, repoid)
def _repoquery_pkginfo(repoquery_args):
'''
Wrapper to call repoquery and parse out all the tuples
'''
ret = []
for line in _repoquery(repoquery_args, ignore_stderr=True):
pkginfo = _parse_pkginfo(line)
pkginfo = salt.utils.pkg.rpm.parse_pkginfo(
line,
osarch=__grains__['osarch']
)
if pkginfo is not None:
ret.append(pkginfo)
return ret
@ -143,7 +99,9 @@ def _check_repoquery():
raise CommandExecutionError('Unable to install yum-utils')
def _repoquery(repoquery_args, query_format=__QUERYFORMAT, ignore_stderr=False):
def _repoquery(repoquery_args,
query_format=salt.utils.pkg.rpm.QUERYFORMAT,
ignore_stderr=False):
'''
Runs a repoquery command and returns a list of namedtuples
'''
@ -154,8 +112,8 @@ def _repoquery(repoquery_args, query_format=__QUERYFORMAT, ignore_stderr=False):
call = __salt__['cmd.run_all'](cmd, output_loglevel='trace')
if call['retcode'] != 0:
comment = ''
# when checking for packages some yum modules return data via
# stderr that don't cause non-zero return codes. A perfect
# When checking for packages some yum modules return data via
# stderr that don't cause non-zero return codes. A perfect
# example of this is when spacewalk is installed but not yet
# registered. We should ignore those when getting pkginfo.
if 'stderr' in call and not salt.utils.is_true(ignore_stderr):
@ -241,40 +199,6 @@ def _get_branch_option(**kwargs):
return branch_arg
def _check_32(arch):
'''
Returns True if both the OS arch and the passed arch are 32-bit
'''
return all(x in __ARCHES_32 for x in (__grains__['osarch'], arch))
def _rpm_pkginfo(name):
'''
Parses RPM metadata and returns a pkginfo namedtuple
'''
# REPOID is not a valid tag for the rpm command. Remove it and replace it
# with 'none'
queryformat = __QUERYFORMAT.replace('%{REPOID}', 'none')
output = __salt__['cmd.run_stdout'](
'rpm -qp --queryformat {0!r} {1}'.format(_cmd_quote(queryformat), name),
output_loglevel='trace',
ignore_retcode=True
)
return _parse_pkginfo(output)
def _rpm_installed(name):
'''
Parses RPM metadata to determine if the RPM target is already installed.
Returns the name of the installed package if found, otherwise None.
'''
pkg = _rpm_pkginfo(name)
try:
return pkg.name if pkg.name in list_pkgs() else None
except AttributeError:
return None
def _get_yum_config():
'''
Returns a dict representing the yum config options and values.
@ -391,11 +315,12 @@ def normalize_name(name):
'''
try:
arch = name.rsplit('.', 1)[-1]
if arch not in __ARCHES + ('noarch',):
if arch not in salt.utils.pkg.rpm.ARCHES + ('noarch',):
return name
except ValueError:
return name
if arch in (__grains__['osarch'], 'noarch') or _check_32(arch):
if arch in (__grains__['osarch'], 'noarch') \
or salt.utils.pkg.rpm.check_32(arch, osarch=__grains__['osarch']):
return name[:-(len(arch) + 1)]
return name
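The ``normalize_name()`` hunk strips a trailing architecture suffix only when it matches the OS arch or ``noarch`` (or is a 32-bit arch on a 32-bit OS). A simplified standalone sketch, with ``ARCHES`` and ``osarch`` as stand-ins for ``salt.utils.pkg.rpm.ARCHES`` and ``__grains__['osarch']``:

```python
# Simplified normalize_name(): drop the '.arch' suffix only when it is
# redundant with the OS architecture (32-bit check omitted for brevity).
ARCHES = ('x86_64', 'i686', 'noarch')  # stand-in for salt.utils.pkg.rpm.ARCHES
osarch = 'x86_64'                      # stand-in for __grains__['osarch']

def normalize(name):
    try:
        arch = name.rsplit('.', 1)[-1]
        if arch not in ARCHES:
            return name                # suffix is not an arch at all
    except ValueError:
        return name
    if arch in (osarch, 'noarch'):
        return name[:-(len(arch) + 1)]  # strip the redundant suffix
    return name                         # mismatched arch stays in the name

print(normalize('zsh.x86_64'))          # zsh
print(normalize('zsh'))                 # zsh
print(normalize('kernel-module.i686'))  # kernel-module.i686
```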
@ -449,7 +374,7 @@ def latest_version(*names, **kwargs):
ret[name] = ''
try:
arch = name.rsplit('.', 1)[-1]
if arch not in __ARCHES:
if arch not in salt.utils.pkg.rpm.ARCHES:
arch = __grains__['osarch']
except ValueError:
arch = __grains__['osarch']
@ -476,7 +401,7 @@ def latest_version(*names, **kwargs):
for name in names:
for pkg in (x for x in updates if x.name == name):
if pkg.arch == 'noarch' or pkg.arch == namearch_map[name] \
or _check_32(pkg.arch):
or salt.utils.pkg.rpm.check_32(pkg.arch):
ret[name] = pkg.version
# no need to check another match, if there was one
break
@ -926,14 +851,12 @@ def install(name=None,
architecture as an actual part of the name such as kernel modules
which match a specific kernel version.
.. code-block:: bash
salt -G role:nsd pkg.install gpfs.gplbin-2.6.32-279.31.1.el6.x86_64 normalize=False
.. versionadded:: 2014.7.0
Example:
.. code-block:: bash
salt -G role:nsd pkg.install gpfs.gplbin-2.6.32-279.31.1.el6.x86_64 normalize=False
Returns a dict containing the new package names and versions::
@ -976,36 +899,63 @@ def install(name=None,
else:
pkg_params_items = []
for pkg_source in pkg_params:
rpm_info = _rpm_pkginfo(pkg_source)
if rpm_info is not None:
pkg_params_items.append([rpm_info.name, rpm_info.version, pkg_source])
if 'lowpkg.bin_pkg_info' in __salt__:
rpm_info = __salt__['lowpkg.bin_pkg_info'](pkg_source)
else:
pkg_params_items.append([pkg_source, None, pkg_source])
rpm_info = None
if rpm_info is None:
log.error(
'pkg.install: Unable to get rpm information for {0}. '
'Version comparisons will be unavailable, and return '
'data may be inaccurate if reinstall=True.'
.format(pkg_source)
)
pkg_params_items.append([pkg_source])
else:
pkg_params_items.append(
[rpm_info['name'], pkg_source, rpm_info['version']]
)
for pkg_item_list in pkg_params_items:
pkgname = pkg_item_list[0]
version_num = pkg_item_list[1]
if version_num is None:
if reinstall and pkg_type == 'repository' and pkgname in old:
to_reinstall[pkgname] = pkgname
else:
targets.append(pkgname)
if pkg_type == 'repository':
pkgname, version_num = pkg_item_list
else:
cver = old.get(pkgname, '')
arch = ''
try:
namepart, archpart = pkgname.rsplit('.', 1)
pkgname, pkgpath, version_num = pkg_item_list
except ValueError:
pass
else:
if archpart in __ARCHES:
arch = '.' + archpart
pkgname = namepart
pkgname = None
pkgpath = pkg_item_list[0]
version_num = None
if version_num is None:
if pkg_type == 'repository':
if reinstall and pkgname in old:
to_reinstall[pkgname] = pkgname
else:
targets.append(pkgname)
else:
targets.append(pkgpath)
else:
# If we are installing a package file and not one from the repo,
# and version_num is not None, then we can assume that pkgname is
# not None, since the only way version_num is not None is if RPM
# metadata parsing was successful.
if pkg_type == 'repository':
arch = ''
try:
namepart, archpart = pkgname.rsplit('.', 1)
except ValueError:
pass
else:
if archpart in salt.utils.pkg.rpm.ARCHES:
arch = '.' + archpart
pkgname = namepart
pkgstr = '"{0}-{1}{2}"'.format(pkgname, version_num, arch)
else:
pkgstr = pkg_item_list[2]
pkgstr = pkgpath
cver = old.get(pkgname, '')
if reinstall and cver \
and salt.utils.compare_versions(ver1=version_num,
oper='==',
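The `name.arch` splitting used in the hunk above can be sketched in isolation. This is a hedged illustration: `ARCHES` below is a small hypothetical stand-in for `salt.utils.pkg.rpm.ARCHES`, and `split_arch` is not a real Salt function.

```python
# Hypothetical sketch of splitting a "name.arch" package string.
# ARCHES is an abbreviated stand-in for salt.utils.pkg.rpm.ARCHES.
ARCHES = ('x86_64', 'i686', 'noarch')

def split_arch(pkgname):
    '''Return (name, '.arch') if pkgname ends in a known arch, else (pkgname, '').'''
    try:
        namepart, archpart = pkgname.rsplit('.', 1)
    except ValueError:
        # No dot at all, so there is nothing to split off.
        return pkgname, ''
    if archpart in ARCHES:
        return namepart, '.' + archpart
    # Trailing component is not an arch (e.g. a version fragment).
    return pkgname, ''

print(split_arch('kernel.x86_64'))  # ('kernel', '.x86_64')
print(split_arch('zsh'))            # ('zsh', '')
```

Only a recognized architecture suffix is stripped, which is why a name like `gpfs.gplbin-2.6.32` survives intact.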
@ -1051,25 +1001,14 @@ def install(name=None,
__context__.pop('pkg.list_pkgs', None)
new = list_pkgs()
versionName = pkgname
ret = salt.utils.compare_dicts(old, new)
if sources is not None:
versionName = pkgname + '-' + new.get(pkgname, '')
if pkgname in ret:
ret[versionName] = ret.pop(pkgname)
for pkgname in to_reinstall:
if not versionName not in old:
ret.update({versionName: {'old': old.get(pkgname, ''),
if pkgname not in ret or pkgname in old:
ret.update({pkgname: {'old': old.get(pkgname, ''),
'new': new.get(pkgname, '')}})
else:
if versionName not in ret:
ret.update({versionName: {'old': old.get(pkgname, ''),
'new': new.get(pkgname, '')}})
if ret:
__context__.pop('pkg._avail', None)
elif sources is not None:
ret = {versionName: {}}
return ret


@ -90,7 +90,9 @@ def __virtual__():
else:
cmd = 'ls /sys/module/zfs'
if cmd and __salt__['cmd.retcode'](cmd, output_loglevel='quiet') == 0:
if cmd and __salt__['cmd.retcode'](
cmd, output_loglevel='quiet', ignore_retcode=True
) == 0:
# Build dynamic functions and allow loading module
_build_zfs_cmd_list()
return 'zfs'


@ -18,6 +18,7 @@ try:
ForceRetryError
)
import kazoo.recipe.lock
import kazoo.recipe.party
from kazoo.exceptions import CancelledError
from kazoo.exceptions import NoNodeError
@ -271,3 +272,26 @@ def unlock(path,
else:
logging.error('Unable to find lease for path {0}'.format(path))
return False
def party_members(path,
zk_hosts,
):
'''
Get the list of identifiers in a particular party
path
The path in zookeeper where the lock is
zk_hosts
zookeeper connect string
Example:
.. code-block:: bash
salt minion zk_concurrency.party_members /lock/path host1:1234,host2:1234
'''
zk = _get_zk_conn(zk_hosts)
party = kazoo.recipe.party.ShallowParty(zk, path)
return list(party)


@ -128,7 +128,7 @@ The token may be sent in one of two ways:
.. code-block:: bash
curl -sSk https://localhost:8000 \
curl -sSk https://localhost:8000 \
-H 'Accept: application/x-yaml' \
-H 'X-Auth-Token: 697adbdc8fe971d09ae4c2a3add7248859c87079'\
-d client=local \


@ -97,6 +97,7 @@ import sys
import salt.exceptions
import salt.ext.six as six
import salt.utils
HAS_VIRTUALENV = False
@ -177,14 +178,14 @@ def ext_pillar(minion_id, # pylint: disable=W0613
base_env = {}
proc = subprocess.Popen(['bash', '-c', 'env'], stdout=subprocess.PIPE)
for line in proc.stdout:
(key, _, value) = line.partition('=')
(key, _, value) = salt.utils.to_str(line).partition('=')
base_env[key] = value
command = ['bash', '-c', 'source {0} && env'.format(env_file)]
proc = subprocess.Popen(command, stdout=subprocess.PIPE)
for line in proc.stdout:
(key, _, value) = line.partition('=')
(key, _, value) = salt.utils.to_str(line).partition('=')
# only add a key if it is different or doesn't already exist
if key not in base_env or base_env[key] != value:
os.environ[key] = value.rstrip('\n')
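The parsing in this hunk relies on `str.partition` splitting only at the first `=`, so values that themselves contain `=` survive. A minimal, self-contained sketch (a plain `decode()` stands in for `salt.utils.to_str`, and the byte lines are made up for illustration):

```python
# subprocess pipes yield bytes under Python 3, so each line is decoded
# before being split on the FIRST '=' only.
lines = [b'PATH=/usr/bin\n', b'EMPTY=\n', b'WITH=EQ=inside\n']

env = {}
for line in lines:
    key, _, value = line.decode().partition('=')
    env[key] = value.rstrip('\n')

print(env['WITH'])  # 'EQ=inside' - everything after the first '=' is kept
```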


@ -5,160 +5,34 @@ Retrieve Pillar data by doing a MySQL query
MariaDB provides Python support through the MySQL Python package.
Therefore, you may use this module with either MySQL or MariaDB.
This module is a concrete implementation of the sql_base ext_pillar for MySQL.
:maturity: new
:depends: python-mysqldb
:platform: all
Theory of mysql ext_pillar
Legacy compatibility
=====================================
Ok, here's the theory for how this works...
This module has an extra addition for backward compatibility.
- If there's a keyword arg of mysql_query, that'll go first.
- Then any non-keyword args are processed in order.
- Finally, remaining keywords are processed.
If there is a mysql_query keyword argument, it is processed first, before other args.
This legacy compatibility translates to depth 1.
We do this so that it's backward compatible with older configs.
Keyword arguments are sorted before being appended, so that they're predictable,
but they will always be applied last so overall it's moot.
For each of those items we process, it depends on the object type:
- Strings are executed as is and the pillar depth is determined by the number
of fields returned.
- A list has the first entry used as the query, the second as the pillar depth.
- A mapping uses the keys "query" and "depth" as the tuple
You can retrieve as many fields as you like, how they get used depends on the
exact settings.
This is deprecated and slated to be removed in Boron.
Configuring the mysql ext_pillar
=====================================
First an example of how legacy queries were specified.
.. code-block:: yaml
ext_pillar:
- mysql:
mysql_query: "SELECT pillar,value FROM pillars WHERE minion_id = %s"
Alternatively, a list of queries can be passed in
.. code-block:: yaml
ext_pillar:
- mysql:
- "SELECT pillar,value FROM pillars WHERE minion_id = %s"
- "SELECT pillar,value FROM more_pillars WHERE minion_id = %s"
Or you can pass in a mapping
.. code-block:: yaml
ext_pillar:
- mysql:
main: "SELECT pillar,value FROM pillars WHERE minion_id = %s"
extras: "SELECT pillar,value FROM more_pillars WHERE minion_id = %s"
The query can be provided as a string, as we have just shown, but it can
also be provided as a list
.. code-block:: yaml
ext_pillar:
- mysql:
- "SELECT pillar,value FROM pillars WHERE minion_id = %s"
2
Or as a mapping
.. code-block:: yaml
ext_pillar:
- mysql:
- query: "SELECT pillar,value FROM pillars WHERE minion_id = %s"
depth: 2
The depth defines how the dicts are constructed.
Essentially if you query for fields a,b,c,d for each row you'll get:
- With depth 1: {a: {"b": b, "c": c, "d": d}}
- With depth 2: {a: {b: {"c": c, "d": d}}}
- With depth 3: {a: {b: {c: d}}}
Depth greater than 3 wouldn't be different from 3 itself.
Depth of 0 translates to the largest depth needed, so 3 in this case.
(max depth == key count - 1)
The legacy compatibility translates to depth 1.
Then they are merged in a similar way to plain pillar data, in the order
returned by MySQL.
Thus subsequent results overwrite previous ones when they collide.
The ignore_null option can be used to change the overwrite behavior so that
only non-NULL values in subsequent results will overwrite. This can be used
to selectively overwrite default values.
.. code-block:: yaml
ext_pillar:
- mysql:
- query: "SELECT pillar,value FROM pillars WHERE minion_id = 'default' and minion_id != %s"
depth: 2
- query: "SELECT pillar,value FROM pillars WHERE minion_id = %s"
depth: 2
ignore_null: True
If you specify `as_list: True` in the mapping expression it will convert
collisions to lists.
If you specify `with_lists: '...'` in the mapping expression it will
convert the specified depths to lists. The string provided is a
comma-separated sequence of numbers. The string '1,3' will result in::
a,b,c,d,e,1 # field 1 same, field 3 differs
a,b,c,f,g,2 # ^^^^
a,z,h,y,j,3 # field 1 same, field 3 same
a,z,h,y,k,4 # ^^^^
^ ^
These columns define list grouping
.. code-block:: python
{a: [
{c: [
{e: 1},
{g: 2}
]
},
{h: [
{j: 3, k: 4 }
]
}
]}
The range for with_lists is 1 to number_of_fields, inclusive.
Numbers outside this range are ignored.
Finally, if you pass the queries in via a mapping, the key will be the
first-level name, whereas passing them in as a list will place them in the
root. This isolates the query results into their own subtrees.
This may be a help or hindrance to your aims and can be used as such.
You can basically use any SELECT query that gets you the information, you
could even do joins or subqueries in case your minion_id is stored elsewhere.
It is capable of handling single rows or multiple rows per minion.
Use the 'mysql' key under ext_pillar for configuration of queries.
MySQL configuration of the MySQL returner is being used (mysql.db, mysql.user,
mysql.pass, mysql.port, mysql.host)
mysql.pass, mysql.port, mysql.host) for database connection info.
Required python modules: MySQLdb
More complete example
Complete example
=====================================
.. code-block:: yaml
@ -180,18 +54,13 @@ More complete example
'''
from __future__ import absolute_import
# Please don't strip redundant parentheses from this file.
# I have added some for clarity.
# tests/unit/pillar/mysql_test.py may help understand this code.
# Import python libs
from contextlib import contextmanager
import logging
# Import Salt libs
from salt.utils.odict import OrderedDict
from salt.ext.six.moves import range
import salt.utils
from salt.pillar.sql_base import SqlBaseExtPillar
# Set up logging
log = logging.getLogger(__name__)
@ -210,289 +79,74 @@ def __virtual__():
return True
def _get_options():
class MySQLExtPillar(SqlBaseExtPillar):
'''
Returns options used for the MySQL connection.
This class receives and processes the database rows from MySQL.
'''
defaults = {'host': 'localhost',
'user': 'salt',
'pass': 'salt',
'db': 'salt',
'port': 3306}
_options = {}
_opts = __opts__.get('mysql', {})
for attr in defaults:
if attr not in _opts:
log.debug('Using default for MySQL {0}'.format(attr))
_options[attr] = defaults[attr]
continue
_options[attr] = _opts[attr]
return _options
@classmethod
def _db_name(cls):
return 'MySQL'
def _get_options(self):
'''
Returns options used for the MySQL connection.
'''
defaults = {'host': 'localhost',
'user': 'salt',
'pass': 'salt',
'db': 'salt',
'port': 3306}
_options = {}
_opts = __opts__.get('mysql', {})
for attr in defaults:
if attr not in _opts:
log.debug('Using default for MySQL {0}'.format(attr))
_options[attr] = defaults[attr]
continue
_options[attr] = _opts[attr]
return _options
@contextmanager
def _get_serv():
'''
Return a mysql cursor
'''
_options = _get_options()
conn = MySQLdb.connect(host=_options['host'],
user=_options['user'],
passwd=_options['pass'],
db=_options['db'], port=_options['port'])
cursor = conn.cursor()
try:
yield cursor
except MySQLdb.DatabaseError as err:
log.exception('Error in ext_pillar MySQL: {0}'.format(err.args))
finally:
conn.close()
class Merger(object):
'''
This class receives and processes the database rows in a database
agnostic way.
'''
result = None
focus = None
field_names = None
num_fields = 0
depth = 0
as_list = False
with_lists = None
ignore_null = False
def __init__(self):
self.result = self.focus = {}
@contextmanager
def _get_cursor(self):
'''
Yield a MySQL cursor
'''
_options = self._get_options()
conn = MySQLdb.connect(host=_options['host'],
user=_options['user'],
passwd=_options['pass'],
db=_options['db'], port=_options['port'])
cursor = conn.cursor()
try:
yield cursor
except MySQLdb.DatabaseError as err:
log.exception('Error in ext_pillar MySQL: {0}'.format(err.args))
finally:
conn.close()
def extract_queries(self, args, kwargs):
'''
This function normalizes the config block into a set of queries we
can use. The return is a list of consistently laid out dicts.
'''
# Please note the function signature is NOT an error. Neither args, nor
# kwargs should have asterisks. We are passing in a list and dict,
# rather than receiving variable args. Adding asterisks WILL BREAK the
# function completely.
# First, this is the query buffer. Contains lists of [base,sql]
qbuffer = []
# Is there an old style mysql_query?
# Handle legacy query specification
if 'mysql_query' in kwargs:
qbuffer.append([None, kwargs.pop('mysql_query')])
salt.utils.warn_until(
'Boron',
'The legacy mysql_query configuration parameter is deprecated. '
'See the docs for the new style of configuration. '
'This functionality will be removed in Salt Boron.'
)
args.insert(0, kwargs.pop('mysql_query'))
# Add on the non-keywords...
qbuffer.extend([[None, s] for s in args])
# And then the keywords...
# They aren't in definition order, but they can't conflict each other.
klist = list(kwargs.keys())
klist.sort()
qbuffer.extend([[k, kwargs[k]] for k in klist])
# Filter out values that don't have queries.
qbuffer = [x for x in qbuffer if (
(isinstance(x[1], str) and len(x[1]))
or
(isinstance(x[1], (list, tuple)) and (len(x[1]) > 0) and x[1][0])
or
(isinstance(x[1], dict) and 'query' in x[1] and len(x[1]['query']))
)]
# Next, turn the whole buffer into full dicts.
for qb in qbuffer:
defaults = {'query': '',
'depth': 0,
'as_list': False,
'with_lists': None,
'ignore_null': False
}
if isinstance(qb[1], str):
defaults['query'] = qb[1]
elif isinstance(qb[1], (list, tuple)):
defaults['query'] = qb[1][0]
if len(qb[1]) > 1:
defaults['depth'] = qb[1][1]
# May set 'as_list' from qb[1][2].
else:
defaults.update(qb[1])
if defaults['with_lists']:
defaults['with_lists'] = [
int(i) for i in defaults['with_lists'].split(',')
]
qb[1] = defaults
return qbuffer
def enter_root(self, root):
'''
Set self.focus for kwarg queries
'''
# There is no collision protection on root name isolation
if root:
self.result[root] = self.focus = {}
else:
self.focus = self.result
def process_fields(self, field_names, depth):
'''
The primary purpose of this function is to store the sql field list
and the depth to which we process.
'''
# List of field names in correct order.
self.field_names = field_names
# number of fields.
self.num_fields = len(field_names)
# Constrain depth.
if (depth == 0) or (depth >= self.num_fields):
self.depth = self.num_fields - 1
else:
self.depth = depth
def process_results(self, rows):
'''
This function takes a list of database results and iterates over them,
merging the rows into a dict form.
'''
listify = OrderedDict()
listify_dicts = OrderedDict()
for ret in rows:
# crd is the Current Return Data level, to make this non-recursive.
crd = self.focus
# Walk and create dicts above the final layer
for i in range(0, self.depth-1):
# At the end we'll use listify to find values to make a list of
if i+1 in self.with_lists:
if id(crd) not in listify:
listify[id(crd)] = []
listify_dicts[id(crd)] = crd
if ret[i] not in listify[id(crd)]:
listify[id(crd)].append(ret[i])
if ret[i] not in crd:
# Key missing
crd[ret[i]] = {}
crd = crd[ret[i]]
else:
# Check type of collision
ty = type(crd[ret[i]])
if ty is list:
# Already made list
temp = {}
crd[ret[i]].append(temp)
crd = temp
elif ty is not dict:
# Not a list, not a dict
if self.as_list:
# Make list
temp = {}
crd[ret[i]] = [crd[ret[i]], temp]
crd = temp
else:
# Overwrite
crd[ret[i]] = {}
crd = crd[ret[i]]
else:
# dict, descend.
crd = crd[ret[i]]
# If this test is true, the penultimate field is the key
if self.depth == self.num_fields - 1:
nk = self.num_fields-2 # Aka, self.depth-1
# Should we and will we have a list at the end?
if ((self.as_list and (ret[nk] in crd)) or
(nk+1 in self.with_lists)):
if ret[nk] in crd:
if not isinstance(crd[ret[nk]], list):
crd[ret[nk]] = [crd[ret[nk]]]
# if it's already a list, do nothing
else:
crd[ret[nk]] = []
crd[ret[nk]].append(ret[self.num_fields-1])
else:
if not self.ignore_null or ret[self.num_fields-1] is not None:
crd[ret[nk]] = ret[self.num_fields-1]
else:
# Otherwise, the field name is the key but we have a spare.
# The spare results because of {c: d} vs {c: {"d": d, "e": e }}
# So, make that last dict
if ret[self.depth-1] not in crd:
crd[ret[self.depth-1]] = {}
# This bit doesn't escape listify
if self.depth in self.with_lists:
if id(crd) not in listify:
listify[id(crd)] = []
listify_dicts[id(crd)] = crd
if ret[self.depth-1] not in listify[id(crd)]:
listify[id(crd)].append(ret[self.depth-1])
crd = crd[ret[self.depth-1]]
# Now for the remaining keys, we put them into the dict
for i in range(self.depth, self.num_fields):
nk = self.field_names[i]
# Listify
if i+1 in self.with_lists:
if id(crd) not in listify:
listify[id(crd)] = []
listify_dicts[id(crd)] = crd
if nk not in listify[id(crd)]:
listify[id(crd)].append(nk)
# Collision detection
if self.as_list and (nk in crd):
# Same as before...
if isinstance(crd[nk], list):
crd[nk].append(ret[i])
else:
crd[nk] = [crd[nk], ret[i]]
else:
if not self.ignore_null or ret[i] is not None:
crd[nk] = ret[i]
# Get key list and work backwards. This is inner-out processing
ks = list(listify_dicts.keys())
ks.reverse()
for i in ks:
d = listify_dicts[i]
for k in listify[i]:
if isinstance(d[k], dict):
d[k] = list(d[k].values())
elif isinstance(d[k], list):
d[k] = [d[k]]
return super(MySQLExtPillar, self).extract_queries(args, kwargs)
def ext_pillar(minion_id,
pillar, # pylint: disable=W0613
pillar,
*args,
**kwargs):
'''
Execute queries, merge and return as a dict
Execute queries against MySQL, merge and return as a dict
'''
log.info('Querying MySQL for information for {0}'.format(minion_id, ))
#
# log.debug('ext_pillar MySQL args: {0}'.format(args))
# log.debug('ext_pillar MySQL kwargs: {0}'.format(kwargs))
#
# Most of the heavy lifting is in this class for ease of testing.
return_data = Merger()
qbuffer = return_data.extract_queries(args, kwargs)
with _get_serv() as cur:
for root, details in qbuffer:
# Run the query
cur.execute(details['query'], (minion_id,))
# Extract the field names MySQL has returned and process them
# All heavy lifting is done in the Merger class to decouple the
# logic from MySQL. Makes it easier to test.
return_data.process_fields([row[0] for row in cur.description],
details['depth'])
return_data.enter_root(root)
return_data.as_list = details['as_list']
if details['with_lists']:
return_data.with_lists = details['with_lists']
else:
return_data.with_lists = []
return_data.ignore_null = details['ignore_null']
return_data.process_results(cur.fetchall())
log.debug('ext_pillar MySQL: Return data: {0}'.format(
return_data))
return return_data.result
return MySQLExtPillar().fetch(minion_id, pillar, *args, **kwargs)

salt/pillar/sql_base.py Normal file

@ -0,0 +1,456 @@
# -*- coding: utf-8 -*-
'''
Retrieve Pillar data by doing a SQL query
This module is not meant to be used directly as an ext_pillar.
It is a place to put code common to PEP 249 compliant SQL database adapters.
It exposes a Python ABC that can be subclassed for new database providers.
:maturity: new
:platform: all
Theory of sql_base ext_pillar
=====================================
Ok, here's the theory for how this works...
- First, any non-keyword args are processed in order.
- Then, remaining keywords are processed.
We do this so that it's backward compatible with older configs.
Keyword arguments are sorted before being appended, so that they're predictable,
but they will always be applied last so overall it's moot.
For each of those items we process, it depends on the object type:
- Strings are executed as is and the pillar depth is determined by the number
of fields returned.
- A list has the first entry used as the query, the second as the pillar depth.
- A mapping uses the keys "query" and "depth" as the tuple
You can retrieve as many fields as you like, how they get used depends on the
exact settings.
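A simplified, hypothetical sketch of that normalization (the real logic lives in `SqlBaseExtPillar.extract_queries`): each of the three config shapes collapses into one dict with defaulted settings.

```python
# Simplified stand-in for the normalization step: a bare string, a
# [query, depth] list, or a {'query': ..., 'depth': ...} mapping all
# become the same dict shape.
def normalize(entry):
    defaults = {'query': '', 'depth': 0, 'as_list': False,
                'with_lists': None, 'ignore_null': False}
    if isinstance(entry, str):
        defaults['query'] = entry
    elif isinstance(entry, (list, tuple)):
        defaults['query'] = entry[0]
        if len(entry) > 1:
            defaults['depth'] = entry[1]
    else:
        defaults.update(entry)
    return defaults

print(normalize('SELECT 1')['depth'])                      # 0
print(normalize(['SELECT 1', 2])['depth'])                 # 2
print(normalize({'query': 'SELECT 1', 'depth': 3})['depth'])  # 3
```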
Configuring a sql_base ext_pillar
=====================================
The sql_base ext_pillar cannot be used directly, but shares query configuration
with its implementations. These examples use a fake 'sql_base' adapter, which
should be replaced with the name of the adapter you are using.
A list of queries can be passed in
.. code-block:: yaml
ext_pillar:
- sql_base:
- "SELECT pillar,value FROM pillars WHERE minion_id = %s"
- "SELECT pillar,value FROM more_pillars WHERE minion_id = %s"
Or you can pass in a mapping
.. code-block:: yaml
ext_pillar:
- sql_base:
main: "SELECT pillar,value FROM pillars WHERE minion_id = %s"
extras: "SELECT pillar,value FROM more_pillars WHERE minion_id = %s"
The query can be provided as a string, as we have just shown, but it can
also be provided as a list
.. code-block:: yaml
ext_pillar:
- sql_base:
- "SELECT pillar,value FROM pillars WHERE minion_id = %s"
2
Or as a mapping
.. code-block:: yaml
ext_pillar:
- sql_base:
- query: "SELECT pillar,value FROM pillars WHERE minion_id = %s"
depth: 2
The depth defines how the dicts are constructed.
Essentially if you query for fields a,b,c,d for each row you'll get:
- With depth 1: {a: {"b": b, "c": c, "d": d}}
- With depth 2: {a: {b: {"c": c, "d": d}}}
- With depth 3: {a: {b: {c: d}}}
Depth greater than 3 wouldn't be different from 3 itself.
Depth of 0 translates to the largest depth needed, so 3 in this case.
(max depth == key count - 1)
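As a rough illustration of those depth rules, the sketch below nests a single row of fields to a given depth. It is an assumption-laden toy, not the merger's real logic (no collisions, no as_list, no with_lists):

```python
# Toy sketch: nest one row of fields to a given depth.
# Depth 0 means "largest depth needed", i.e. len(row) - 1.
def nest(fields, row, depth):
    if depth == 0 or depth >= len(row):
        depth = len(row) - 1
    node = result = {}
    # Walk and create dicts above the final layer.
    for key in row[:depth - 1]:
        node = node.setdefault(key, {})
    if depth == len(row) - 1:
        # Penultimate value becomes the key of the last value.
        node[row[depth - 1]] = row[depth]
    else:
        # Remaining columns become a field-name -> value mapping.
        node[row[depth - 1]] = dict(zip(fields[depth:], row[depth:]))
    return result

row = ('a', 'b', 'c', 'd')
print(nest(['f1', 'f2', 'f3', 'f4'], row, 1))  # {'a': {'f2': 'b', 'f3': 'c', 'f4': 'd'}}
print(nest(['f1', 'f2', 'f3', 'f4'], row, 2))  # {'a': {'b': {'f3': 'c', 'f4': 'd'}}}
print(nest(['f1', 'f2', 'f3', 'f4'], row, 3))  # {'a': {'b': {'c': 'd'}}}
```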
Then they are merged in a similar way to plain pillar data, in the order
returned by the SQL database.
Thus subsequent results overwrite previous ones when they collide.
The ignore_null option can be used to change the overwrite behavior so that
only non-NULL values in subsequent results will overwrite. This can be used
to selectively overwrite default values.
.. code-block:: yaml
ext_pillar:
- sql_base:
- query: "SELECT pillar,value FROM pillars WHERE minion_id = 'default' and minion_id != %s"
depth: 2
- query: "SELECT pillar,value FROM pillars WHERE minion_id = %s"
depth: 2
ignore_null: True
If you specify `as_list: True` in the mapping expression it will convert
collisions to lists.
If you specify `with_lists: '...'` in the mapping expression it will
convert the specified depths to lists. The string provided is a
comma-separated sequence of numbers. The string '1,3' will result in::
a,b,c,d,e,1 # field 1 same, field 3 differs
a,b,c,f,g,2 # ^^^^
a,z,h,y,j,3 # field 1 same, field 3 same
a,z,h,y,k,4 # ^^^^
^ ^
These columns define list grouping
.. code-block:: python
{a: [
{c: [
{e: 1},
{g: 2}
]
},
{h: [
{j: 3, k: 4 }
]
}
]}
The range for with_lists is 1 to number_of_fields, inclusive.
Numbers outside this range are ignored.
Finally, if you pass the queries in via a mapping, the key will be the
first-level name, whereas passing them in as a list will place them in the
root. This isolates the query results into their own subtrees.
This may be a help or hindrance to your aims and can be used as such.
You can basically use any SELECT query that gets you the information, you
could even do joins or subqueries in case your minion_id is stored elsewhere.
It is capable of handling single rows or multiple rows per minion.
Configuration of the connection depends on the adapter in use.
More complete example for MySQL (to also show configuration)
=============================================================
.. code-block:: yaml
mysql:
user: 'salt'
pass: 'super_secret_password'
db: 'salt_db'
ext_pillar:
- mysql:
fromdb:
query: 'SELECT col1,col2,col3,col4,col5,col6,col7
FROM some_random_table
WHERE minion_pattern LIKE %s'
depth: 5
as_list: True
with_lists: [1,3]
'''
from __future__ import absolute_import
# Please don't strip redundant parentheses from this file.
# I have added some for clarity.
# tests/unit/pillar/mysql_test.py may help understand this code.
# Import python libs
import logging
import abc # Added in python2.6 so always available
# Import Salt libs
from salt.utils.odict import OrderedDict
from salt.ext.six.moves import range
from salt.ext import six
# Set up logging
log = logging.getLogger(__name__)
# This ext_pillar is abstract and cannot be used directly
def __virtual__():
return False
class SqlBaseExtPillar(six.with_metaclass(abc.ABCMeta, object)):
'''
This class receives and processes the database rows in a database
agnostic way.
'''
result = None
focus = None
field_names = None
num_fields = 0
depth = 0
as_list = False
with_lists = None
ignore_null = False
def __init__(self):
self.result = self.focus = {}
@classmethod
@abc.abstractmethod
def _db_name(cls):
'''
Return a friendly name for the database, e.g. 'MySQL' or 'SQLite'.
Used in logging output.
'''
pass
@abc.abstractmethod
def _get_cursor(self):
'''
Yield a PEP 249 compliant Cursor as a context manager.
'''
pass
def extract_queries(self, args, kwargs):
'''
This function normalizes the config block into a set of queries we
can use. The return is a list of consistently laid out dicts.
'''
# Please note the function signature is NOT an error. Neither args, nor
# kwargs should have asterisks. We are passing in a list and dict,
# rather than receiving variable args. Adding asterisks WILL BREAK the
# function completely.
# First, this is the query buffer. Contains lists of [base,sql]
qbuffer = []
# Add on the non-keywords...
qbuffer.extend([[None, s] for s in args])
# And then the keywords...
# They aren't in definition order, but they can't conflict each other.
klist = list(kwargs.keys())
klist.sort()
qbuffer.extend([[k, kwargs[k]] for k in klist])
# Filter out values that don't have queries.
qbuffer = [x for x in qbuffer if (
(isinstance(x[1], str) and len(x[1]))
or
(isinstance(x[1], (list, tuple)) and (len(x[1]) > 0) and x[1][0])
or
(isinstance(x[1], dict) and 'query' in x[1] and len(x[1]['query']))
)]
# Next, turn the whole buffer into full dicts.
for qb in qbuffer:
defaults = {'query': '',
'depth': 0,
'as_list': False,
'with_lists': None,
'ignore_null': False
}
if isinstance(qb[1], str):
defaults['query'] = qb[1]
elif isinstance(qb[1], (list, tuple)):
defaults['query'] = qb[1][0]
if len(qb[1]) > 1:
defaults['depth'] = qb[1][1]
# May set 'as_list' from qb[1][2].
else:
defaults.update(qb[1])
if defaults['with_lists']:
defaults['with_lists'] = [
int(i) for i in defaults['with_lists'].split(',')
]
qb[1] = defaults
return qbuffer
def enter_root(self, root):
'''
Set self.focus for kwarg queries
'''
# There is no collision protection on root name isolation
if root:
self.result[root] = self.focus = {}
else:
self.focus = self.result
def process_fields(self, field_names, depth):
'''
The primary purpose of this function is to store the sql field list
and the depth to which we process.
'''
# List of field names in correct order.
self.field_names = field_names
# number of fields.
self.num_fields = len(field_names)
# Constrain depth.
if (depth == 0) or (depth >= self.num_fields):
self.depth = self.num_fields - 1
else:
self.depth = depth
def process_results(self, rows):
'''
This function takes a list of database results and iterates over them,
merging the rows into a dict form.
'''
listify = OrderedDict()
listify_dicts = OrderedDict()
for ret in rows:
# crd is the Current Return Data level, to make this non-recursive.
crd = self.focus
# Walk and create dicts above the final layer
for i in range(0, self.depth-1):
# At the end we'll use listify to find values to make a list of
if i+1 in self.with_lists:
if id(crd) not in listify:
listify[id(crd)] = []
listify_dicts[id(crd)] = crd
if ret[i] not in listify[id(crd)]:
listify[id(crd)].append(ret[i])
if ret[i] not in crd:
# Key missing
crd[ret[i]] = {}
crd = crd[ret[i]]
else:
# Check type of collision
ty = type(crd[ret[i]])
if ty is list:
# Already made list
temp = {}
crd[ret[i]].append(temp)
crd = temp
elif ty is not dict:
# Not a list, not a dict
if self.as_list:
# Make list
temp = {}
crd[ret[i]] = [crd[ret[i]], temp]
crd = temp
else:
# Overwrite
crd[ret[i]] = {}
crd = crd[ret[i]]
else:
# dict, descend.
crd = crd[ret[i]]
# If this test is true, the penultimate field is the key
if self.depth == self.num_fields - 1:
nk = self.num_fields-2 # Aka, self.depth-1
# Should we and will we have a list at the end?
if ((self.as_list and (ret[nk] in crd)) or
(nk+1 in self.with_lists)):
if ret[nk] in crd:
if not isinstance(crd[ret[nk]], list):
crd[ret[nk]] = [crd[ret[nk]]]
# if it's already a list, do nothing
else:
crd[ret[nk]] = []
crd[ret[nk]].append(ret[self.num_fields-1])
else:
if not self.ignore_null or ret[self.num_fields-1] is not None:
crd[ret[nk]] = ret[self.num_fields-1]
else:
# Otherwise, the field name is the key but we have a spare.
# The spare results because of {c: d} vs {c: {"d": d, "e": e }}
# So, make that last dict
if ret[self.depth-1] not in crd:
crd[ret[self.depth-1]] = {}
# This bit doesn't escape listify
if self.depth in self.with_lists:
if id(crd) not in listify:
listify[id(crd)] = []
listify_dicts[id(crd)] = crd
if ret[self.depth-1] not in listify[id(crd)]:
listify[id(crd)].append(ret[self.depth-1])
crd = crd[ret[self.depth-1]]
# Now for the remaining keys, we put them into the dict
for i in range(self.depth, self.num_fields):
nk = self.field_names[i]
# Listify
if i+1 in self.with_lists:
if id(crd) not in listify:
listify[id(crd)] = []
listify_dicts[id(crd)] = crd
if nk not in listify[id(crd)]:
listify[id(crd)].append(nk)
# Collision detection
if self.as_list and (nk in crd):
# Same as before...
if isinstance(crd[nk], list):
crd[nk].append(ret[i])
else:
crd[nk] = [crd[nk], ret[i]]
else:
if not self.ignore_null or ret[i] is not None:
crd[nk] = ret[i]
# Get key list and work backwards. This is inner-out processing
ks = list(listify_dicts.keys())
ks.reverse()
for i in ks:
d = listify_dicts[i]
for k in listify[i]:
if isinstance(d[k], dict):
d[k] = list(d[k].values())
elif isinstance(d[k], list):
d[k] = [d[k]]
def fetch(self,
minion_id,
pillar, # pylint: disable=W0613
*args,
**kwargs):
'''
Execute queries, merge and return as a dict.
'''
db_name = self._db_name()
log.info('Querying {0} for information for {1}'.format(db_name, minion_id))
#
# log.debug('ext_pillar {0} args: {1}'.format(db_name, args))
# log.debug('ext_pillar {0} kwargs: {1}'.format(db_name, kwargs))
#
# Most of the heavy lifting is in this class for ease of testing.
qbuffer = self.extract_queries(args, kwargs)
with self._get_cursor() as cursor:
for root, details in qbuffer:
# Run the query
cursor.execute(details['query'], (minion_id,))
# Extract the field names the db has returned and process them
self.process_fields([row[0] for row in cursor.description],
details['depth'])
self.enter_root(root)
self.as_list = details['as_list']
if details['with_lists']:
self.with_lists = details['with_lists']
else:
self.with_lists = []
self.ignore_null = details['ignore_null']
self.process_results(cursor.fetchall())
log.debug('ext_pillar {0}: Return data: {1}'.format(db_name, self))
return self.result
# To extend this module you must define a top level ext_pillar procedure
# See mysql.py for an example

salt/pillar/sqlite3.py Normal file

@ -0,0 +1,114 @@
# -*- coding: utf-8 -*-
'''
Retrieve Pillar data by doing a SQLite3 query
The sqlite3 module has been included in the Python standard library since Python 2.5.
This module is a concrete implementation of the sql_base ext_pillar for SQLite3.
:maturity: new
:platform: all
Configuring the sqlite3 ext_pillar
=====================================
Use the 'sqlite3' key under ext_pillar for configuration of queries.
SQLite3 database connection configuration requires the following values
configured in the master config:
Note, timeout is in seconds.
.. code-block:: yaml
pillar.sqlite3.database: /var/lib/salt/pillar.db
pillar.sqlite3.timeout: 5.0
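As a hedged sketch, those two values map directly onto `sqlite3.connect`; note that stdlib sqlite3 uses `?` placeholders rather than the `%s` style shown in the MySQL examples. An in-memory database stands in for the real pillar.db path so the example is self-contained:

```python
import sqlite3

# ':memory:' stands in for /var/lib/salt/pillar.db in this sketch;
# timeout is how many seconds to wait on a locked database.
options = {'database': ':memory:', 'timeout': 5.0}
conn = sqlite3.connect(options['database'], timeout=float(options['timeout']))
cursor = conn.cursor()
# sqlite3 uses qmark (?) parameter style.
cursor.execute('SELECT ?, ?', ('pillar_key', 'pillar_value'))
row = cursor.fetchone()
conn.close()
print(row)  # ('pillar_key', 'pillar_value')
```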
Complete example
=====================================
.. code-block:: yaml
pillar:
sqlite3:
database: '/var/lib/salt/pillar.db'
timeout: 5.0
ext_pillar:
- sqlite3:
fromdb:
query: 'SELECT col1,col2,col3,col4,col5,col6,col7
FROM some_random_table
WHERE minion_pattern LIKE %s'
depth: 5
as_list: True
with_lists: [1,3]
'''
from __future__ import absolute_import
# Import python libs
from contextlib import contextmanager
import logging
import sqlite3
# Import Salt libs
from salt.pillar.sql_base import SqlBaseExtPillar
# Set up logging
log = logging.getLogger(__name__)
def __virtual__():
return True
class SQLite3ExtPillar(SqlBaseExtPillar):
'''
This class receives and processes the database rows from SQLite3.
'''
@classmethod
def _db_name(cls):
return 'SQLite3'
def _get_options(self):
'''
Returns options used for the SQLite3 connection.
'''
defaults = {'database': '/var/lib/salt/pillar.db',
'timeout': 5.0}
_options = {}
_opts = __opts__.get('pillar', {}).get('sqlite3', {})
for attr in defaults:
if attr not in _opts:
log.debug('Using default for SQLite3 pillar {0}'.format(attr))
_options[attr] = defaults[attr]
continue
_options[attr] = _opts[attr]
return _options
@contextmanager
def _get_cursor(self):
'''
Yield a SQLite3 cursor
'''
_options = self._get_options()
conn = sqlite3.connect(_options.get('database'),
timeout=float(_options.get('timeout')))
cursor = conn.cursor()
try:
yield cursor
except sqlite3.Error as err:
log.exception('Error in ext_pillar SQLite3: {0}'.format(err.args))
finally:
conn.close()
def ext_pillar(minion_id,
pillar,
*args,
**kwargs):
'''
Execute queries against SQLite3, merge and return as a dict
'''
return SQLite3ExtPillar().fetch(minion_id, pillar, *args, **kwargs)
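The connect/yield/close pattern used by ``_get_cursor`` can be exercised outside of Salt with a small stand-in. The helper name, table, and columns below are illustrative only, not part of the module — a minimal sketch, not the real pillar flow:

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def pillar_cursor(database=':memory:', timeout=5.0):
    # Mirrors SQLite3ExtPillar._get_cursor: connect, yield a cursor,
    # and always close the connection, even on error.
    conn = sqlite3.connect(database, timeout=float(timeout))
    try:
        yield conn.cursor()
    finally:
        conn.close()

# Hypothetical schema; the real query is whatever the master config defines.
with pillar_cursor() as cur:
    cur.execute('CREATE TABLE pillar_kv (minion TEXT, key TEXT, value TEXT)')
    cur.execute("INSERT INTO pillar_kv VALUES ('web01', 'role', 'webserver')")
    cur.execute('SELECT key, value FROM pillar_kv WHERE minion = ?', ('web01',))
    rows = cur.fetchall()
```

Note that the ``finally`` clause guarantees the connection is closed even if the query raises, which is exactly what the pillar's context manager relies on.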


@@ -63,15 +63,8 @@ from __future__ import absolute_import
# Import Python libs
import pprint
import logging
import urllib
# Import 3rd-party libs
try:
import requests
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
from requests.exceptions import ConnectionError
# pylint: disable=import-error,no-name-in-module,redefined-builtin
from salt.ext.six.moves.urllib.parse import urljoin as _urljoin # pylint: disable=import-error,no-name-in-module
import salt.ext.six.moves.http_client
@@ -123,13 +116,15 @@ def __virtual__():
:return: The virtual name of the module.
'''
if not HAS_REQUESTS:
return False
return __virtualname__
def _query(function, api_key=None, method='GET', data=None):
def _query(function,
api_key=None,
args=None,
method='GET',
header_dict=None,
data=None):
'''
Slack object method function to construct and execute on the API URL.
@@ -139,12 +134,8 @@ def _query(function, api_key=None, method='GET', data=None):
:param data: The data to be sent for POST method.
:return: The json response from the API call or False.
'''
headers = {}
query_params = {}
if data is None:
data = {}
ret = {'message': '',
'res': True}
@@ -178,43 +169,50 @@ def _query(function, api_key=None, method='GET', data=None):
base_url = _urljoin(api_url, '/api/')
path = slack_functions.get(function).get('request')
url = _urljoin(base_url, path, False)
if not isinstance(args, dict):
query_params = {}
query_params['token'] = api_key
try:
result = requests.request(
method=method,
url=url,
headers=headers,
params=query_params,
data=data,
verify=True,
)
except ConnectionError as e:
ret['message'] = e
ret['res'] = False
return ret
if header_dict is None:
header_dict = {}
if result.status_code == salt.ext.six.moves.http_client.OK:
result = result.json()
if method != 'POST':
header_dict['Accept'] = 'application/json'
result = salt.utils.http.query(
url,
method,
params=query_params,
data=data,
decode=True,
status=True,
header_dict=header_dict,
opts=__opts__,
)
if result.get('status', None) == salt.ext.six.moves.http_client.OK:
_result = result['dict']
response = slack_functions.get(function).get('response')
if 'error' in result:
ret['message'] = result['error']
if 'error' in _result:
ret['message'] = _result['error']
ret['res'] = False
return ret
ret['message'] = result.get(response)
ret['message'] = _result.get(response)
return ret
elif result.status_code == salt.ext.six.moves.http_client.NO_CONTENT:
elif result.get('status', None) == salt.ext.six.moves.http_client.NO_CONTENT:
return True
else:
log.debug(url)
log.debug(query_params)
log.debug(data)
log.debug(result)
if 'error' in result:
ret['message'] = result['error']
_result = result['dict']
if 'error' in _result:
ret['message'] = _result['error']
ret['res'] = False
return ret
ret['message'] = result
ret['message'] = _result.get(response)
return ret
@@ -240,10 +238,12 @@ def _post_message(channel,
parameters['as_user'] = as_user
parameters['text'] = '```' + message + '```' # pre-formatted, fixed-width text
# Slack wants the body on POST to be urlencoded.
result = _query(function='message',
api_key=api_key,
method='POST',
data=parameters)
header_dict={'Content-Type': 'application/x-www-form-urlencoded'},
data=urllib.urlencode(parameters))
log.debug('result {0}'.format(result))
if result:
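The switch to ``salt.utils.http.query`` means the POST body must be form-encoded by hand, which is what the ``urllib.urlencode(parameters)`` call in ``_post_message`` does. A quick sketch of that encoding, shown with the Python 3 equivalent ``urllib.parse.urlencode`` (the parameter values here are made up):

```python
from urllib.parse import urlencode  # the Py2 code in the diff uses urllib.urlencode

# Hypothetical message parameters, mirroring _post_message.
parameters = {
    'channel': '#general',
    'username': 'salt',
    'text': '```highstate finished```',  # pre-formatted, fixed-width text
}

# Slack expects an application/x-www-form-urlencoded body on POST,
# hence the explicit Content-Type header passed to _query.
body = urlencode(parameters)
```

Special characters such as ``#`` and backticks are percent-encoded and spaces become ``+``, matching what a form-encoded endpoint expects.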


@@ -249,7 +249,7 @@ class Script(Target):
self.tgt = tgt
self.tgt_type = tgt_type
inventory, error = subprocess.Popen([inventory_file], shell=True, stdout=subprocess.PIPE).communicate()
self.inventory = json.loads(inventory)
self.inventory = json.loads(salt.utils.to_str(inventory))
self.meta = self.inventory.get('_meta', {})
self.groups = dict()
self.hostvars = dict()
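``subprocess.Popen(...).communicate()`` returns ``bytes`` on Python 3, and older ``json.loads`` implementations accept only ``str``, which is why the inventory output is passed through ``salt.utils.to_str`` first. A simplified stand-in for that helper (the real one handles more cases, such as non-UTF-8 encodings):

```python
import json

def to_str(s, encoding='utf-8'):
    # Simplified sketch of salt.utils.to_str: return a native str,
    # decoding bytes when needed.
    if isinstance(s, bytes):
        return s.decode(encoding)
    return s

raw = b'{"_meta": {"hostvars": {}}}'  # what communicate() hands back
inventory = json.loads(to_str(raw))
```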


@@ -1,18 +1,15 @@
# -*- coding: utf-8 -*-
'''
:requires: clustershell
https://github.com/cea-hpc/clustershell
This roster resolves hostname in a pdsh/clustershell style.
When you want to use host globs for target matching, use --roster clustershell.
:depends: clustershell, https://github.com/cea-hpc/clustershell
Example:
When you want to use host globs for target matching, use ``--roster clustershell``. For example:
.. code-block:: bash
salt-ssh --roster clustershell 'server_[1-10,21-30],test_server[5,7,9]' test.ping
'''
# Import python libs


@@ -31,11 +31,17 @@ __func_alias__ = {
}
def _do(name, fun):
def _do(name, fun, path=None):
'''
Invoke a function in the lxc module with no args
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
'''
host = find_guest(name, quiet=True)
host = find_guest(name, quiet=True, path=path)
if not host:
return False
@@ -44,6 +50,7 @@ def _do(name, fun):
host,
'lxc.{0}'.format(fun),
[name],
kwarg={'path': path},
timeout=60)
data = next(cmd_ret)
data = data.get(host, {}).get('ret', None)
@@ -52,12 +59,18 @@ def _do(name, fun):
return data
def _do_names(names, fun):
def _do_names(names, fun, path=None):
'''
Invoke a function in the lxc module with no args
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
'''
ret = {}
hosts = find_guests(names)
hosts = find_guests(names, path=path)
if not hosts:
return False
@@ -69,6 +82,7 @@ def _do_names(names, fun):
host,
'lxc.{0}'.format(fun),
[name],
kwarg={'path': path},
timeout=60))
for cmd in cmds:
data = next(cmd)
@@ -78,33 +92,51 @@ def _do_names(names, fun):
return ret
def find_guest(name, quiet=False):
def find_guest(name, quiet=False, path=None):
'''
Returns the host for a container.
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
.. code-block:: bash
salt-run lxc.find_guest name
'''
if quiet:
log.warn('\'quiet\' argument is being deprecated. Please migrate to --quiet')
for data in _list_iter():
log.warn('\'quiet\' argument is being deprecated.'
' Please migrate to --quiet')
for data in _list_iter(path=path):
host, l = next(six.iteritems(data))
for x in 'running', 'frozen', 'stopped':
if name in l[x]:
if not quiet:
__jid_event__.fire_event({'data': host, 'outputter': 'lxc_find_host'}, 'progress')
__jid_event__.fire_event(
{'data': host,
'outputter': 'lxc_find_host'},
'progress')
return host
return None
def find_guests(names):
def find_guests(names, path=None):
'''
Return a dict of hosts and named guests
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
'''
ret = {}
names = names.split(',')
for data in _list_iter():
for data in _list_iter(path=path):
host, stat = next(six.iteritems(data))
for state in stat:
for name in stat[state]:
@@ -139,6 +171,12 @@ def init(names, host=None, saltcloud_mode=False, quiet=False, **kwargs):
host
Minion on which to initialize the container **(required)**
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
saltcloud_mode
init the container with the saltcloud opts format instead
See lxc.init_interface module documentation
@@ -169,7 +207,7 @@ def init(names, host=None, saltcloud_mode=False, quiet=False, **kwargs):
network_profile
Network profile to use for the container
.. versionadded:: 2015.5.0
.. versionadded:: 2015.5.2
nic
.. deprecated:: 2015.5.0
@@ -194,8 +232,10 @@ def init(names, host=None, saltcloud_mode=False, quiet=False, **kwargs):
Optional config parameters. By default, the id is set to
the name of the container.
'''
path = kwargs.get('path', None)
if quiet:
log.warn('\'quiet\' argument is being deprecated. Please migrate to --quiet')
log.warn('\'quiet\' argument is being deprecated.'
' Please migrate to --quiet')
ret = {'comment': '', 'result': True}
if host is None:
# TODO: Support selection of host based on available memory/cpu/etc.
@@ -222,7 +262,7 @@ def init(names, host=None, saltcloud_mode=False, quiet=False, **kwargs):
return ret
log.info('Searching for LXC Hosts')
data = __salt__['lxc.list'](host, quiet=True)
data = __salt__['lxc.list'](host, quiet=True, path=path)
for host, containers in six.iteritems(data):
for name in names:
if name in sum(six.itervalues(containers), []):
@@ -372,6 +412,12 @@ def cloud_init(names, host=None, quiet=False, **kwargs):
host
Minion to start the container on. Required.
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
saltcloud_mode
init the container with the saltcloud opts format instead
'''
@@ -381,13 +427,21 @@ def cloud_init(names, host=None, quiet=False, **kwargs):
saltcloud_mode=True, quiet=quiet, **kwargs)
def _list_iter(host=None):
def _list_iter(host=None, path=None):
'''
Return a generator iterating over hosts
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
'''
tgt = host or '*'
client = salt.client.get_local_client(__opts__['conf_file'])
for container_info in client.cmd_iter(tgt, 'lxc.list'):
for container_info in client.cmd_iter(
tgt, 'lxc.list', kwarg={'path': path}
):
if not container_info:
continue
if not isinstance(container_info, dict):
@@ -406,34 +460,47 @@ def _list_iter(host=None):
yield chunk
def list_(host=None, quiet=False):
def list_(host=None, quiet=False, path=None):
'''
List defined containers (running, stopped, and frozen) for the named
(or all) host(s).
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
.. code-block:: bash
salt-run lxc.list [host=minion_id]
'''
it = _list_iter(host)
it = _list_iter(host, path=path)
ret = {}
for chunk in it:
ret.update(chunk)
if not quiet:
__jid_event__.fire_event({'data': chunk, 'outputter': 'lxc_list'}, 'progress')
__jid_event__.fire_event(
{'data': chunk, 'outputter': 'lxc_list'}, 'progress')
return ret
def purge(name, delete_key=True, quiet=False):
def purge(name, delete_key=True, quiet=False, path=None):
'''
Purge the named container and delete its minion key if present.
WARNING: Destroys all data associated with the container.
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
.. code-block:: bash
salt-run lxc.purge name
'''
data = _do_names(name, 'destroy')
data = _do_names(name, 'destroy', path=path)
if data is False:
return data
@@ -445,75 +512,111 @@ def purge(name, delete_key=True, quiet=False):
return
if not quiet:
__jid_event__.fire_event({'data': data, 'outputter': 'lxc_purge'}, 'progress')
__jid_event__.fire_event(
{'data': data, 'outputter': 'lxc_purge'}, 'progress')
return data
def start(name, quiet=False):
def start(name, quiet=False, path=None):
'''
Start the named container.
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
.. code-block:: bash
salt-run lxc.start name
'''
data = _do_names(name, 'start')
data = _do_names(name, 'start', path=path)
if data and not quiet:
__jid_event__.fire_event({'data': data, 'outputter': 'lxc_start'}, 'progress')
__jid_event__.fire_event(
{'data': data, 'outputter': 'lxc_start'}, 'progress')
return data
def stop(name, quiet=False):
def stop(name, quiet=False, path=None):
'''
Stop the named container.
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
.. code-block:: bash
salt-run lxc.stop name
'''
data = _do_names(name, 'stop')
data = _do_names(name, 'stop', path=path)
if data and not quiet:
__jid_event__.fire_event({'data': data, 'outputter': 'lxc_force_off'}, 'progress')
__jid_event__.fire_event(
{'data': data, 'outputter': 'lxc_force_off'}, 'progress')
return data
def freeze(name, quiet=False):
def freeze(name, quiet=False, path=None):
'''
Freeze the named container
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
.. code-block:: bash
salt-run lxc.freeze name
'''
data = _do_names(name, 'freeze')
if data and not quiet:
__jid_event__.fire_event({'data': data, 'outputter': 'lxc_pause'}, 'progress')
__jid_event__.fire_event(
{'data': data, 'outputter': 'lxc_pause'}, 'progress')
return data
def unfreeze(name, quiet=False):
def unfreeze(name, quiet=False, path=None):
'''
Unfreeze the named container
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
.. code-block:: bash
salt-run lxc.unfreeze name
'''
data = _do_names(name, 'unfreeze')
data = _do_names(name, 'unfreeze', path=path)
if data and not quiet:
__jid_event__.fire_event({'data': data, 'outputter': 'lxc_resume'}, 'progress')
__jid_event__.fire_event(
{'data': data, 'outputter': 'lxc_resume'}, 'progress')
return data
def info(name, quiet=False):
def info(name, quiet=False, path=None):
'''
Returns information about a container.
path
path to the container parent
default: /var/lib/lxc (system default)
.. versionadded:: Beryllium
.. code-block:: bash
salt-run lxc.info name
'''
data = _do_names(name, 'info')
data = _do_names(name, 'info', path=path)
if data and not quiet:
__jid_event__.fire_event({'data': data, 'outputter': 'lxc_info'}, 'progress')
__jid_event__.fire_event(
{'data': data, 'outputter': 'lxc_info'}, 'progress')
return data
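Every public runner function in this hunk gains the same optional ``path`` keyword and threads it down to the minion-side ``lxc.*`` call via ``kwarg={'path': path}``. The plumbing pattern can be sketched with stand-ins (``fake_lxc_call`` is hypothetical, not a Salt API):

```python
def fake_lxc_call(fun, name, path=None):
    # Stand-in for client.cmd_iter(host, 'lxc.<fun>', [name], kwarg={'path': path}).
    return {'fun': fun, 'name': name, 'path': path or '/var/lib/lxc'}

def _do_names(names, fun, path=None):
    # Mirrors the runner: split the comma-separated names, pass path through.
    return {name: fake_lxc_call(fun, name, path=path)
            for name in names.split(',')}

def stop(name, path=None):
    return _do_names(name, 'stop', path=path)

result = stop('web01,web02', path='/srv/lxc')
```

When ``path`` is omitted it falls back to the system default, ``/var/lib/lxc``, just as the docstrings added throughout the hunk describe.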


@@ -1,25 +1,28 @@
# -*- coding: utf-8 -*-
'''
:requires: libnacl
https://github.com/saltstack/libnacl
This runner helps create encrypted passwords that can be included in pillars.
This is often usefull if you wish to store your pillars in source control or
:depends: libnacl, https://github.com/saltstack/libnacl
This is often useful if you wish to store your pillars in source control or
share your pillar data with others that you trust. I don't advise making your pillars public
regardless if they are encrypted or not.
The following configurations can be defined in the master config
so your users can create encrypted passwords using the runner nacl::
so your users can create encrypted passwords using the runner nacl:
.. code-block:: bash
cat /etc/salt/master.d/nacl.conf
nacl.config:
key: None
keyfile: /root/.nacl
Now with the config in the master you can use the runner nacl like::
Now with the config in the master you can use the runner nacl like:
.. code-block:: bash
salt-run nacl.enc 'data'
'''
from __future__ import absolute_import


@@ -2,7 +2,7 @@
'''
Package helper functions using ``salt.modules.pkg``
.. versionadded:: FIXME
.. versionadded:: Beryllium
'''
# Import python libs


@@ -66,6 +66,7 @@ STATE_REQUISITE_IN_KEYWORDS = frozenset([
'listen_in',
])
STATE_RUNTIME_KEYWORDS = frozenset([
'name', # name of the highstate running
'fun',
'state',
'check_cmd',


@@ -74,8 +74,8 @@ with the role. This is the default behavior of the AWS console.
keyid: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
If ``delete_policies: False`` is specified, existing policies that are not in
the given list of policies will not be deleted. These allow manual
modifications on the IAM role to be persistent.
the given list of policies will not be deleted. This allows manual modifications
on the IAM role to be persistent. This functionality was added in Beryllium.
'''
from __future__ import absolute_import
import salt.utils.dictupdate as dictupdate
@@ -141,6 +141,13 @@ def present(
profile
A dict with region, key and keyid, or a pillar key (string)
that contains a dict with region, key and keyid.
delete_policies
Deletes existing policies that are not in the given list of policies. Default
value is ``True``. If ``False`` is specified, existing policies will not be deleted
allowing manual modifications on the IAM role to be persistent.
.. versionadded:: Beryllium
'''
ret = {'name': name, 'result': True, 'comment': '', 'changes': {}}
_ret = _role_present(name, policy_document, path, region, key, keyid,

Some files were not shown because too many files have changed in this diff.