Merge branch 'develop' into fix-iptables-negation

This commit is contained in:
Nicole Thomas 2017-08-18 16:52:29 -04:00 committed by GitHub
commit 9ac043d997
49 changed files with 1964 additions and 195 deletions

View File

@ -97,3 +97,14 @@
# #
#delete_sshkeys: False #delete_sshkeys: False
# Whether or not to include grains information in the /etc/salt/minion file
# which is generated when the minion is provisioned. For example...
# grains:
# salt-cloud:
# driver: ec2
# provider: my_ec2:ec2
# profile: micro_ec2
#
# Default: 'True'
#
#enable_cloud_grains: 'True'

View File

@ -0,0 +1,6 @@
===========================
salt.cloud.clouds.oneandone
===========================
.. automodule:: salt.cloud.clouds.oneandone
:members:

View File

@ -21,7 +21,7 @@ Or you may specify a map which includes all VMs to perform the action on:
$ salt-cloud -a reboot -m /path/to/mapfile $ salt-cloud -a reboot -m /path/to/mapfile
The following is a list of actions currently supported by salt-cloud: The following is an example list of actions currently supported by ``salt-cloud``:
.. code-block:: yaml .. code-block:: yaml
@ -36,5 +36,5 @@ The following is a list of actions currently supported by salt-cloud:
- start - start
- stop - stop
Another useful reference for viewing more salt-cloud actions is the Another useful reference for viewing more ``salt-cloud`` actions is the
:ref:Salt Cloud Feature Matrix <salt-cloud-feature-matrix> :ref:`Salt Cloud Feature Matrix <salt-cloud-feature-matrix>`.

View File

@ -56,6 +56,24 @@ settings can be placed in the provider or profile:
sls_list: sls_list:
- web - web
When Salt Cloud creates a new minion, it can automatically add grain information
to the minion configuration file identifying the sources originally used
to define it.
The generated grain information will appear similar to:
.. code-block:: yaml
grains:
salt-cloud:
driver: ec2
provider: my_ec2:ec2
profile: ec2-web
The generation of the salt-cloud grain can be suppressed by setting
``enable_cloud_grains: 'False'`` in the cloud configuration file.
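A minimal sketch, assuming the main cloud configuration file at ``/etc/salt/cloud``
(any salt-cloud configuration file works the same way):
.. code-block:: yaml
    # disable generation of the salt-cloud grain on newly provisioned minions
    enable_cloud_grains: 'False'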
Cloud Configuration Syntax Cloud Configuration Syntax
========================== ==========================

View File

@ -26,5 +26,5 @@ gathering information about instances on a provider basis:
$ salt-cloud -f list_nodes_full linode $ salt-cloud -f list_nodes_full linode
$ salt-cloud -f list_nodes_select linode $ salt-cloud -f list_nodes_select linode
Another useful reference for viewing salt-cloud functions is the Another useful reference for viewing ``salt-cloud`` functions is the
:ref:`Salt Cloud Feature Matrix <salt-cloud-feature-matrix>`. :ref:`Salt Cloud Feature Matrix <salt-cloud-feature-matrix>`.

View File

@ -119,6 +119,7 @@ Cloud Provider Specifics
Getting Started With Libvirt <libvirt> Getting Started With Libvirt <libvirt>
Getting Started With Linode <linode> Getting Started With Linode <linode>
Getting Started With LXC <lxc> Getting Started With LXC <lxc>
Getting Started With OneAndOne <oneandone>
Getting Started With OpenNebula <opennebula> Getting Started With OpenNebula <opennebula>
Getting Started With OpenStack <openstack> Getting Started With OpenStack <openstack>
Getting Started With Parallels <parallels> Getting Started With Parallels <parallels>

View File

@ -407,3 +407,21 @@ configuration file. For example:
- echo 'hello world!' - echo 'hello world!'
These commands will run in sequence **before** the bootstrap script is executed. These commands will run in sequence **before** the bootstrap script is executed.
Force Minion Config
===================
.. versionadded:: Oxygen
The ``force_minion_config`` option instructs the bootstrap process to overwrite
an existing minion configuration file and public/private key files.
Default: ``False``
This might be important for drivers (such as ``saltify``) which are expected to
take over a connection from a former salt master.
.. code-block:: yaml
my_saltify_provider:
driver: saltify
force_minion_config: true

View File

@ -0,0 +1,146 @@
==========================
Getting Started With 1and1
==========================
1&1 is one of the world's leading Web hosting providers. 1&1 currently offers
a wide range of Web hosting products, including email solutions and high-end
servers in 10 different countries including Germany, Spain, Great Britain
and the United States. From domains to 1&1 MyWebsite to eBusiness solutions
like Cloud Hosting and Web servers for complex tasks, 1&1 is well placed to deliver
a high-quality service to its customers. All 1&1 products are hosted in
1&1's high-performance, green data centers in the USA and Europe.
Dependencies
============
* 1and1 >= 1.2.0
Configuration
=============
* Using the new format, set up the cloud configuration at
``/etc/salt/cloud.providers`` or
``/etc/salt/cloud.providers.d/oneandone.conf``:
.. code-block:: yaml
my-oneandone-config:
driver: oneandone
# Set the location of the salt-master
#
minion:
master: saltmaster.example.com
# Configure oneandone authentication credentials
#
api_token: <api_token>
ssh_private_key: /path/to/id_rsa
ssh_public_key: /path/to/id_rsa.pub
Authentication
==============
The ``api_token`` is used for API authorization. This token can be obtained
from the CloudPanel in the Management section under Users.
Profiles
========
Here is an example of a profile:
.. code-block:: yaml
oneandone_fixed_size:
provider: my-oneandone-config
description: Small instance size server
fixed_instance_size: S
appliance_id: 8E3BAA98E3DFD37857810E0288DD8FBA
oneandone_custom_size:
provider: my-oneandone-config
description: Custom size server
vcore: 2
cores_per_processor: 2
ram: 8
appliance_id: 8E3BAA98E3DFD37857810E0288DD8FBA
hdds:
-
is_main: true
size: 20
-
is_main: false
size: 20
The following list explains some of the important properties.
fixed_instance_size_id
When creating a server, either ``fixed_instance_size_id`` or custom hardware params
containing ``vcore``, ``cores_per_processor``, ``ram``, and ``hdds`` must be provided.
Can be one of the IDs listed in the output of the following command:
.. code-block:: bash
salt-cloud --list-sizes oneandone
vcore
Total number of processors.
cores_per_processor
Number of cores per processor.
ram
RAM memory size in GB.
hdds
Hard disks.
appliance_id
ID of the image that will be installed on the server.
Can be one of the IDs listed in the output of the following command:
.. code-block:: bash
salt-cloud --list-images oneandone
datacenter_id
ID of the datacenter where the server will be created.
Can be one of the IDs listed in the output of the following command:
.. code-block:: bash
salt-cloud --list-locations oneandone
description
Description of the server.
password
Password of the server. The password must be more than 8 characters long and
contain uppercase letters, numbers, and special symbols.
power_on
Power on server after creation. Default is set to true.
firewall_policy_id
Firewall policy ID. If it is not provided, the best firewall policy will be
assigned automatically, creating a new one if necessary. If the parameter
is sent with a 0 value, the server will be created with all ports blocked.
ip_id
IP address ID.
load_balancer_id
Load balancer ID.
monitoring_policy_id
Monitoring policy ID.
deploy
Set to False if Salt should not be installed on the node (see the combined sketch after this list).
wait_for_timeout
The timeout to wait in seconds for provisioning resources such as servers.
The default wait_for_timeout is 15 minutes.
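Putting a few of the optional parameters above together, a hypothetical profile
sketch (the profile name and values are only an illustration; the appliance ID is
reused from the earlier example) could look like:
.. code-block:: yaml
    oneandone_no_deploy:
      provider: my-oneandone-config
      fixed_instance_size: S
      appliance_id: 8E3BAA98E3DFD37857810E0288DD8FBA
      description: Server managed outside of Salt
      power_on: true
      deploy: False
      wait_for_timeout: 1200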
For more information concerning cloud profiles, see :ref:`here
<salt-cloud-profiles>`.

View File

@ -16,7 +16,7 @@ The Saltify driver has no external dependencies.
Configuration Configuration
============= =============
Because the Saltify driver does not use an actual cloud provider host, it has a Because the Saltify driver does not use an actual cloud provider host, it can have a
simple provider configuration. The only thing that is required to be set is the simple provider configuration. The only thing that is required to be set is the
driver name, and any other potentially useful information, like the location of driver name, and any other potentially useful information, like the location of
the salt-master: the salt-master:
@ -31,6 +31,12 @@ the salt-master:
master: 111.222.333.444 master: 111.222.333.444
provider: saltify provider: saltify
However, if you wish to use the more advanced capabilities of salt-cloud, such as
rebooting, listing, and disconnecting machines, then the salt master must fill
the role usually performed by a vendor's cloud management system. In order to do
that, you must configure your salt master as a salt-api server, and supply credentials
to use it. (See the `Provisioning salt-api`_ section below.)
Profiles Profiles
======== ========
@ -72,6 +78,30 @@ to it can be verified with Salt:
salt my-machine test.ping salt my-machine test.ping
Destroy Options
---------------
For obvious reasons, the ``destroy`` action does not actually vaporize hardware.
If the salt master is connected using salt-api, it can tear down parts of
the client machines. It will remove the client's key from the salt master,
and will attempt the following options:
.. code-block:: yaml
- remove_config_on_destroy: true
# default: true
# Deactivate salt-minion on reboot and
# delete the minion config and key files from its ``/etc/salt`` directory.
# NOTE: If deactivation is unsuccessful (as on older Ubuntu machines), then when
# salt-minion restarts it will automatically create a new, unwanted set
# of key files. The ``force_minion_config`` option must be used in that case.
- shutdown_on_destroy: false
# default: false
# send a ``shutdown`` command to the client.
.. versionadded:: Oxygen
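These options can be set in the provider or profile configuration. A hypothetical
placement sketch, reusing the ``my_saltify_provider`` name from elsewhere in this
document:
.. code-block:: yaml
    my_saltify_provider:
      driver: saltify
      remove_config_on_destroy: true
      shutdown_on_destroy: true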
Using Map Files Using Map Files
--------------- ---------------
The settings explained in the section above may also be set in a map file. An The settings explained in the section above may also be set in a map file. An
@ -135,3 +165,67 @@ Return values:
- ``True``: Credential verification succeeded - ``True``: Credential verification succeeded
- ``False``: Credential verification failed - ``False``: Credential verification failed
- ``None``: Credential verification was not attempted. - ``None``: Credential verification was not attempted.
Provisioning salt-api
=====================
In order to query or control the minions it has created, Saltify needs to send
commands to the salt master. It does that using the salt-api network interface.
salt-api is not enabled by default. The following example provides a
simple installation.
.. code-block:: yaml
# file /etc/salt/cloud.profiles.d/my_saltify_profiles.conf
hw_41: # a theoretical example hardware machine
ssh_host: 10.100.9.41 # the hard address of your target
ssh_username: vagrant # a user name which has passwordless sudo
password: vagrant # on your target machine
provider: my_saltify_provider
.. code-block:: yaml
# file /etc/salt/cloud.providers.d/saltify_provider.conf
my_saltify_provider:
driver: saltify
eauth: pam
username: vagrant # supply some sudo-group-member's name
password: vagrant # and password on the salt master
minion:
master: 10.100.9.5 # the hard address of the master
.. code-block:: yaml
# file /etc/salt/master.d/auth.conf
# using salt-api ... members of the 'sudo' group can do anything ...
external_auth:
pam:
sudo%:
- .*
- '@wheel'
- '@runner'
- '@jobs'
.. code-block:: yaml
# file /etc/salt/master.d/api.conf
# see https://docs.saltstack.com/en/latest/ref/netapi/all/salt.netapi.rest_cherrypy.html
rest_cherrypy:
host: localhost
port: 8000
ssl_crt: /etc/pki/tls/certs/localhost.crt
ssl_key: /etc/pki/tls/certs/localhost.key
thread_pool: 30
socket_queue_size: 10
Start your target machine as a Salt minion named "node41" by:
.. code-block:: bash
$ sudo salt-cloud -p hw_41 node41

View File

@ -3,3 +3,13 @@ Salt 2016.11.7 Release Notes
============================ ============================
Version 2016.11.7 is a bugfix release for :ref:`2016.11.0 <release-2016-11-0>`. Version 2016.11.7 is a bugfix release for :ref:`2016.11.0 <release-2016-11-0>`.
Changes for v2016.11.6..v2016.11.7
----------------------------------
Security Fix
============
CVE-2017-12791 Maliciously crafted minion IDs can cause unwanted directory traversals on the Salt-master
Correct a flaw in minion id validation which could allow certain minions to authenticate to a master despite not having the correct credentials. To exploit the vulnerability, an attacker must create a salt-minion with an ID containing characters that will cause a directory traversal. Credit for discovering the security flaw goes to: Vernhk@qq.com

View File

@ -4,6 +4,13 @@ Salt 2017.7.1 Release Notes
Version 2017.7.1 is a bugfix release for :ref:`2017.7.0 <release-2017-7-0>`. Version 2017.7.1 is a bugfix release for :ref:`2017.7.0 <release-2017-7-0>`.
Security Fix
============
CVE-2017-12791 Maliciously crafted minion IDs can cause unwanted directory traversals on the Salt-master
Correct a flaw in minion id validation which could allow certain minions to authenticate to a master despite not having the correct credentials. To exploit the vulnerability, an attacker must create a salt-minion with an ID containing characters that will cause a directory traversal. Credit for discovering the security flaw goes to: Vernhk@qq.com
Changes for v2017.7.0..v2017.7.1 Changes for v2017.7.0..v2017.7.1
-------------------------------- --------------------------------

View File

@ -691,6 +691,18 @@ For ``smartos`` some grains have been deprecated. These grains will be removed i
- The ``hypervisor_uuid`` has been replaced with ``mdata:sdc:server_uuid`` grain. - The ``hypervisor_uuid`` has been replaced with ``mdata:sdc:server_uuid`` grain.
- The ``datacenter`` has been replaced with ``mdata:sdc:datacenter_name`` grain. - The ``datacenter`` has been replaced with ``mdata:sdc:datacenter_name`` grain.
Minion Blackout
---------------
During a blackout, minions will not execute any remote execution commands,
except for :mod:`saltutil.refresh_pillar <salt.modules.saltutil.refresh_pillar>`.
Previously, support was added so that blackouts are enabled using a special
pillar key, ``minion_blackout`` set to ``True`` and an optional pillar key
``minion_blackout_whitelist`` to specify additional functions that are permitted
during blackout. This release adds support for using this feature in the grains
as well, by using special grains keys ``minion_blackout`` and
``minion_blackout_whitelist``.
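A minimal sketch of the grains-based form (the whitelisted function is only an
illustration), for example in the minion's ``/etc/salt/grains`` file:
.. code-block:: yaml
    minion_blackout: True
    minion_blackout_whitelist:
      # saltutil.refresh_pillar is always allowed during blackout
      - test.ping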
Utils Deprecations Utils Deprecations
================== ==================

View File

@ -383,8 +383,8 @@ Section -Post
nsExec::Exec "nssm.exe set salt-minion Description Salt Minion from saltstack.com" nsExec::Exec "nssm.exe set salt-minion Description Salt Minion from saltstack.com"
nsExec::Exec "nssm.exe set salt-minion Start SERVICE_AUTO_START" nsExec::Exec "nssm.exe set salt-minion Start SERVICE_AUTO_START"
nsExec::Exec "nssm.exe set salt-minion AppNoConsole 1" nsExec::Exec "nssm.exe set salt-minion AppNoConsole 1"
nsExec::Exec "nssm.exe set salt-minion AppStopMethodConsole 24000"
RMDir /R "$INSTDIR\var\cache\salt" ; removing cache from old version nsExec::Exec "nssm.exe set salt-minion AppStopMethodWindow 2000"
Call updateMinionConfig Call updateMinionConfig

View File

@ -162,7 +162,7 @@ from salt.exceptions import SaltCacheError
__virtualname__ = 'redis' __virtualname__ = 'redis'
__func_alias__ = { __func_alias__ = {
'list_': 'list' 'ls': 'list'
} }
log = logging.getLogger(__file__) log = logging.getLogger(__file__)
@ -196,6 +196,9 @@ def __virtual__():
# helper functions -- will not be exported # helper functions -- will not be exported
# ----------------------------------------------------------------------------- # -----------------------------------------------------------------------------
def init_kwargs(kwargs):
return {}
def _get_redis_cache_opts(): def _get_redis_cache_opts():
''' '''
@ -475,7 +478,7 @@ def flush(bank, key=None):
return True return True
def list_(bank): def ls(bank):
''' '''
Lists entries stored in the specified bank. Lists entries stored in the specified bank.
''' '''

View File

@ -445,6 +445,7 @@ class SyncClientMixin(object):
_use_fnmatch = True _use_fnmatch = True
else: else:
target_mod = arg + u'.' if not arg.endswith(u'.') else arg target_mod = arg + u'.' if not arg.endswith(u'.') else arg
_use_fnmatch = False
if _use_fnmatch: if _use_fnmatch:
docs = [(fun, self.functions[fun].__doc__) docs = [(fun, self.functions[fun].__doc__)
for fun in fnmatch.filter(self.functions, target_mod)] for fun in fnmatch.filter(self.functions, target_mod)]

View File

@ -728,12 +728,18 @@ def request_instance(vm_=None, call=None):
else: else:
pool = floating_ip_conf.get('pool', 'public') pool = floating_ip_conf.get('pool', 'public')
try:
floating_ip = conn.floating_ip_create(pool)['ip']
except Exception:
log.info('A new IP address was unable to be allocated. '
'An IP address will be pulled from the already allocated list, '
'This will cause a race condition when building in parallel.')
for fl_ip, opts in six.iteritems(conn.floating_ip_list()): for fl_ip, opts in six.iteritems(conn.floating_ip_list()):
if opts['fixed_ip'] is None and opts['pool'] == pool: if opts['fixed_ip'] is None and opts['pool'] == pool:
floating_ip = fl_ip floating_ip = fl_ip
break break
if floating_ip is None: if floating_ip is None:
floating_ip = conn.floating_ip_create(pool)['ip'] log.error('No IP addresses available to allocate for this server: {0}'.format(vm_['name']))
def __query_node_data(vm_): def __query_node_data(vm_):
try: try:

View File

@ -0,0 +1,849 @@
# -*- coding: utf-8 -*-
'''
1&1 Cloud Server Module
=======================
The 1&1 SaltStack cloud module allows a 1&1 server to
be automatically deployed and bootstrapped with Salt.
:depends: 1and1 >= 1.2.0
The module requires the 1&1 api_token to be provided.
The server should also be assigned a public LAN, a private LAN,
or both along with SSH key pairs.
...
Set up the cloud configuration at ``/etc/salt/cloud.providers`` or
``/etc/salt/cloud.providers.d/oneandone.conf``:
.. code-block:: yaml
my-oneandone-config:
driver: oneandone
# The 1&1 api token
api_token: <your-token>
# SSH private key filename
ssh_private_key: /path/to/private_key
# SSH public key filename
ssh_public_key: /path/to/public_key
.. code-block:: yaml
my-oneandone-profile:
provider: my-oneandone-config
# Either provide fixed_instance_size_id or vcore, cores_per_processor, ram, and hdds.
# Size of the ID desired for the server
fixed_instance_size: S
# Total amount of processors
vcore: 2
# Number of cores per processor
cores_per_processor: 2
# RAM memory size in GB
ram: 4
# Hard disks
hdds:
-
is_main: true
size: 20
-
is_main: false
size: 20
# ID of the appliance image that will be installed on server
appliance_id: <ID>
# ID of the datacenter where the server will be created
datacenter_id: <ID>
# Description of the server
description: My server description
# Password of the server. Password must contain more than 8 characters
# using uppercase letters, numbers and other special symbols.
password: P4$$w0rD
# Power on server after creation - default True
power_on: true
# Firewall policy ID. If it is not provided, the server will assign
# the best firewall policy, creating a new one if necessary.
# If the parameter is sent with a 0 value, the server will be created with all ports blocked.
firewall_policy_id: <ID>
# IP address ID
ip_id: <ID>
# Load balancer ID
load_balancer_id: <ID>
# Monitoring policy ID
monitoring_policy_id: <ID>
Set ``deploy`` to False if Salt should not be installed on the node.
.. code-block:: yaml
my-oneandone-profile:
deploy: False
'''
# Import python libs
from __future__ import absolute_import
import logging
import os
import pprint
import time
# Import salt libs
import salt.utils
import salt.config as config
from salt.exceptions import (
SaltCloudConfigError,
SaltCloudNotFound,
SaltCloudExecutionFailure,
SaltCloudExecutionTimeout,
SaltCloudSystemExit
)
# Import salt.cloud libs
import salt.utils.cloud
from salt.ext import six
try:
from oneandone.client import (
OneAndOneService, Server, Hdd
)
HAS_ONEANDONE = True
except ImportError:
HAS_ONEANDONE = False
# Get logging started
log = logging.getLogger(__name__)
__virtualname__ = 'oneandone'
# Only load in this module if the 1&1 configurations are in place
def __virtual__():
'''
Check for 1&1 configurations.
'''
if get_configured_provider() is False:
return False
if get_dependencies() is False:
return False
return __virtualname__
def get_configured_provider():
'''
Return the first configured instance.
'''
return config.is_provider_configured(
__opts__,
__active_provider_name__ or __virtualname__,
('api_token',)
)
def get_dependencies():
'''
Warn if dependencies are not met.
'''
return config.check_driver_dependencies(
__virtualname__,
{'oneandone': HAS_ONEANDONE}
)
def get_conn():
'''
Return a conn object for the passed VM data
'''
return OneAndOneService(
api_token=config.get_cloud_config_value(
'api_token',
get_configured_provider(),
__opts__,
search_global=False
)
)
def get_size(vm_):
'''
Return the VM's size object
'''
vm_size = config.get_cloud_config_value(
'fixed_instance_size', vm_, __opts__, default=None,
search_global=False
)
sizes = avail_sizes()
if not vm_size:
size = next((item for item in sizes if item['name'] == 'S'), None)
return size
size = next((item for item in sizes if item['name'] == vm_size or item['id'] == vm_size), None)
if size:
return size
raise SaltCloudNotFound(
'The specified size, \'{0}\', could not be found.'.format(vm_size)
)
def get_image(vm_):
'''
Return the image object to use
'''
vm_image = config.get_cloud_config_value('image', vm_, __opts__).encode(
'ascii', 'salt-cloud-force-ascii'
)
images = avail_images()
for key, value in six.iteritems(images):
if vm_image and vm_image in (images[key]['id'], images[key]['name']):
return images[key]
raise SaltCloudNotFound(
'The specified image, \'{0}\', could not be found.'.format(vm_image)
)
def avail_locations(conn=None, call=None):
'''
List available locations/datacenters for 1&1
'''
if call == 'action':
raise SaltCloudSystemExit(
'The avail_locations function must be called with '
'-f or --function, or with the --list-locations option'
)
datacenters = []
if not conn:
conn = get_conn()
for datacenter in conn.list_datacenters():
datacenters.append({datacenter['country_code']: datacenter})
return {'Locations': datacenters}
def avail_images(conn=None, call=None):
'''
Return a list of the server appliances that are on the provider
'''
if call == 'action':
raise SaltCloudSystemExit(
'The avail_images function must be called with '
'-f or --function, or with the --list-images option'
)
if not conn:
conn = get_conn()
ret = {}
for appliance in conn.list_appliances():
ret[appliance['name']] = appliance
return ret
def avail_sizes(call=None):
'''
Return a dict of all available VM sizes on the cloud provider with
relevant data.
'''
if call == 'action':
raise SaltCloudSystemExit(
'The avail_sizes function must be called with '
'-f or --function, or with the --list-sizes option'
)
conn = get_conn()
sizes = conn.fixed_server_flavors()
return sizes
def script(vm_):
'''
Return the script deployment object
'''
return salt.utils.cloud.os_script(
config.get_cloud_config_value('script', vm_, __opts__),
vm_,
__opts__,
salt.utils.cloud.salt_config_to_yaml(
salt.utils.cloud.minion_config(__opts__, vm_)
)
)
def list_nodes(conn=None, call=None):
'''
Return a list of VMs that are on the provider
'''
if call == 'action':
raise SaltCloudSystemExit(
'The list_nodes function must be called with -f or --function.'
)
if not conn:
conn = get_conn()
ret = {}
nodes = conn.list_servers()
for node in nodes:
public_ips = []
private_ips = []
size = node.get('hardware').get('fixed_instance_size_id', 'Custom size')
if node.get('private_networks') and len(node['private_networks']) > 0:
for private_ip in node['private_networks']:
private_ips.append(private_ip)
if node.get('ips') and len(node['ips']) > 0:
for public_ip in node['ips']:
public_ips.append(public_ip['ip'])
server = {
'id': node['id'],
'image': node['image']['id'],
'size': size,
'state': node['status']['state'],
'private_ips': private_ips,
'public_ips': public_ips
}
ret[node['name']] = server
return ret
def list_nodes_full(conn=None, call=None):
'''
Return a list of the VMs that are on the provider, with all fields
'''
if call == 'action':
raise SaltCloudSystemExit(
'The list_nodes_full function must be called with -f or '
'--function.'
)
if not conn:
conn = get_conn()
ret = {}
nodes = conn.list_servers()
for node in nodes:
ret[node['name']] = node
return ret
def list_nodes_select(conn=None, call=None):
'''
Return a list of the VMs that are on the provider, with select fields
'''
if not conn:
conn = get_conn()
return salt.utils.cloud.list_nodes_select(
list_nodes_full(conn, 'function'),
__opts__['query.selection'],
call,
)
def show_instance(name, call=None):
'''
Show the details from the provider concerning an instance
'''
if call != 'action':
raise SaltCloudSystemExit(
'The show_instance action must be called with -a or --action.'
)
nodes = list_nodes_full()
__utils__['cloud.cache_node'](
nodes[name],
__active_provider_name__,
__opts__
)
return nodes[name]
def _get_server(vm_):
'''
Construct server instance from cloud profile config
'''
description = config.get_cloud_config_value(
'description', vm_, __opts__, default=None,
search_global=False
)
ssh_key = load_public_key(vm_)
vcore = None
cores_per_processor = None
ram = None
fixed_instance_size_id = None
if 'fixed_instance_size' in vm_:
fixed_instance_size = get_size(vm_)
fixed_instance_size_id = fixed_instance_size['id']
elif (vm_['vcore'] and vm_['cores_per_processor'] and
vm_['ram'] and vm_['hdds']):
vcore = config.get_cloud_config_value(
'vcore', vm_, __opts__, default=None,
search_global=False
)
cores_per_processor = config.get_cloud_config_value(
'cores_per_processor', vm_, __opts__, default=None,
search_global=False
)
ram = config.get_cloud_config_value(
'ram', vm_, __opts__, default=None,
search_global=False
)
else:
raise SaltCloudConfigError("'fixed_instance_size' or 'vcore',"
"'cores_per_processor', 'ram', and 'hdds'"
"must be provided.")
appliance_id = config.get_cloud_config_value(
'appliance_id', vm_, __opts__, default=None,
search_global=False
)
password = config.get_cloud_config_value(
'password', vm_, __opts__, default=None,
search_global=False
)
firewall_policy_id = config.get_cloud_config_value(
'firewall_policy_id', vm_, __opts__, default=None,
search_global=False
)
ip_id = config.get_cloud_config_value(
'ip_id', vm_, __opts__, default=None,
search_global=False
)
load_balancer_id = config.get_cloud_config_value(
'load_balancer_id', vm_, __opts__, default=None,
search_global=False
)
monitoring_policy_id = config.get_cloud_config_value(
'monitoring_policy_id', vm_, __opts__, default=None,
search_global=False
)
datacenter_id = config.get_cloud_config_value(
'datacenter_id', vm_, __opts__, default=None,
search_global=False
)
private_network_id = config.get_cloud_config_value(
'private_network_id', vm_, __opts__, default=None,
search_global=False
)
power_on = config.get_cloud_config_value(
'power_on', vm_, __opts__, default=True,
search_global=False
)
# Construct server object
return Server(
name=vm_['name'],
description=description,
fixed_instance_size_id=fixed_instance_size_id,
vcore=vcore,
cores_per_processor=cores_per_processor,
ram=ram,
appliance_id=appliance_id,
password=password,
power_on=power_on,
firewall_policy_id=firewall_policy_id,
ip_id=ip_id,
load_balancer_id=load_balancer_id,
monitoring_policy_id=monitoring_policy_id,
datacenter_id=datacenter_id,
rsa_key=ssh_key,
private_network_id=private_network_id
)
def _get_hdds(vm_):
'''
Construct VM hdds from cloud profile config
'''
_hdds = config.get_cloud_config_value(
'hdds', vm_, __opts__, default=None,
search_global=False
)
hdds = []
for hdd in _hdds:
hdds.append(
Hdd(
size=hdd['size'],
is_main=hdd['is_main']
)
)
return hdds
def create(vm_):
'''
Create a single VM from a data dict
'''
try:
# Check for required profile parameters before sending any API calls.
if (vm_['profile'] and
config.is_profile_configured(__opts__,
(__active_provider_name__ or
'oneandone'),
vm_['profile']) is False):
return False
except AttributeError:
pass
data = None
conn = get_conn()
hdds = []
# Assemble the composite server object.
server = _get_server(vm_)
if not bool(server.specs['hardware']['fixed_instance_size_id']):
# Assemble the hdds object.
hdds = _get_hdds(vm_)
__utils__['cloud.fire_event'](
'event',
'requesting instance',
'salt/cloud/{0}/requesting'.format(vm_['name']),
args={'name': vm_['name']},
sock_dir=__opts__['sock_dir'],
transport=__opts__['transport']
)
try:
data = conn.create_server(server=server, hdds=hdds)
_wait_for_completion(conn,
get_wait_timeout(vm_),
data['id'])
except Exception as exc: # pylint: disable=W0703
log.error(
'Error creating {0} on 1and1\n\n'
'The following exception was thrown by the 1and1 library '
'when trying to run the initial deployment: \n{1}'.format(
vm_['name'], exc
),
exc_info_on_loglevel=logging.DEBUG
)
return False
vm_['server_id'] = data['id']
password = data['first_password']
def __query_node_data(vm_, data):
'''
Query node data until node becomes available.
'''
running = False
try:
data = show_instance(vm_['name'], 'action')
if not data:
return False
log.debug(
'Loaded node data for {0}:\nname: {1}\nstate: {2}'.format(
vm_['name'],
pprint.pformat(data['name']),
data['status']['state']
)
)
except Exception as err:
log.error(
'Failed to get nodes list: {0}'.format(
err
),
# Show the traceback if the debug logging level is enabled
exc_info_on_loglevel=logging.DEBUG
)
# Trigger a failure in the wait for IP function
return False
running = data['status']['state'].lower() == 'powered_on'
if not running:
# Still not running, trigger another iteration
return
vm_['ssh_host'] = data['ips'][0]['ip']
return data
try:
data = salt.utils.cloud.wait_for_ip(
__query_node_data,
update_args=(vm_, data),
timeout=config.get_cloud_config_value(
'wait_for_ip_timeout', vm_, __opts__, default=10 * 60),
interval=config.get_cloud_config_value(
'wait_for_ip_interval', vm_, __opts__, default=10),
)
except (SaltCloudExecutionTimeout, SaltCloudExecutionFailure) as exc:
try:
# It might be already up, let's destroy it!
destroy(vm_['name'])
except SaltCloudSystemExit:
pass
finally:
raise SaltCloudSystemExit(str(exc.message))
log.debug('VM is now running')
log.info('Created Cloud VM {0}'.format(vm_))
log.debug(
'{0} VM creation details:\n{1}'.format(
vm_, pprint.pformat(data)
)
)
__utils__['cloud.fire_event'](
'event',
'created instance',
'salt/cloud/{0}/created'.format(vm_['name']),
args={
'name': vm_['name'],
'profile': vm_['profile'],
'provider': vm_['driver'],
},
sock_dir=__opts__['sock_dir'],
transport=__opts__['transport']
)
if 'ssh_host' in vm_:
vm_['password'] = password
vm_['key_filename'] = get_key_filename(vm_)
ret = __utils__['cloud.bootstrap'](vm_, __opts__)
ret.update(data)
return ret
else:
raise SaltCloudSystemExit('A valid IP address was not found.')
def destroy(name, call=None):
'''
destroy a server by name
:param name: name given to the server
:param call: call value in this case is 'action'
:return: list of booleans: True if successfully stopped and True if
successfully removed
CLI Example:
.. code-block:: bash
salt-cloud -d vm_name
'''
if call == 'function':
raise SaltCloudSystemExit(
'The destroy action must be called with -d, --destroy, '
'-a or --action.'
)
__utils__['cloud.fire_event'](
'event',
'destroying instance',
'salt/cloud/{0}/destroying'.format(name),
args={'name': name},
sock_dir=__opts__['sock_dir'],
transport=__opts__['transport']
)
conn = get_conn()
node = get_node(conn, name)
conn.delete_server(server_id=node['id'])
__utils__['cloud.fire_event'](
'event',
'destroyed instance',
'salt/cloud/{0}/destroyed'.format(name),
args={'name': name},
sock_dir=__opts__['sock_dir'],
transport=__opts__['transport']
)
if __opts__.get('update_cachedir', False) is True:
__utils__['cloud.delete_minion_cachedir'](
name,
__active_provider_name__.split(':')[0],
__opts__
)
return True
def reboot(name, call=None):
'''
reboot a server by name
:param name: name given to the machine
:param call: call value in this case is 'action'
:return: true if successful
CLI Example:
.. code-block:: bash
salt-cloud -a reboot vm_name
'''
conn = get_conn()
node = get_node(conn, name)
conn.modify_server_status(server_id=node['id'], action='REBOOT')
return True
def stop(name, call=None):
'''
stop a server by name
:param name: name given to the machine
:param call: call value in this case is 'action'
:return: true if successful
CLI Example:
.. code-block:: bash
salt-cloud -a stop vm_name
'''
conn = get_conn()
node = get_node(conn, name)
conn.stop_server(server_id=node['id'])
return True
def start(name, call=None):
'''
start a server by name
:param name: name given to the machine
:param call: call value in this case is 'action'
:return: true if successful
CLI Example:
.. code-block:: bash
salt-cloud -a start vm_name
'''
conn = get_conn()
node = get_node(conn, name)
conn.start_server(server_id=node['id'])
return True
def get_node(conn, name):
'''
Return a node for the named VM
'''
for node in conn.list_servers(per_page=1000):
if node['name'] == name:
return node
def get_key_filename(vm_):
'''
Check SSH private key file and return absolute path if exists.
'''
key_filename = config.get_cloud_config_value(
'ssh_private_key', vm_, __opts__, search_global=False, default=None
)
if key_filename is not None:
key_filename = os.path.expanduser(key_filename)
if not os.path.isfile(key_filename):
raise SaltCloudConfigError(
'The defined ssh_private_key \'{0}\' does not exist'.format(
key_filename
)
)
return key_filename
def load_public_key(vm_):
'''
Load the public key file if it exists.
'''
public_key_filename = config.get_cloud_config_value(
'ssh_public_key', vm_, __opts__, search_global=False, default=None
)
if public_key_filename is not None:
public_key_filename = os.path.expanduser(public_key_filename)
if not os.path.isfile(public_key_filename):
raise SaltCloudConfigError(
'The defined ssh_public_key \'{0}\' does not exist'.format(
public_key_filename
)
)
with salt.utils.fopen(public_key_filename, 'r') as public_key:
key = public_key.read().replace('\n', '')
return key
def get_wait_timeout(vm_):
'''
Return the wait_for_timeout for resource provisioning.
'''
return config.get_cloud_config_value(
'wait_for_timeout', vm_, __opts__, default=15 * 60,
search_global=False
)
def _wait_for_completion(conn, wait_timeout, server_id):
'''
Poll request status until resource is provisioned.
'''
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
time.sleep(5)
server = conn.get_server(server_id)
server_state = server['status']['state'].lower()
if server_state == "powered_on":
return
elif server_state == 'failed':
raise Exception('Server creation failed for {0}'.format(server_id))
elif server_state in ('active',
'enabled',
'deploying',
'configuring'):
continue
else:
raise Exception(
'Unknown server state {0}'.format(server_state))
raise Exception(
'Timed out waiting for server create completion for {0}'.format(server_id)
)

View File

@ -17,8 +17,16 @@ from __future__ import absolute_import
import logging import logging
# Import salt libs # Import salt libs
import salt.utils
import salt.config as config import salt.config as config
from salt.exceptions import SaltCloudException import salt.netapi
import salt.ext.six as six
if six.PY3:
import ipaddress
else:
import salt.ext.ipaddress as ipaddress
from salt.exceptions import SaltCloudException, SaltCloudSystemExit
# Get logging started # Get logging started
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -47,28 +55,188 @@ def __virtual__():
return True return True
def list_nodes(): def _get_connection_info():
''' '''
Because this module is not specific to any cloud providers, there will be Return connection information for the passed VM data
no nodes to list. '''
vm_ = get_configured_provider()
try:
ret = {'username': vm_['username'],
'password': vm_['password'],
'eauth': vm_['eauth'],
'vm': vm_,
}
except KeyError:
raise SaltCloudException(
'Configuration must define salt-api "username", "password" and "eauth"')
return ret
def avail_locations(call=None):
'''
This function returns a list of locations available.
.. code-block:: bash
salt-cloud --list-locations my-cloud-provider
[ saltify will always return an empty dictionary ]
'''
return {}
def avail_images(call=None):
'''
This function returns a list of images available for this cloud provider.
.. code-block:: bash
salt-cloud --list-images saltify
returns a list of available profiles.
.. versionadded:: Oxygen
'''
vm_ = get_configured_provider()
return {'Profiles': [profile for profile in vm_['profiles']]}
def avail_sizes(call=None):
'''
This function returns a list of sizes available for this cloud provider.
.. code-block:: bash
salt-cloud --list-sizes saltify
[ saltify always returns an empty dictionary ]
''' '''
return {} return {}
def list_nodes_full(): def list_nodes(call=None):
''' '''
Because this module is not specific to any cloud providers, there will be List the nodes which have salt-cloud:driver:saltify grains.
no nodes to list.
.. code-block:: bash
salt-cloud -Q
returns a list of dictionaries of defined standard fields.
salt-api setup required for operation.
.. versionadded:: Oxygen
''' '''
return {} nodes = _list_nodes_full(call)
return _build_required_items(nodes)
def list_nodes_select(): def _build_required_items(nodes):
ret = {}
for name, grains in nodes.items():
if grains:
private_ips = []
public_ips = []
ips = grains['ipv4'] + grains['ipv6']
for adrs in ips:
ip_ = ipaddress.ip_address(adrs)
if not ip_.is_loopback:
if ip_.is_private:
private_ips.append(adrs)
else:
public_ips.append(adrs)
ret[name] = {
'id': grains['id'],
'image': grains['salt-cloud']['profile'],
'private_ips': private_ips,
'public_ips': public_ips,
'size': '',
'state': 'running'
}
return ret
def list_nodes_full(call=None):
''' '''
Because this module is not specific to any cloud providers, there will be Lists complete information for all nodes.
no nodes to list.
.. code-block:: bash
salt-cloud -F
returns a list of dictionaries.
for 'saltify' minions, returns dict of grains (enhanced).
salt-api setup required for operation.
.. versionadded:: Oxygen
''' '''
return {}
ret = _list_nodes_full(call)
for key, grains in list(ret.items()):  # clean up some hyperverbose grains -- everything is too much; iterate over a copy since entries may be deleted
try:
del grains['cpu_flags'], grains['disks'], grains['pythonpath'], grains['dns'], grains['gpus']
except KeyError:
pass # ignore absence of things we are eliminating
except TypeError:
del ret[key]  # eliminate all references to unexpected (None) values.
reqs = _build_required_items(ret)
for name in ret:
ret[name].update(reqs[name])
return ret
def _list_nodes_full(call=None):
'''
List the nodes, ask all 'saltify' minions, return dict of grains.
'''
local = salt.netapi.NetapiClient(__opts__)
cmd = {'client': 'local',
'tgt': 'salt-cloud:driver:saltify',
'fun': 'grains.items',
'arg': '',
'tgt_type': 'grain',
}
cmd.update(_get_connection_info())
return local.run(cmd)
def list_nodes_select(call=None):
'''
Return a list of the minions that have salt-cloud grains, with
select fields.
'''
return salt.utils.cloud.list_nodes_select(
list_nodes_full('function'), __opts__['query.selection'], call,
)
def show_instance(name, call=None):
'''
List a single node, return dict of grains.
'''
local = salt.netapi.NetapiClient(__opts__)
cmd = {'client': 'local',
'tgt': name,  # target the named minion
'fun': 'grains.items',
'arg': '',
'tgt_type': 'glob',
}
cmd.update(_get_connection_info())
ret = local.run(cmd)
ret.update(_build_required_items(ret))
return ret
def create(vm_): def create(vm_):
@ -190,3 +358,130 @@ def _verify(vm_):
except SaltCloudException as exc: except SaltCloudException as exc:
log.error('Exception: %s', exc) log.error('Exception: %s', exc)
return False return False
def destroy(name, call=None):
''' Destroy a node.
.. versionadded:: Oxygen
CLI Example:
.. code-block:: bash
salt-cloud --destroy mymachine
salt-api setup required for operation.
'''
if call == 'function':
raise SaltCloudSystemExit(
'The destroy action must be called with -d, --destroy, '
'-a, or --action.'
)
opts = __opts__
__utils__['cloud.fire_event'](
'event',
'destroying instance',
'salt/cloud/{0}/destroying'.format(name),
args={'name': name},
sock_dir=opts['sock_dir'],
transport=opts['transport']
)
local = salt.netapi.NetapiClient(opts)
cmd = {'client': 'local',
'tgt': name,
'fun': 'grains.get',
'arg': ['salt-cloud'],
}
cmd.update(_get_connection_info())
vm_ = cmd['vm']
my_info = local.run(cmd)
try:
vm_.update(my_info[name]) # get profile name to get config value
except (IndexError, TypeError):
pass
if config.get_cloud_config_value(
'remove_config_on_destroy', vm_, opts, default=True
):
cmd.update({'fun': 'service.disable', 'arg': ['salt-minion']})
ret = local.run(cmd) # prevent generating new keys on restart
if ret and ret[name]:
log.info('disabled salt-minion service on %s', name)
cmd.update({'fun': 'config.get', 'arg': ['conf_file']})
ret = local.run(cmd)
if ret and ret[name]:
confile = ret[name]
cmd.update({'fun': 'file.remove', 'arg': [confile]})
ret = local.run(cmd)
if ret and ret[name]:
log.info('removed minion %s configuration file %s',
name, confile)
cmd.update({'fun': 'config.get', 'arg': ['pki_dir']})
ret = local.run(cmd)
if ret and ret[name]:
pki_dir = ret[name]
cmd.update({'fun': 'file.remove', 'arg': [pki_dir]})
ret = local.run(cmd)
if ret and ret[name]:
log.info(
'removed minion %s key files in %s',
name,
pki_dir)
if config.get_cloud_config_value(
'shutdown_on_destroy', vm_, opts, default=False
):
cmd.update({'fun': 'system.shutdown', 'arg': ''})
ret = local.run(cmd)
if ret and ret[name]:
log.info('system.shutdown for minion %s successful', name)
__utils__['cloud.fire_event'](
'event',
'destroyed instance',
'salt/cloud/{0}/destroyed'.format(name),
args={'name': name},
sock_dir=opts['sock_dir'],
transport=opts['transport']
)
return {'Destroyed': '{0} was destroyed.'.format(name)}
def reboot(name, call=None):
'''
Reboot a saltify minion.
salt-api setup required for operation.
.. versionadded:: Oxygen
name
The name of the VM to reboot.
CLI Example:
.. code-block:: bash
salt-cloud -a reboot vm_name
'''
if call != 'action':
raise SaltCloudException(
'The reboot action must be called with -a or --action.'
)
local = salt.netapi.NetapiClient(__opts__)
cmd = {'client': 'local',
'tgt': name,
'fun': 'system.reboot',
'arg': '',
}
cmd.update(_get_connection_info())
ret = local.run(cmd)
return ret

View File

@ -1663,7 +1663,8 @@ DEFAULT_PROXY_MINION_OPTS = {
'log_file': os.path.join(salt.syspaths.LOGS_DIR, 'proxy'), 'log_file': os.path.join(salt.syspaths.LOGS_DIR, 'proxy'),
'add_proxymodule_to_opts': False, 'add_proxymodule_to_opts': False,
'proxy_merge_grains_in_module': True, 'proxy_merge_grains_in_module': True,
'append_minionid_config_dirs': ['cachedir', 'pidfile', 'default_include'], 'extension_modules': os.path.join(salt.syspaths.CACHE_DIR, 'proxy', 'extmods'),
'append_minionid_config_dirs': ['cachedir', 'pidfile', 'default_include', 'extension_modules'],
'default_include': 'proxy.d/*.conf', 'default_include': 'proxy.d/*.conf',
# By default, proxies will preserve the connection. # By default, proxies will preserve the connection.
@ -3225,12 +3226,12 @@ def is_profile_configured(opts, provider, profile_name, vm_=None):
alias, driver = provider.split(':') alias, driver = provider.split(':')
# Most drivers need an image to be specified, but some do not. # Most drivers need an image to be specified, but some do not.
non_image_drivers = ['nova', 'virtualbox', 'libvirt', 'softlayer'] non_image_drivers = ['nova', 'virtualbox', 'libvirt', 'softlayer', 'oneandone']
# Most drivers need a size, but some do not. # Most drivers need a size, but some do not.
non_size_drivers = ['opennebula', 'parallels', 'proxmox', 'scaleway', non_size_drivers = ['opennebula', 'parallels', 'proxmox', 'scaleway',
'softlayer', 'softlayer_hw', 'vmware', 'vsphere', 'softlayer', 'softlayer_hw', 'vmware', 'vsphere',
'virtualbox', 'profitbricks', 'libvirt'] 'virtualbox', 'profitbricks', 'libvirt', 'oneandone']
provider_key = opts['providers'][alias][driver] provider_key = opts['providers'][alias][driver]
profile_key = opts['providers'][alias][driver]['profiles'][profile_name] profile_key = opts['providers'][alias][driver]['profiles'][profile_name]

View File

@ -640,6 +640,13 @@ class Client(object):
def on_header(hdr): def on_header(hdr):
if write_body[1] is not False and write_body[2] is None: if write_body[1] is not False and write_body[2] is None:
if not hdr.strip() and 'Content-Type' not in write_body[1]:
# We've reached the end of the headers and not yet
# found the Content-Type. Reset the values we're
# tracking so that we properly follow the redirect.
write_body[0] = None
write_body[1] = False
return
# Try to find out what content type encoding is used if # Try to find out what content type encoding is used if
# this is a text file # this is a text file
write_body[1].parse_line(hdr) # pylint: disable=no-member write_body[1].parse_line(hdr) # pylint: disable=no-member

View File

@ -82,14 +82,11 @@ else:
# which simplifies code readability, it adds some unsupported functions into # which simplifies code readability, it adds some unsupported functions into
# the driver's module scope. # the driver's module scope.
# We list un-supported functions here. These will be removed from the loaded. # We list un-supported functions here. These will be removed from the loaded.
# TODO: remove the need for this cross-module code. Maybe use NotImplemented
LIBCLOUD_FUNCS_NOT_SUPPORTED = ( LIBCLOUD_FUNCS_NOT_SUPPORTED = (
u'parallels.avail_sizes', u'parallels.avail_sizes',
u'parallels.avail_locations', u'parallels.avail_locations',
u'proxmox.avail_sizes', u'proxmox.avail_sizes',
u'saltify.destroy',
u'saltify.avail_sizes',
u'saltify.avail_images',
u'saltify.avail_locations',
u'rackspace.reboot', u'rackspace.reboot',
u'openstack.list_locations', u'openstack.list_locations',
u'rackspace.list_locations' u'rackspace.list_locations'

View File

@ -1275,7 +1275,7 @@ class Minion(MinionBase):
ret = yield channel.send(load, timeout=timeout) ret = yield channel.send(load, timeout=timeout)
raise tornado.gen.Return(ret) raise tornado.gen.Return(ret)
def _fire_master(self, data=None, tag=None, events=None, pretag=None, timeout=60, sync=True): def _fire_master(self, data=None, tag=None, events=None, pretag=None, timeout=60, sync=True, timeout_handler=None):
''' '''
Fire an event on the master, or drop message if unable to send. Fire an event on the master, or drop message if unable to send.
''' '''
@ -1294,10 +1294,6 @@ class Minion(MinionBase):
else: else:
return return
def timeout_handler(*_):
log.info(u'fire_master failed: master could not be contacted. Request timed out.')
return True
if sync: if sync:
try: try:
self._send_req_sync(load, timeout) self._send_req_sync(load, timeout)
@ -1308,6 +1304,12 @@ class Minion(MinionBase):
log.info(u'fire_master failed: %s', traceback.format_exc()) log.info(u'fire_master failed: %s', traceback.format_exc())
return False return False
else: else:
if timeout_handler is None:
def handle_timeout(*_):
log.info(u'fire_master failed: master could not be contacted. Request timed out.')
return True
timeout_handler = handle_timeout
with tornado.stack_context.ExceptionStackContext(timeout_handler): with tornado.stack_context.ExceptionStackContext(timeout_handler):
self._send_req_async(load, timeout, callback=lambda f: None) # pylint: disable=unexpected-keyword-arg self._send_req_async(load, timeout, callback=lambda f: None) # pylint: disable=unexpected-keyword-arg
return True return True
@ -1453,13 +1455,21 @@ class Minion(MinionBase):
function_name = data[u'fun'] function_name = data[u'fun']
if function_name in minion_instance.functions: if function_name in minion_instance.functions:
try: try:
minion_blackout_violation = False
if minion_instance.connected and minion_instance.opts[u'pillar'].get(u'minion_blackout', False): if minion_instance.connected and minion_instance.opts[u'pillar'].get(u'minion_blackout', False):
# this minion is blacked out. Only allow saltutil.refresh_pillar whitelist = minion_instance.opts[u'pillar'].get(u'minion_blackout_whitelist', [])
if function_name != u'saltutil.refresh_pillar' and \ # this minion is blacked out. Only allow saltutil.refresh_pillar and the whitelist
function_name not in minion_instance.opts[u'pillar'].get(u'minion_blackout_whitelist', []): if function_name != u'saltutil.refresh_pillar' and function_name not in whitelist:
minion_blackout_violation = True
elif minion_instance.opts[u'grains'].get(u'minion_blackout', False):
whitelist = minion_instance.opts[u'grains'].get(u'minion_blackout_whitelist', [])
if function_name != u'saltutil.refresh_pillar' and function_name not in whitelist:
minion_blackout_violation = True
if minion_blackout_violation:
raise SaltInvocationError(u'Minion in blackout mode. Set \'minion_blackout\' ' raise SaltInvocationError(u'Minion in blackout mode. Set \'minion_blackout\' '
u'to False in pillar to resume operations. Only ' u'to False in pillar or grains to resume operations. Only '
u'saltutil.refresh_pillar allowed in blackout mode.') u'saltutil.refresh_pillar allowed in blackout mode.')
func = minion_instance.functions[function_name] func = minion_instance.functions[function_name]
args, kwargs = load_args_and_kwargs( args, kwargs = load_args_and_kwargs(
func, func,
@ -1622,14 +1632,23 @@ class Minion(MinionBase):
for ind in range(0, len(data[u'fun'])): for ind in range(0, len(data[u'fun'])):
ret[u'success'][data[u'fun'][ind]] = False ret[u'success'][data[u'fun'][ind]] = False
try: try:
minion_blackout_violation = False
if minion_instance.connected and minion_instance.opts[u'pillar'].get(u'minion_blackout', False): if minion_instance.connected and minion_instance.opts[u'pillar'].get(u'minion_blackout', False):
# this minion is blacked out. Only allow saltutil.refresh_pillar whitelist = minion_instance.opts[u'pillar'].get(u'minion_blackout_whitelist', [])
if data[u'fun'][ind] != u'saltutil.refresh_pillar' and \ # this minion is blacked out. Only allow saltutil.refresh_pillar and the whitelist
data[u'fun'][ind] not in minion_instance.opts[u'pillar'].get(u'minion_blackout_whitelist', []): if data[u'fun'][ind] != u'saltutil.refresh_pillar' and data[u'fun'][ind] not in whitelist:
minion_blackout_violation = True
elif minion_instance.opts[u'grains'].get(u'minion_blackout', False):
whitelist = minion_instance.opts[u'grains'].get(u'minion_blackout_whitelist', [])
if data[u'fun'][ind] != u'saltutil.refresh_pillar' and data[u'fun'][ind] not in whitelist:
minion_blackout_violation = True
if minion_blackout_violation:
raise SaltInvocationError(u'Minion in blackout mode. Set \'minion_blackout\' ' raise SaltInvocationError(u'Minion in blackout mode. Set \'minion_blackout\' '
u'to False in pillar to resume operations. Only ' u'to False in pillar or grains to resume operations. Only '
u'saltutil.refresh_pillar allowed in blackout mode.') u'saltutil.refresh_pillar allowed in blackout mode.')
func = minion_instance.functions[data[u'fun'][ind]] func = minion_instance.functions[data[u'fun'][ind]]
args, kwargs = load_args_and_kwargs( args, kwargs = load_args_and_kwargs(
func, func,
data[u'arg'][ind], data[u'arg'][ind],
@ -2010,6 +2029,7 @@ class Minion(MinionBase):
elif tag.startswith(u'_minion_mine'): elif tag.startswith(u'_minion_mine'):
self._mine_send(tag, data) self._mine_send(tag, data)
elif tag.startswith(u'fire_master'): elif tag.startswith(u'fire_master'):
if self.connected:
log.debug(u'Forwarding master event tag=%s', data[u'tag']) log.debug(u'Forwarding master event tag=%s', data[u'tag'])
self._fire_master(data[u'data'], data[u'tag'], data[u'events'], data[u'pretag']) self._fire_master(data[u'data'], data[u'tag'], data[u'events'], data[u'pretag'])
elif tag.startswith(master_event(type=u'disconnected')) or tag.startswith(master_event(type=u'failback')): elif tag.startswith(master_event(type=u'disconnected')) or tag.startswith(master_event(type=u'failback')):
@ -2232,13 +2252,15 @@ class Minion(MinionBase):
if ping_interval > 0 and self.connected: if ping_interval > 0 and self.connected:
def ping_master(): def ping_master():
try: try:
if not self._fire_master(u'ping', u'minion_ping'): def ping_timeout_handler(*_):
if not self.opts.get(u'auth_safemode', True): if not self.opts.get(u'auth_safemode', True):
log.error(u'** Master Ping failed. Attempting to restart minion**') log.error(u'** Master Ping failed. Attempting to restart minion**')
delay = self.opts.get(u'random_reauth_delay', 5) delay = self.opts.get(u'random_reauth_delay', 5)
log.info(u'delaying random_reauth_delay %ss', delay) log.info(u'delaying random_reauth_delay %ss', delay)
# regular sys.exit raises an exception -- which isn't sufficient in a thread # regular sys.exit raises an exception -- which isn't sufficient in a thread
os._exit(salt.defaults.exitcodes.SALT_KEEPALIVE) os._exit(salt.defaults.exitcodes.SALT_KEEPALIVE)
self._fire_master('ping', 'minion_ping', sync=False, timeout_handler=ping_timeout_handler)
except Exception: except Exception:
log.warning(u'Attempt to ping master failed.', exc_on_loglevel=logging.DEBUG) log.warning(u'Attempt to ping master failed.', exc_on_loglevel=logging.DEBUG)
self.periodic_callbacks[u'ping'] = tornado.ioloop.PeriodicCallback(ping_master, ping_interval * 1000, io_loop=self.io_loop) self.periodic_callbacks[u'ping'] = tornado.ioloop.PeriodicCallback(ping_master, ping_interval * 1000, io_loop=self.io_loop)
@ -2253,7 +2275,7 @@ class Minion(MinionBase):
except Exception: except Exception:
log.critical(u'The beacon errored: ', exc_info=True) log.critical(u'The beacon errored: ', exc_info=True)
if beacons and self.connected: if beacons and self.connected:
self._fire_master(events=beacons) self._fire_master(events=beacons, sync=False)
self.periodic_callbacks[u'beacons'] = tornado.ioloop.PeriodicCallback(handle_beacons, loop_interval * 1000, io_loop=self.io_loop) self.periodic_callbacks[u'beacons'] = tornado.ioloop.PeriodicCallback(handle_beacons, loop_interval * 1000, io_loop=self.io_loop)

View File

@ -3848,7 +3848,6 @@ def save(name,
if os.path.exists(path) and not overwrite: if os.path.exists(path) and not overwrite:
raise CommandExecutionError('{0} already exists'.format(path)) raise CommandExecutionError('{0} already exists'.format(path))
compression = kwargs.get('compression')
if compression is None: if compression is None:
if path.endswith('.tar.gz') or path.endswith('.tgz'): if path.endswith('.tar.gz') or path.endswith('.tgz'):
compression = 'gzip' compression = 'gzip'
@ -3954,7 +3953,7 @@ def save(name,
ret['Size_Human'] = _size_fmt(ret['Size']) ret['Size_Human'] = _size_fmt(ret['Size'])
# Process push # Process push
if kwargs.get(push, False): if kwargs.get('push', False):
ret['Push'] = __salt__['cp.push'](path) ret['Push'] = __salt__['cp.push'](path)
return ret return ret

View File

@ -152,7 +152,8 @@ Optional small program to encrypt data without needing salt modules.
from __future__ import absolute_import from __future__ import absolute_import
import base64 import base64
import os import os
import salt.utils import salt.utils.files
import salt.utils.platform
import salt.utils.win_functions import salt.utils.win_functions
import salt.utils.win_dacl import salt.utils.win_dacl
import salt.syspaths import salt.syspaths
@ -203,7 +204,7 @@ def _get_sk(**kwargs):
key = config['sk'] key = config['sk']
sk_file = config['sk_file'] sk_file = config['sk_file']
if not key and sk_file: if not key and sk_file:
with salt.utils.fopen(sk_file, 'rb') as keyf: with salt.utils.files.fopen(sk_file, 'rb') as keyf:
key = str(keyf.read()).rstrip('\n') key = str(keyf.read()).rstrip('\n')
if key is None: if key is None:
raise Exception('no key or sk_file found') raise Exception('no key or sk_file found')
@ -218,7 +219,7 @@ def _get_pk(**kwargs):
pubkey = config['pk'] pubkey = config['pk']
pk_file = config['pk_file'] pk_file = config['pk_file']
if not pubkey and pk_file: if not pubkey and pk_file:
with salt.utils.fopen(pk_file, 'rb') as keyf: with salt.utils.files.fopen(pk_file, 'rb') as keyf:
pubkey = str(keyf.read()).rstrip('\n') pubkey = str(keyf.read()).rstrip('\n')
if pubkey is None: if pubkey is None:
raise Exception('no pubkey or pk_file found') raise Exception('no pubkey or pk_file found')
@ -256,9 +257,9 @@ def keygen(sk_file=None, pk_file=None):
if sk_file and pk_file is None: if sk_file and pk_file is None:
if not os.path.isfile(sk_file): if not os.path.isfile(sk_file):
kp = libnacl.public.SecretKey() kp = libnacl.public.SecretKey()
with salt.utils.fopen(sk_file, 'w') as keyf: with salt.utils.files.fopen(sk_file, 'w') as keyf:
keyf.write(base64.b64encode(kp.sk)) keyf.write(base64.b64encode(kp.sk))
if salt.utils.is_windows(): if salt.utils.platform.is_windows():
cur_user = salt.utils.win_functions.get_current_user() cur_user = salt.utils.win_functions.get_current_user()
salt.utils.win_dacl.set_owner(sk_file, cur_user) salt.utils.win_dacl.set_owner(sk_file, cur_user)
salt.utils.win_dacl.set_permissions(sk_file, cur_user, 'full_control', 'grant', reset_perms=True, protected=True) salt.utils.win_dacl.set_permissions(sk_file, cur_user, 'full_control', 'grant', reset_perms=True, protected=True)
@ -277,25 +278,25 @@ def keygen(sk_file=None, pk_file=None):
if os.path.isfile(sk_file) and not os.path.isfile(pk_file): if os.path.isfile(sk_file) and not os.path.isfile(pk_file):
# generate pk using the sk # generate pk using the sk
with salt.utils.fopen(sk_file, 'rb') as keyf: with salt.utils.files.fopen(sk_file, 'rb') as keyf:
sk = str(keyf.read()).rstrip('\n') sk = str(keyf.read()).rstrip('\n')
sk = base64.b64decode(sk) sk = base64.b64decode(sk)
kp = libnacl.public.SecretKey(sk) kp = libnacl.public.SecretKey(sk)
with salt.utils.fopen(pk_file, 'w') as keyf: with salt.utils.files.fopen(pk_file, 'w') as keyf:
keyf.write(base64.b64encode(kp.pk)) keyf.write(base64.b64encode(kp.pk))
return 'saved pk_file: {0}'.format(pk_file) return 'saved pk_file: {0}'.format(pk_file)
kp = libnacl.public.SecretKey() kp = libnacl.public.SecretKey()
with salt.utils.fopen(sk_file, 'w') as keyf: with salt.utils.files.fopen(sk_file, 'w') as keyf:
keyf.write(base64.b64encode(kp.sk)) keyf.write(base64.b64encode(kp.sk))
if salt.utils.is_windows(): if salt.utils.platform.is_windows():
cur_user = salt.utils.win_functions.get_current_user() cur_user = salt.utils.win_functions.get_current_user()
salt.utils.win_dacl.set_owner(sk_file, cur_user) salt.utils.win_dacl.set_owner(sk_file, cur_user)
salt.utils.win_dacl.set_permissions(sk_file, cur_user, 'full_control', 'grant', reset_perms=True, protected=True) salt.utils.win_dacl.set_permissions(sk_file, cur_user, 'full_control', 'grant', reset_perms=True, protected=True)
else: else:
# chmod 0600 file # chmod 0600 file
os.chmod(sk_file, 1536) os.chmod(sk_file, 1536)
with salt.utils.fopen(pk_file, 'w') as keyf: with salt.utils.files.fopen(pk_file, 'w') as keyf:
keyf.write(base64.b64encode(kp.pk)) keyf.write(base64.b64encode(kp.pk))
return 'saved sk_file:{0} pk_file: {1}'.format(sk_file, pk_file) return 'saved sk_file:{0} pk_file: {1}'.format(sk_file, pk_file)
@ -335,13 +336,13 @@ def enc_file(name, out=None, **kwargs):
data = __salt__['cp.get_file_str'](name) data = __salt__['cp.get_file_str'](name)
except Exception as e: except Exception as e:
# likely using salt-run so fall back to local filesystem # likely using salt-run so fall back to local filesystem
with salt.utils.fopen(name, 'rb') as f: with salt.utils.files.fopen(name, 'rb') as f:
data = f.read() data = f.read()
d = enc(data, **kwargs) d = enc(data, **kwargs)
if out: if out:
if os.path.isfile(out): if os.path.isfile(out):
raise Exception('file:{0} already exist.'.format(out)) raise Exception('file:{0} already exist.'.format(out))
with salt.utils.fopen(out, 'wb') as f: with salt.utils.files.fopen(out, 'wb') as f:
f.write(d) f.write(d)
return 'Wrote: {0}'.format(out) return 'Wrote: {0}'.format(out)
return d return d
@ -382,13 +383,13 @@ def dec_file(name, out=None, **kwargs):
data = __salt__['cp.get_file_str'](name) data = __salt__['cp.get_file_str'](name)
except Exception as e: except Exception as e:
# likely using salt-run so fall back to local filesystem # likely using salt-run so fall back to local filesystem
with salt.utils.fopen(name, 'rb') as f: with salt.utils.files.fopen(name, 'rb') as f:
data = f.read() data = f.read()
d = dec(data, **kwargs) d = dec(data, **kwargs)
if out: if out:
if os.path.isfile(out): if os.path.isfile(out):
raise Exception('file:{0} already exist.'.format(out)) raise Exception('file:{0} already exist.'.format(out))
with salt.utils.fopen(out, 'wb') as f: with salt.utils.files.fopen(out, 'wb') as f:
f.write(d) f.write(d)
return 'Wrote: {0}'.format(out) return 'Wrote: {0}'.format(out)
return d return d

View File

@ -80,7 +80,8 @@ for service_dir in VALID_SERVICE_DIRS:
AVAIL_SVR_DIRS = [] AVAIL_SVR_DIRS = []
# Define the module's virtual name # Define the module's virtual name
__virtualname__ = 'service' __virtualname__ = 'runit'
__virtual_aliases__ = ('runit',)
def __virtual__(): def __virtual__():
@ -91,8 +92,12 @@ def __virtual__():
if __grains__.get('init') == 'runit': if __grains__.get('init') == 'runit':
if __grains__['os'] == 'Void': if __grains__['os'] == 'Void':
add_svc_avail_path('/etc/sv') add_svc_avail_path('/etc/sv')
global __virtualname__
__virtualname__ = 'service'
return __virtualname__ return __virtualname__
return False if salt.utils.which('sv'):
return __virtualname__
return (False, 'Runit not available. Please install sv')
def _service_path(name): def _service_path(name):

View File

@ -1285,6 +1285,18 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
#Compute msiexec string #Compute msiexec string
use_msiexec, msiexec = _get_msiexec(pkginfo[version_num].get('msiexec', False)) use_msiexec, msiexec = _get_msiexec(pkginfo[version_num].get('msiexec', False))
# Build cmd and arguments
# cmd and arguments must be separated for use with the task scheduler
if use_msiexec:
cmd = msiexec
arguments = ['/i', cached_pkg]
if pkginfo[version_num].get('allusers', True):
arguments.append('ALLUSERS="1"')
arguments.extend(salt.utils.shlex_split(install_flags))
else:
cmd = cached_pkg
arguments = salt.utils.shlex_split(install_flags)
# Install the software # Install the software
# Check Use Scheduler Option # Check Use Scheduler Option
if pkginfo[version_num].get('use_scheduler', False): if pkginfo[version_num].get('use_scheduler', False):
@ -1313,21 +1325,43 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
start_time='01:00', start_time='01:00',
ac_only=False, ac_only=False,
stop_if_on_batteries=False) stop_if_on_batteries=False)
# Run Scheduled Task # Run Scheduled Task
# Special handling for installing salt
if pkg_name in ['salt-minion', 'salt-minion-py3']:
ret[pkg_name] = {'install status': 'task started'}
if not __salt__['task.run'](name='update-salt-software'):
log.error('Failed to install {0}'.format(pkg_name))
log.error('Scheduled Task failed to run')
ret[pkg_name] = {'install status': 'failed'}
else:
# Make sure the task is running, try for 5 secs
from time import time
t_end = time() + 5
while time() < t_end:
task_running = __salt__['task.status'](
'update-salt-software') == 'Running'
if task_running:
break
if not task_running:
log.error(
'Failed to install {0}'.format(pkg_name))
log.error('Scheduled Task failed to run')
ret[pkg_name] = {'install status': 'failed'}
# All other packages run with task scheduler
else:
if not __salt__['task.run_wait'](name='update-salt-software'): if not __salt__['task.run_wait'](name='update-salt-software'):
log.error('Failed to install {0}'.format(pkg_name)) log.error('Failed to install {0}'.format(pkg_name))
log.error('Scheduled Task failed to run') log.error('Scheduled Task failed to run')
ret[pkg_name] = {'install status': 'failed'} ret[pkg_name] = {'install status': 'failed'}
else: else:
# Build the install command
cmd = [] # Combine cmd and arguments
if use_msiexec: cmd = [cmd] + arguments
cmd.extend([msiexec, '/i', cached_pkg])
if pkginfo[version_num].get('allusers', True):
cmd.append('ALLUSERS="1"')
else:
cmd.append(cached_pkg)
cmd.extend(salt.utils.args.shlex_split(install_flags))
# Launch the command # Launch the command
result = __salt__['cmd.run_all'](cmd, result = __salt__['cmd.run_all'](cmd,
cache_path, cache_path,

View File

@ -302,6 +302,11 @@ def get_community_names():
# Windows SNMP service GUI. # Windows SNMP service GUI.
if isinstance(current_values, list): if isinstance(current_values, list):
for current_value in current_values: for current_value in current_values:
# Ignore error values
if not isinstance(current_value, dict):
continue
permissions = str() permissions = str()
for permission_name in _PERMISSION_TYPES: for permission_name in _PERMISSION_TYPES:
if current_value['vdata'] == _PERMISSION_TYPES[permission_name]: if current_value['vdata'] == _PERMISSION_TYPES[permission_name]:

View File

@ -1260,7 +1260,7 @@ def status(name, location='\\'):
task_service = win32com.client.Dispatch("Schedule.Service") task_service = win32com.client.Dispatch("Schedule.Service")
task_service.Connect() task_service.Connect()
# get the folder to delete the folder from # get the folder where the task is defined
task_folder = task_service.GetFolder(location) task_folder = task_service.GetFolder(location)
task = task_folder.GetTask(name) task = task_folder.GetTask(name)

View File

@ -67,6 +67,17 @@ provider: ``napalm_base``
.. versionadded:: 2017.7.1 .. versionadded:: 2017.7.1
multiprocessing: ``False``
Overrides the :conf_minion:`multiprocessing` option, per proxy minion.
The ``multiprocessing`` option must be turned off for SSH-based proxies.
However, some NAPALM drivers (e.g. Arista, NX-OS) are not SSH-based.
As multiple proxy minions may share the same configuration file,
this option permits ``multiprocessing`` to be configured individually
for each proxy minion.
.. versionadded:: 2017.7.2
.. _`NAPALM Read the Docs page`: https://napalm.readthedocs.io/en/latest/#supported-network-operating-systems .. _`NAPALM Read the Docs page`: https://napalm.readthedocs.io/en/latest/#supported-network-operating-systems
.. _`optional arguments`: http://napalm.readthedocs.io/en/latest/support/index.html#list-of-supported-optional-arguments .. _`optional arguments`: http://napalm.readthedocs.io/en/latest/support/index.html#list-of-supported-optional-arguments

View File

@ -17,16 +17,28 @@ import salt.netapi
def mk_token(**load): def mk_token(**load):
''' r'''
Create an eauth token using provided credentials Create an eauth token using provided credentials
Non-root users may specify an expiration date -- if allowed via the
:conf_master:`token_expire_user_override` setting -- by passing an
additional ``token_expire`` param. This overrides the
:conf_master:`token_expire` setting in the Master config and specifies,
in seconds, how long the token should live.
CLI Example: CLI Example:
.. code-block:: shell .. code-block:: shell
salt-run auth.mk_token username=saltdev password=saltdev eauth=auto salt-run auth.mk_token username=saltdev password=saltdev eauth=auto
salt-run auth.mk_token username=saltdev password=saltdev eauth=auto \\
# Create a token valid for three years.
salt-run auth.mk_token username=saltdev password=saltdev eauth=auto \
token_expire=94670856 token_expire=94670856
# Calculate the number of seconds using expr.
salt-run auth.mk_token username=saltdev password=saltdev eauth=auto \
token_expire=$(expr \( 365 \* 24 \* 60 \* 60 \) \* 3)
''' '''
# This will hang if the master daemon is not running. # This will hang if the master daemon is not running.
netapi = salt.netapi.NetapiClient(__opts__) netapi = salt.netapi.NetapiClient(__opts__)

View File

@ -149,10 +149,14 @@ Optional small program to encrypt data without needing salt modules.
''' '''
# Import Python libs
from __future__ import absolute_import from __future__ import absolute_import
import base64 import base64
import os import os
import salt.utils
# Import Salt libs
import salt.utils.files
import salt.utils.platform
import salt.utils.win_functions import salt.utils.win_functions
import salt.utils.win_dacl import salt.utils.win_dacl
import salt.syspaths import salt.syspaths
@ -203,7 +207,7 @@ def _get_sk(**kwargs):
key = config['sk'] key = config['sk']
sk_file = config['sk_file'] sk_file = config['sk_file']
if not key and sk_file: if not key and sk_file:
with salt.utils.fopen(sk_file, 'rb') as keyf: with salt.utils.files.fopen(sk_file, 'rb') as keyf:
key = str(keyf.read()).rstrip('\n') key = str(keyf.read()).rstrip('\n')
if key is None: if key is None:
raise Exception('no key or sk_file found') raise Exception('no key or sk_file found')
@ -218,7 +222,7 @@ def _get_pk(**kwargs):
pubkey = config['pk'] pubkey = config['pk']
pk_file = config['pk_file'] pk_file = config['pk_file']
if not pubkey and pk_file: if not pubkey and pk_file:
with salt.utils.fopen(pk_file, 'rb') as keyf: with salt.utils.files.fopen(pk_file, 'rb') as keyf:
pubkey = str(keyf.read()).rstrip('\n') pubkey = str(keyf.read()).rstrip('\n')
if pubkey is None: if pubkey is None:
raise Exception('no pubkey or pk_file found') raise Exception('no pubkey or pk_file found')
@ -256,9 +260,9 @@ def keygen(sk_file=None, pk_file=None):
if sk_file and pk_file is None: if sk_file and pk_file is None:
if not os.path.isfile(sk_file): if not os.path.isfile(sk_file):
kp = libnacl.public.SecretKey() kp = libnacl.public.SecretKey()
with salt.utils.fopen(sk_file, 'w') as keyf: with salt.utils.files.fopen(sk_file, 'w') as keyf:
keyf.write(base64.b64encode(kp.sk)) keyf.write(base64.b64encode(kp.sk))
if salt.utils.is_windows(): if salt.utils.platform.is_windows():
cur_user = salt.utils.win_functions.get_current_user() cur_user = salt.utils.win_functions.get_current_user()
salt.utils.win_dacl.set_owner(sk_file, cur_user) salt.utils.win_dacl.set_owner(sk_file, cur_user)
salt.utils.win_dacl.set_permissions(sk_file, cur_user, 'full_control', 'grant', reset_perms=True, protected=True) salt.utils.win_dacl.set_permissions(sk_file, cur_user, 'full_control', 'grant', reset_perms=True, protected=True)
@ -277,25 +281,25 @@ def keygen(sk_file=None, pk_file=None):
if os.path.isfile(sk_file) and not os.path.isfile(pk_file): if os.path.isfile(sk_file) and not os.path.isfile(pk_file):
# generate pk using the sk # generate pk using the sk
with salt.utils.fopen(sk_file, 'rb') as keyf: with salt.utils.files.fopen(sk_file, 'rb') as keyf:
sk = str(keyf.read()).rstrip('\n') sk = str(keyf.read()).rstrip('\n')
sk = base64.b64decode(sk) sk = base64.b64decode(sk)
kp = libnacl.public.SecretKey(sk) kp = libnacl.public.SecretKey(sk)
with salt.utils.fopen(pk_file, 'w') as keyf: with salt.utils.files.fopen(pk_file, 'w') as keyf:
keyf.write(base64.b64encode(kp.pk)) keyf.write(base64.b64encode(kp.pk))
return 'saved pk_file: {0}'.format(pk_file) return 'saved pk_file: {0}'.format(pk_file)
kp = libnacl.public.SecretKey() kp = libnacl.public.SecretKey()
with salt.utils.fopen(sk_file, 'w') as keyf: with salt.utils.files.fopen(sk_file, 'w') as keyf:
keyf.write(base64.b64encode(kp.sk)) keyf.write(base64.b64encode(kp.sk))
if salt.utils.is_windows(): if salt.utils.platform.is_windows():
cur_user = salt.utils.win_functions.get_current_user() cur_user = salt.utils.win_functions.get_current_user()
salt.utils.win_dacl.set_owner(sk_file, cur_user) salt.utils.win_dacl.set_owner(sk_file, cur_user)
salt.utils.win_dacl.set_permissions(sk_file, cur_user, 'full_control', 'grant', reset_perms=True, protected=True) salt.utils.win_dacl.set_permissions(sk_file, cur_user, 'full_control', 'grant', reset_perms=True, protected=True)
else: else:
# chmod 0600 file # chmod 0600 file
os.chmod(sk_file, 1536) os.chmod(sk_file, 1536)
with salt.utils.fopen(pk_file, 'w') as keyf: with salt.utils.files.fopen(pk_file, 'w') as keyf:
keyf.write(base64.b64encode(kp.pk)) keyf.write(base64.b64encode(kp.pk))
return 'saved sk_file:{0} pk_file: {1}'.format(sk_file, pk_file) return 'saved sk_file:{0} pk_file: {1}'.format(sk_file, pk_file)
@ -335,13 +339,13 @@ def enc_file(name, out=None, **kwargs):
data = __salt__['cp.get_file_str'](name) data = __salt__['cp.get_file_str'](name)
except Exception as e: except Exception as e:
# likely using salt-run so fall back to local filesystem # likely using salt-run so fall back to local filesystem
with salt.utils.fopen(name, 'rb') as f: with salt.utils.files.fopen(name, 'rb') as f:
data = f.read() data = f.read()
d = enc(data, **kwargs) d = enc(data, **kwargs)
if out: if out:
if os.path.isfile(out): if os.path.isfile(out):
raise Exception('file:{0} already exist.'.format(out)) raise Exception('file:{0} already exist.'.format(out))
with salt.utils.fopen(out, 'wb') as f: with salt.utils.files.fopen(out, 'wb') as f:
f.write(d) f.write(d)
return 'Wrote: {0}'.format(out) return 'Wrote: {0}'.format(out)
return d return d
@ -382,13 +386,13 @@ def dec_file(name, out=None, **kwargs):
data = __salt__['cp.get_file_str'](name) data = __salt__['cp.get_file_str'](name)
except Exception as e: except Exception as e:
# likely using salt-run so fall back to local filesystem # likely using salt-run so fall back to local filesystem
with salt.utils.fopen(name, 'rb') as f: with salt.utils.files.fopen(name, 'rb') as f:
data = f.read() data = f.read()
d = dec(data, **kwargs) d = dec(data, **kwargs)
if out: if out:
if os.path.isfile(out): if os.path.isfile(out):
raise Exception('file:{0} already exist.'.format(out)) raise Exception('file:{0} already exist.'.format(out))
with salt.utils.fopen(out, 'wb') as f: with salt.utils.files.fopen(out, 'wb') as f:
f.write(d) f.write(d)
return 'Wrote: {0}'.format(out) return 'Wrote: {0}'.format(out)
return d return d

View File

@ -68,7 +68,7 @@ def present(dbname, name,
'db_password': db_password, 'db_password': db_password,
'db_host': db_host, 'db_host': db_host,
'db_port': db_port, 'db_port': db_port,
'runas': user 'user': user
} }
# check if schema exists # check if schema exists
@ -144,7 +144,7 @@ def absent(dbname, name, user=None,
'db_password': db_password, 'db_password': db_password,
'db_host': db_host, 'db_host': db_host,
'db_port': db_port, 'db_port': db_port,
'runas': user 'user': user
} }
# check if schema exists and remove it # check if schema exists and remove it

View File

@ -900,7 +900,7 @@ def mod_watch(name,
try: try:
result = func(name, **func_kwargs) result = func(name, **func_kwargs)
except CommandExecutionError as exc: except CommandExecutionError as exc:
ret['result'] = True ret['result'] = False
ret['comment'] = exc.strerror ret['comment'] = exc.strerror
return ret return ret

View File

@ -410,10 +410,7 @@ def bootstrap(vm_, opts):
'tmp_dir': salt.config.get_cloud_config_value( 'tmp_dir': salt.config.get_cloud_config_value(
'tmp_dir', vm_, opts, default='/tmp/.saltcloud' 'tmp_dir', vm_, opts, default='/tmp/.saltcloud'
), ),
'deploy_command': salt.config.get_cloud_config_value( 'vm_': vm_,
'deploy_command', vm_, opts,
default='/tmp/.saltcloud/deploy.sh',
),
'start_action': opts['start_action'], 'start_action': opts['start_action'],
'parallel': opts['parallel'], 'parallel': opts['parallel'],
'sock_dir': opts['sock_dir'], 'sock_dir': opts['sock_dir'],
@ -443,6 +440,9 @@ def bootstrap(vm_, opts):
'script_env', vm_, opts 'script_env', vm_, opts
), ),
'minion_conf': minion_conf, 'minion_conf': minion_conf,
'force_minion_config': salt.config.get_cloud_config_value(
'force_minion_config', vm_, opts, default=False
),
'preseed_minion_keys': vm_.get('preseed_minion_keys', None), 'preseed_minion_keys': vm_.get('preseed_minion_keys', None),
'display_ssh_output': salt.config.get_cloud_config_value( 'display_ssh_output': salt.config.get_cloud_config_value(
'display_ssh_output', vm_, opts, default=True 'display_ssh_output', vm_, opts, default=True
@ -459,9 +459,13 @@ def bootstrap(vm_, opts):
'preflight_cmds': salt.config.get_cloud_config_value( 'preflight_cmds': salt.config.get_cloud_config_value(
'preflight_cmds', vm_, __opts__, default=[] 'preflight_cmds', vm_, __opts__, default=[]
), ),
'cloud_grains': {'driver': vm_['driver'],
'provider': vm_['provider'],
'profile': vm_['profile']
}
} }
inline_script_kwargs = deploy_kwargs inline_script_kwargs = deploy_kwargs.copy() # make a copy at this point
# forward any info about possible ssh gateway to deploy script # forward any info about possible ssh gateway to deploy script
# as some providers need also a 'gateway' configuration # as some providers need also a 'gateway' configuration
@ -907,7 +911,7 @@ def validate_windows_cred(host,
host host
) )
for i in xrange(retries): for i in range(retries):
ret_code = win_cmd( ret_code = win_cmd(
cmd, cmd,
logging_command=logging_cmd logging_command=logging_cmd
@ -1235,20 +1239,26 @@ def deploy_script(host,
sudo_password=None, sudo_password=None,
sudo=False, sudo=False,
tty=None, tty=None,
deploy_command='/tmp/.saltcloud/deploy.sh', vm_=None,
opts=None, opts=None,
tmp_dir='/tmp/.saltcloud', tmp_dir='/tmp/.saltcloud',
file_map=None, file_map=None,
master_sign_pub_file=None, master_sign_pub_file=None,
cloud_grains=None,
force_minion_config=False,
**kwargs): **kwargs):
''' '''
Copy a deploy script to a remote server, execute it, and remove it Copy a deploy script to a remote server, execute it, and remove it
''' '''
if not isinstance(opts, dict): if not isinstance(opts, dict):
opts = {} opts = {}
vm_ = vm_ or {} # if None, default to empty dict
cloud_grains = cloud_grains or {}
tmp_dir = '{0}-{1}'.format(tmp_dir.rstrip('/'), uuid.uuid4()) tmp_dir = '{0}-{1}'.format(tmp_dir.rstrip('/'), uuid.uuid4())
deploy_command = os.path.join(tmp_dir, 'deploy.sh') deploy_command = salt.config.get_cloud_config_value(
'deploy_command', vm_, opts,
default=os.path.join(tmp_dir, 'deploy.sh'))
if key_filename is not None and not os.path.isfile(key_filename): if key_filename is not None and not os.path.isfile(key_filename):
raise SaltCloudConfigError( raise SaltCloudConfigError(
'The defined key_filename \'{0}\' does not exist'.format( 'The defined key_filename \'{0}\' does not exist'.format(
@ -1260,9 +1270,11 @@ def deploy_script(host,
if 'gateway' in kwargs: if 'gateway' in kwargs:
gateway = kwargs['gateway'] gateway = kwargs['gateway']
starttime = time.mktime(time.localtime()) starttime = time.localtime()
log.debug('Deploying {0} at {1}'.format(host, starttime)) log.debug('Deploying {0} at {1}'.format(
host,
time.strftime('%Y-%m-%d %H:%M:%S', starttime))
)
known_hosts_file = kwargs.get('known_hosts_file', '/dev/null') known_hosts_file = kwargs.get('known_hosts_file', '/dev/null')
hard_timeout = opts.get('hard_timeout', None) hard_timeout = opts.get('hard_timeout', None)
@ -1394,6 +1406,8 @@ def deploy_script(host,
salt_config_to_yaml(minion_grains), salt_config_to_yaml(minion_grains),
ssh_kwargs ssh_kwargs
) )
if cloud_grains and opts.get('enable_cloud_grains', True):
minion_conf['grains'] = {'salt-cloud': cloud_grains}
ssh_file( ssh_file(
opts, opts,
'{0}/minion'.format(tmp_dir), '{0}/minion'.format(tmp_dir),
@ -1499,7 +1513,8 @@ def deploy_script(host,
raise SaltCloudSystemExit( raise SaltCloudSystemExit(
'Can\'t set perms on {0}/deploy.sh'.format(tmp_dir)) 'Can\'t set perms on {0}/deploy.sh'.format(tmp_dir))
newtimeout = timeout - (time.mktime(time.localtime()) - starttime) time_used = time.mktime(time.localtime()) - time.mktime(starttime)
newtimeout = timeout - time_used
queue = None queue = None
process = None process = None
# Consider this code experimental. It causes Salt Cloud to wait # Consider this code experimental. It causes Salt Cloud to wait
@ -1520,6 +1535,8 @@ def deploy_script(host,
if script: if script:
if 'bootstrap-salt' in script: if 'bootstrap-salt' in script:
deploy_command += ' -c \'{0}\''.format(tmp_dir) deploy_command += ' -c \'{0}\''.format(tmp_dir)
if force_minion_config:
deploy_command += ' -F'
if make_syndic is True: if make_syndic is True:
deploy_command += ' -S' deploy_command += ' -S'
if make_master is True: if make_master is True:
@ -2789,6 +2806,11 @@ def cache_nodes_ip(opts, base=None):
Retrieve a list of all nodes from Salt Cloud cache, and any associated IP Retrieve a list of all nodes from Salt Cloud cache, and any associated IP
addresses. Returns a dict. addresses. Returns a dict.
''' '''
salt.utils.warn_until(
'Fluorine',
'This function is incomplete and non-functional '
'and will be removed in Salt Fluorine.'
)
if base is None: if base is None:
base = opts['cachedir'] base = opts['cachedir']

View File

@ -78,7 +78,7 @@ def recursive_copy(source, dest):
(identical to cp -r on a unix machine) (identical to cp -r on a unix machine)
''' '''
for root, _, files in os.walk(source): for root, _, files in os.walk(source):
path_from_source = root.replace(source, '').lstrip('/') path_from_source = root.replace(source, '').lstrip(os.sep)
target_directory = os.path.join(dest, path_from_source) target_directory = os.path.join(dest, path_from_source)
if not os.path.exists(target_directory): if not os.path.exists(target_directory):
os.makedirs(target_directory) os.makedirs(target_directory)
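The hunk above swaps ``lstrip('/')`` for ``lstrip(os.sep)`` so the relative path is computed correctly on Windows. A small standalone illustration of why the old call broke the join on Windows (not Salt code; ``ntpath`` is used so the example runs on any platform):

.. code-block:: python

    # Illustration only: why lstrip('/') breaks recursive_copy on Windows.
    import ntpath  # Windows path semantics, importable on any platform

    source = 'C:\\src'
    root = 'C:\\src\\sub'      # a directory yielded by os.walk(source)

    rel_old = root.replace(source, '').lstrip('/')   # '\\sub' - backslash is not stripped
    rel_new = root.replace(source, '').lstrip('\\')  # 'sub'   - what lstrip(os.sep) yields on Windows

    print(ntpath.join('C:\\dst', rel_old))  # 'C:\\sub'      - rooted path discards the destination
    print(ntpath.join('C:\\dst', rel_new))  # 'C:\\dst\\sub' - expected target directory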

View File

@ -243,6 +243,11 @@ def get_device_opts(opts, salt_obj=None):
network_device = {} network_device = {}
# by default, look in the proxy config details # by default, look in the proxy config details
device_dict = opts.get('proxy', {}) or opts.get('napalm', {}) device_dict = opts.get('proxy', {}) or opts.get('napalm', {})
if opts.get('proxy') or opts.get('napalm'):
opts['multiprocessing'] = device_dict.get('multiprocessing', False)
# Most NAPALM drivers are SSH-based, so multiprocessing should default to False.
# But the user is allowed to set a different value for multiprocessing, which will
# override the opts.
if salt_obj and not device_dict: if salt_obj and not device_dict:
# get the connection details from the opts # get the connection details from the opts
device_dict = salt_obj['config.merge']('napalm') device_dict = salt_obj['config.merge']('napalm')
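The inserted lines default ``multiprocessing`` to ``False`` for NAPALM proxies while still honouring a per-proxy override, matching the documentation added earlier in this commit. A rough standalone sketch of that behaviour (illustrative function name, not Salt's internal API):

.. code-block:: python

    # Sketch only: per-proxy 'multiprocessing' override with an SSH-safe default.
    def resolve_multiprocessing(opts):
        device_dict = opts.get('proxy', {}) or opts.get('napalm', {})
        if device_dict:
            # Default to False because most NAPALM drivers are SSH-based,
            # but honour an explicit per-proxy setting when one is present.
            opts['multiprocessing'] = device_dict.get('multiprocessing', False)
        return opts

    print(resolve_multiprocessing(
        {'proxy': {'proxytype': 'napalm', 'driver': 'junos'}})['multiprocessing'])  # False
    print(resolve_multiprocessing(
        {'proxy': {'proxytype': 'napalm', 'driver': 'eos',
                   'multiprocessing': True}})['multiprocessing'])                   # True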

View File

@ -134,8 +134,10 @@ def get_pidfile(pidfile):
''' '''
with salt.utils.files.fopen(pidfile) as pdf: with salt.utils.files.fopen(pidfile) as pdf:
pid = pdf.read() pid = pdf.read()
if pid:
return int(pid) return int(pid)
else:
return
def clean_proc(proc, wait_for_kill=10): def clean_proc(proc, wait_for_kill=10):

View File

@ -284,6 +284,9 @@ class ReactWrap(object):
# Update called function's low data with event user to # Update called function's low data with event user to
# segregate events fired by reactor and avoid reaction loops # segregate events fired by reactor and avoid reaction loops
kwargs['__user__'] = self.event_user kwargs['__user__'] = self.event_user
# Replace ``state`` kwarg which comes from the high data compiler.
# It breaks some runner functions and seems unnecessary.
kwargs['__state__'] = kwargs.pop('state')
l_fun(*f_call.get('args', ()), **kwargs) l_fun(*f_call.get('args', ()), **kwargs)
except Exception: except Exception:

View File

@ -852,6 +852,12 @@ class Schedule(object):
ret['return'] = self.functions[func](*args, **kwargs) ret['return'] = self.functions[func](*args, **kwargs)
# runners do not provide retcode
if 'retcode' in self.functions.pack['__context__']:
ret['retcode'] = self.functions.pack['__context__']['retcode']
ret['success'] = True
data_returner = data.get('returner', None) data_returner = data.get('returner', None)
if data_returner or self.schedule_returner: if data_returner or self.schedule_returner:
if 'return_config' in data: if 'return_config' in data:
@ -868,7 +874,6 @@ class Schedule(object):
for returner in OrderedDict.fromkeys(rets): for returner in OrderedDict.fromkeys(rets):
ret_str = '{0}.returner'.format(returner) ret_str = '{0}.returner'.format(returner)
if ret_str in self.returners: if ret_str in self.returners:
ret['success'] = True
self.returners[ret_str](ret) self.returners[ret_str](ret)
else: else:
log.info( log.info(
@ -877,11 +882,6 @@ class Schedule(object):
) )
) )
# runners do not provide retcode
if 'retcode' in self.functions.pack['__context__']:
ret['retcode'] = self.functions.pack['__context__']['retcode']
ret['success'] = True
except Exception: except Exception:
log.exception("Unhandled exception running {0}".format(ret['fun'])) log.exception("Unhandled exception running {0}".format(ret['fun']))
# Although catch-all exception handlers are bad, the exception here # Although catch-all exception handlers are bad, the exception here

View File

@ -482,12 +482,21 @@ def clean_path(root, path, subdir=False):
return '' return ''
def clean_id(id_):
'''
Returns True if the passed id contains no directory-traversal sequences.
'''
if re.search(r'\.\.\{sep}'.format(sep=os.sep), id_):
return False
return True
def valid_id(opts, id_): def valid_id(opts, id_):
''' '''
Returns if the passed id is valid Returns if the passed id is valid
''' '''
try: try:
return bool(clean_path(opts['pki_dir'], id_)) return bool(clean_path(opts['pki_dir'], id_)) and clean_id(id_)
except (AttributeError, KeyError, TypeError) as e: except (AttributeError, KeyError, TypeError) as e:
return False return False
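The new ``clean_id`` helper rejects minion IDs containing a ``..`` path component before ``valid_id`` will accept them. A minimal standalone sketch of the same check (illustration only, not the full ``salt.utils.verify`` module):

.. code-block:: python

    # Illustration only: the directory-traversal check added above.
    import os
    import re

    def clean_id(id_):
        '''Return False when the minion id contains a '..' traversal sequence.'''
        if re.search(r'\.\.\{sep}'.format(sep=os.sep), id_):
            return False
        return True

    print(clean_id('web01'))             # True  - ordinary minion id
    print(clean_id('../../etc/passwd'))  # False - traversal attempt is rejected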

View File

@ -0,0 +1,120 @@
# -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Amel Ajdinovic <amel@stackpointcloud.com>`
'''
# Import Python Libs
from __future__ import absolute_import
import os
# Import Salt Testing Libs
from tests.support.case import ShellCase
from tests.support.paths import FILES
from tests.support.unit import skipIf
from tests.support.helpers import expensiveTest, generate_random_name
# Import Salt Libs
from salt.config import cloud_providers_config
# Import Third-Party Libs
try:
from oneandone.client import OneAndOneService # pylint: disable=unused-import
HAS_ONEANDONE = True
except ImportError:
HAS_ONEANDONE = False
# Create the cloud instance name to be used throughout the tests
INSTANCE_NAME = generate_random_name('CLOUD-TEST-')
PROVIDER_NAME = 'oneandone'
DRIVER_NAME = 'oneandone'
@skipIf(HAS_ONEANDONE is False, 'salt-cloud requires 1and1 >= 1.2.0')
class OneAndOneTest(ShellCase):
'''
Integration tests for the 1and1 cloud provider
'''
@expensiveTest
def setUp(self):
'''
Sets up the test requirements
'''
super(OneAndOneTest, self).setUp()
# check if appropriate cloud provider and profile files are present
profile_str = 'oneandone-config'
providers = self.run_cloud('--list-providers')
if profile_str + ':' not in providers:
self.skipTest(
'Configuration file for {0} was not found. Check {0}.conf '
'files in tests/integration/files/conf/cloud.*.d/ to run '
'these tests.'.format(PROVIDER_NAME)
)
# check if api_token present
config = cloud_providers_config(
os.path.join(
FILES,
'conf',
'cloud.providers.d',
PROVIDER_NAME + '.conf'
)
)
api_token = config[profile_str][DRIVER_NAME]['api_token']
if api_token == '':
self.skipTest(
'api_token must be provided to '
'run these tests. Check '
'tests/integration/files/conf/cloud.providers.d/{0}.conf'
.format(PROVIDER_NAME)
)
def test_list_images(self):
'''
Tests the return of running the --list-images command for 1and1
'''
image_list = self.run_cloud('--list-images {0}'.format(PROVIDER_NAME))
self.assertIn(
'coreOSimage',
[i.strip() for i in image_list]
)
def test_instance(self):
'''
Test creating an instance on 1and1
'''
# check if an instance with salt installed was returned
try:
self.assertIn(
INSTANCE_NAME,
[i.strip() for i in self.run_cloud(
'-p oneandone-test {0}'.format(INSTANCE_NAME), timeout=500
)]
)
except AssertionError:
self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME), timeout=500)
raise
# delete the instance
try:
self.assertIn(
INSTANCE_NAME + ':',
[i.strip() for i in self.run_cloud(
'-d {0} --assume-yes'.format(INSTANCE_NAME), timeout=500
)]
)
except AssertionError:
raise
def tearDown(self):
'''
Clean up after tests
'''
query = self.run_cloud('--query')
ret = ' {0}:'.format(INSTANCE_NAME)
# if test instance is still present, delete it
if ret in query:
self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME), timeout=500)

View File

@ -0,0 +1,15 @@
oneandone-test:
provider: oneandone-config
description: Testing salt-cloud create operation
vcore: 2
cores_per_processor: 1
ram: 2
password: P4$$w0rD
appliance_id: 8E3BAA98E3DFD37857810E0288DD8FBA
hdds:
-
is_main: true
size: 20
-
is_main: false
size: 20

View File

@ -0,0 +1,5 @@
oneandone-config:
driver: oneandone
api_token: ''
ssh_private_key: ~/.ssh/id_rsa
ssh_public_key: ~/.ssh/id_rsa.pub

View File

@ -19,8 +19,9 @@ from tornado.httpclient import HTTPClient
GEM = 'tidy' GEM = 'tidy'
GEM_VER = '1.1.2' GEM_VER = '1.1.2'
OLD_GEM = 'thor' OLD_GEM = 'brass'
OLD_VERSION = '0.17.0' OLD_VERSION = '1.0.0'
NEW_VERSION = '1.2.1'
GEM_LIST = [GEM, OLD_GEM] GEM_LIST = [GEM, OLD_GEM]
@ -129,18 +130,18 @@ class GemModuleTest(ModuleCase):
self.run_function('gem.install', [OLD_GEM], version=OLD_VERSION) self.run_function('gem.install', [OLD_GEM], version=OLD_VERSION)
gem_list = self.run_function('gem.list', [OLD_GEM]) gem_list = self.run_function('gem.list', [OLD_GEM])
self.assertEqual({'thor': ['0.17.0']}, gem_list) self.assertEqual({OLD_GEM: [OLD_VERSION]}, gem_list)
self.run_function('gem.update', [OLD_GEM]) self.run_function('gem.update', [OLD_GEM])
gem_list = self.run_function('gem.list', [OLD_GEM]) gem_list = self.run_function('gem.list', [OLD_GEM])
self.assertEqual({'thor': ['0.19.4', '0.17.0']}, gem_list) self.assertEqual({OLD_GEM: [NEW_VERSION, OLD_VERSION]}, gem_list)
self.run_function('gem.uninstall', [OLD_GEM]) self.run_function('gem.uninstall', [OLD_GEM])
self.assertFalse(self.run_function('gem.list', [OLD_GEM])) self.assertFalse(self.run_function('gem.list', [OLD_GEM]))
def test_udpate_system(self): def test_update_system(self):
''' '''
gem.udpate_system gem.update_system
''' '''
ret = self.run_function('gem.update_system') ret = self.run_function('gem.update_system')
self.assertTrue(ret) self.assertTrue(ret)

View File

@ -6,6 +6,7 @@
# Import Python libs # Import Python libs
from __future__ import absolute_import from __future__ import absolute_import
import logging
import pwd import pwd
import grp import grp
import random import random
@ -21,6 +22,8 @@ from salt.utils.pycrypto import gen_hash
# Import 3rd-party libs # Import 3rd-party libs
from salt.ext.six.moves import range # pylint: disable=import-error,redefined-builtin from salt.ext.six.moves import range # pylint: disable=import-error,redefined-builtin
log = logging.getLogger(__name__)
def gen_password(): def gen_password():
''' '''
@ -99,6 +102,7 @@ class AuthTest(ShellCase):
cmd = ('-a pam "*" test.ping ' cmd = ('-a pam "*" test.ping '
'--username {0} --password {1}'.format(self.userA, password)) '--username {0} --password {1}'.format(self.userA, password))
resp = self.run_salt(cmd) resp = self.run_salt(cmd)
log.debug('resp = %s', resp)
self.assertTrue( self.assertTrue(
'minion:' in resp 'minion:' in resp
) )

View File

@ -11,15 +11,11 @@ from tests.support.unit import skipIf, TestCase
from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch
# Import Salt libs # Import Salt libs
import salt.config
import salt.loader
import salt.utils.boto import salt.utils.boto
from salt.ext import six from salt.ext import six
from salt.utils.versions import LooseVersion from salt.utils.versions import LooseVersion
import salt.states.boto_vpc as boto_vpc import salt.states.boto_vpc as boto_vpc
# Import test suite libs
# pylint: disable=import-error,unused-import # pylint: disable=import-error,unused-import
from tests.unit.modules.test_boto_vpc import BotoVpcTestCaseMixin from tests.unit.modules.test_boto_vpc import BotoVpcTestCaseMixin

View File

@ -6,6 +6,7 @@
# Import Python libs # Import Python libs
from __future__ import absolute_import from __future__ import absolute_import
import errno import errno
import os
# Import Salt Testing libs # Import Salt Testing libs
from tests.support.mock import patch, Mock from tests.support.mock import patch, Mock
@ -38,7 +39,7 @@ class FileclientTestCase(TestCase):
for exists in range(2): for exists in range(2):
with patch('os.makedirs', self._fake_makedir()): with patch('os.makedirs', self._fake_makedir()):
with Client(self.opts)._cache_loc('testfile') as c_ref_itr: with Client(self.opts)._cache_loc('testfile') as c_ref_itr:
assert c_ref_itr == '/__test__/files/base/testfile' assert c_ref_itr == os.sep + os.sep.join(['__test__', 'files', 'base', 'testfile'])
def test_cache_raises_exception_on_non_eexist_ioerror(self): def test_cache_raises_exception_on_non_eexist_ioerror(self):
''' '''

View File

@ -5,6 +5,7 @@
# Import python libs # Import python libs
from __future__ import absolute_import from __future__ import absolute_import
import textwrap
# Import Salt Libs # Import Salt Libs
from yaml.constructor import ConstructorError from yaml.constructor import ConstructorError
@ -36,12 +37,11 @@ class YamlLoaderTestCase(TestCase):
''' '''
Test parsing an ordinary path Test parsing an ordinary path
''' '''
self.assertEqual( self.assertEqual(
self._render_yaml(b''' self._render_yaml(textwrap.dedent('''\
p1: p1:
- alpha - alpha
- beta'''), - beta''')),
{'p1': ['alpha', 'beta']} {'p1': ['alpha', 'beta']}
) )
@ -49,38 +49,37 @@ p1:
''' '''
Test YAML anchors Test YAML anchors
''' '''
# Simple merge test # Simple merge test
self.assertEqual( self.assertEqual(
self._render_yaml(b''' self._render_yaml(textwrap.dedent('''\
p1: &p1 p1: &p1
v1: alpha v1: alpha
p2: p2:
<<: *p1 <<: *p1
v2: beta'''), v2: beta''')),
{'p1': {'v1': 'alpha'}, 'p2': {'v1': 'alpha', 'v2': 'beta'}} {'p1': {'v1': 'alpha'}, 'p2': {'v1': 'alpha', 'v2': 'beta'}}
) )
# Test that keys/nodes are overwritten # Test that keys/nodes are overwritten
self.assertEqual( self.assertEqual(
self._render_yaml(b''' self._render_yaml(textwrap.dedent('''\
p1: &p1 p1: &p1
v1: alpha v1: alpha
p2: p2:
<<: *p1 <<: *p1
v1: new_alpha'''), v1: new_alpha''')),
{'p1': {'v1': 'alpha'}, 'p2': {'v1': 'new_alpha'}} {'p1': {'v1': 'alpha'}, 'p2': {'v1': 'new_alpha'}}
) )
# Test merging of lists # Test merging of lists
self.assertEqual( self.assertEqual(
self._render_yaml(b''' self._render_yaml(textwrap.dedent('''\
p1: &p1 p1: &p1
v1: &v1 v1: &v1
- t1 - t1
- t2 - t2
p2: p2:
v2: *v1'''), v2: *v1''')),
{"p2": {"v2": ["t1", "t2"]}, "p1": {"v1": ["t1", "t2"]}} {"p2": {"v2": ["t1", "t2"]}, "p1": {"v1": ["t1", "t2"]}}
) )
@ -89,15 +88,27 @@ p2:
Test that duplicates still throw an error Test that duplicates still throw an error
''' '''
with self.assertRaises(ConstructorError): with self.assertRaises(ConstructorError):
self._render_yaml(b''' self._render_yaml(textwrap.dedent('''\
p1: alpha p1: alpha
p1: beta''') p1: beta'''))
with self.assertRaises(ConstructorError): with self.assertRaises(ConstructorError):
self._render_yaml(b''' self._render_yaml(textwrap.dedent('''\
p1: &p1 p1: &p1
v1: alpha v1: alpha
p2: p2:
<<: *p1 <<: *p1
v2: beta v2: beta
v2: betabeta''') v2: betabeta'''))
def test_yaml_with_unicode_literals(self):
'''
Test proper loading of unicode literals
'''
self.assertEqual(
self._render_yaml(textwrap.dedent('''\
foo:
a: Д
b: {'a': u'\\u0414'}''')),
{'foo': {'a': u'\u0414', 'b': {'a': u'\u0414'}}}
)