ProfitBricks CloudAPI v4 updates

This commit is contained in:
denza 2017-12-11 21:25:03 +01:00
parent 015d66be4d
commit 3407b272c0
5 changed files with 342 additions and 150 deletions


@ -11,7 +11,7 @@ and disk size without being tied to a particular server size.
Dependencies Dependencies
============ ============
* profitbricks >= 3.0.0 * profitbricks >= 4.1.1
Configuration Configuration
============= =============
@ -34,8 +34,10 @@ Configuration
# #
username: user@domain.com username: user@domain.com
password: 123456 password: 123456
# datacenter_id is the UUID of a pre-existing virtual data center. # datacenter is the UUID of a pre-existing virtual data center.
datacenter_id: 9e6709a0-6bf9-4bd6-8692-60349c70ce0e datacenter: 9e6709a0-6bf9-4bd6-8692-60349c70ce0e
# delete_volumes forces deletion of all volumes attached to a server when the server is deleted
delete_volumes: true
# Connect to public LAN ID 1. # Connect to public LAN ID 1.
public_lan: 1 public_lan: 1
ssh_public_key: /path/to/id_rsa.pub ssh_public_key: /path/to/id_rsa.pub
@ -65,6 +67,13 @@ A list of existing virtual data centers can be retrieved with the following comm
salt-cloud -f list_datacenters my-profitbricks-config salt-cloud -f list_datacenters my-profitbricks-config
A new data center can be created with the following command:
.. code-block:: bash
salt-cloud -f create_datacenter my-profitbricks-config name=example location=us/las description="my description"
Authentication Authentication
============== ==============
@ -81,7 +90,9 @@ Here is an example of a profile:
profitbricks_staging profitbricks_staging
provider: my-profitbricks-config provider: my-profitbricks-config
size: Micro Instance size: Micro Instance
image: 2f98b678-6e7e-11e5-b680-52540066fee9 image_alias: 'ubuntu:latest'
# image or image_alias must be provided
# image: 2f98b678-6e7e-11e5-b680-52540066fee9
cores: 2 cores: 2
ram: 4096 ram: 4096
public_lan: 1 public_lan: 1
@ -117,8 +128,31 @@ Here is an example of a profile:
disk_size: 500 disk_size: 500
db_log: db_log:
disk_size: 50 disk_size: 50
disk_type: HDD disk_type: SSD
disk_availability_zone: ZONE_3
Locations can be obtained using the ``--list-locations`` option for the ``salt-cloud``
command:
.. code-block:: bash
# salt-cloud --list-locations my-profitbricks-config
Images can be obtained using the ``--list-images`` option for the ``salt-cloud``
command:
.. code-block:: bash
# salt-cloud --list-images my-profitbricks-config
Sizes can be obtained using the ``--list-sizes`` option for the ``salt-cloud``
command:
.. code-block:: bash
# salt-cloud --list-sizes my-profitbricks-config
Profile Specifics:
------------------
The following list explains some of the important properties. The following list explains some of the important properties.
@ -127,14 +161,21 @@ size
.. code-block:: bash .. code-block:: bash
salt-cloud --list-sizes my-profitbricks salt-cloud --list-sizes my-profitbricks-config
image image
Can be one of the options listed in the output of the following command: Can be one of the options listed in the output of the following command:
.. code-block:: bash .. code-block:: bash
salt-cloud --list-images my-profitbricks salt-cloud --list-images my-profitbricks-config
image_alias
Can be one of the options listed in the output of the following command:
.. code-block:: bash
salt-cloud -f list_images my-profitbricks-config
disk_size disk_size
This option allows you to override the size of the disk as defined by the This option allows you to override the size of the disk as defined by the
@ -144,9 +185,6 @@ disk_type
This option allows the disk type to be set to HDD or SSD. The default is This option allows the disk type to be set to HDD or SSD. The default is
HDD. HDD.
disk_availability_zone
This option will provision the volume in the specified availability_zone.
cores cores
This option allows you to override the number of CPU cores as defined by This option allows you to override the number of CPU cores as defined by
the size. the size.
@ -156,10 +194,6 @@ ram
The value must be a multiple of 256, e.g. 256, 512, 768, 1024, and so The value must be a multiple of 256, e.g. 256, 512, 768, 1024, and so
forth. forth.
availability_zone
This option specifies in which availability zone the server should be
built. Zones include ZONE_1 and ZONE_2. The default is AUTO.
public_lan public_lan
This option will connect the server to the specified public LAN. If no This option will connect the server to the specified public LAN. If no
LAN exists, then a new public LAN will be created. The value accepts a LAN LAN exists, then a new public LAN will be created. The value accepts a LAN
@ -179,9 +213,6 @@ public_firewall_rules
icmp_type: <icmp-type> icmp_type: <icmp-type>
icmp_code: <icmp-code> icmp_code: <icmp-code>
nat
This option will enable NAT on the private NIC.
private_lan private_lan
This option will connect the server to the specified private LAN. If no This option will connect the server to the specified private LAN. If no
LAN exists, then a new private LAN will be created. The value accepts a LAN LAN exists, then a new private LAN will be created. The value accepts a LAN
@ -209,7 +240,7 @@ ssh_public_key
ssh_interface ssh_interface
This option will use the private LAN IP for node connections (such as This option will use the private LAN IP for node connections (such as
bootstrapping the node) instead of the public LAN IP. The value accepts as bootstrapping the node) instead of the public LAN IP. The value accepts
'private_lan'. 'private_lan'.
cpu_family cpu_family
@ -228,5 +259,5 @@ wait_for_timeout
The timeout to wait in seconds for provisioning resources such as servers. The timeout to wait in seconds for provisioning resources such as servers.
The default wait_for_timeout is 15 minutes. The default wait_for_timeout is 15 minutes.
For more information concerning cloud profiles, see :ref:`here For more information concerning cloud profiles, see :doc:`here
<salt-cloud-profiles>`. </topics/cloud/profiles>`.


@ -48,6 +48,9 @@ Set up the cloud configuration at ``/etc/salt/cloud.providers`` or
availability_zone: ZONE_1 availability_zone: ZONE_1
# Name or UUID of the HDD image to use. # Name or UUID of the HDD image to use.
image: <UUID> image: <UUID>
# Image alias could be provided instead of image.
# Example 'ubuntu:latest'
#image_alias: <IMAGE_ALIAS>
# Size of the node disk in GB (overrides server size). # Size of the node disk in GB (overrides server size).
disk_size: 40 disk_size: 40
# Type of disk (HDD or SSD). # Type of disk (HDD or SSD).
@ -96,6 +99,7 @@ import logging
import os import os
import pprint import pprint
import time import time
from distutils.version import LooseVersion
# Import salt libs # Import salt libs
import salt.utils.cloud import salt.utils.cloud
@ -112,11 +116,12 @@ from salt.exceptions import (
# Import 3rd-party libs # Import 3rd-party libs
from salt.ext import six from salt.ext import six
try: try:
import profitbricks
from profitbricks.client import ( from profitbricks.client import (
ProfitBricksService, Server, ProfitBricksService, Server,
NIC, Volume, FirewallRule, NIC, Volume, FirewallRule,
Datacenter, LoadBalancer, LAN, Datacenter, LoadBalancer, LAN,
PBNotFoundError PBNotFoundError, PBError
) )
HAS_PROFITBRICKS = True HAS_PROFITBRICKS = True
except ImportError: except ImportError:
@ -153,6 +158,13 @@ def get_configured_provider():
) )
def version_compatible(version):
'''
Checks profitbricks version
'''
return LooseVersion(profitbricks.API_VERSION) >= LooseVersion(version)
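The ``version_compatible`` gate above reduces to comparing dotted version numbers numerically. A minimal standalone sketch of the same check, without the SDK or ``distutils`` (here ``installed`` stands in for ``profitbricks.API_VERSION``):

```python
def parse_version(version):
    '''Split a dotted numeric version like "4.1.1" into an int tuple.'''
    return tuple(int(part) for part in version.split('.'))


def version_compatible(installed, required):
    '''True when the installed SDK version is at least the required one,
    mirroring the LooseVersion comparison in the driver.'''
    return parse_version(installed) >= parse_version(required)


print(version_compatible('4.1.1', '4.0'))   # True
print(version_compatible('3.0.0', '4.0'))   # False
```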
def get_dependencies(): def get_dependencies():
''' '''
Warn if dependencies are not met. Warn if dependencies are not met.
@ -183,6 +195,31 @@ def get_conn():
) )
def avail_locations(call=None):
'''
Return a dict of all available VM locations on the cloud provider with
relevant data
'''
if call == 'action':
raise SaltCloudSystemExit(
'The avail_locations function must be called with '
'-f or --function, or with the --list-locations option'
)
ret = {}
conn = get_conn()
for item in conn.list_locations()['items']:
reg, loc = item['id'].split('/')
location = {'id': item['id']}
if reg not in ret:
ret[reg] = {}
ret[reg][loc] = location
return ret
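The ``avail_locations`` loop above splits ids such as ``us/las`` into a region/location tree. The same shaping logic in isolation, with mocked API items:

```python
def group_locations(items):
    '''Group location ids like "us/las" into {region: {loc: {...}}},
    mirroring the dict built by avail_locations.'''
    ret = {}
    for item in items:
        reg, loc = item['id'].split('/')
        ret.setdefault(reg, {})[loc] = {'id': item['id']}
    return ret


sample = [{'id': 'us/las'}, {'id': 'us/ewr'}, {'id': 'de/fkb'}]
print(group_locations(sample))
```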
def avail_images(call=None): def avail_images(call=None):
''' '''
Return a list of the images that are on the provider Return a list of the images that are on the provider
@ -195,18 +232,51 @@ def avail_images(call=None):
ret = {} ret = {}
conn = get_conn() conn = get_conn()
datacenter = get_datacenter(conn)
for item in conn.list_images()['items']: for item in conn.list_images()['items']:
if (item['properties']['location'] == image = {'id': item['id']}
datacenter['properties']['location']): image.update(item['properties'])
image = {'id': item['id']} ret[image['name']] = image
image.update(item['properties'])
ret[image['name']] = image
return ret return ret
def list_images(call=None, kwargs=None):
'''
List all the images with alias by location
CLI Example:
.. code-block:: bash
salt-cloud -f list_images my-profitbricks-config location=us/las
'''
if call != 'function':
raise SaltCloudSystemExit(
'The list_images function must be called with '
'-f or --function.'
)
if not version_compatible('4.0'):
raise SaltCloudNotFound(
"The 'image_alias' feature requires the profitbricks "
"SDK v4.0.0 or greater."
)
ret = {}
conn = get_conn()
if kwargs and kwargs.get('location') is not None:
item = conn.get_location(kwargs.get('location'), 3)
ret[item['id']] = {'image_alias': item['properties']['imageAliases']}
return ret
for item in conn.list_locations(3)['items']:
ret[item['id']] = {'image_alias': item['properties']['imageAliases']}
return ret
def avail_sizes(call=None): def avail_sizes(call=None):
''' '''
Return a dict of all available VM sizes on the cloud provider with Return a dict of all available VM sizes on the cloud provider with
@ -288,13 +358,24 @@ def get_datacenter_id():
''' '''
Return datacenter ID from provider configuration Return datacenter ID from provider configuration
''' '''
return config.get_cloud_config_value( datacenter_id = config.get_cloud_config_value(
'datacenter_id', 'datacenter_id',
get_configured_provider(), get_configured_provider(),
__opts__, __opts__,
search_global=False search_global=False
) )
conn = get_conn()
try:
conn.get_datacenter(datacenter_id=datacenter_id)
except PBNotFoundError:
log.error('Failed to get datacenter: {0}'.format(
datacenter_id))
raise
return datacenter_id
def list_loadbalancers(call=None): def list_loadbalancers(call=None):
''' '''
@ -373,7 +454,8 @@ def create_datacenter(call=None, kwargs=None):
.. code-block:: bash .. code-block:: bash
salt-cloud -f create_datacenter profitbricks name=mydatacenter location=us/las description="my description" salt-cloud -f create_datacenter profitbricks name=mydatacenter
location=us/las description="my description"
''' '''
if call != 'function': if call != 'function':
raise SaltCloudSystemExit( raise SaltCloudSystemExit(
@ -492,6 +574,7 @@ def list_nodes(conn=None, call=None):
for item in nodes['items']: for item in nodes['items']:
node = {'id': item['id']} node = {'id': item['id']}
node.update(item['properties']) node.update(item['properties'])
node['state'] = node.pop('vmState')
ret[node['name']] = node ret[node['name']] = node
return ret return ret
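The ``vmState`` to ``state`` rename above normalizes the raw API payload to the key salt-cloud expects. The transformation in isolation, with a mocked API item:

```python
def normalize_node(item):
    '''Flatten an API server item and rename vmState to state,
    matching what list_nodes now returns per node.'''
    node = {'id': item['id']}
    node.update(item['properties'])
    node['state'] = node.pop('vmState')
    return node


api_item = {'id': 'srv-1', 'properties': {'name': 'web01', 'vmState': 'RUNNING'}}
print(normalize_node(api_item))  # {'id': 'srv-1', 'name': 'web01', 'state': 'RUNNING'}
```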
@ -517,10 +600,13 @@ def list_nodes_full(conn=None, call=None):
for item in nodes['items']: for item in nodes['items']:
node = {'id': item['id']} node = {'id': item['id']}
node.update(item['properties']) node.update(item['properties'])
node['state'] = node.pop('vmState')
node['public_ips'] = [] node['public_ips'] = []
node['private_ips'] = [] node['private_ips'] = []
if item['entities']['nics']['items'] > 0: if item['entities']['nics']['items'] > 0:
for nic in item['entities']['nics']['items']: for nic in item['entities']['nics']['items']:
if len(nic['properties']['ips']) > 0: if len(nic['properties']['ips']) > 0:
ip_address = nic['properties']['ips'][0] ip_address = nic['properties']['ips'][0]
if salt.utils.cloud.is_public_ip(ip_address): if salt.utils.cloud.is_public_ip(ip_address):
node['public_ips'].append(ip_address) node['public_ips'].append(ip_address)
@ -673,6 +759,23 @@ def get_key_filename(vm_):
return key_filename return key_filename
def signal_event(vm_, event, description):
args = __utils__['cloud.filter_event'](
event,
vm_,
['name', 'profile', 'provider', 'driver']
)
__utils__['cloud.fire_event'](
'event',
description,
'salt/cloud/{0}/{1}'.format(vm_['name'], event),
args=args,
sock_dir=__opts__['sock_dir'],
transport=__opts__['transport']
)
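``signal_event`` collapses the three near-identical ``__utils__['cloud.fire_event']`` blocks in ``create()`` into one helper. A condensed sketch with the event bus stubbed out (``fire`` is a stand-in for the real fire_event callable, and the tag is built from the event name):

```python
def signal_event(fire, vm_, event, description):
    '''Simplified form of the helper: one function replaces the
    repeated fire_event call sites for creating/requesting/created.'''
    tag = 'salt/cloud/{0}/{1}'.format(vm_['name'], event)
    fire('event', description, tag)


calls = []
signal_event(lambda *args: calls.append(args), {'name': 'web01'},
             'requesting', 'requesting instance')
print(calls[0][2])  # salt/cloud/web01/requesting
```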
def create(vm_): def create(vm_):
''' '''
Create a single VM from a data dict Create a single VM from a data dict
@ -680,22 +783,24 @@ def create(vm_):
try: try:
# Check for required profile parameters before sending any API calls. # Check for required profile parameters before sending any API calls.
if (vm_['profile'] and if (vm_['profile'] and
config.is_profile_configured(__opts__, config.is_profile_configured(__opts__,
(__active_provider_name__ or (__active_provider_name__ or
'profitbricks'), 'profitbricks'),
vm_['profile']) is False): vm_['profile']) is False):
return False return False
except AttributeError: except AttributeError:
pass pass
__utils__['cloud.fire_event']( if 'image_alias' in vm_ and not version_compatible('4.0'):
'event', raise SaltCloudNotFound(
'starting create', "The 'image_alias' parameter requires the profitbricks "
'salt/cloud/{0}/creating'.format(vm_['name']), "SDK v4.0.0 or greater."
args=__utils__['cloud.filter_event']('creating', vm_, ['name', 'profile', 'provider', 'driver']), )
sock_dir=__opts__['sock_dir'],
transport=__opts__['transport'] if 'image' not in vm_ and 'image_alias' not in vm_:
) log.error('The image or image_alias parameter is required.')
signal_event(vm_, 'creating', 'starting create')
data = None data = None
datacenter_id = get_datacenter_id() datacenter_id = get_datacenter_id()
@ -712,14 +817,7 @@ def create(vm_):
# Assemble the composite server object. # Assemble the composite server object.
server = _get_server(vm_, volumes, nics) server = _get_server(vm_, volumes, nics)
__utils__['cloud.fire_event']( signal_event(vm_, 'requesting', 'requesting instance')
'event',
'requesting instance',
'salt/cloud/{0}/requesting'.format(vm_['name']),
args=__utils__['cloud.filter_event']('requesting', vm_, ['name', 'profile', 'provider', 'driver']),
sock_dir=__opts__['sock_dir'],
transport=__opts__['transport']
)
try: try:
data = conn.create_server(datacenter_id=datacenter_id, server=server) data = conn.create_server(datacenter_id=datacenter_id, server=server)
@ -728,11 +826,20 @@ def create(vm_):
_wait_for_completion(conn, data, get_wait_timeout(vm_), _wait_for_completion(conn, data, get_wait_timeout(vm_),
'create_server') 'create_server')
except Exception as exc: # pylint: disable=W0703 except PBError as exc:
log.error( log.error(
'Error creating {0} on ProfitBricks\n\n' 'Error creating {0} on ProfitBricks\n\n'
'The following exception was thrown by the profitbricks library ' 'The following exception was thrown by the profitbricks library '
'when trying to run the initial deployment: \n{1}'.format( 'when trying to run the initial deployment: \n{1}:\n{2}'.format(
vm_['name'], exc, exc.content
),
exc_info_on_loglevel=logging.DEBUG
)
return False
except Exception as exc: # pylint: disable=W0703
log.error(
'Error creating {0} \n\n'
'Error: \n{1}'.format(
vm_['name'], exc vm_['name'], exc
), ),
exc_info_on_loglevel=logging.DEBUG exc_info_on_loglevel=logging.DEBUG
@ -754,7 +861,7 @@ def create(vm_):
'Loaded node data for {0}:\nname: {1}\nstate: {2}'.format( 'Loaded node data for {0}:\nname: {1}\nstate: {2}'.format(
vm_['name'], vm_['name'],
pprint.pformat(data['name']), pprint.pformat(data['name']),
data['vmState'] data['state']
) )
) )
except Exception as err: except Exception as err:
@ -768,7 +875,7 @@ def create(vm_):
# Trigger a failure in the wait for IP function # Trigger a failure in the wait for IP function
return False return False
running = data['vmState'] == 'RUNNING' running = data['state'] == 'RUNNING'
if not running: if not running:
# Still not running, trigger another iteration # Still not running, trigger another iteration
return return
@ -807,14 +914,7 @@ def create(vm_):
) )
) )
__utils__['cloud.fire_event']( signal_event(vm_, 'created', 'created instance')
'event',
'created instance',
'salt/cloud/{0}/created'.format(vm_['name']),
args=__utils__['cloud.filter_event']('created', vm_, ['name', 'profile', 'provider', 'driver']),
sock_dir=__opts__['sock_dir'],
transport=__opts__['transport']
)
if 'ssh_host' in vm_: if 'ssh_host' in vm_:
vm_['key_filename'] = get_key_filename(vm_) vm_['key_filename'] = get_key_filename(vm_)
@ -859,9 +959,32 @@ def destroy(name, call=None):
datacenter_id = get_datacenter_id() datacenter_id = get_datacenter_id()
conn = get_conn() conn = get_conn()
node = get_node(conn, name) node = get_node(conn, name)
attached_volumes = None
delete_volumes = config.get_cloud_config_value(
'delete_volumes',
get_configured_provider(),
__opts__,
search_global=False
)
# Get volumes before the server is deleted
attached_volumes = conn.get_attached_volumes(
datacenter_id=datacenter_id,
server_id=node['id']
)
conn.delete_server(datacenter_id=datacenter_id, server_id=node['id']) conn.delete_server(datacenter_id=datacenter_id, server_id=node['id'])
# The server is deleted and now is safe to delete the volumes
if delete_volumes:
for vol in attached_volumes['items']:
log.debug('Deleting volume {0}'.format(vol['id']))
conn.delete_volume(
datacenter_id=datacenter_id,
volume_id=vol['id']
)
log.debug('Deleted volume {0}'.format(vol['id']))
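The ordering above matters: attached volumes are listed before ``delete_server``, because after the server is gone its volume list can no longer be queried. A toy model of that sequence with a fake connection (names here are illustrative, not the SDK's API):

```python
class FakeConn(object):
    '''Toy connection: deleting a server forgets its attached volumes,
    which is why the driver snapshots them first.'''
    def __init__(self):
        self.volumes = {'srv-1': [{'id': 'vol-a'}, {'id': 'vol-b'}]}
        self.deleted = []

    def get_attached_volumes(self, server_id):
        return {'items': list(self.volumes.get(server_id, []))}

    def delete_server(self, server_id):
        self.volumes.pop(server_id, None)
        self.deleted.append(server_id)

    def delete_volume(self, volume_id):
        self.deleted.append(volume_id)


conn = FakeConn()
attached = conn.get_attached_volumes('srv-1')  # snapshot before deletion
conn.delete_server('srv-1')
for vol in attached['items']:                  # now safe to delete volumes
    conn.delete_volume(vol['id'])
print(conn.deleted)  # ['srv-1', 'vol-a', 'vol-b']
```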
__utils__['cloud.fire_event']( __utils__['cloud.fire_event'](
'event', 'event',
'destroyed instance', 'destroyed instance',
@ -1010,14 +1133,17 @@ def _get_system_volume(vm_):
volume = Volume( volume = Volume(
name='{0} Storage'.format(vm_['name']), name='{0} Storage'.format(vm_['name']),
size=disk_size, size=disk_size,
image=get_image(vm_)['id'],
disk_type=get_disk_type(vm_), disk_type=get_disk_type(vm_),
ssh_keys=ssh_keys ssh_keys=ssh_keys
) )
# Set volume availability zone if defined in the cloud profile if 'image_alias' in vm_.keys():
if 'disk_availability_zone' in vm_: volume.image_alias = vm_['image_alias']
volume.availability_zone = vm_['disk_availability_zone'] else:
volume.image = get_image(vm_)['id']
# Set volume availability zone if defined in the cloud profile
if 'disk_availability_zone' in vm_:
volume.availability_zone = vm_['disk_availability_zone']
return volume return volume
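``_get_system_volume`` now prefers ``image_alias`` over ``image`` when the profile provides one. The selection logic in isolation, with a minimal stand-in for the SDK's ``Volume`` class:

```python
class Volume(object):
    '''Minimal stand-in for profitbricks.client.Volume (attributes only).'''
    def __init__(self, name, size, disk_type):
        self.name = name
        self.size = size
        self.disk_type = disk_type
        self.image = None
        self.image_alias = None


def build_volume(vm_):
    '''Sketch of the either/or image selection in _get_system_volume.'''
    vol = Volume(name='{0} Storage'.format(vm_['name']),
                 size=vm_.get('disk_size', 10),
                 disk_type=vm_.get('disk_type', 'HDD'))
    if 'image_alias' in vm_:
        vol.image_alias = vm_['image_alias']
    else:
        vol.image = vm_['image']
    if 'disk_availability_zone' in vm_:
        vol.availability_zone = vm_['disk_availability_zone']
    return vol


v = build_volume({'name': 'web01', 'image_alias': 'ubuntu:latest'})
print(v.image_alias, v.image)  # ubuntu:latest None
```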
@ -1109,4 +1235,4 @@ def _wait_for_completion(conn, promise, wait_timeout, msg):
raise Exception( raise Exception(
'Timed out waiting for async operation ' + msg + ' "' + str( 'Timed out waiting for async operation ' + msg + ' "' + str(
promise['requestId'] promise['requestId']
) + '" to complete.') ) + '" to complete.')


@ -124,15 +124,6 @@ VALID_OPTS = {
# master address will not be split into IP and PORT. # master address will not be split into IP and PORT.
'master_uri_format': str, 'master_uri_format': str,
# The following options refer to the Minion only, and they specify
# the details of the source address / port to be used when connecting to
# the Master. This is useful when dealing with machines where due to firewall
# rules you are restricted to use a certain IP/port combination only.
'source_interface_name': str,
'source_address': str,
'source_ret_port': (six.string_types, int),
'source_publish_port': (six.string_types, int),
# The fingerprint of the master key may be specified to increase security. Generate # The fingerprint of the master key may be specified to increase security. Generate
# a master fingerprint with `salt-key -F master` # a master fingerprint with `salt-key -F master`
'master_finger': str, 'master_finger': str,
@ -174,10 +165,6 @@ VALID_OPTS = {
# The master_pubkey_signature must also be set for this. # The master_pubkey_signature must also be set for this.
'master_use_pubkey_signature': bool, 'master_use_pubkey_signature': bool,
# Enable master stats events to be fired, these events will contain information about
# what commands the master is processing and what the rates are of the executions
'master_stats': bool,
'master_stats_event_iter': int,
# The key fingerprint of the higher-level master for the syndic to verify it is talking to the # The key fingerprint of the higher-level master for the syndic to verify it is talking to the
# intended master # intended master
'syndic_finger': str, 'syndic_finger': str,
@ -255,10 +242,7 @@ VALID_OPTS = {
'autoload_dynamic_modules': bool, 'autoload_dynamic_modules': bool,
# Force the minion into a single environment when it fetches files from the master # Force the minion into a single environment when it fetches files from the master
'saltenv': str, 'environment': str,
# Prevent saltenv from being overriden on the command line
'lock_saltenv': bool,
# Force the minion into a single pillar root when it fetches pillar data from the master # Force the minion into a single pillar root when it fetches pillar data from the master
'pillarenv': str, 'pillarenv': str,
@ -1153,9 +1137,6 @@ VALID_OPTS = {
# part of the extra_minion_data param # part of the extra_minion_data param
# Subconfig entries can be specified by using the ':' notation (e.g. key:subkey) # Subconfig entries can be specified by using the ':' notation (e.g. key:subkey)
'pass_to_ext_pillars': (six.string_types, list), 'pass_to_ext_pillars': (six.string_types, list),
# Used by salt.modules.dockermod.compare_container_networks to specify which keys are compared
'docker.compare_container_networks': dict,
} }
# default configurations # default configurations
@ -1164,10 +1145,6 @@ DEFAULT_MINION_OPTS = {
'master': 'salt', 'master': 'salt',
'master_type': 'str', 'master_type': 'str',
'master_uri_format': 'default', 'master_uri_format': 'default',
'source_interface_name': '',
'source_address': '',
'source_ret_port': 0,
'source_publish_port': 0,
'master_port': 4506, 'master_port': 4506,
'master_finger': '', 'master_finger': '',
'master_shuffle': False, 'master_shuffle': False,
@ -1200,8 +1177,7 @@ DEFAULT_MINION_OPTS = {
'random_startup_delay': 0, 'random_startup_delay': 0,
'failhard': False, 'failhard': False,
'autoload_dynamic_modules': True, 'autoload_dynamic_modules': True,
'saltenv': None, 'environment': None,
'lock_saltenv': False,
'pillarenv': None, 'pillarenv': None,
'pillarenv_from_saltenv': False, 'pillarenv_from_saltenv': False,
'pillar_opts': False, 'pillar_opts': False,
@ -1435,11 +1411,6 @@ DEFAULT_MINION_OPTS = {
'extmod_whitelist': {}, 'extmod_whitelist': {},
'extmod_blacklist': {}, 'extmod_blacklist': {},
'minion_sign_messages': False, 'minion_sign_messages': False,
'docker.compare_container_networks': {
'static': ['Aliases', 'Links', 'IPAMConfig'],
'automatic': ['IPAddress', 'Gateway',
'GlobalIPv6Address', 'IPv6Gateway'],
},
} }
DEFAULT_MASTER_OPTS = { DEFAULT_MASTER_OPTS = {
@ -1483,8 +1454,7 @@ DEFAULT_MASTER_OPTS = {
}, },
'top_file_merging_strategy': 'merge', 'top_file_merging_strategy': 'merge',
'env_order': [], 'env_order': [],
'saltenv': None, 'environment': None,
'lock_saltenv': False,
'default_top': 'base', 'default_top': 'base',
'file_client': 'local', 'file_client': 'local',
'git_pillar_base': 'master', 'git_pillar_base': 'master',
@ -1545,8 +1515,6 @@ DEFAULT_MASTER_OPTS = {
'svnfs_saltenv_whitelist': [], 'svnfs_saltenv_whitelist': [],
'svnfs_saltenv_blacklist': [], 'svnfs_saltenv_blacklist': [],
'max_event_size': 1048576, 'max_event_size': 1048576,
'master_stats': False,
'master_stats_event_iter': 60,
'minionfs_env': 'base', 'minionfs_env': 'base',
'minionfs_mountpoint': '', 'minionfs_mountpoint': '',
'minionfs_whitelist': [], 'minionfs_whitelist': [],
@ -2444,7 +2412,7 @@ def syndic_config(master_config_path,
# Prepend root_dir to other paths # Prepend root_dir to other paths
prepend_root_dirs = [ prepend_root_dirs = [
'pki_dir', 'key_dir', 'cachedir', 'pidfile', 'sock_dir', 'extension_modules', 'pki_dir', 'key_dir', 'cachedir', 'pidfile', 'sock_dir', 'extension_modules',
'autosign_file', 'autoreject_file', 'token_dir', 'autosign_grains_dir' 'autosign_file', 'autoreject_file', 'token_dir'
] ]
for config_key in ('log_file', 'key_logfile', 'syndic_log_file'): for config_key in ('log_file', 'key_logfile', 'syndic_log_file'):
# If this is not a URI and instead a local path # If this is not a URI and instead a local path
@ -3339,7 +3307,7 @@ def is_profile_configured(opts, provider, profile_name, vm_=None):
alias, driver = provider.split(':') alias, driver = provider.split(':')
# Most drivers need an image to be specified, but some do not. # Most drivers need an image to be specified, but some do not.
non_image_drivers = ['nova', 'virtualbox', 'libvirt', 'softlayer', 'oneandone'] non_image_drivers = ['nova', 'virtualbox', 'libvirt', 'softlayer', 'oneandone', 'profitbricks']
# Most drivers need a size, but some do not. # Most drivers need a size, but some do not.
non_size_drivers = ['opennebula', 'parallels', 'proxmox', 'scaleway', non_size_drivers = ['opennebula', 'parallels', 'proxmox', 'scaleway',
@ -3624,24 +3592,6 @@ def apply_minion_config(overrides=None,
if overrides: if overrides:
opts.update(overrides) opts.update(overrides)
if 'environment' in opts:
if 'saltenv' in opts:
log.warning(
'The \'saltenv\' and \'environment\' minion config options '
'cannot both be used. Ignoring \'environment\' in favor of '
'\'saltenv\'.',
)
# Set environment to saltenv in case someone's custom module is
# referencing __opts__['environment']
opts['environment'] = opts['saltenv']
else:
log.warning(
'The \'environment\' minion config option has been renamed '
'to \'saltenv\'. Using %s as the \'saltenv\' config value.',
opts['environment']
)
opts['saltenv'] = opts['environment']
opts['__cli'] = os.path.basename(sys.argv[0]) opts['__cli'] = os.path.basename(sys.argv[0])
# No ID provided. Will getfqdn save us? # No ID provided. Will getfqdn save us?
@ -3794,24 +3744,6 @@ def apply_master_config(overrides=None, defaults=None):
if overrides: if overrides:
opts.update(overrides) opts.update(overrides)
if 'environment' in opts:
if 'saltenv' in opts:
log.warning(
'The \'saltenv\' and \'environment\' master config options '
'cannot both be used. Ignoring \'environment\' in favor of '
'\'saltenv\'.',
)
# Set environment to saltenv in case someone's custom runner is
# referencing __opts__['environment']
opts['environment'] = opts['saltenv']
else:
log.warning(
'The \'environment\' master config option has been renamed '
'to \'saltenv\'. Using %s as the \'saltenv\' config value.',
opts['environment']
)
opts['saltenv'] = opts['environment']
if len(opts['sock_dir']) > len(opts['cachedir']) + 10: if len(opts['sock_dir']) > len(opts['cachedir']) + 10:
opts['sock_dir'] = os.path.join(opts['cachedir'], '.salt-unix') opts['sock_dir'] = os.path.join(opts['cachedir'], '.salt-unix')
@ -3854,7 +3786,7 @@ def apply_master_config(overrides=None, defaults=None):
prepend_root_dirs = [ prepend_root_dirs = [
'pki_dir', 'key_dir', 'cachedir', 'pidfile', 'sock_dir', 'extension_modules', 'pki_dir', 'key_dir', 'cachedir', 'pidfile', 'sock_dir', 'extension_modules',
'autosign_file', 'autoreject_file', 'token_dir', 'syndic_dir', 'autosign_file', 'autoreject_file', 'token_dir', 'syndic_dir',
'sqlite_queue_dir', 'autosign_grains_dir' 'sqlite_queue_dir'
] ]
# These can be set to syslog, so, not actual paths on the system # These can be set to syslog, so, not actual paths on the system


@ -18,7 +18,8 @@ from salt.config import cloud_providers_config
# Import Third-Party Libs # Import Third-Party Libs
try: try:
from profitbricks.client import ProfitBricksService # pylint: disable=unused-import # pylint: disable=unused-import
from profitbricks.client import ProfitBricksService
HAS_PROFITBRICKS = True HAS_PROFITBRICKS = True
except ImportError: except ImportError:
HAS_PROFITBRICKS = False HAS_PROFITBRICKS = False
@ -29,7 +30,7 @@ PROVIDER_NAME = 'profitbricks'
DRIVER_NAME = 'profitbricks' DRIVER_NAME = 'profitbricks'
@skipIf(HAS_PROFITBRICKS is False, 'salt-cloud requires >= profitbricks 2.3.0') @skipIf(HAS_PROFITBRICKS is False, 'salt-cloud requires >= profitbricks 4.1.1')
class ProfitBricksTest(ShellCase): class ProfitBricksTest(ShellCase):
''' '''
Integration tests for the ProfitBricks cloud provider Integration tests for the ProfitBricks cloud provider
@ -65,6 +66,7 @@ class ProfitBricksTest(ShellCase):
username = config[profile_str][DRIVER_NAME]['username'] username = config[profile_str][DRIVER_NAME]['username']
password = config[profile_str][DRIVER_NAME]['password'] password = config[profile_str][DRIVER_NAME]['password']
datacenter_id = config[profile_str][DRIVER_NAME]['datacenter_id'] datacenter_id = config[profile_str][DRIVER_NAME]['datacenter_id']
self.datacenter_id = datacenter_id
if username == '' or password == '' or datacenter_id == '': if username == '' or password == '' or datacenter_id == '':
self.skipTest( self.skipTest(
'A username, password, and a datacenter must be provided to ' 'A username, password, and a datacenter must be provided to '
@ -77,10 +79,104 @@ class ProfitBricksTest(ShellCase):
''' '''
Tests the return of running the --list-images command for ProfitBricks Tests the return of running the --list-images command for ProfitBricks
''' '''
image_list = self.run_cloud('--list-images {0}'.format(PROVIDER_NAME)) list_images = self.run_cloud('--list-images {0}'.format(PROVIDER_NAME))
self.assertIn( self.assertIn(
'Ubuntu-16.04-LTS-server-2016-10-06', 'Ubuntu-16.04-LTS-server-2017-10-01',
[i.strip() for i in image_list] [i.strip() for i in list_images]
)
def test_list_image_alias(self):
'''
Tests the return of running the -f list_images
command for ProfitBricks
'''
cmd = '-f list_images {0}'.format(PROVIDER_NAME)
list_images = self.run_cloud(cmd)
self.assertIn(
'- ubuntu:latest',
[i.strip() for i in list_images]
)
def test_list_sizes(self):
'''
Tests the return of running the --list_sizes command for ProfitBricks
'''
list_sizes = self.run_cloud('--list-sizes {0}'.format(PROVIDER_NAME))
self.assertIn(
'Micro Instance:',
[i.strip() for i in list_sizes]
)
def test_list_datacenters(self):
'''
Tests the return of running the -f list_datacenters
command for ProfitBricks
'''
cmd = '-f list_datacenters {0}'.format(PROVIDER_NAME)
list_datacenters = self.run_cloud(cmd)
self.assertIn(
self.datacenter_id,
[i.strip() for i in list_datacenters]
)
def test_list_nodes(self):
'''
Tests the return of running the -f list_nodes command for ProfitBricks
'''
list_nodes = self.run_cloud('-f list_nodes {0}'.format(PROVIDER_NAME))
self.assertIn(
'state:',
[i.strip() for i in list_nodes]
)
self.assertIn(
'name:',
[i.strip() for i in list_nodes]
)
def test_list_nodes_full(self):
'''
Tests the return of running the -f list_nodes_full
command for ProfitBricks
'''
cmd = '-f list_nodes_full {0}'.format(PROVIDER_NAME)
list_nodes = self.run_cloud(cmd)
self.assertIn(
'state:',
[i.strip() for i in list_nodes]
)
self.assertIn(
'name:',
[i.strip() for i in list_nodes]
)
def test_list_location(self):
'''
Tests the return of running the --list-locations
command for ProfitBricks
'''
cmd = '--list-locations {0}'.format(PROVIDER_NAME)
list_locations = self.run_cloud(cmd)
self.assertIn(
'de/fkb',
[i.strip() for i in list_locations]
)
self.assertIn(
'de/fra',
[i.strip() for i in list_locations]
)
self.assertIn(
'us/las',
[i.strip() for i in list_locations]
)
self.assertIn(
'us/ewr',
[i.strip() for i in list_locations]
) )
def test_instance(self): def test_instance(self):
@ -92,11 +188,15 @@ class ProfitBricksTest(ShellCase):
self.assertIn( self.assertIn(
INSTANCE_NAME, INSTANCE_NAME,
[i.strip() for i in self.run_cloud( [i.strip() for i in self.run_cloud(
'-p profitbricks-test {0}'.format(INSTANCE_NAME), timeout=500 '-p profitbricks-test {0}'.format(INSTANCE_NAME),
timeout=500
)] )]
) )
except AssertionError: except AssertionError:
self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME), timeout=500) self.run_cloud(
'-d {0} --assume-yes'.format(INSTANCE_NAME),
timeout=500
)
raise raise
# delete the instance # delete the instance
@ -119,4 +219,7 @@ class ProfitBricksTest(ShellCase):
# if test instance is still present, delete it # if test instance is still present, delete it
if ret in query: if ret in query:
self.run_cloud('-d {0} --assume-yes'.format(INSTANCE_NAME), timeout=500) self.run_cloud(
'-d {0} --assume-yes'.format(INSTANCE_NAME),
timeout=500
)


@ -1,6 +1,6 @@
profitbricks-test: profitbricks-test:
provider: profitbricks-config provider: profitbricks-config
image: Ubuntu-16.04-LTS-server-2016-10-06 image_alias: 'ubuntu:latest'
image_password: volume2016 image_password: volume2016
size: Small Instance size: Small Instance
disk_size: 10 disk_size: 10