Merge remote-tracking branch 'upstream/2015.8' into merge-forward-develop

Conflicts:
    salt/cloud/clouds/vmware.py
    salt/modules/rh_ip.py
    salt/modules/s3.py
    salt/modules/saltutil.py
    salt/modules/zypper.py
    salt/spm/__init__.py
    salt/utils/aws.py
    salt/utils/s3.py
    tests/unit/modules/s3_test.py
    tests/unit/pydsl_test.py
This commit is contained in:
Colton Myers 2015-12-01 15:31:03 -07:00
commit 3f09d58fff
52 changed files with 2404 additions and 518 deletions


@ -75,7 +75,50 @@ profile configuration as `userdata_file`. For instance:
userdata_file: /etc/salt/windows-firewall.ps1
If you are using WinRM on EC2, the HTTPS port for the WinRM service must also
be enabled in your userdata. By default, EC2 Windows images only have insecure
HTTP enabled. To enable HTTPS and the basic authentication required by pywinrm,
consider the following userdata example:
.. code-block:: powershell
<powershell>
New-NetFirewallRule -Name "SMB445" -DisplayName "SMB445" -Protocol TCP -LocalPort 445
New-NetFirewallRule -Name "WINRM5986" -DisplayName "WINRM5986" -Protocol TCP -LocalPort 5986
winrm quickconfig -q
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="300"}'
winrm set winrm/config '@{MaxTimeoutms="1800000"}'
winrm set winrm/config/service/auth '@{Basic="true"}'
$SourceStoreScope = 'LocalMachine'
$SourceStorename = 'Remote Desktop'
$SourceStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $SourceStorename, $SourceStoreScope
$SourceStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadOnly)
$cert = $SourceStore.Certificates | Where-Object -FilterScript {
$_.subject -like '*'
}
$DestStoreScope = 'LocalMachine'
$DestStoreName = 'My'
$DestStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $DestStoreName, $DestStoreScope
$DestStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$DestStore.Add($cert)
$SourceStore.Close()
$DestStore.Close()
winrm create winrm/config/listener?Address=*+Transport=HTTPS `@`{Hostname=`"($certId)`"`;CertificateThumbprint=`"($cert.Thumbprint)`"`}
Restart-Service winrm
</powershell>
No suitable certificate store is available by default on EC2 images, and
creating one does not seem possible without an MMC (which cannot be automated).
To work with the default EC2 Windows images, the example above copies the
certificate from the Remote Desktop store.
Configuration
=============
@ -102,7 +145,8 @@ Setting the installer in ``/etc/salt/cloud.providers``:
The default Windows user is ``Administrator``, and the default Windows password
is blank.
If WinRM is to be used, ``use_winrm`` needs to be set to ``True``. ``winrm_port``
can be used to specify a custom port (which must be an HTTPS listener).
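For instance, a cloud provider configuration using WinRM with an explicit
HTTPS listener port might look like the following (a minimal sketch; the
configuration name is illustrative, not prescribed by Salt):

.. code-block:: yaml

    my-ec2-config:
      driver: ec2
      use_winrm: True
      winrm_port: 5986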
Auto-Generated Passwords on EC2


@ -0,0 +1,196 @@
========================
Developing Salt Tutorial
========================
This tutorial assumes you have:
- a web browser
- a GitHub account (``<my_account>``)
- a command line (CLI)
- git
- a text editor
----
Fork
----
In your browser, navigate to the ``saltstack/salt`` `GitHub repository
<https://github.com/saltstack/salt>`_.
Click on ``Fork`` (https://github.com/saltstack/salt/#fork-destination-box).
.. note::
If you have more than one GitHub presence, for example if you are a member of
a team, GitHub will ask you where to place the fork. If you don't know
where, select your personal GitHub account.
-----
Clone
-----
In your CLI, navigate to the directory into which you want to clone the Salt
codebase and run the following command:
.. code-block:: shell
$ git clone https://github.com/<my_account>/salt.git
where ``<my_account>`` is the name of your GitHub account. After the clone has
completed, add SaltStack as a second remote and fetch any changes from
``upstream``.
.. code-block:: shell
$ cd salt
$ git remote add upstream https://github.com/saltstack/salt.git
$ git fetch upstream
For this tutorial, we will be working off the ``develop`` branch, which is
the default branch for the SaltStack GitHub project. This branch needs to
track ``upstream/develop`` so that we receive upstream changes as they
happen.
.. code-block:: shell
$ git checkout develop
$ git branch --set-upstream-to upstream/develop
-----
Fetch
-----
Fetch any ``upstream`` changes on the ``develop`` branch and sync them to your
local copy of the branch with a single command:
.. code-block:: shell
$ git pull --rebase
.. note::
For an explanation of ``pull`` vs ``pull --rebase`` and other excellent
points, see `this article <http://mislav.net/2013/02/merge-vs-rebase/>`_ by
Mislav Marohnić.
------
Branch
------
Now we are ready to get to work. Consult the sprint beginner bug list and
select an execution module whose ``__virtual__`` function needs to be updated.
I'll select the ``alternatives`` module.
Create a new branch off from ``develop``. Be sure to name it something short
and descriptive.
.. code-block:: shell
$ git checkout -b virt_ret
----
Edit
----
Edit the file you have selected, and verify that the changes are correct.
.. code-block:: shell
$ vim salt/modules/alternatives.py
$ git diff
diff --git a/salt/modules/alternatives.py b/salt/modules/alternatives.py
index 1653e5f..30c0a59 100644
--- a/salt/modules/alternatives.py
+++ b/salt/modules/alternatives.py
@@ -30,7 +30,7 @@ def __virtual__():
'''
if os.path.isdir('/etc/alternatives'):
return True
- return False
+ return (False, 'Cannot load alternatives module: /etc/alternatives dir not found')
def _get_cmd():
------
Commit
------
Stage and commit the changes. Write a descriptive commit summary, and try to
keep it under 50 characters. Review your commit.
.. code-block:: shell
$ git add salt/modules/alternatives.py
$ git commit -m 'alternatives module: add error msg to __virtual__ return'
$ git show
.. note::
If you need more room to describe the changes in your commit, run ``git
commit`` (without the ``-m`` message option) and you will be presented with
an editor. The first line is the commit summary and should still be 50
characters or less. The following paragraphs you create are free form and
will be preserved as part of the commit.
----
Push
----
Push your branch to your GitHub account. You will likely need to enter your
GitHub username and password.
.. code-block:: shell
$ git push origin virt_ret
Username for 'https://github.com': <my_account>
Password for 'https://<my_account>@github.com':
.. note::
If authentication over HTTPS does not work, you can alternatively set up `ssh
keys <https://help.github.com/articles/generating-ssh-keys/>`_. Once you have
done this, you may need to add the keys to your git repository configuration:
.. code-block:: shell
$ git config ssh.key ~/.ssh/<key_name>
where ``<key_name>`` is the file name of the private key you created.
-----
Merge
-----
In your browser, navigate to the `new pull request
<https://github.com/saltstack/salt/compare>`_ page on the ``saltstack/salt``
GitHub repository and click on 'compare across forks'. Select ``<my_account>``
from the list of head forks, then select the branch you want to merge into
``develop`` (``virt_ret`` in this case).
When you have finished reviewing the changes, click 'Create pull request'.
.. note::
Although these instructions follow the official pull request procedure on
GitHub's website, here are two simpler alternative methods.
If you navigate to your fork, https://github.com/<my_account>/salt,
GitHub may present a button to create a pull request from your branch,
depending on how old the branch is or how recently you pushed updates to it.
I find it easiest to edit the following URL:
``https://github.com/saltstack/salt/compare/develop...<my_account>:virt_ret``
---------
Resources
---------
GitHub offers many great tutorials on various aspects of the git- and
GitHub-centric development workflow:
https://help.github.com/
There are many topics covered by the Salt Developer documentation:
https://docs.saltstack.com/en/latest/topics/development/index.html


@ -2,6 +2,11 @@
Salt 2015.8.2 Release Notes
===========================
.. note::
A significant orchestrate issue `#29110`_ was discovered during the release
process of 2015.8.2, so it has not been officially released.
Extended changelog courtesy of Todd Stansell (https://github.com/tjstansell/salt-changelogs):
*Generated at: 2015-11-13T17:24:04Z*
@ -556,6 +561,7 @@ Changes:
- **PR** `#27585`_: (*ryan-lane*) Fix undefined variable in cron state module
.. _`#29110`: https://github.com/saltstack/salt/issues/29110
.. _`#22115`: https://github.com/saltstack/salt/pull/22115
.. _`#25315`: https://github.com/saltstack/salt/pull/25315
.. _`#25521`: https://github.com/saltstack/salt/pull/25521

File diff suppressed because it is too large


@ -310,7 +310,7 @@ debugging purposes, SSL verification can be turned off.
salt.utils.http.query(
'https://example.com',
ssl_verify=False,
verify_ssl=False,
)
CA Bundles


@ -7,6 +7,7 @@ LimitNOFILE=16384
Type=notify
NotifyAccess=all
ExecStart=/usr/bin/salt-master
KillMode=process
[Install]
WantedBy=multi-user.target


@ -14,6 +14,7 @@ from __future__ import absolute_import
# Import Salt libs
import salt.spm
import salt.utils.parsers as parsers
from salt.utils.verify import verify_log
class SPM(parsers.SPMParser):
@ -28,5 +29,6 @@ class SPM(parsers.SPMParser):
ui = salt.spm.SPMCmdlineInterface()
self.parse_args()
self.setup_logfile_logger()
verify_log(self.config)
client = salt.spm.SPMClient(ui, self.config)
client.run(self.args)


@ -2106,6 +2106,10 @@ def wait_for_instance(
win_deploy_auth_retry_delay = config.get_cloud_config_value(
'win_deploy_auth_retry_delay', vm_, __opts__, default=1
)
use_winrm = config.get_cloud_config_value(
'use_winrm', vm_, __opts__, default=False
)
if win_passwd and win_passwd == 'auto':
log.debug('Waiting for auto-generated Windows EC2 password')
while True:
@ -2132,20 +2136,55 @@ def wait_for_instance(
vm_['win_password'] = win_passwd
break
# SMB used whether winexe or winrm
if not salt.utils.cloud.wait_for_port(ip_address,
port=445,
timeout=ssh_connect_timeout):
raise SaltCloudSystemExit(
'Failed to connect to remote windows host'
)
if not salt.utils.cloud.validate_windows_cred(ip_address,
username,
win_passwd,
retries=win_deploy_auth_retries,
retry_delay=win_deploy_auth_retry_delay):
raise SaltCloudSystemExit(
'Failed to authenticate against remote windows host'
# If not using winrm keep same winexe behavior
if not use_winrm:
log.debug('Trying to authenticate via SMB using winexe')
if not salt.utils.cloud.validate_windows_cred(ip_address,
username,
win_passwd,
retries=win_deploy_auth_retries,
retry_delay=win_deploy_auth_retry_delay):
raise SaltCloudSystemExit(
'Failed to authenticate against remote windows host (smb)'
)
# If using winrm
else:
# Default HTTPS port can be changed in cloud configuration
winrm_port = config.get_cloud_config_value(
'winrm_port', vm_, __opts__, default=5986
)
# Wait for winrm port to be available
if not salt.utils.cloud.wait_for_port(ip_address,
port=winrm_port,
timeout=ssh_connect_timeout):
raise SaltCloudSystemExit(
'Failed to connect to remote windows host (winrm)'
)
log.debug('Trying to authenticate via Winrm using pywinrm')
if not salt.utils.cloud.wait_for_winrm(ip_address,
winrm_port,
username,
win_passwd,
timeout=ssh_connect_timeout):
raise SaltCloudSystemExit(
'Failed to authenticate against remote windows host'
)
elif salt.utils.cloud.wait_for_port(ip_address,
port=ssh_port,
timeout=ssh_connect_timeout,


@ -748,28 +748,6 @@ def _wait_for_ip(vm_ref, max_wait):
return False
def _wait_for_task(task, vm_name, task_type, sleep_seconds=1, log_level='debug'):
time_counter = 0
starttime = time.time()
while task.info.state == 'running' or task.info.state == 'queued':
if time_counter % sleep_seconds == 0:
message = "[ {0} ] Waiting for {1} task to finish [{2} s]".format(vm_name, task_type, time_counter)
if log_level == 'info':
log.info(message)
else:
log.debug(message)
time.sleep(1.0 - ((time.time() - starttime) % 1.0))
time_counter += 1
if task.info.state == 'success':
message = "[ {0} ] Successfully completed {1} task in {2} seconds".format(vm_name, task_type, time_counter)
if log_level == 'info':
log.info(message)
else:
log.debug(message)
else:
raise Exception(task.info.error)
def _wait_for_host(host_ref, task_type, sleep_seconds=5, log_level='debug'):
time_counter = 0
starttime = time.time()
@ -1077,7 +1055,11 @@ def _upg_tools_helper(vm, reboot=False):
else:
status = 'Only Linux and Windows guests are currently supported'
return status
_wait_for_task(task, vm.name, "tools upgrade", 5, "info")
salt.utils.vmware.wait_for_task(task,
vm.name,
'tools upgrade',
sleep_seconds=5,
log_level='info')
except Exception as exc:
log.error(
'Error while upgrading VMware tools on VM {0}: {1}'.format(
@ -1752,7 +1734,7 @@ def start(name, call=None):
try:
log.info('Starting VM {0}'.format(name))
task = vm["object"].PowerOn()
_wait_for_task(task, name, "power on")
salt.utils.vmware.wait_for_task(task, name, 'power on')
except Exception as exc:
log.error(
'Error while powering on VM {0}: {1}'.format(
@ -1799,7 +1781,7 @@ def stop(name, call=None):
try:
log.info('Stopping VM {0}'.format(name))
task = vm["object"].PowerOff()
_wait_for_task(task, name, "power off")
salt.utils.vmware.wait_for_task(task, name, 'power off')
except Exception as exc:
log.error(
'Error while powering off VM {0}: {1}'.format(
@ -1850,7 +1832,7 @@ def suspend(name, call=None):
try:
log.info('Suspending VM {0}'.format(name))
task = vm["object"].Suspend()
_wait_for_task(task, name, "suspend")
salt.utils.vmware.wait_for_task(task, name, 'suspend')
except Exception as exc:
log.error(
'Error while suspending VM {0}: {1}'.format(
@ -1897,7 +1879,7 @@ def reset(name, call=None):
try:
log.info('Resetting VM {0}'.format(name))
task = vm["object"].Reset()
_wait_for_task(task, name, "reset")
salt.utils.vmware.wait_for_task(task, name, 'reset')
except Exception as exc:
log.error(
'Error while resetting VM {0}: {1}'.format(
@ -1999,7 +1981,7 @@ def destroy(name, call=None):
try:
log.info('Powering Off VM {0}'.format(name))
task = vm["object"].PowerOff()
_wait_for_task(task, name, "power off")
salt.utils.vmware.wait_for_task(task, name, 'power off')
except Exception as exc:
log.error(
'Error while powering off VM {0}: {1}'.format(
@ -2013,7 +1995,7 @@ def destroy(name, call=None):
try:
log.info('Destroying VM {0}'.format(name))
task = vm["object"].Destroy_Task()
_wait_for_task(task, name, "destroy")
salt.utils.vmware.wait_for_task(task, name, 'destroy')
except Exception as exc:
log.error(
'Error while destroying VM {0}: {1}'.format(
@ -2361,11 +2343,11 @@ def create(vm_):
# apply storage DRS recommendations
task = si.content.storageResourceManager.ApplyStorageDrsRecommendation_Task(recommended_datastores.recommendations[0].key)
_wait_for_task(task, vm_name, "apply storage DRS recommendations", 5, 'info')
salt.utils.vmware.wait_for_task(task, vm_name, 'apply storage DRS recommendations', 5, 'info')
else:
# clone the VM/template
task = object_ref.Clone(folder_ref, vm_name, clone_spec)
_wait_for_task(task, vm_name, "clone", 5, 'info')
salt.utils.vmware.wait_for_task(task, vm_name, 'clone', 5, 'info')
else:
log.info('Creating {0}'.format(vm_['name']))
@ -2944,7 +2926,7 @@ def enter_maintenance_mode(kwargs=None, call=None):
try:
task = host_ref.EnterMaintenanceMode(timeout=0, evacuatePoweredOffVms=True)
_wait_for_task(task, host_name, "enter maintenance mode", 1)
salt.utils.vmware.wait_for_task(task, host_name, 'enter maintenance mode')
except Exception as exc:
log.error(
'Error while moving host system {0} in maintenance mode: {1}'.format(
@ -2989,7 +2971,7 @@ def exit_maintenance_mode(kwargs=None, call=None):
try:
task = host_ref.ExitMaintenanceMode(timeout=0)
_wait_for_task(task, host_name, "exit maintenance mode", 1)
salt.utils.vmware.wait_for_task(task, host_name, 'exit maintenance mode')
except Exception as exc:
log.error(
'Error while moving host system {0} out of maintenance mode: {1}'.format(
@ -3142,7 +3124,7 @@ def create_snapshot(name, kwargs=None, call=None):
try:
task = vm_ref.CreateSnapshot(snapshot_name, desc, memdump, quiesce)
_wait_for_task(task, name, "create snapshot", 5, 'info')
salt.utils.vmware.wait_for_task(task, name, 'create snapshot', 5, 'info')
except Exception as exc:
log.error(
'Error while creating snapshot of {0}: {1}'.format(
@ -3197,7 +3179,7 @@ def revert_to_snapshot(name, kwargs=None, call=None):
try:
task = vm_ref.RevertToCurrentSnapshot(suppressPowerOn=suppress_power_on)
_wait_for_task(task, name, "revert to snapshot", 5, 'info')
salt.utils.vmware.wait_for_task(task, name, 'revert to snapshot', 5, 'info')
except Exception as exc:
log.error(
@ -3241,7 +3223,7 @@ def remove_all_snapshots(name, kwargs=None, call=None):
try:
task = vm_ref.RemoveAllSnapshots()
_wait_for_task(task, name, "remove snapshots", 5, 'info')
salt.utils.vmware.wait_for_task(task, name, 'remove snapshots', 5, 'info')
except Exception as exc:
log.error(
'Error while removing snapshots on VM {0}: {1}'.format(
@ -3387,7 +3369,7 @@ def add_host(kwargs=None, call=None):
if datacenter_name:
task = datacenter_ref.hostFolder.AddStandaloneHost(spec=spec, addConnected=True)
ret = 'added host system to datacenter {0}'.format(datacenter_name)
_wait_for_task(task, host_name, "add host system", 5, 'info')
salt.utils.vmware.wait_for_task(task, host_name, 'add host system', 5, 'info')
except Exception as exc:
if isinstance(exc, vim.fault.SSLVerifyFault):
log.error('Authenticity of the host\'s SSL certificate is not verified')
@ -3441,7 +3423,7 @@ def remove_host(kwargs=None, call=None):
else:
# This is a host system that is part of a Cluster
task = host_ref.Destroy_Task()
_wait_for_task(task, host_name, "remove host", 1, 'info')
salt.utils.vmware.wait_for_task(task, host_name, 'remove host', log_level='info')
except Exception as exc:
log.error(
'Error while removing host {0}: {1}'.format(
@ -3490,7 +3472,7 @@ def connect_host(kwargs=None, call=None):
try:
task = host_ref.ReconnectHost_Task()
_wait_for_task(task, host_name, "connect host", 5, 'info')
salt.utils.vmware.wait_for_task(task, host_name, 'connect host', 5, 'info')
except Exception as exc:
log.error(
'Error while connecting host {0}: {1}'.format(
@ -3539,7 +3521,7 @@ def disconnect_host(kwargs=None, call=None):
try:
task = host_ref.DisconnectHost_Task()
_wait_for_task(task, host_name, "disconnect host", 1, 'info')
salt.utils.vmware.wait_for_task(task, host_name, 'disconnect host', log_level='info')
except Exception as exc:
log.error(
'Error while disconnecting host {0}: {1}'.format(


@ -68,6 +68,10 @@ def _query(function,
base_url = _urljoin(consul_url, '{0}/'.format(api_version))
url = _urljoin(base_url, function, False)
if data is None:
data = {}
data = json.dumps(data)
result = salt.utils.http.query(
url,
method=method,
@ -309,7 +313,7 @@ def put(consul_url=None, key=None, value=None, **kwargs):
ret = _query(consul_url=consul_url,
function=function,
method=method,
data=json.dumps(data),
data=data,
query_params=query_params)
if ret['res']:
@ -695,9 +699,7 @@ def agent_check_register(consul_url=None, **kwargs):
if 'name' in kwargs:
data['Name'] = kwargs['name']
else:
ret['message'] = 'Required parameter "name" is missing.'
ret['res'] = False
return ret
raise SaltInvocationError('Required argument "name" is missing.')
if True not in [True for item in ('script', 'http') if item in kwargs]:
ret['message'] = 'Required parameter "script" or "http" is missing.'
@ -973,6 +975,8 @@ def agent_service_register(consul_url=None, **kwargs):
if 'name' in kwargs:
data['Name'] = kwargs['name']
else:
raise SaltInvocationError('Required argument "name" is missing.')
if 'address' in kwargs:
data['Address'] = kwargs['address']
@ -1031,7 +1035,7 @@ def agent_service_deregister(consul_url=None, serviceid=None):
Used to remove a service.
:param consul_url: The Consul server URL.
:param name: A name describing the service.
:param serviceid: A serviceid describing the service.
:return: Boolean and message indicating success or failure.
CLI Example:
@ -1176,6 +1180,8 @@ def session_create(consul_url=None, **kwargs):
if 'name' in kwargs:
data['Name'] = kwargs['name']
else:
raise SaltInvocationError('Required argument "name" is missing.')
if 'checks' in kwargs:
data['Touch'] = kwargs['touch']
@ -1455,11 +1461,11 @@ def catalog_register(consul_url=None, **kwargs):
if res['res']:
ret['res'] = True
ret['message'] = ('Catalog registration '
'for {0} successful.'.format(kwargs['name']))
'for {0} successful.'.format(kwargs['node']))
else:
ret['res'] = False
ret['message'] = ('Catalog registration '
'for {0} failed.'.format(kwargs['name']))
'for {0} failed.'.format(kwargs['node']))
return ret
@ -1516,11 +1522,11 @@ def catalog_deregister(consul_url=None, **kwargs):
data=data)
if res['res']:
ret['res'] = True
ret['message'] = 'Catalog item {0} removed.'.format(kwargs['name'])
ret['message'] = 'Catalog item {0} removed.'.format(kwargs['node'])
else:
ret['res'] = False
ret['message'] = ('Removing Catalog '
'item {0} failed.'.format(kwargs['name']))
'item {0} failed.'.format(kwargs['node']))
return ret
@ -1980,6 +1986,8 @@ def acl_create(consul_url=None, **kwargs):
if 'name' in kwargs:
data['Name'] = kwargs['name']
else:
raise SaltInvocationError('Required argument "name" is missing.')
if 'type' in kwargs:
data['Type'] = kwargs['type']
@ -2043,6 +2051,8 @@ def acl_update(consul_url=None, **kwargs):
if 'name' in kwargs:
data['Name'] = kwargs['name']
else:
raise SaltInvocationError('Required argument "name" is missing.')
if 'type' in kwargs:
data['Type'] = kwargs['type']
@ -2325,6 +2335,8 @@ def event_list(consul_url=None, **kwargs):
if 'name' in kwargs:
query_params = kwargs['name']
else:
raise SaltInvocationError('Required argument "name" is missing.')
function = 'event/list/'
ret = _query(consul_url=consul_url,


@ -1801,58 +1801,61 @@ def replace(path,
append_if_not_found) \
else repl
# mmap throws a ValueError if the file is empty, but if it is empty we
# should be able to skip the search anyway. NOTE: Is there a use case for
# searching an empty file with an empty pattern?
if filesize is not 0:
try:
# First check the whole file, determine whether to make the replacement
# Searching first avoids modifying the time stamp if there are no changes
r_data = None
try:
# Use a read-only handle to open the file
with salt.utils.fopen(path,
mode='rb',
buffering=bufsize) as r_file:
# Use a read-only handle to open the file
with salt.utils.fopen(path,
mode='rb',
buffering=bufsize) as r_file:
try:
# mmap throws a ValueError if the file is empty.
r_data = mmap.mmap(r_file.fileno(),
0,
access=mmap.ACCESS_READ)
if search_only:
# Just search; bail as early as a match is found
if re.search(cpattern, r_data):
return True # `with` block handles file closure
else:
result, nrepl = re.subn(cpattern, repl, r_data, count)
except ValueError:
# size of file in /proc is 0, but contains data
r_data = "".join(r_file)
if search_only:
# Just search; bail as early as a match is found
if re.search(cpattern, r_data):
return True # `with` block handles file closure
else:
result, nrepl = re.subn(cpattern, repl, r_data, count)
# found anything? (even if no change)
if nrepl > 0:
# found anything? (even if no change)
if nrepl > 0:
found = True
# Identity check the potential change
has_changes = True if pattern != repl else has_changes
if prepend_if_not_found or append_if_not_found:
# Search for content, to avoid pre/appending the
# content if it was pre/appended in a previous run.
if re.search('^{0}$'.format(re.escape(content)),
r_data,
flags=flags_num):
# Content was found, so set found.
found = True
# Identity check the potential change
has_changes = True if pattern != repl else has_changes
if prepend_if_not_found or append_if_not_found:
# Search for content, to avoid pre/appending the
# content if it was pre/appended in a previous run.
if re.search('^{0}$'.format(re.escape(content)),
r_data,
flags=flags_num):
# Content was found, so set found.
found = True
# Keep track of show_changes here, in case the file isn't
# modified
if show_changes or append_if_not_found or \
prepend_if_not_found:
orig_file = r_data.read(filesize).splitlines(True) \
if hasattr(r_data, 'read') \
else r_data.splitlines(True)
new_file = result.splitlines(True)
# Keep track of show_changes here, in case the file isn't
# modified
if show_changes or append_if_not_found or \
prepend_if_not_found:
orig_file = r_data.read(filesize).splitlines(True)
new_file = result.splitlines(True)
except (OSError, IOError) as exc:
raise CommandExecutionError(
"Unable to open file '{0}'. "
"Exception: {1}".format(path, exc)
)
finally:
if r_data and isinstance(r_data, mmap.mmap):
r_data.close()
except (OSError, IOError) as exc:
raise CommandExecutionError(
"Unable to open file '{0}'. "
"Exception: {1}".format(path, exc)
)
finally:
if r_data and isinstance(r_data, mmap.mmap):
r_data.close()
if has_changes and not dry_run:
# Write the replacement text in this block.
@ -3349,15 +3352,8 @@ def get_managed(
source_sum = __salt__['cp.hash_file'](source, saltenv)
if not source_sum:
return '', {}, 'Source file {0} not found'.format(source)
# if its a local file
elif urlparsed_source.scheme == 'file':
file_sum = get_hash(urlparsed_source.path, form='sha256')
source_sum = {'hsum': file_sum, 'hash_type': 'sha256'}
elif source.startswith('/'):
file_sum = get_hash(source, form='sha256')
source_sum = {'hsum': file_sum, 'hash_type': 'sha256'}
elif source_hash:
protos = ('salt', 'http', 'https', 'ftp', 'swift', 's3')
protos = ('salt', 'http', 'https', 'ftp', 'swift', 's3', 'file')
if _urlparse(source_hash).scheme in protos:
# The source_hash is a file on a server
hash_fn = __salt__['cp.cache_file'](source_hash, saltenv)
@ -3366,20 +3362,27 @@ def get_managed(
source_hash)
source_sum = extract_hash(hash_fn, '', name)
if source_sum is None:
return '', {}, ('Source hash file {0} contains an invalid '
'hash format, it must be in the format <hash type>=<hash>.'
).format(source_hash)
return '', {}, ('Source hash {0} format is invalid. It '
'must be in the format, <hash type>=<hash>, or it '
'must be a supported protocol: {1}'
).format(source_hash, ', '.join(protos))
else:
# The source_hash is a hash string
comps = source_hash.split('=')
if len(comps) < 2:
return '', {}, ('Source hash file {0} contains an '
'invalid hash format, it must be in '
'the format <hash type>=<hash>'
).format(source_hash)
return '', {}, ('Source hash {0} format is invalid. It '
'must be in the format, <hash type>=<hash>, or it '
'must be a supported protocol: {1}'
).format(source_hash, ', '.join(protos))
source_sum['hsum'] = comps[1].strip()
source_sum['hash_type'] = comps[0].strip()
elif urlparsed_source.scheme == 'file':
file_sum = get_hash(urlparsed_source.path, form='sha256')
source_sum = {'hsum': file_sum, 'hash_type': 'sha256'}
elif source.startswith('/'):
file_sum = get_hash(source, form='sha256')
source_sum = {'hsum': file_sum, 'hash_type': 'sha256'}
else:
return '', {}, ('Unable to determine upstream hash of'
' source file {0}').format(source)


@ -127,7 +127,7 @@ def _auth(profile=None, api_version=2, **connection_args):
admin_token = get('token')
region = get('region')
ks_endpoint = get('endpoint', 'http://127.0.0.1:9292/')
g_endpoint_url = __salt__['keystone.endpoint_get']('glance')
g_endpoint_url = __salt__['keystone.endpoint_get']('glance', profile)
# The trailing 'v2' causes URLs like this one:
# http://127.0.0.1:9292/v2/v1/images
g_endpoint_url = re.sub('/v2', '', g_endpoint_url['internalurl'])
@ -293,7 +293,7 @@ def image_create(name, location=None, profile=None, visibility=None,
# in a usable fashion. Thus we have to use v1 for now.
g_client = _auth(profile, api_version=1)
image = g_client.images.create(name=name, **kwargs)
return image_show(image.id)
return image_show(image.id, profile=profile)
def image_delete(id=None, name=None, profile=None): # pylint: disable=C0103
@ -460,13 +460,13 @@ def image_update(id=None, name=None, profile=None, **kwargs): # pylint: disable
- visibility ('public' or 'private')
'''
if id:
image = image_show(id=id)
image = image_show(id=id, profile=profile)
if 'result' in image and not image['result']:
return image
elif len(image) == 1:
image = image.values()[0]
elif name:
img_list = image_list(name=name)
img_list = image_list(name=name, profile=profile)
if img_list is dict and 'result' in img_list:
return img_list
elif len(img_list) == 0:


@ -109,11 +109,12 @@ def getfacl(*args, **kwargs):
if entity in vals:
del vals[entity]
if acl_type == 'acl':
ret[dentry][entity] = vals
ret[dentry][entity] = [{"": vals}]
elif acl_type == 'default':
if 'defaults' not in ret[dentry]:
ret[dentry]['defaults'] = {}
ret[dentry]['defaults'][entity] = vals
ret[dentry]['defaults'][entity] = [{"": vals}]
return ret


@ -567,7 +567,7 @@ def _parse_settings_eth(opts, iface_type, enabled, iface):
if 'mtu' in opts:
try:
result['mtu'] = int(opts['mtu'])
except Exception:
except ValueError:
_raise_error_iface(iface, 'mtu', ['integer'])
if iface_type not in ['bridge']:
@ -648,13 +648,6 @@ def _parse_settings_eth(opts, iface_type, enabled, iface):
if opt in opts:
result[opt] = opts[opt]
if 'mtu' in opts:
try:
int(opts['mtu'])
result['mtu'] = opts['mtu']
except Exception:
_raise_error_iface(iface, 'mtu', ['integer'])
if 'enable_ipv6' in opts:
result['enable_ipv6'] = opts['enable_ipv6']


@ -19,6 +19,10 @@ Connection module for Amazon S3
s3.service_url: s3.amazonaws.com
A role_arn may also be specified in the configuration::
s3.role_arn: arn:aws:iam::111111111111:role/my-role-to-assume
If a service_url is not specified, the default is s3.amazonaws.com. This
may appear in various documentation as an "endpoint". A comprehensive list
for Amazon S3 may be found at::
@ -67,7 +71,8 @@ def __virtual__():
def delete(bucket, path=None, action=None, key=None, keyid=None,
service_url=None, verify_ssl=None, kms_keyid=None, location=None):
service_url=None, verify_ssl=None, kms_keyid=None, location=None,
role_arn=None):
'''
Delete a bucket, or delete an object from a bucket.
@ -79,13 +84,14 @@ def delete(bucket, path=None, action=None, key=None, keyid=None,
salt myminion s3.delete mybucket remoteobject
'''
key, keyid, service_url, verify_ssl, kms_keyid, location = _get_key(
key, keyid, service_url, verify_ssl, kms_keyid, location, role_arn = _get_key(
key,
keyid,
service_url,
verify_ssl,
kms_keyid,
location,
role_arn,
)
return salt.utils.s3.query(method='DELETE',
@ -97,12 +103,13 @@ def delete(bucket, path=None, action=None, key=None, keyid=None,
kms_keyid=kms_keyid,
service_url=service_url,
verify_ssl=verify_ssl,
location=location)
location=location,
role_arn=role_arn)
def get(bucket=None, path=None, return_bin=False, action=None,
local_file=None, key=None, keyid=None, service_url=None,
verify_ssl=None, kms_keyid=None, location=None):
verify_ssl=None, kms_keyid=None, location=None, role_arn=None):
'''
List the contents of a bucket, or return an object from a bucket. Set
return_bin to True in order to retrieve an object wholesale. Otherwise,
@ -154,13 +161,14 @@ def get(bucket=None, path=None, return_bin=False, action=None,
salt myminion s3.get mybucket myfile.png action=acl
'''
key, keyid, service_url, verify_ssl, kms_keyid, location = _get_key(
key, keyid, service_url, verify_ssl, kms_keyid, location, role_arn = _get_key(
key,
keyid,
service_url,
verify_ssl,
kms_keyid,
location,
role_arn,
)
return salt.utils.s3.query(method='GET',
@ -174,11 +182,12 @@ def get(bucket=None, path=None, return_bin=False, action=None,
kms_keyid=kms_keyid,
service_url=service_url,
verify_ssl=verify_ssl,
location=location)
location=location,
role_arn=role_arn)
def head(bucket, path=None, key=None, keyid=None, service_url=None,
verify_ssl=None, kms_keyid=None, location=None):
verify_ssl=None, kms_keyid=None, location=None, role_arn=None):
'''
Return the metadata for a bucket, or an object in a bucket.
@@ -189,13 +198,14 @@ def head(bucket, path=None, key=None, keyid=None, service_url=None,
salt myminion s3.head mybucket
salt myminion s3.head mybucket myfile.png
'''
key, keyid, service_url, verify_ssl, kms_keyid, location = _get_key(
key, keyid, service_url, verify_ssl, kms_keyid, location, role_arn = _get_key(
key,
keyid,
service_url,
verify_ssl,
kms_keyid,
location,
role_arn,
)
return salt.utils.s3.query(method='HEAD',
@@ -207,12 +217,13 @@ def head(bucket, path=None, key=None, keyid=None, service_url=None,
service_url=service_url,
verify_ssl=verify_ssl,
location=location,
full_headers=True)
full_headers=True,
role_arn=role_arn)
def put(bucket, path=None, return_bin=False, action=None, local_file=None,
key=None, keyid=None, service_url=None, verify_ssl=None,
kms_keyid=None, location=None):
kms_keyid=None, location=None, role_arn=None):
'''
Create a new bucket, or upload an object to a bucket.
@@ -228,13 +239,14 @@ def put(bucket, path=None, return_bin=False, action=None, local_file=None,
salt myminion s3.put mybucket remotepath local_file=/path/to/file
'''
key, keyid, service_url, verify_ssl, kms_keyid, location = _get_key(
key, keyid, service_url, verify_ssl, kms_keyid, location, role_arn = _get_key(
key,
keyid,
service_url,
verify_ssl,
kms_keyid,
location,
role_arn,
)
return salt.utils.s3.query(method='PUT',
@@ -248,10 +260,11 @@ def put(bucket, path=None, return_bin=False, action=None, local_file=None,
kms_keyid=kms_keyid,
service_url=service_url,
verify_ssl=verify_ssl,
location=location)
location=location,
role_arn=role_arn)
def _get_key(key, keyid, service_url, verify_ssl, kms_keyid, location):
def _get_key(key, keyid, service_url, verify_ssl, kms_keyid, location, role_arn):
'''
Examine the keys, and populate as necessary
'''
@@ -279,4 +292,7 @@ def _get_key(key, keyid, service_url, verify_ssl, kms_keyid, location):
if location is None and __salt__['config.option']('s3.location') is not None:
location = __salt__['config.option']('s3.location')
return key, keyid, service_url, verify_ssl, kms_keyid, location
if role_arn is None and __salt__['config.option']('s3.role_arn') is not None:
role_arn = __salt__['config.option']('s3.role_arn')
return key, keyid, service_url, verify_ssl, kms_keyid, location, role_arn
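The `_get_key` pattern above — prefer the explicit argument, otherwise fall back to the matching `s3.*` config option — can be sketched independently of Salt (the helper name and config keys below are illustrative, not Salt's API):

```python
def _with_defaults(config, **kwargs):
    """Return each kwarg, falling back to config['s3.<name>'] when None."""
    return {
        name: (value if value is not None else config.get('s3.' + name))
        for name, value in kwargs.items()
    }

resolved = _with_defaults(
    {'s3.location': 'us-east-1',
     's3.role_arn': 'arn:aws:iam::123456789012:role/demo'},
    key='AKIAEXAMPLE',
    location=None,
    role_arn=None,
)
```

Threading `role_arn` through every public function into `_get_key` keeps this fallback in one place instead of repeating it at each call site.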


@@ -839,7 +839,7 @@ def _get_ssh_or_api_client(cfgfile, ssh=False):
def _exec(client, tgt, fun, arg, timeout, expr_form, ret, kwarg, **kwargs):
ret = {}
fcn_ret = {}
seen = 0
if 'batch' in kwargs:
_cmd = client.cmd_batch
@@ -856,14 +856,14 @@ def _exec(client, tgt, fun, arg, timeout, expr_form, ret, kwarg, **kwargs):
}
cmd_kwargs.update(kwargs)
for ret_comp in _cmd(**cmd_kwargs):
ret.update(ret_comp)
fcn_ret.update(ret_comp)
seen += 1
# ret can be empty, so we cannot len the whole return dict
# fcn_ret can be empty, so we cannot len the whole return dict
if expr_form == 'list' and len(tgt) == seen:
# do not wait for timeout when explicit list matching
# and all results are there
break
return ret
return fcn_ret
def cmd(tgt,
@@ -886,29 +886,30 @@ def cmd(tgt,
'''
cfgfile = __opts__['conf_file']
client = _get_ssh_or_api_client(cfgfile, ssh)
ret = _exec(
fcn_ret = _exec(
client, tgt, fun, arg, timeout, expr_form, ret, kwarg, **kwargs)
# if return is empty, we may have not used the right conf,
# try with the 'minion relative master configuration counter part
# if available
master_cfgfile = '{0}master'.format(cfgfile[:-6]) # remove 'minion'
if (
not ret
not fcn_ret
and cfgfile.endswith('{0}{1}'.format(os.path.sep, 'minion'))
and os.path.exists(master_cfgfile)
):
client = _get_ssh_or_api_client(master_cfgfile, ssh)
ret = _exec(
fcn_ret = _exec(
client, tgt, fun, arg, timeout, expr_form, ret, kwarg, **kwargs)
if 'batch' in kwargs:
old_ret, ret = ret, {}
old_ret, fcn_ret = fcn_ret, {}
for key, value in old_ret.items():
ret[key] = {
fcn_ret[key] = {
'out': value.get('out', 'highstate') if isinstance(value, dict) else 'highstate',
'ret': value,
}
return ret
return fcn_ret
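The `ret` → `fcn_ret` rename above matters because `_exec` and `cmd` also take a parameter named `ret` (the returner to use); reusing that name as the result accumulator silently discards the argument. A minimal illustration of the hazard, with made-up function names:

```python
def collect_bad(ret):
    # BUG: reusing 'ret' as the accumulator clobbers the returner argument
    ret = {}
    ret.update({'minion1': True})
    return ret, None          # the original 'ret' value is unreachable here

def collect_good(ret):
    fcn_ret = {}              # distinct name keeps the 'ret' argument intact
    fcn_ret.update({'minion1': True})
    return fcn_ret, ret

acc, returner = collect_good('local')
bad_acc, lost = collect_bad('local')
```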
def cmd_iter(tgt,


@@ -854,7 +854,7 @@ def check_known_host(user=None, hostname=None, key=None, fingerprint=None,
known_host = get_known_host(user, hostname, config=config, port=port)
if not known_host:
if not known_host or 'fingerprint' not in known_host:
return 'add'
if key:
return 'exists' if key == known_host['key'] else 'update'
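The `check_known_host` change above means a stored entry without a recorded fingerprint is treated the same as a missing entry. The decision table reduces to a few lines (a sketch, not the Salt module itself):

```python
def known_host_action(known_host, key=None):
    """Decide whether a known_hosts entry needs adding or updating."""
    if not known_host or 'fingerprint' not in known_host:
        return 'add'                      # nothing usable recorded yet
    if key:
        return 'exists' if key == known_host.get('key') else 'update'
    return 'exists'
```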


@@ -25,6 +25,7 @@ import os
import ctypes
import sys
import time
import datetime
from subprocess import list2cmdline
log = logging.getLogger(__name__)
@@ -64,7 +65,7 @@ def cpuload():
.. code-block:: bash
salt '*' status.cpu_load
salt '*' status.cpuload
'''
# Pull in the information from WMIC
@@ -94,7 +95,7 @@ def diskusage(human_readable=False, path=None):
.. code-block:: bash
salt '*' status.disk_usage path=c:/salt
salt '*' status.diskusage path=c:/salt
'''
if not path:
path = 'c:/'
@@ -169,8 +170,8 @@ def saltmem(human_readable=False):
.. code-block:: bash
salt '*' status.salt_mem
salt '*' status.salt_mem human_readable=True
salt '*' status.saltmem
salt '*' status.saltmem human_readable=True
'''
with salt.utils.winapi.Com():
wmi_obj = wmi.WMI()
@@ -216,40 +217,18 @@ def uptime(human_readable=False):
#
# Get string
startup_time = stats_line[len('Statistics Since '):]
# Convert to struct
startup_time = time.strptime(startup_time, '%d/%m/%Y %H:%M:%S')
# Convert to seconds since epoch
startup_time = time.mktime(startup_time)
# Convert to time struct
try:
startup_time = time.strptime(startup_time, '%d/%m/%Y %H:%M:%S')
except ValueError:
startup_time = time.strptime(startup_time, '%d/%m/%Y %I:%M:%S %p')
# Convert to datetime object
startup_time = datetime.datetime(*startup_time[:6])
# Subtract startup time from current time to get the uptime of the system
uptime = time.time() - startup_time
uptime = datetime.datetime.now() - startup_time
if human_readable:
# Pull out the majority of the uptime tuple. h:m:s
uptime = int(uptime)
seconds = uptime % 60
uptime /= 60
minutes = uptime % 60
uptime /= 60
hours = uptime % 24
uptime /= 24
# Translate the h:m:s from above into HH:MM:SS format.
ret = '{0:0>2}:{1:0>2}:{2:0>2}'.format(hours, minutes, seconds)
# If the minion has been on for days, add that in.
if uptime > 0:
ret = 'Days: {0} {1}'.format(uptime % 365, ret)
# If you have a Windows minion that has been up for years,
# my hat is off to you sir.
if uptime > 365:
ret = 'Years: {0} {1}'.format(uptime / 365, ret)
return ret
else:
return uptime
return str(uptime) if human_readable else uptime.total_seconds()
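The rewritten `uptime` leans on `datetime` subtraction: the resulting `timedelta` gives seconds via `total_seconds()` and a readable form via `str()`, replacing the hand-rolled divmod chain. A standalone sketch (the function name and timestamps are illustrative):

```python
from datetime import datetime, timedelta

def uptime_from(startup_time, now=None, human_readable=False):
    """Uptime as float seconds, or as timedelta's readable str() form."""
    delta = (now or datetime.now()) - startup_time
    return str(delta) if human_readable else delta.total_seconds()

now = datetime(2015, 12, 1, 12, 0, 0)
boot = now - timedelta(days=2, hours=3)
```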
def _get_process_info(proc):


@@ -1252,54 +1252,32 @@ def _get_first_aggregate_text(node_list):
return '\n'.join(out)
def _parse_suse_product(path, *info):
def list_products(all=False):
'''
Parse SUSE LLC product.
'''
doc = dom.parse(path)
product = {}
for nfo in info:
product.update(
{nfo: _get_first_aggregate_text(
doc.getElementsByTagName(nfo)
)}
)
List all available or installed SUSE products.
return product
def list_products():
'''
List all installed SUSE products.
all
List all products available or only installed. Default is False.
CLI Examples:
.. code-block:: bash
salt '*' pkg.list_products
salt '*' pkg.list_products all=True
'''
products_dir = '/etc/products.d'
if not os.path.exists(products_dir):
raise CommandExecutionError(
'Directory {0} does not exist'.format(products_dir)
)
p_data = {}
for fname in os.listdir(products_dir):
pth_name = os.path.join(products_dir, fname)
r_pth_name = os.path.realpath(pth_name)
p_data[r_pth_name] = r_pth_name != pth_name and 'baseproduct' or None
info = ['vendor', 'name', 'version', 'baseversion', 'patchlevel',
'predecessor', 'release', 'endoflife', 'arch', 'cpeid',
'productline', 'updaterepokey', 'summary', 'shortsummary',
'description']
ret = {}
for prod_meta, is_base_product in six.iteritems(p_data):
product = _parse_suse_product(prod_meta, *info)
product['baseproduct'] = is_base_product is not None
ret[product.pop('name')] = product
ret = list()
doc = dom.parseString(__salt__['cmd.run'](("zypper -x products{0}".format(not all and ' -i' or '')),
output_loglevel='trace'))
for prd in doc.getElementsByTagName('product-list')[0].getElementsByTagName('product'):
p_data = dict()
p_nfo = dict(prd.attributes.items())
p_name = p_nfo.pop('name')
p_data[p_name] = p_nfo
p_data[p_name]['eol'] = prd.getElementsByTagName('endoflife')[0].getAttribute('text')
descr = _get_first_aggregate_text(prd.getElementsByTagName('description'))
p_data[p_name]['description'] = " ".join([line.strip() for line in descr.split(os.linesep)])
ret.append(p_data)
return ret


@@ -375,16 +375,17 @@ def _format_host(host, data):
line_max_len - 7)
hstrs.append(colorfmt.format(colors['CYAN'], totals, colors))
sum_duration = sum(rdurations)
duration_unit = 'ms'
# convert to seconds if duration is 1000ms or more
if sum_duration > 999:
sum_duration /= 1000
duration_unit = 's'
total_duration = u'Total run time: {0} {1}'.format(
'{0:.3f}'.format(sum_duration).rjust(line_max_len - 5),
duration_unit)
hstrs.append(colorfmt.format(colors['CYAN'], total_duration, colors))
if __opts__.get('state_output_profile', False):
sum_duration = sum(rdurations)
duration_unit = 'ms'
# convert to seconds if duration is 1000ms or more
if sum_duration > 999:
sum_duration /= 1000
duration_unit = 's'
total_duration = u'Total run time: {0} {1}'.format(
'{0:.3f}'.format(sum_duration).rjust(line_max_len - 5),
duration_unit)
hstrs.append(colorfmt.format(colors['CYAN'], total_duration, colors))
if strip_colors:
host = salt.output.strip_esc_sequence(host)
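The profiling block above switches the unit once the summed duration reaches 1000 ms; the conversion and `{0:.3f}` formatting can be checked in isolation (helper name is illustrative, and the right-justification of the real output is omitted):

```python
def format_total_duration(durations_ms):
    """Render summed state durations, switching to seconds at 1000 ms."""
    total = sum(durations_ms)
    unit = 'ms'
    if total > 999:
        total /= 1000.0   # 1000 ms or more reads better in seconds
        unit = 's'
    return 'Total run time: {0:.3f} {1}'.format(total, unit)
```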


@@ -8,6 +8,15 @@ from __future__ import absolute_import
import fnmatch
import re
# Try to import range from https://github.com/ytoolshed/range
HAS_RANGE = False
try:
import seco.range
HAS_RANGE = True
except ImportError:
pass
# pylint: enable=import-error
# Import Salt libs
import salt.loader
from salt.template import compile_template
@@ -107,6 +116,23 @@ class RosterMatcher(object):
minions[minion] = data
return minions
def ret_range_minions(self):
'''
Return minions that are returned by a range query
'''
if HAS_RANGE is False:
raise RuntimeError("Python lib 'seco.range' is not available")
minions = {}
range_hosts = _convert_range_to_list(self.tgt, __opts__['range_server'])
for minion in self.raw:
if minion in range_hosts:
data = self.get_data(minion)
if data:
minions[minion] = data
return minions
def get_data(self, minion):
'''
Return the configured ip
@@ -116,3 +142,15 @@ class RosterMatcher(object):
if isinstance(self.raw[minion], dict):
return self.raw[minion]
return False
def _convert_range_to_list(tgt, range_server):
'''
convert a seco.range range into a list target
'''
r = seco.range.Range(range_server)
try:
return r.expand(tgt)
except seco.range.RangeException as err:
log.error('Range server exception: {0}'.format(err))
return []

salt/roster/range.py (new file, 73 lines)

@@ -0,0 +1,73 @@
# -*- coding: utf-8 -*-
'''
This roster resolves targets from a range server.
:depends: seco.range, https://github.com/ytoolshed/range
When you want to use a range query for target matching, use ``--roster range``. For example:
.. code-block:: bash
salt-ssh --roster range '%%%example.range.cluster' test.ping
'''
from __future__ import absolute_import
import fnmatch
import logging
log = logging.getLogger(__name__)
# Try to import range from https://github.com/ytoolshed/range
HAS_RANGE = False
try:
import seco.range
HAS_RANGE = True
except ImportError:
log.error('Unable to load range library')
# pylint: enable=import-error
def __virtual__():
return HAS_RANGE
def targets(tgt, tgt_type='range', **kwargs):
'''
Return the targets from a range query
'''
r = seco.range.Range(__opts__['range_server'])
log.debug('Range connection to \'{0}\' established'.format(__opts__['range_server']))
hosts = []
try:
log.debug('Querying range for \'{0}\''.format(tgt))
hosts = r.expand(tgt)
except seco.range.RangeException as err:
log.error('Range server exception: %s', err)
return {}
log.debug('Range responded with: \'{0}\''.format(hosts))
# Currently we only support a raw range entry; no target filtering is applied beyond what range itself returns
tgt_func = {
'range': target_range,
'glob': target_range,
# 'glob': target_glob,
}
log.debug('Filtering using tgt_type: \'{0}\''.format(tgt_type))
try:
targeted_hosts = tgt_func[tgt_type](tgt, hosts)
except KeyError:
raise NotImplementedError
log.debug('Targeting data for salt-ssh: \'{0}\''.format(targeted_hosts))
return targeted_hosts
def target_range(tgt, hosts):
return dict((host, {'host': host, 'user': __opts__['ssh_user']}) for host in hosts)
def target_glob(tgt, hosts):
return dict((host, {'host': host, 'user': __opts__['ssh_user']}) for host in hosts if fnmatch.fnmatch(host, tgt))
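`target_glob` is meant to keep only hosts matching the glob; note that `fnmatch.fnmatch(name, pattern)` takes the host name first and the pattern second. A standalone sketch of the filtering step without `__opts__` (names are illustrative):

```python
import fnmatch

def filter_glob(tgt, hosts, ssh_user='root'):
    # fnmatch(name, pattern): the host is the name, tgt is the glob pattern
    return {host: {'host': host, 'user': ssh_user}
            for host in hosts if fnmatch.fnmatch(host, tgt)}

matched = filter_glob('web*.example.com',
                      ['web1.example.com', 'db1.example.com'])
```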


@@ -1,28 +1,105 @@
# -*- coding: utf-8 -*-
'''
Directly manage the salt git_pillar plugin
Runner module to directly manage the git external pillar
'''
from __future__ import absolute_import
# Import python libs
import logging
# Import salt libs
import salt.pillar.git_pillar
import salt.utils.gitfs
from salt.exceptions import SaltRunnerError
from salt.ext import six
log = logging.getLogger(__name__)
def update(branch, repo):
def update(branch=None, repo=None):
'''
Execute an update for the configured git fileserver backend for Pillar
.. versionadded:: 2014.1.0
.. versionchanged:: 2015.8.4
This runner function now supports the :ref:`new git_pillar
configuration schema <git-pillar-2015-8-0-and-later>` introduced in
2015.8.0. Additionally, the branch and repo can now be omitted to
update all git_pillar remotes. The return data has also changed. For
releases 2015.8.3 and earlier, there is no value returned. Starting
with 2015.8.4, the return data is a dictionary. If using the :ref:`old
git_pillar configuration schema <git-pillar-pre-2015-8-0>`, then the
dictionary values will be ``True`` if the update completed without
error, and ``False`` if an error occurred. If using the :ref:`new
git_pillar configuration schema <git-pillar-2015-8-0-and-later>`, the
values will be ``True`` only if new commits were fetched, and ``False``
if there were errors or no new commits were fetched.
Update one or all configured git_pillar remotes.
CLI Example:
.. code-block:: bash
salt-run git_pillar.update branch='branch' repo='location'
# Update specific branch and repo
salt-run git_pillar.update branch='branch' repo='https://foo.com/bar.git'
# Update all repos (2015.8.4 and later)
salt-run git_pillar.update
# Run with debug logging
salt-run git_pillar.update -l debug
'''
for opts_dict in __opts__.get('ext_pillar', []):
parts = opts_dict.get('git', '').split()
if len(parts) >= 2 and parts[:2] == [branch, repo]:
salt.pillar.git_pillar.GitPillar(branch, repo, __opts__).update()
break
else:
raise SaltRunnerError('git repo/branch not found in ext_pillar config')
ret = {}
for ext_pillar in __opts__.get('ext_pillar', []):
pillar_type = next(iter(ext_pillar))
if pillar_type != 'git':
continue
pillar_conf = ext_pillar[pillar_type]
if isinstance(pillar_conf, six.string_types):
parts = pillar_conf.split()
if len(parts) >= 2:
desired_branch, desired_repo = parts[:2]
# Skip this remote if it doesn't match the search criteria
if branch is not None:
if branch != desired_branch:
continue
if repo is not None:
if repo != desired_repo:
continue
ret[pillar_conf] = salt.pillar.git_pillar._LegacyGitPillar(
parts[0],
parts[1],
__opts__).update()
else:
pillar = salt.utils.gitfs.GitPillar(__opts__)
pillar.init_remotes(pillar_conf,
salt.pillar.git_pillar.PER_REMOTE_OVERRIDES)
for remote in pillar.remotes:
# Skip this remote if it doesn't match the search criteria
if branch is not None:
if branch != remote.branch:
continue
if repo is not None:
if repo != remote.url:
continue
try:
result = remote.fetch()
except Exception as exc:
log.error(
'Exception \'{0}\' caught while fetching git_pillar '
'remote \'{1}\''.format(exc, remote.id),
exc_info_on_loglevel=logging.DEBUG
)
result = False
finally:
remote.clear_lock()
ret[remote.id] = result
if not ret:
if branch is not None or repo is not None:
raise SaltRunnerError(
'Specified git branch/repo not found in ext_pillar config'
)
else:
raise SaltRunnerError('No git_pillar remotes are configured')
return ret
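The runner above walks `ext_pillar`, keeps only `git` entries, and filters legacy `'<branch> <url>'` strings by the optional `branch`/`repo` arguments. The selection logic can be sketched with toy config and no actual fetching (the function name is illustrative):

```python
def select_git_remotes(ext_pillar, branch=None, repo=None):
    """Return (branch, repo) pairs from legacy-style git ext_pillar entries."""
    selected = []
    for entry in ext_pillar:
        if 'git' not in entry:
            continue                      # skip non-git ext_pillar types
        conf = entry['git']
        if isinstance(conf, str):
            parts = conf.split()
            if len(parts) >= 2:
                b, r = parts[:2]
                if branch is not None and branch != b:
                    continue
                if repo is not None and repo != r:
                    continue
                selected.append((b, r))
    return selected

conf = [{'git': 'master https://foo.com/bar.git'},
        {'git': 'dev https://foo.com/bar.git'},
        {'cmd_yaml': 'cat /etc/salt/yaml'}]
```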


@@ -256,7 +256,10 @@ class SPMClient(object):
else:
self._verbose('Installing file {0} to {1}'.format(member.name, out_path), log.trace)
file_hash = hashlib.sha1()
digest = self._pkgfiles_fun('hash_file', out_path, file_hash, self.files_conn)
digest = self._pkgfiles_fun('hash_file',
os.path.join(out_path, member.name),
file_hash,
self.files_conn)
self._pkgdb_fun('register_file',
name,
member,


@@ -175,25 +175,31 @@ def state_args(id_, state, high):
def find_name(name, state, high):
'''
Scan high data for the id referencing the given name
Scan high data for the id referencing the given name and return a list of (IDs, state) tuples that match
Note: if `state` is sls, then we are looking for all IDs that match the given SLS
'''
ext_id = ''
ext_id = []
if name in high:
ext_id = name
ext_id.append((name, state))
# if we are requiring an entire SLS, then we need to add ourselves to everything in that SLS
elif state == 'sls':
for nid, item in six.iteritems(high):
if item['__sls__'] == name:
ext_id.append((nid, next(iter(item))))
# otherwise we are requiring a single state, lets find it
else:
# We need to scan for the name
for nid in high:
if state in high[nid]:
if isinstance(
high[nid][state],
list):
if isinstance(high[nid][state], list):
for arg in high[nid][state]:
if not isinstance(arg, dict):
continue
if len(arg) != 1:
continue
if arg[next(iter(arg))] == name:
ext_id = nid
ext_id.append((nid, state))
return ext_id
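With this change `find_name` returns a list of `(id, state)` tuples instead of a single ID, so an `sls` requisite can expand to every ID declared in the matched SLS. A reduced sketch of the three branches against toy high data (dunder keys are filtered for simplicity; not the full Salt implementation):

```python
def find_name(name, state, high):
    """Return (id, state) tuples whose declarations reference `name`."""
    matches = []
    if name in high:
        matches.append((name, state))
    elif state == 'sls':
        # an entire SLS was required: match every ID declared in it
        for nid, body in high.items():
            if body.get('__sls__') == name:
                state_key = next(k for k in body if not k.startswith('__'))
                matches.append((nid, state_key))
    else:
        # scan declarations for a matching 'name' argument
        for nid, body in high.items():
            for arg in body.get(state, []):
                if (isinstance(arg, dict) and len(arg) == 1
                        and next(iter(arg.values())) == name):
                    matches.append((nid, state))
    return matches

high = {
    'apache': {'__sls__': 'web', 'pkg': [{'name': 'httpd'}]},
    'nginx': {'__sls__': 'web', 'pkg': [{'name': 'nginx'}]},
}
```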
@@ -1252,10 +1258,8 @@ class State(object):
x for x in body if not x.startswith('__')
)
# Check for a matching 'name' override in high data
id_ = find_name(name, state_type, high)
if id_:
name = id_
else:
ids = find_name(name, state_type, high)
if len(ids) != 1:
errors.append(
'Cannot extend ID \'{0}\' in \'{1}:{2}\'. It is not '
'part of the high state.\n'
@@ -1268,6 +1272,9 @@ class State(object):
body.get('__sls__', 'base'))
)
continue
else:
name = ids[0][0]
for state, run in six.iteritems(body):
if state.startswith('__'):
continue
@@ -1463,68 +1470,69 @@ class State(object):
)
if key == 'prereq':
# Add prerequired to prereqs
ext_id = find_name(name, _state, high)
if not ext_id:
continue
if ext_id not in extend:
extend[ext_id] = {}
if _state not in extend[ext_id]:
extend[ext_id][_state] = []
extend[ext_id][_state].append(
{'prerequired': [{state: id_}]}
)
ext_ids = find_name(name, _state, high)
for ext_id, _req_state in ext_ids:
if ext_id not in extend:
extend[ext_id] = {}
if _req_state not in extend[ext_id]:
extend[ext_id][_req_state] = []
extend[ext_id][_req_state].append(
{'prerequired': [{state: id_}]}
)
continue
if key == 'use_in':
# Add the running states args to the
# use_in states
ext_id = find_name(name, _state, high)
if not ext_id:
continue
ext_args = state_args(ext_id, _state, high)
if ext_id not in extend:
extend[ext_id] = {}
if _state not in extend[ext_id]:
extend[ext_id][_state] = []
ignore_args = req_in_all.union(ext_args)
for arg in high[id_][state]:
if not isinstance(arg, dict):
ext_ids = find_name(name, _state, high)
for ext_id, _req_state in ext_ids:
if not ext_id:
continue
if len(arg) != 1:
continue
if next(iter(arg)) in ignore_args:
continue
# Don't use name or names
if next(six.iterkeys(arg)) == 'name':
continue
if next(six.iterkeys(arg)) == 'names':
continue
extend[ext_id][_state].append(arg)
ext_args = state_args(ext_id, _state, high)
if ext_id not in extend:
extend[ext_id] = {}
if _req_state not in extend[ext_id]:
extend[ext_id][_req_state] = []
ignore_args = req_in_all.union(ext_args)
for arg in high[id_][state]:
if not isinstance(arg, dict):
continue
if len(arg) != 1:
continue
if next(iter(arg)) in ignore_args:
continue
# Don't use name or names
if next(six.iterkeys(arg)) == 'name':
continue
if next(six.iterkeys(arg)) == 'names':
continue
extend[ext_id][_req_state].append(arg)
continue
if key == 'use':
# Add the use state's args to the
# running state
ext_id = find_name(name, _state, high)
if not ext_id:
continue
loc_args = state_args(id_, state, high)
if id_ not in extend:
extend[id_] = {}
if state not in extend[id_]:
extend[id_][state] = []
ignore_args = req_in_all.union(loc_args)
for arg in high[ext_id][_state]:
if not isinstance(arg, dict):
ext_ids = find_name(name, _state, high)
for ext_id, _req_state in ext_ids:
if not ext_id:
continue
if len(arg) != 1:
continue
if next(iter(arg)) in ignore_args:
continue
# Don't use name or names
if next(six.iterkeys(arg)) == 'name':
continue
if next(six.iterkeys(arg)) == 'names':
continue
extend[id_][state].append(arg)
loc_args = state_args(id_, state, high)
if id_ not in extend:
extend[id_] = {}
if state not in extend[id_]:
extend[id_][state] = []
ignore_args = req_in_all.union(loc_args)
for arg in high[ext_id][_req_state]:
if not isinstance(arg, dict):
continue
if len(arg) != 1:
continue
if next(iter(arg)) in ignore_args:
continue
# Don't use name or names
if next(six.iterkeys(arg)) == 'name':
continue
if next(six.iterkeys(arg)) == 'names':
continue
extend[id_][state].append(arg)
continue
found = False
if name not in extend:
@@ -1900,9 +1908,9 @@ class State(object):
for req in low[requisite]:
req = trim_req(req)
found = False
req_key = next(iter(req))
req_val = req[req_key]
for chunk in chunks:
req_key = next(iter(req))
req_val = req[req_key]
if req_val is None:
continue
if req_key == 'sls':


@@ -1218,6 +1218,8 @@ def running(name,
``--net=none``)
- ``container:<name_or_id>`` - Reuses another container's network stack
- ``host`` - Use the host's network stack inside the container
- Any name that identifies an existing network that might be created
with ``dockerng.network_present``.
.. warning::


@@ -57,11 +57,19 @@ def present(name, acl_type, acl_name='', perms='', recurse=False):
'comment': ''}
_octal = {'r': 4, 'w': 2, 'x': 1}
_current_perms = __salt__['acl.getfacl'](name)
if _current_perms[name].get(acl_type, None):
__current_perms = __salt__['acl.getfacl'](name)
if acl_type.startswith(('d:', 'default:')):
_acl_type = ':'.join(acl_type.split(':')[1:])
_current_perms = __current_perms[name].get('defaults', {})
else:
_acl_type = acl_type
_current_perms = __current_perms[name]
if _current_perms.get(_acl_type, None):
try:
user = [i for i in _current_perms[name][acl_type] if next(six.iterkeys(i)) == acl_name].pop()
user = [i for i in _current_perms[_acl_type] if next(six.iterkeys(i)) == acl_name].pop()
except (AttributeError, IndexError, StopIteration):
user = None
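The new handling strips a leading `d:` or `default:` prefix from `acl_type` and then consults the `defaults` sub-dict of the `getfacl` output. The prefix split can be exercised on its own (helper name is illustrative):

```python
def split_acl_type(acl_type):
    """Return (is_default, bare_type) for an ACL type such as 'd:user'."""
    if acl_type.startswith(('d:', 'default:')):
        # drop only the default marker, keeping any remaining colons
        return True, ':'.join(acl_type.split(':')[1:])
    return False, acl_type
```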
@@ -106,11 +114,18 @@ def absent(name, acl_type, acl_name='', perms='', recurse=False):
'changes': {},
'comment': ''}
_current_perms = __salt__['acl.getfacl'](name)
__current_perms = __salt__['acl.getfacl'](name)
if _current_perms[name].get(acl_type, None):
if acl_type.startswith(('d:', 'default:')):
_acl_type = ':'.join(acl_type.split(':')[1:])
_current_perms = __current_perms[name].get('defaults', {})
else:
_acl_type = acl_type
_current_perms = __current_perms[name]
if _current_perms.get(_acl_type, None):
try:
user = [i for i in _current_perms[name][acl_type] if next(six.iterkeys(i)) == acl_name].pop()
user = [i for i in _current_perms[_acl_type] if next(six.iterkeys(i)) == acl_name].pop()
except IndexError:
user = None


@@ -578,7 +578,7 @@ def swap(name, persist=True, config='/etc/fstab'):
def unmounted(name,
device,
device=None,
config='/etc/fstab',
persist=False,
user=None):
@@ -590,10 +590,11 @@ def unmounted(name,
name
The path to the location where the device is to be unmounted from
.. versionadded:: 2015.5.0
device
The device to be unmounted.
The device to be unmounted. This is optional because the device could
be mounted in multiple places.
.. versionadded:: 2015.5.0
config
Set an alternative location for the fstab, Default is ``/etc/fstab``


@@ -147,17 +147,17 @@ def state(
'''
cmd_kw = {'arg': [], 'kwarg': {}, 'ret': ret, 'timeout': timeout}
ret = {'name': name,
'changes': {},
'comment': '',
'result': True}
state_ret = {'name': name,
'changes': {},
'comment': '',
'result': True}
try:
allow_fail = int(allow_fail)
except ValueError:
ret['result'] = False
ret['comment'] = 'Passed invalid value for \'allow_fail\', must be an int'
return ret
state_ret['result'] = False
state_ret['comment'] = 'Passed invalid value for \'allow_fail\', must be an int'
return state_ret
if env is not None:
msg = (
@@ -167,11 +167,11 @@ def state(
'state files.'
)
salt.utils.warn_until('Boron', msg)
ret.setdefault('warnings', []).append(msg)
state_ret.setdefault('warnings', []).append(msg)
# No need to set __env__ = env since that's done in the state machinery
if expr_form and tgt_type:
ret.setdefault('warnings', []).append(
state_ret.setdefault('warnings', []).append(
'Please only use \'tgt_type\' or \'expr_form\' not both. '
'Preferring \'tgt_type\' over \'expr_form\''
)
@@ -195,9 +195,9 @@ def state(
sls = ','.join(sls)
cmd_kw['arg'].append(sls)
else:
ret['comment'] = 'No highstate or sls specified, no execution made'
ret['result'] = False
return ret
state_ret['comment'] = 'No highstate or sls specified, no execution made'
state_ret['result'] = False
return state_ret
if test or __opts__.get('test'):
cmd_kw['kwarg']['test'] = True
@@ -210,9 +210,9 @@ def state(
if isinstance(concurrent, bool):
cmd_kw['kwarg']['concurrent'] = concurrent
else:
ret['comment'] = ('Must pass in boolean for value of \'concurrent\'')
ret['result'] = False
return ret
state_ret['comment'] = ('Must pass in boolean for value of \'concurrent\'')
state_ret['result'] = False
return state_ret
if batch is not None:
cmd_kw['batch'] = str(batch)
@@ -229,7 +229,7 @@ def state(
elif isinstance(fail_minions, string_types):
fail_minions = [minion.strip() for minion in fail_minions.split(',')]
elif not isinstance(fail_minions, list):
ret.setdefault('warnings', []).append(
state_ret.setdefault('warnings', []).append(
'\'fail_minions\' needs to be a list or a comma separated '
'string. Ignored.'
)
@@ -268,20 +268,20 @@ def state(
no_change.add(minion)
if changes:
ret['changes'] = {'out': 'highstate', 'ret': changes}
state_ret['changes'] = {'out': 'highstate', 'ret': changes}
if len(fail) > allow_fail:
ret['result'] = False
ret['comment'] = 'Run failed on minions: {0}'.format(', '.join(fail))
state_ret['result'] = False
state_ret['comment'] = 'Run failed on minions: {0}'.format(', '.join(fail))
else:
ret['comment'] = 'States ran successfully.'
state_ret['comment'] = 'States ran successfully.'
if changes:
ret['comment'] += ' Updating {0}.'.format(', '.join(changes))
state_ret['comment'] += ' Updating {0}.'.format(', '.join(changes))
if no_change:
ret['comment'] += ' No changes made to {0}.'.format(', '.join(no_change))
state_ret['comment'] += ' No changes made to {0}.'.format(', '.join(no_change))
if failures:
ret['comment'] += '\nFailures:\n'
state_ret['comment'] += '\nFailures:\n'
for minion, failure in six.iteritems(failures):
ret['comment'] += '\n'.join(
state_ret['comment'] += '\n'.join(
(' ' * 4 + l)
for l in salt.output.out_format(
{minion: failure},
@@ -289,12 +289,12 @@ def state(
__opts__,
).splitlines()
)
ret['comment'] += '\n'
state_ret['comment'] += '\n'
if test or __opts__.get('test'):
if ret['changes'] and ret['result'] is True:
if state_ret['changes'] and state_ret['result'] is True:
# Test mode with changes is the only case where result should ever be none
ret['result'] = None
return ret
state_ret['result'] = None
return state_ret
def function(


@@ -394,6 +394,18 @@ def absent(name,
'result': True,
'comment': ''}
if __opts__['test']:
ret['result'], ret['comment'] = _absent_test(
user,
name,
enc,
comment,
options or [],
source,
config,
)
return ret
# Extract Key from file if source is present
if source != '':
key = __salt__['cp.get_file_str'](
@@ -438,18 +450,6 @@ def absent(name,
comment = comps[2]
ret['comment'] = __salt__['ssh.rm_auth_key'](user, name, config)
if __opts__['test']:
ret['result'], ret['comment'] = _absent_test(
user,
name,
enc,
comment,
options or [],
source,
config,
)
return ret
if ret['comment'] == 'User authorized keys file not present':
ret['result'] = False
return ret


@@ -25,6 +25,7 @@ import os
# Import salt libs
import salt.utils
from salt.exceptions import CommandNotFoundError
def present(
@@ -116,10 +117,15 @@ def present(
ret['result'] = False
return dict(ret, comment=comment)
result = __salt__['ssh.check_known_host'](user, name,
key=key,
fingerprint=fingerprint,
config=config)
try:
result = __salt__['ssh.check_known_host'](user, name,
key=key,
fingerprint=fingerprint,
config=config)
except CommandNotFoundError as err:
ret['result'] = False
ret['comment'] = 'ssh.check_known_host error: {0}'.format(err)
return ret
if result == 'exists':
comment = 'Host {0} is already in {1}'.format(name, config)


@@ -14,10 +14,12 @@ from __future__ import absolute_import
import sys
import time
import binascii
import datetime
from datetime import datetime
import hashlib
import hmac
import logging
import salt.config
import re
# Import Salt libs
import salt.utils.xmlutil as xml
@@ -53,6 +55,7 @@ __SecretAccessKey__ = ''
__Token__ = ''
__Expiration__ = ''
__Location__ = ''
__AssumeCache__ = {}
def creds(provider):
@@ -70,7 +73,7 @@ def creds(provider):
if provider['id'] == IROLE_CODE or provider['key'] == IROLE_CODE:
# Check to see if we have cache credentials that are still good
if __Expiration__ != '':
timenow = datetime.datetime.utcnow()
timenow = datetime.utcnow()
timestamp = timenow.strftime('%Y-%m-%dT%H:%M:%SZ')
if timestamp < __Expiration__:
# Current timestamp less than expiration fo cached credentials
@@ -114,7 +117,7 @@ def sig2(method, endpoint, params, provider, aws_api_version):
http://docs.aws.amazon.com/general/latest/gr/signature-version-2.html
'''
timenow = datetime.datetime.utcnow()
timenow = datetime.utcnow()
timestamp = timenow.strftime('%Y-%m-%dT%H:%M:%SZ')
# Retrieve access credentials from meta-data, or use provided
@@ -147,9 +150,59 @@ def sig2(method, endpoint, params, provider, aws_api_version):
return params_with_headers
def assumed_creds(prov_dict, role_arn, location=None):
valid_session_name_re = re.compile("[^a-z0-9A-Z+=,.@-]")
now = (datetime.utcnow() - datetime(1970, 1, 1)).total_seconds()
for key, creds in list(__AssumeCache__.items()):
if (creds["Expiration"] - now) <= 120:
del __AssumeCache__[key]
if role_arn in __AssumeCache__:
c = __AssumeCache__[role_arn]
return c["AccessKeyId"], c["SecretAccessKey"], c["SessionToken"]
version = "2011-06-15"
session_name = valid_session_name_re.sub('', salt.config.get_id({"root_dir": None})[0])[0:63]
headers, requesturl = sig4(
'GET',
'sts.amazonaws.com',
params={
"Version": version,
"Action": "AssumeRole",
"RoleSessionName": session_name,
"RoleArn": role_arn,
"Policy": '{"Version":"2012-10-17","Statement":[{"Sid":"Stmt1", "Effect":"Allow","Action":"*","Resource":"*"}]}',
"DurationSeconds": "3600"
},
aws_api_version=version,
data='',
uri='/',
prov_dict=prov_dict,
product='sts',
location=location,
requesturl="https://sts.amazonaws.com/"
)
headers["Accept"] = "application/json"
result = requests.request('GET', requesturl, headers=headers,
data='',
verify=True)
if result.status_code >= 400:
LOG.info('AssumeRole response: {0}'.format(result.content))
result.raise_for_status()
resp = result.json()
data = resp["AssumeRoleResponse"]["AssumeRoleResult"]["Credentials"]
__AssumeCache__[role_arn] = data
return data["AccessKeyId"], data["SecretAccessKey"], data["SessionToken"]
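`assumed_creds` caches STS credentials per role ARN and evicts entries within 120 seconds of expiry before reuse. The eviction step can be sketched with a plain dict and made-up epoch values; note that a plain dict is mutated with `del`, not a `.delete()` method:

```python
def evict_expiring(cache, now, margin=120):
    """Drop cached credentials that expire within `margin` seconds."""
    for role_arn in list(cache):          # copy keys: we mutate the dict
        if cache[role_arn]['Expiration'] - now <= margin:
            del cache[role_arn]
    return cache

cache = {
    'arn:aws:iam::111:role/fresh': {'Expiration': 5000.0},
    'arn:aws:iam::111:role/stale': {'Expiration': 1100.0},
}
evict_expiring(cache, now=1000.0)
```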
def sig4(method, endpoint, params, prov_dict,
aws_api_version=DEFAULT_AWS_API_VERSION, location=None,
product='ec2', uri='/', requesturl=None, data='', headers=None):
product='ec2', uri='/', requesturl=None, data='', headers=None,
role_arn=None):
'''
Sign a query against AWS services using Signature Version 4 Signing
Process. This is documented at:
@@ -158,10 +211,13 @@ def sig4(method, endpoint, params, prov_dict,
http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
'''
timenow = datetime.datetime.utcnow()
timenow = datetime.utcnow()
# Retrieve access credentials from meta-data, or use provided
access_key_id, secret_access_key, token = creds(prov_dict)
if role_arn is None:
access_key_id, secret_access_key, token = creds(prov_dict)
else:
access_key_id, secret_access_key, token = assumed_creds(prov_dict, role_arn, location=location)
if location is None:
location = get_region_from_metadata()


@ -493,6 +493,9 @@ def bootstrap(vm_, opts):
deploy_kwargs['use_winrm'] = salt.config.get_cloud_config_value(
'use_winrm', vm_, opts, default=False
)
deploy_kwargs['winrm_port'] = salt.config.get_cloud_config_value(
'winrm_port', vm_, opts, default=5986
)
# Store what was used to the deploy the VM
event_kwargs = copy.deepcopy(deploy_kwargs)
@ -841,6 +844,7 @@ def wait_for_winrm(host, port, username, password, timeout=900):
host, port, trycount
)
)
time.sleep(1)
def validate_windows_cred(host,
@ -965,6 +969,7 @@ def deploy_windows(host,
opts=None,
master_sign_pub_file=None,
use_winrm=False,
winrm_port=5986,
**kwargs):
'''
Copy the install files to a remote Windows box, and execute them
@ -989,7 +994,7 @@ def deploy_windows(host,
winrm_session = None
if HAS_WINRM and use_winrm:
winrm_session = wait_for_winrm(host=host, port=5986,
winrm_session = wait_for_winrm(host=host, port=winrm_port,
username=username, password=password,
timeout=port_timeout * 60)
if winrm_session is not None:


@ -298,6 +298,30 @@ class GitProvider(object):
_check_ref(ret, base_ref, rname)
return ret
def check_lock(self):
'''
Used by the provider-specific fetch() function to check the existence
of an update lock, and set the lock if not present. If the lock exists
already, or if there was a problem setting the lock, this function
returns False. If the lock was successfully set, return True.
'''
if os.path.exists(self.lockfile):
log.warning(
'Update lockfile is present for {0} remote \'{1}\', '
'skipping. If this warning persists, it is possible that the '
'update process was interrupted. Removing {2} or running '
'\'salt-run cache.clear_git_lock {0}\' will allow updates to '
'continue for this remote.'
.format(self.role, self.id, self.lockfile)
)
return False
errors = self.lock()[-1]
if errors:
log.error('Unable to set update lock for {0} remote \'{1}\', '
'skipping.'.format(self.role, self.id))
return False
return True
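The docstring above describes the lock protocol: bail out if the lockfile exists, otherwise create it. A standalone sketch of the same idea using an atomic ``O_CREAT | O_EXCL`` open, which also closes the race window between the existence test and the create (names are illustrative, not Salt's):

```python
import errno
import os

def try_set_update_lock(lockfile):
    """Return True if the lock was acquired, False if it is already held."""
    try:
        fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as exc:
        if exc.errno == errno.EEXIST:
            # Another update holds the lock, or a previous run was interrupted
            # and left the lockfile behind.
            return False
        raise
    os.close(fd)
    return True
```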
def check_root(self):
'''
Check if the relative root path exists in the checked-out copy of the
@ -347,7 +371,10 @@ class GitProvider(object):
else:
_add_error(failed, exc)
else:
msg = 'Removed lock for {0}'.format(self.url)
msg = 'Removed lock for {0} remote \'{1}\''.format(
self.role,
self.id
)
log.debug(msg)
success.append(msg)
return success, failed
@ -365,10 +392,13 @@ class GitProvider(object):
except (IOError, OSError) as exc:
msg = ('Unable to set update lock for {0} ({1}): {2} '
.format(self.url, self.lockfile, exc))
log.debug(msg)
log.error(msg)
failed.append(msg)
else:
msg = 'Set lock for {0}'.format(self.url)
msg = 'Set lock for {0} remote \'{1}\''.format(
self.role,
self.id
)
log.debug(msg)
success.append(msg)
return success, failed
@ -579,13 +609,44 @@ class GitPython(GitProvider):
Fetch the repo. If the local copy was updated, return True. If the
local copy was already up-to-date, return False.
'''
if not self.check_lock():
return False
origin = self.repo.remotes[0]
try:
fetch_results = origin.fetch()
except AssertionError:
fetch_results = origin.fetch()
new_objs = False
for fetchinfo in fetch_results:
if fetchinfo.old_commit is not None:
log.debug(
'{0} has updated \'{1}\' for remote \'{2}\' '
'from {3} to {4}'.format(
self.role,
fetchinfo.name,
self.id,
fetchinfo.old_commit.hexsha[:7],
fetchinfo.commit.hexsha[:7]
)
)
new_objs = True
elif fetchinfo.flags in (fetchinfo.NEW_TAG,
fetchinfo.NEW_HEAD):
log.debug(
'{0} has fetched new {1} \'{2}\' for remote \'{3}\' '
.format(
self.role,
'tag' if fetchinfo.flags == fetchinfo.NEW_TAG
else 'head',
fetchinfo.name,
self.id
)
)
new_objs = True
cleaned = self.clean_stale_refs()
return bool(fetch_results or cleaned)
return bool(new_objs or cleaned)
def file_list(self, tgt_env):
'''
@ -697,6 +758,9 @@ class Pygit2(GitProvider):
def __init__(self, opts, remote, per_remote_defaults,
override_params, cache_root, role='gitfs'):
self.provider = 'pygit2'
self.use_callback = \
distutils.version.LooseVersion(pygit2.__version__) >= \
distutils.version.LooseVersion('0.23.2')
GitProvider.__init__(self, opts, remote, per_remote_defaults,
override_params, cache_root, role)
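The ``use_callback`` gate above compares the installed pygit2 version against 0.23.2, the release where ``Remote.fetch()`` started taking a ``callbacks`` keyword instead of a ``credentials`` attribute. The same cutoff check can be sketched without ``distutils`` (an alternative to the ``LooseVersion`` comparison used above; helper names are illustrative):

```python
def _ver_tuple(version_str):
    """Parse a dotted version like '0.23.2' into a comparable tuple of ints,
    stopping at the first component with no digits."""
    parts = []
    for piece in version_str.split('.'):
        digits = ''.join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def supports_fetch_callbacks(pygit2_version):
    """True when pygit2 >= 0.23.2, where fetch() accepts ``callbacks``."""
    return _ver_tuple(pygit2_version) >= (0, 23, 2)
```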
@ -935,24 +999,42 @@ class Pygit2(GitProvider):
Fetch the repo. If the local copy was updated, return True. If the
local copy was already up-to-date, return False.
'''
if not self.check_lock():
return False
origin = self.repo.remotes[0]
refs_pre = self.repo.listall_references()
fetch_kwargs = {}
if self.credentials is not None:
origin.credentials = self.credentials
if self.use_callback:
fetch_kwargs['callbacks'] = \
pygit2.RemoteCallbacks(credentials=self.credentials)
else:
origin.credentials = self.credentials
try:
fetch_results = origin.fetch()
fetch_results = origin.fetch(**fetch_kwargs)
except GitError as exc:
# Using exc.__str__() here to avoid deprecation warning
# when referencing exc.message
if 'unsupported url protocol' in exc.__str__().lower() \
exc_str = exc.__str__().lower()
if 'unsupported url protocol' in exc_str \
and isinstance(self.credentials, pygit2.Keypair):
log.error(
'Unable to fetch SSH-based {0} remote \'{1}\'. '
'libgit2 must be compiled with libssh2 to support '
'SSH authentication.'.format(self.role, self.id)
)
return False
raise
elif 'authentication required but no callback set' in exc_str:
log.error(
'{0} remote \'{1}\' requires authentication, but no '
'authentication configured'.format(self.role, self.id)
)
else:
log.error(
'Error occurred fetching {0} remote \'{1}\': {2}'.format(
self.role, self.id, exc
)
)
return False
try:
# pygit2.Remote.fetch() returns a dict in pygit2 < 0.21.0
received_objects = fetch_results['received_objects']
@ -1092,6 +1174,7 @@ class Pygit2(GitProvider):
authentication.
'''
self.credentials = None
if os.path.isabs(self.url):
# If the URL is an absolute file path, there is no authentication.
return True
@ -1257,6 +1340,8 @@ class Dulwich(GitProvider): # pylint: disable=abstract-method
Fetch the repo. If the local copy was updated, return True. If the
local copy was already up-to-date, return False.
'''
if not self.check_lock():
return False
# origin is just a url here, there is no origin object
origin = self.url
client, path = \
@ -1733,24 +1818,6 @@ class GitBase(object):
'''
changed = False
for repo in self.remotes:
if os.path.exists(repo.lockfile):
log.warning(
'Update lockfile is present for {0} remote \'{1}\', '
'skipping. If this warning persists, it is possible that '
'the update process was interrupted. Removing {2} or '
'running \'salt-run cache.clear_git_lock {0}\' will '
'allow updates to continue for this remote.'
.format(self.role, repo.id, repo.lockfile)
)
continue
_, errors = repo.lock()
if errors:
log.error('Unable to set update lock for {0} remote \'{1}\', '
'skipping.'.format(self.role, repo.id))
continue
log.debug(
'{0} is fetching from \'{1}\''.format(self.role, repo.id)
)
try:
if repo.fetch():
# We can't just use the return value from repo.fetch()
@ -1760,7 +1827,6 @@ class GitBase(object):
# this value and make it incorrect.
changed = True
except Exception as exc:
# Do not use {0} in the error message, as exc is not a string
log.error(
'Exception \'{0}\' caught while fetching {1} remote '
'\'{2}\''.format(exc, self.role, repo.id),


@ -585,8 +585,9 @@ def get_ca_bundle(opts=None):
# Check Salt first
for salt_root in file_roots.get('base', []):
for path in ('cacert.pem', 'ca-bundle.crt'):
if os.path.exists(path):
return path
cert_path = os.path.join(salt_root, path)
if os.path.exists(cert_path):
return cert_path
locations = (
# Debian has paths that often exist on other distros
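The fix above joins each candidate filename onto the file-roots directory before testing for existence; the original code tested the bare filename, which resolved against the current working directory instead of the Salt fileserver root. The corrected search pattern can be sketched as:

```python
import os

def first_existing(root, names):
    """Return the first ``os.path.join(root, name)`` that exists, else None."""
    for name in names:
        candidate = os.path.join(root, name)
        if os.path.exists(candidate):
            return candidate
    return None
```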


@ -41,11 +41,12 @@ NOVACLIENT_MINVER = '2.6.1'
def check_nova():
novaclient_ver = LooseVersion(novaclient.__version__)
min_ver = LooseVersion(NOVACLIENT_MINVER)
if novaclient_ver >= min_ver:
return HAS_NOVA
log.debug('Newer novaclient version required. Minimum: 2.6.1')
if HAS_NOVA:
novaclient_ver = LooseVersion(novaclient.__version__)
min_ver = LooseVersion(NOVACLIENT_MINVER)
if novaclient_ver >= min_ver:
return HAS_NOVA
log.debug('Newer novaclient version required. Minimum: 2.6.1')
return False


@ -29,7 +29,7 @@ def query(key, keyid, method='GET', params=None, headers=None,
requesturl=None, return_url=False, bucket=None, service_url=None,
path='', return_bin=False, action=None, local_file=None,
verify_ssl=True, full_headers=False, kms_keyid=None,
location=None):
location=None, role_arn=None):
'''
Perform a query against an S3-like API. This function requires that a
secret key and the id for that key are passed in. For instance:
@ -111,6 +111,7 @@ def query(key, keyid, method='GET', params=None, headers=None,
data=data,
uri='/{0}'.format(path),
prov_dict={'id': keyid, 'key': key},
role_arn=role_arn,
location=location,
product='s3',
requesturl=requesturl,


@ -14,6 +14,7 @@ ESX, ESXi, and vCenter servers.
from __future__ import absolute_import
import atexit
import logging
import time
# Import Salt Libs
from salt.exceptions import SaltSystemExit
@ -377,3 +378,43 @@ def list_vapps(service_instance):
The Service Instance Object from which to obtain vApps.
'''
return list_objects(service_instance, vim.VirtualApp)
def wait_for_task(task, instance_name, task_type, sleep_seconds=1, log_level='debug'):
'''
Waits for a task to be completed.
task
The task to wait for.
instance_name
The name of the ESXi host, vCenter Server, or Virtual Machine that the task is being run on.
task_type
The type of task being performed. Useful information for debugging purposes.
sleep_seconds
The number of seconds to wait before querying the task again. Defaults to ``1`` second.
log_level
The level at which to log task information. Default is ``debug``, but ``info`` is also supported.
'''
time_counter = 0
start_time = time.time()
while task.info.state == 'running' or task.info.state == 'queued':
if time_counter % sleep_seconds == 0:
msg = '[ {0} ] Waiting for {1} task to finish [{2} s]'.format(instance_name, task_type, time_counter)
if log_level == 'info':
log.info(msg)
else:
log.debug(msg)
time.sleep(1.0 - ((time.time() - start_time) % 1.0))
time_counter += 1
if task.info.state == 'success':
msg = '[ {0} ] Successfully completed {1} task in {2} seconds'.format(instance_name, task_type, time_counter)
if log_level == 'info':
log.info(msg)
else:
log.debug(msg)
else:
raise Exception(task.info.error)
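The loop above polls ``task.info.state`` until it leaves the ``running``/``queued`` states. The same pattern, reduced to a self-contained helper that can be exercised without a vSphere connection (a sketch; ``poll`` stands in for reading the task state):

```python
import time

def wait_until_done(poll, timeout=10, interval=0.5):
    """Call ``poll()`` until it returns a state other than 'running' or
    'queued', or raise once ``timeout`` seconds have elapsed."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = poll()
        if state not in ('running', 'queued'):
            return state
        time.sleep(interval)
    raise RuntimeError('timed out waiting for task')
```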


@ -17,6 +17,9 @@ SYS_TMP_DIR = tempfile.gettempdir()
TMP = os.path.join(SYS_TMP_DIR, 'salt-tests-tmpdir')
def get_salt_temp_dir():
return TMP
def get_salt_temp_dir_for_path(*path):
return os.path.join(TMP, *path)


@ -0,0 +1,11 @@
#!pydsl|stateconf -ps
include('pydsl.xxx')
yyy = include('pydsl.yyy')
# ensure states in xxx are run first, then those in yyy and then those in aaa last.
extend(state('pydsl.yyy::start').stateconf.require(stateconf='pydsl.xxx::goal'))
extend(state('.start').stateconf.require(stateconf='pydsl.yyy::goal'))
extend(state('pydsl.yyy::Y2').cmd.run('echo Y2 extended >> {0}'.format('/tmp/output')))
__pydsl__.set(ordered=True)
yyy.hello('red', 1)
yyy.hello('green', 2)
yyy.hello('blue', 3)


@ -0,0 +1,23 @@
#!stateconf -os yaml . jinja
include:
- pydsl.yyy
extend:
pydsl.yyy::start:
stateconf.set:
- require:
- stateconf: .goal
pydsl.yyy::Y1:
cmd.run:
- name: 'echo Y1 extended >> /tmp/output'
.X1:
cmd.run:
- name: echo X1 >> /tmp/output
- cwd: /
.X2:
cmd.run:
- name: echo X2 >> /tmp/output
- cwd: /
.X3:
cmd.run:
- name: echo X3 >> /tmp/output
- cwd: /


@ -0,0 +1,8 @@
#!pydsl|stateconf -ps
include('pydsl.xxx')
__pydsl__.set(ordered=True)
state('.Y1').cmd.run('echo Y1 >> {0}'.format('/tmp/output'), cwd='/')
state('.Y2').cmd.run('echo Y2 >> {0}'.format('/tmp/output'), cwd='/')
state('.Y3').cmd.run('echo Y3 >> {0}'.format('/tmp/output'), cwd='/')
def hello(color, number):
state(color).cmd.run('echo hello '+color+' '+str(number)+' >> {0}'.format('/tmp/output'), cwd='/')


@ -0,0 +1,7 @@
include:
- requisites.prereq_sls_infinite_recursion_2
A:
test.succeed_without_changes:
- name: A
- prereq:
- sls: requisites.prereq_sls_infinite_recursion_2


@ -0,0 +1,4 @@
B:
test.succeed_without_changes:
- name: B


@ -60,9 +60,10 @@ class LinuxAclModuleTest(integration.ModuleCase,
def test_getfacl_w_single_file_without_acl(self):
ret = self.run_function('acl.getfacl', arg=[self.myfile])
self.maxDiff = None
self.assertEqual(
ret,
{self.myfile: {'other': {'octal': 4, 'permissions': {'read': True, 'write': False, 'execute': False}},
{self.myfile: {'other': [{'': {'octal': 4, 'permissions': {'read': True, 'write': False, 'execute': False}}}],
'user': [{'root': {'octal': 6, 'permissions': {'read': True, 'write': True, 'execute': False}}}],
'group': [{'root': {'octal': 4, 'permissions': {'read': True, 'write': False, 'execute': False}}}],
'comment': {'owner': 'root', 'group': 'root', 'file': self.myfile}}}


@ -866,6 +866,9 @@ class StateModuleTest(integration.ModuleCase,
# ret,
# ['A recursive requisite was found, SLS "requisites.prereq_recursion_error" ID "B" ID "A"']
#)
def test_infinite_recursion_sls_prereq(self):
ret = self.run_function('state.sls', mods='requisites.prereq_sls_infinite_recursion')
self.assertSaltTrueReturn(ret)
def test_requisites_use(self):
'''


@ -0,0 +1 @@
# -*- coding: utf-8 -*-


@ -0,0 +1,50 @@
# -*- coding: utf-8 -*-
# Import Python libs
from __future__ import absolute_import
import os
import textwrap
# Import Salt Testing libs
from salttesting.helpers import ensure_in_syspath
ensure_in_syspath('../')
# Import Salt libs
import integration
import salt.utils
class PyDSLRendererIncludeTestCase(integration.ModuleCase):
def test_rendering_includes(self):
'''
This test is currently hard-coded to /tmp to work around an apparent
inability to load custom modules inside the pydsl renderers. This
is a FIXME.
'''
try:
self.run_function('state.sls', ['pydsl.aaa'])
expected = textwrap.dedent('''\
X1
X2
X3
Y1 extended
Y2 extended
Y3
hello red 1
hello green 2
hello blue 3
''')
with salt.utils.fopen('/tmp/output', 'r') as f:
self.assertEqual(sorted(f.read()), sorted(expected))
finally:
os.remove('/tmp/output')
if __name__ == '__main__':
from integration import run_tests
tests = [PyDSLRendererIncludeTestCase]
run_tests(*tests, needs_daemon=True)


@ -121,6 +121,14 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
action='store_true',
help='Run salt/runners/*.py tests'
)
self.test_selection_group.add_option(
'-R',
'--renderers',
dest='renderers',
default=False,
action='store_true',
help='Run salt/renderers/*.py tests'
)
self.test_selection_group.add_option(
'-l',
'--loader',
@ -203,6 +211,7 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
self.options.unit,
self.options.state,
self.options.runners,
self.options.renderers,
self.options.loader,
self.options.name,
self.options.outputter,
@ -223,13 +232,15 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
self.options.shell, self.options.unit, self.options.state,
self.options.runners, self.options.loader, self.options.name,
self.options.outputter, self.options.cloud_provider_tests,
self.options.fileserver, self.options.wheel, self.options.api)):
self.options.fileserver, self.options.wheel, self.options.api,
self.options.renderers)):
self.options.module = True
self.options.cli = True
self.options.client = True
self.options.shell = True
self.options.unit = True
self.options.runners = True
self.options.renderers = True
self.options.state = True
self.options.loader = True
self.options.outputter = True
@ -348,6 +359,7 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
if (self.options.unit or named_unit_test) and not \
(self.options.runners or
self.options.renderers or
self.options.state or
self.options.module or
self.options.cli or
@ -379,7 +391,7 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
if not any([self.options.cli, self.options.client, self.options.module,
self.options.runners, self.options.shell, self.options.state,
self.options.loader, self.options.outputter, self.options.name,
self.options.cloud_provider_tests, self.options.api,
self.options.cloud_provider_tests, self.options.api, self.options.renderers,
self.options.fileserver, self.options.wheel]):
return status
@ -414,6 +426,8 @@ class SaltTestsuiteParser(SaltCoverageTestingParser):
status.append(self.run_integration_suite('cloud/providers', 'Cloud Provider'))
if self.options.api:
status.append(self.run_integration_suite('netapi', 'NetAPI'))
if self.options.renderers:
status.append(self.run_integration_suite('renderers', 'Renderers'))
return status
def run_unit_tests(self):


@ -33,7 +33,8 @@ class S3TestCase(TestCase):
'''
with patch.object(s3, '_get_key',
return_value=('key', 'keyid', 'service_url',
'verify_ssl', 'kms_keyid', 'location')):
'verify_ssl', 'kms_keyid', 'location',
'role_arn')):
with patch.object(salt.utils.s3, 'query', return_value='A'):
self.assertEqual(s3.delete('bucket'), 'A')
@ -44,7 +45,8 @@ class S3TestCase(TestCase):
'''
with patch.object(s3, '_get_key',
return_value=('key', 'keyid', 'service_url',
'verify_ssl', 'kms_keyid', 'location')):
'verify_ssl', 'kms_keyid', 'location',
'role_arn')):
with patch.object(salt.utils.s3, 'query', return_value='A'):
self.assertEqual(s3.get(), 'A')
@ -54,7 +56,8 @@ class S3TestCase(TestCase):
'''
with patch.object(s3, '_get_key',
return_value=('key', 'keyid', 'service_url',
'verify_ssl', 'kms_keyid', 'location')):
'verify_ssl', 'kms_keyid', 'location',
'role_arn')):
with patch.object(salt.utils.s3, 'query', return_value='A'):
self.assertEqual(s3.head('bucket'), 'A')
@ -64,7 +67,8 @@ class S3TestCase(TestCase):
'''
with patch.object(s3, '_get_key',
return_value=('key', 'keyid', 'service_url',
'verify_ssl', 'kms_keyid', 'location')):
'verify_ssl', 'kms_keyid', 'location',
'role_arn')):
with patch.object(salt.utils.s3, 'query', return_value='A'):
self.assertEqual(s3.put('bucket'), 'A')


@ -445,102 +445,6 @@ class PyDSLRendererTestCase(CommonTestCaseBoilerplate):
shutil.rmtree(dirpath, ignore_errors=True)
class PyDSLRendererIncludeTestCase(CommonTestCaseBoilerplate):
def test_rendering_includes(self):
dirpath = tempfile.mkdtemp(dir=integration.SYS_TMP_DIR)
if not os.path.isdir(dirpath):
self.skipTest(
'The temporary directory \'{0}\' was not created'.format(
dirpath
)
)
output = os.path.join(dirpath, 'output')
try:
write_to(os.path.join(dirpath, 'aaa.sls'), textwrap.dedent('''\
#!pydsl|stateconf -ps
include('xxx')
yyy = include('yyy')
# ensure states in xxx are run first, then those in yyy and then those in aaa last.
extend(state('yyy::start').stateconf.require(stateconf='xxx::goal'))
extend(state('.start').stateconf.require(stateconf='yyy::goal'))
extend(state('yyy::Y2').cmd.run('echo Y2 extended >> {0}'))
__pydsl__.set(ordered=True)
yyy.hello('red', 1)
yyy.hello('green', 2)
yyy.hello('blue', 3)
'''.format(output)))
write_to(os.path.join(dirpath, 'xxx.sls'), textwrap.dedent('''\
#!stateconf -os yaml . jinja
include:
- yyy
extend:
yyy::start:
stateconf.set:
- require:
- stateconf: .goal
yyy::Y1:
cmd.run:
- name: 'echo Y1 extended >> {0}'
.X1:
cmd.run:
- name: echo X1 >> {1}
- cwd: /
.X2:
cmd.run:
- name: echo X2 >> {2}
- cwd: /
.X3:
cmd.run:
- name: echo X3 >> {3}
- cwd: /
'''.format(output, output, output, output)))
write_to(os.path.join(dirpath, 'yyy.sls'), textwrap.dedent('''\
#!pydsl|stateconf -ps
include('xxx')
__pydsl__.set(ordered=True)
state('.Y1').cmd.run('echo Y1 >> {0}', cwd='/')
state('.Y2').cmd.run('echo Y2 >> {1}', cwd='/')
state('.Y3').cmd.run('echo Y3 >> {2}', cwd='/')
def hello(color, number):
state(color).cmd.run('echo hello '+color+' '+str(number)+' >> {3}', cwd='/')
'''.format(output, output, output, output)))
self.state_highstate({'base': ['aaa']}, dirpath)
expected = textwrap.dedent('''\
X1
X2
X3
Y1 extended
Y2 extended
Y3
hello red 1
hello green 2
hello blue 3
''')
with salt.utils.fopen(output, 'r') as f:
self.assertEqual(sorted(f.read()), sorted(expected))
finally:
shutil.rmtree(dirpath, ignore_errors=True)
def write_to(fpath, content):
with salt.utils.fopen(fpath, 'w') as f:
f.write(content)
@ -548,5 +452,5 @@ def write_to(fpath, content):
if __name__ == '__main__':
from integration import run_tests
tests = [PyDSLRendererTestCase, PyDSLRendererIncludeTestCase]
tests = [PyDSLRendererTestCase]
run_tests(*tests, needs_daemon=False)


@ -83,8 +83,6 @@ class SshAuthTestCase(TestCase):
'comment': ''}
mock = MagicMock(side_effect=['User authorized keys file not present',
'User authorized keys file not present',
'User authorized keys file not present',
'Key removed'])
mock_up = MagicMock(side_effect=['update', 'updated'])
with patch.dict(ssh_auth.__salt__, {'ssh.rm_auth_key': mock,