Merge branch 'oxygen' into 'develop'

Conflicts:
 - salt/modules/swarm.py
rallytime 2018-01-14 10:47:45 -05:00
commit d76813d30d
No known key found for this signature in database
GPG Key ID: E8F1A4B90D0DEA19
255 changed files with 3931 additions and 1626 deletions


@ -11,7 +11,7 @@ profile=no
# Add files or directories to the blacklist. They should be base names, not
# paths.
ignore=CVS
ignore=CVS,ext
# Pickle collected data for later comparisons.
persistent=yes


@ -8,7 +8,7 @@
# Add files or directories to the blacklist. They should be base names, not
# paths.
ignore=CVS
ignore=CVS,ext
# Pickle collected data for later comparisons.
persistent=no


@ -25,6 +25,7 @@ Salt Table of Contents
topics/api
topics/topology/index
topics/cache/index
topics/slots/index
topics/windows/index
topics/development/index
topics/releases/index


@ -1011,6 +1011,38 @@ The TCP port for ``mworkers`` to connect to on the master.
tcp_master_workers: 4515
.. conf_master:: auth_events
``auth_events``
--------------------
.. versionadded:: 2017.7.3
Default: ``True``
Determines whether the master will fire authentication events.
:ref:`Authentication events <event-master_auth>` are fired when
a minion performs an authentication check with the master.
.. code-block:: yaml
auth_events: True
.. conf_master:: minion_data_cache_events
``minion_data_cache_events``
----------------------------
.. versionadded:: 2017.7.3
Default: ``True``
Determines whether the master will fire minion data cache events. Minion data
cache events are fired when a minion requests a minion data cache refresh.
.. code-block:: yaml
minion_data_cache_events: True
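Both options simply gate ``fire_event`` calls in the master and auth code paths. The pattern can be sketched with a stub event bus (illustrative names only, not the real Salt API):

```python
class StubEventBus:
    """Stand-in for Salt's event bus; records fired events instead of sending them."""
    def __init__(self):
        self.fired = []

    def fire_event(self, data, tag):
        self.fired.append((tag, data))


def fire_auth_event(opts, bus, key):
    # Mirrors the gating added in this release: the event is only fired
    # when the master config option is explicitly True.
    if opts.get('auth_events') is True:
        bus.fire_event({'key': key}, 'salt/auth/creds')


bus = StubEventBus()
fire_auth_event({'auth_events': True}, bus, 'minion01')   # fires
fire_auth_event({'auth_events': False}, bus, 'minion01')  # suppressed
print(len(bus.fired))  # 1
```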
.. _salt-ssh-configuration:


@ -7,6 +7,8 @@ Salt Master Events
These events are fired on the Salt Master event bus. This list is **not**
comprehensive.
.. _event-master_auth:
Authentication events
=====================


@ -4,3 +4,19 @@ Salt 2016.11.9 Release Notes
Version 2016.11.9 is a bugfix release for :ref:`2016.11.0 <release-2016-11-0>`.
Windows
=======
Execution module pkg
--------------------
Significant changes (PR #43708, damon-atkins) have been made to the pkg execution module. Users should test this release against their existing package sls definition files.
- ``pkg.list_available`` no longer defaults to refreshing the winrepo meta database.
- ``pkg.install`` without a ``version`` parameter no longer upgrades software if the software is already installed. Use ``pkg.install version=latest`` or in a state use ``pkg.latest`` to get the old behavior.
- Documentation for the execution module was updated to match the style of newer versions, with some corrections as well.
- All install/remove commands are now prefixed with the cmd.exe shell, and cmdmod is called with a command-line string instead of a list. Some sls files in saltstack/salt-winrepo-ng expected the commands to be prefixed with cmd.exe (i.e. the use of ``&``).
- Some execution module function results now behave more like their Unix/Linux counterparts.
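For users who relied on the old always-upgrade behavior of ``pkg.install``, the state form now looks like this (a sketch; the package name is illustrative):

```yaml
install-latest-firefox:
  pkg.latest:
    - name: firefox
```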
Execution module cmdmod
-----------------------
Windows cmdmod forcing ``cmd`` to be a list (issue #43522) was resolved by "cmdmod: Don't list-ify string commands on Windows" (PR #43807). On Linux/Unix, OS commands and arguments require a list, and Windows was being treated the same way. Windows requires commands and arguments to be a string, which this PR fixes.


@ -439,7 +439,7 @@ The new grains added are:
* ``fc_wwn``: Show all fibre channel world wide port names for a host
* ``iscsi_iqn``: Show the iSCSI IQN name for a host
* ``swap_total``: Show the configured swap_total for Linux, *BSD, OS X and Solaris/SunOS
* ``swap_total``: Show the configured swap_total for Linux, \*BSD, OS X and Solaris/SunOS
Salt Minion Autodiscovery
-------------------------
@ -1457,6 +1457,24 @@ thread of states because of a failure.
The ``onfail_any`` requisite is applied in the same way as ``require_any`` and ``watch_any``:
Basic Slots support in states compiler
--------------------------------------
Slots extend the state syntax and allow you to do things right before the
state function is executed, so you can make a decision at the last moment
before a state runs.
Slot syntax looks similar to a simple Python function call. Here is a simple example:
.. code-block:: yaml
copy-some-file:
file.copy:
- name: __slot__:salt:test.echo(text=/tmp/some_file)
- source: __slot__:salt:test.echo(/etc/hosts)
Read more :ref:`here <slots-subsystem>`.
Deprecations
------------


@ -0,0 +1,53 @@
.. _slots-subsystem:
=====
Slots
=====
.. versionadded:: Oxygen
.. note:: This functionality is under development and could change in
   future releases.
It is often useful to store the results of a command during the course of
an execution. Salt Slots are designed to allow you to store this information
and use it later during the :ref:`highstate <running-highstate>` or other job
execution.
Slots extend the state syntax and allow you to do things right before the
state function is executed, so you can make a decision at the last moment
before a state runs.
Execution functions
-------------------
.. note:: Using execution module return data as state values is the first
   step of Slots development. Other functionality is under development.
Slots allow you to use the return from a remote-execution function as an
argument value in states.
Slot syntax looks similar to a simple Python function call.
.. code-block:: text
__slot__:salt:<module>.<function>(<args>, ..., <kwargs...>, ...)
There are also some specifics in the syntax that come from the nature of
execution functions and a desire to simplify the user experience. The first
is that you don't need to quote the strings passed to slot functions. The
second is that all arguments are handled as strings.
Here is a simple example:
.. code-block:: yaml
copy-some-file:
file.copy:
- name: __slot__:salt:test.echo(text=/tmp/some_file)
- source: __slot__:salt:test.echo(/etc/hosts)
This will execute the :py:func:`test.echo <salt.modules.test.echo>` execution
function right before calling the state. The functions in the example will
return the strings `/tmp/some_file` and `/etc/hosts`, which are used as the
target and source arguments of the `file.copy` state function.


@ -6,6 +6,7 @@ After=network.target
[Service]
User=salt
Type=simple
Environment=SHELL=/bin/bash
LimitNOFILE=8192
ExecStart=/usr/bin/salt-api
TimeoutStopSec=3


@ -38,6 +38,7 @@ import salt.utils.dictupdate
import salt.utils.files
import salt.utils.verify
import salt.utils.yaml
import salt.utils.user
import salt.syspaths
from salt.template import compile_template
@ -188,7 +189,7 @@ class CloudClient(object):
# Check the cache-dir exists. If not, create it.
v_dirs = [self.opts['cachedir']]
salt.utils.verify.verify_env(v_dirs, salt.utils.get_user())
salt.utils.verify.verify_env(v_dirs, salt.utils.user.get_user())
if pillars:
for name, provider in six.iteritems(pillars.pop('providers', {})):
@ -324,22 +325,22 @@ class CloudClient(object):
>>> client= salt.cloud.CloudClient(path='/etc/salt/cloud')
>>> client.profile('do_512_git', names=['minion01',])
{'minion01': {u'backups_active': 'False',
u'created_at': '2014-09-04T18:10:15Z',
u'droplet': {u'event_id': 31000502,
u'id': 2530006,
u'image_id': 5140006,
u'name': u'minion01',
u'size_id': 66},
u'id': '2530006',
u'image_id': '5140006',
u'ip_address': '107.XXX.XXX.XXX',
u'locked': 'True',
u'name': 'minion01',
u'private_ip_address': None,
u'region_id': '4',
u'size_id': '66',
u'status': 'new'}}
{'minion01': {'backups_active': 'False',
'created_at': '2014-09-04T18:10:15Z',
'droplet': {'event_id': 31000502,
'id': 2530006,
'image_id': 5140006,
'name': 'minion01',
'size_id': 66},
'id': '2530006',
'image_id': '5140006',
'ip_address': '107.XXX.XXX.XXX',
'locked': 'True',
'name': 'minion01',
'private_ip_address': None,
'region_id': '4',
'size_id': '66',
'status': 'new'}}
'''


@ -255,13 +255,13 @@ VALID_OPTS = {
'autoload_dynamic_modules': bool,
# Force the minion into a single environment when it fetches files from the master
'saltenv': six.string_types,
'saltenv': (type(None), six.string_types),
# Prevent saltenv from being overriden on the command line
'lock_saltenv': bool,
# Force the minion into a single pillar root when it fetches pillar data from the master
'pillarenv': six.string_types,
'pillarenv': (type(None), six.string_types),
# Make the pillarenv always match the effective saltenv
'pillarenv_from_saltenv': bool,
@ -269,7 +269,7 @@ VALID_OPTS = {
# Allows a user to provide an alternate name for top.sls
'state_top': six.string_types,
'state_top_saltenv': six.string_types,
'state_top_saltenv': (type(None), six.string_types),
# States to run when a minion starts up
'startup_states': six.string_types,
@ -405,7 +405,7 @@ VALID_OPTS = {
'log_level': six.string_types,
# The log level to log to a given file
'log_level_logfile': six.string_types,
'log_level_logfile': (type(None), six.string_types),
# The format to construct dates in log files
'log_datefmt': six.string_types,
@ -497,10 +497,10 @@ VALID_OPTS = {
'permissive_pki_access': bool,
# The passphrase of the master's private key
'key_pass': six.string_types,
'key_pass': (type(None), six.string_types),
# The passphrase of the master's private signing key
'signing_key_pass': six.string_types,
'signing_key_pass': (type(None), six.string_types),
# The path to a directory to pull in configuration file includes
'default_include': six.string_types,
@ -1027,8 +1027,8 @@ VALID_OPTS = {
'max_minions': int,
'username': six.string_types,
'password': six.string_types,
'username': (type(None), six.string_types),
'password': (type(None), six.string_types),
# Use zmq.SUSCRIBE to limit listening sockets to only process messages bound for them
'zmq_filtering': bool,
@ -1194,6 +1194,12 @@ VALID_OPTS = {
# Scheduler should be a dictionary
'schedule': dict,
# Whether to fire auth events
'auth_events': bool,
# Whether to fire Minion data cache refresh events
'minion_data_cache_events': bool,
# Enable calling ssh minions from the salt master
'enable_ssh_minions': bool,
}
@ -1356,7 +1362,7 @@ DEFAULT_MINION_OPTS = {
'mine_interval': 60,
'ipc_mode': _DFLT_IPC_MODE,
'ipc_write_buffer': _DFLT_IPC_WBUFFER,
'ipv6': None,
'ipv6': False,
'file_buffer_size': 262144,
'tcp_pub_port': 4510,
'tcp_pull_port': 4511,
@ -1832,6 +1838,8 @@ DEFAULT_MASTER_OPTS = {
'mapping': {},
},
'schedule': {},
'auth_events': True,
'minion_data_cache_events': True,
'enable_ssh': False,
'enable_ssh_minions': False,
}
@ -2008,8 +2016,10 @@ def _validate_opts(opts):
errors = []
err = ('Key \'{0}\' with value {1} has an invalid type of {2}, a {3} is '
'required for this value')
err = (
'Config option \'{0}\' with value {1} has an invalid type of {2}, a '
'{3} is required for this option'
)
for key, val in six.iteritems(opts):
if key in VALID_OPTS:
if val is None:
@ -2357,7 +2367,8 @@ def minion_config(path,
defaults=None,
cache_minion_id=False,
ignore_config_errors=True,
minion_id=None):
minion_id=None,
role='minion'):
'''
Reads in the minion configuration file and sets up special options
@ -2397,6 +2408,7 @@ def minion_config(path,
opts = apply_minion_config(overrides, defaults,
cache_minion_id=cache_minion_id,
minion_id=minion_id)
opts['__role'] = role
apply_sdb(opts)
_validate_opts(opts)
return opts
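The ``(type(None), six.string_types)`` entries above work because ``isinstance`` accepts a tuple of types, so an option may be either unset (``None``) or a string. A minimal sketch of that validation pattern, with ``str`` standing in for ``six.string_types`` on Python 3:

```python
# Minimal sketch of VALID_OPTS-style type checking: each option maps to a
# type or a tuple of types, and None is permitted by including NoneType.
VALID_OPTS = {
    'saltenv': (type(None), str),  # may be unset (None) or a string
    'lock_saltenv': bool,
}


def validate(opts):
    errors = []
    for key, val in opts.items():
        expected = VALID_OPTS.get(key)
        if expected is None:
            continue  # unknown options are not type-checked
        if not isinstance(val, expected):
            errors.append(
                "Config option '{0}' with value {1} has an invalid type of "
                "{2}, a {3} is required for this option".format(
                    key, val, type(val).__name__, expected))
    return errors


print(validate({'saltenv': None, 'lock_saltenv': True}))  # []
print(validate({'saltenv': 42}))                          # one error message
```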


@ -580,8 +580,9 @@ class AsyncAuth(object):
self._crypticle = Crypticle(self.opts, creds['aes'])
self._authenticate_future.set_result(True) # mark the sign-in as complete
# Notify the bus about creds change
event = salt.utils.event.get_event(self.opts.get('__role'), opts=self.opts, listen=False)
event.fire_event({'key': key, 'creds': creds}, salt.utils.event.tagify(prefix='auth', suffix='creds'))
if self.opts.get('auth_events') is True:
event = salt.utils.event.get_event(self.opts.get('__role'), opts=self.opts, listen=False)
event.fire_event({'key': key, 'creds': creds}, salt.utils.event.tagify(prefix='auth', suffix='creds'))
@tornado.gen.coroutine
def sign_in(self, timeout=60, safe=True, tries=1, channel=None):


@ -400,7 +400,7 @@ class SaltRaetRoadStackJoiner(ioflo.base.deeding.Deed):
kind=kinds.applKinds.master))
except gaierror as ex:
log.warning("Unable to connect to master %s: %s", mha, ex)
if self.opts.value.get(u'master_type') not in (u'failover', u'distributed'):
if self.opts.value.get('master_type') not in ('failover', 'distributed'):
raise ex
if not stack.remotes:
raise ex


@ -55,12 +55,12 @@ class SaltDummyPublisher(ioflo.base.deeding.Deed):
'retcode': 0,
'success': True,
'cmd': '_return',
'fun': u'test.ping',
'fun': 'test.ping',
'id': 'silver'
},
'route': {
'src': (u'silver_minion', u'jobber50e73ccefd052167c7', 'jid_ret'),
'dst': (u'silver_master_master', None, 'remote_cmd')
'src': ('silver_minion', 'jobber50e73ccefd052167c7', 'jid_ret'),
'dst': ('silver_master_master', None, 'remote_cmd')
}
}


@ -716,7 +716,8 @@ class RemoteFuncs(object):
self.cache.store('minions/{0}'.format(load['id']),
'data',
{'grains': load['grains'], 'pillar': data})
self.event.fire_event('Minion data cache refresh', salt.utils.event.tagify(load['id'], 'refresh', 'minion'))
if self.opts.get('minion_data_cache_events') is True:
self.event.fire_event('Minion data cache refresh', salt.utils.event.tagify(load['id'], 'refresh', 'minion'))
return data
def _minion_event(self, load):


@ -450,10 +450,10 @@ class StatsEventerTestCase(testing.FrameIofloTestCase):
self.assertEqual(len(testStack.rxMsgs), 1)
msg, sender = testStack.rxMsgs.popleft()
self.assertDictEqual(msg, {u'route': {u'src': [ns2u(minionName), u'manor', None],
u'dst': [ns2u(masterName), None, u'event_fire']},
u'tag': ns2u(tag),
u'data': {u'test_stats_event': 111}})
self.assertDictEqual(msg, {'route': {'src': [ns2u(minionName), 'manor', None],
'dst': [ns2u(masterName), None, 'event_fire']},
'tag': ns2u(tag),
'data': {'test_stats_event': 111}})
# Close active stacks servers
act.actor.lane_stack.value.server.close()
@ -507,10 +507,10 @@ class StatsEventerTestCase(testing.FrameIofloTestCase):
self.assertEqual(len(testStack.rxMsgs), 1)
msg, sender = testStack.rxMsgs.popleft()
self.assertDictEqual(msg, {u'route': {u'src': [ns2u(minionName), u'manor', None],
u'dst': [ns2u(masterName), None, u'event_fire']},
u'tag': ns2u(tag),
u'data': {u'test_stats_event': 111}})
self.assertDictEqual(msg, {'route': {'src': [ns2u(minionName), 'manor', None],
'dst': [ns2u(masterName), None, 'event_fire']},
'tag': ns2u(tag),
'data': {'test_stats_event': 111}})
# Close active stacks servers
act.actor.lane_stack.value.server.close()


@ -51,7 +51,7 @@ Example of usage
.. code-block:: txt
[DEBUG ] Sending event: tag = salt/engines/ircbot/test/tag/ircbot; data = {'_stamp': '2016-11-28T14:34:16.633623', 'data': [u'irc', u'is', u'usefull']}
[DEBUG ] Sending event: tag = salt/engines/ircbot/test/tag/ircbot; data = {'_stamp': '2016-11-28T14:34:16.633623', 'data': ['irc', 'is', 'useful']}
'''
from __future__ import absolute_import, print_function, unicode_literals

salt/ext/backports_abc.py (new file, 216 lines)

@ -0,0 +1,216 @@
"""
Patch recently added ABCs into the standard lib module
``collections.abc`` (Py3) or ``collections`` (Py2).
Usage::
import backports_abc
backports_abc.patch()
or::
try:
from collections.abc import Generator
except ImportError:
from backports_abc import Generator
"""
try:
import collections.abc as _collections_abc
except ImportError:
import collections as _collections_abc
def get_mro(cls):
try:
return cls.__mro__
except AttributeError:
return old_style_mro(cls)
def old_style_mro(cls):
yield cls
for base in cls.__bases__:
for c in old_style_mro(base):
yield c
def mk_gen():
from abc import abstractmethod
required_methods = (
'__iter__', '__next__' if hasattr(iter(()), '__next__') else 'next',
'send', 'throw', 'close')
class Generator(_collections_abc.Iterator):
__slots__ = ()
if '__next__' in required_methods:
def __next__(self):
return self.send(None)
else:
def next(self):
return self.send(None)
@abstractmethod
def send(self, value):
raise StopIteration
@abstractmethod
def throw(self, typ, val=None, tb=None):
if val is None:
if tb is None:
raise typ
val = typ()
if tb is not None:
val = val.with_traceback(tb)
raise val
def close(self):
try:
self.throw(GeneratorExit)
except (GeneratorExit, StopIteration):
pass
else:
raise RuntimeError('generator ignored GeneratorExit')
@classmethod
def __subclasshook__(cls, C):
if cls is Generator:
mro = get_mro(C)
for method in required_methods:
for base in mro:
if method in base.__dict__:
break
else:
return NotImplemented
return True
return NotImplemented
generator = type((lambda: (yield))())
Generator.register(generator)
return Generator
def mk_awaitable():
from abc import abstractmethod, ABCMeta
@abstractmethod
def __await__(self):
yield
@classmethod
def __subclasshook__(cls, C):
if cls is Awaitable:
for B in get_mro(C):
if '__await__' in B.__dict__:
if B.__dict__['__await__']:
return True
break
return NotImplemented
# calling metaclass directly as syntax differs in Py2/Py3
Awaitable = ABCMeta('Awaitable', (), {
'__slots__': (),
'__await__': __await__,
'__subclasshook__': __subclasshook__,
})
return Awaitable
def mk_coroutine():
from abc import abstractmethod
class Coroutine(Awaitable):
__slots__ = ()
@abstractmethod
def send(self, value):
"""Send a value into the coroutine.
Return next yielded value or raise StopIteration.
"""
raise StopIteration
@abstractmethod
def throw(self, typ, val=None, tb=None):
"""Raise an exception in the coroutine.
Return next yielded value or raise StopIteration.
"""
if val is None:
if tb is None:
raise typ
val = typ()
if tb is not None:
val = val.with_traceback(tb)
raise val
def close(self):
"""Raise GeneratorExit inside coroutine.
"""
try:
self.throw(GeneratorExit)
except (GeneratorExit, StopIteration):
pass
else:
raise RuntimeError('coroutine ignored GeneratorExit')
@classmethod
def __subclasshook__(cls, C):
if cls is Coroutine:
mro = get_mro(C)
for method in ('__await__', 'send', 'throw', 'close'):
for base in mro:
if method in base.__dict__:
break
else:
return NotImplemented
return True
return NotImplemented
return Coroutine
###
# make all ABCs available in this module
try:
Generator = _collections_abc.Generator
except AttributeError:
Generator = mk_gen()
try:
Awaitable = _collections_abc.Awaitable
except AttributeError:
Awaitable = mk_awaitable()
try:
Coroutine = _collections_abc.Coroutine
except AttributeError:
Coroutine = mk_coroutine()
try:
from inspect import isawaitable
except ImportError:
def isawaitable(obj):
return isinstance(obj, Awaitable)
###
# allow patching the stdlib
PATCHED = {}
def patch(patch_inspect=True):
"""
Main entry point for patching the ``collections.abc`` and ``inspect``
standard library modules.
"""
PATCHED['collections.abc.Generator'] = _collections_abc.Generator = Generator
PATCHED['collections.abc.Coroutine'] = _collections_abc.Coroutine = Coroutine
PATCHED['collections.abc.Awaitable'] = _collections_abc.Awaitable = Awaitable
if patch_inspect:
import inspect
PATCHED['inspect.isawaitable'] = inspect.isawaitable = isawaitable
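On Python 3 these names resolve to the stdlib ABCs, so the structural ``__subclasshook__`` check the module implements can be exercised directly (stdlib only):

```python
from collections.abc import Generator


def gen():
    yield 1


# A generator instance satisfies the ABC via the structural check for
# __iter__/__next__/send/throw/close, without explicit registration.
print(isinstance(gen(), Generator))  # True


class Fake:
    # Provides all required methods, so it passes the duck-typed check too.
    def __iter__(self):
        return self

    def __next__(self):
        return self.send(None)

    def send(self, value):
        raise StopIteration

    def throw(self, typ, val=None, tb=None):
        raise typ

    def close(self):
        pass


print(issubclass(Fake, Generator))  # True
```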


@ -1452,7 +1452,7 @@ def os_data():
"Unable to fetch data from /proc/1/cmdline"
)
if init_bin is not None and init_bin.endswith('bin/init'):
supported_inits = (six.b('upstart'), six.b('sysvinit'), six.b('systemd'))
supported_inits = (six.b(str('upstart')), six.b(str('sysvinit')), six.b(str('systemd'))) # future lint: disable=blacklisted-function
edge_len = max(len(x) for x in supported_inits) - 1
try:
buf_size = __opts__['file_buffer_size']
@ -1462,7 +1462,7 @@ def os_data():
try:
with salt.utils.files.fopen(init_bin, 'rb') as fp_:
buf = True
edge = six.b('')
edge = six.b(str()) # future lint: disable=blacklisted-function
buf = fp_.read(buf_size).lower()
while buf:
buf = edge + buf
@ -1471,7 +1471,7 @@ def os_data():
if six.PY3:
item = item.decode('utf-8')
grains['init'] = item
buf = six.b('')
buf = six.b(str()) # future lint: disable=blacklisted-function
break
edge = buf[-edge_len:]
buf = fp_.read(buf_size).lower()
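The loop above implements a chunked substring scan: each chunk is prefixed with the last ``edge_len`` bytes of the previous one, so a marker split across a chunk boundary is still found. A standalone sketch of that technique (hypothetical data, not the real grain code):

```python
def chunked_find(data, needles, buf_size=8):
    # Keep an overlap of max(len(needle)) - 1 bytes between chunks so a
    # needle straddling a chunk boundary is not missed.
    edge_len = max(len(x) for x in needles) - 1
    pos = 0
    edge = b''
    while True:
        chunk = data[pos:pos + buf_size]
        if not chunk:
            return None
        buf = edge + chunk
        for needle in needles:
            if needle in buf:
                return needle
        edge = buf[-edge_len:]
        pos += buf_size


# 'systemd' straddles the first 8-byte chunk boundary but is still found.
blob = b'xxxxxxsystemdxxxx'
print(chunked_find(blob, (b'upstart', b'sysvinit', b'systemd')))
```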


@ -743,6 +743,10 @@ class Master(SMaster):
kwargs=kwargs,
name='ReqServer')
self.process_manager.add_process(
FileserverUpdate,
args=(self.opts,))
# Fire up SSDP discovery publisher
if self.opts['discovery']:
if salt.utils.ssdp.SSDPDiscoveryServer.is_available():
@ -755,10 +759,6 @@ class Master(SMaster):
if sys.version_info.major == 2:
log.error('You are using Python 2, please install "trollius" module to enable SSDP discovery.')
self.process_manager.add_process(
FileserverUpdate,
args=(self.opts,))
# Install the SIGINT/SIGTERM handlers if not done so far
if signal.getsignal(signal.SIGINT) is signal.SIG_DFL:
# No custom signal handling was added, install our own
@ -1515,7 +1515,8 @@ class AESFuncs(object):
'data',
{'grains': load['grains'],
'pillar': data})
self.event.fire_event({'Minion data cache refresh': load['id']}, tagify(load['id'], 'refresh', 'minion'))
if self.opts.get('minion_data_cache_events') is True:
self.event.fire_event({'Minion data cache refresh': load['id']}, tagify(load['id'], 'refresh', 'minion'))
return data
def _minion_event(self, load):


@ -734,12 +734,12 @@ class MinionBase(object):
break
if masters:
policy = self.opts.get(u'discovery', {}).get(u'match', DEFAULT_MINION_OPTS[u'discovery'][u'match'])
policy = self.opts.get('discovery', {}).get('match', DEFAULT_MINION_OPTS['discovery']['match'])
if policy not in ['any', 'all']:
log.error('SSDP configuration matcher failure: unknown value "{0}". '
'Should be "any" or "all"'.format(policy))
else:
mapping = self.opts[u'discovery'].get(u'mapping', {})
mapping = self.opts['discovery'].get('mapping', {})
for addr, mappings in masters.items():
for proto_data in mappings:
cnt = len([key for key, value in mapping.items()
@ -872,7 +872,11 @@ class MasterMinion(object):
matcher=True,
whitelist=None,
ignore_config_errors=True):
self.opts = salt.config.minion_config(opts['conf_file'], ignore_config_errors=ignore_config_errors)
self.opts = salt.config.minion_config(
opts['conf_file'],
ignore_config_errors=ignore_config_errors,
role='master'
)
self.opts.update(opts)
self.whitelist = whitelist
self.opts['grains'] = salt.loader.grains(opts)


@ -25,7 +25,7 @@ Most parameters will fall back to cli.ini defaults if None is given.
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import logging
import datetime
import os
@ -129,10 +129,10 @@ def cert(name,
cert_file = _cert_file(name, 'cert')
if not __salt__['file.file_exists'](cert_file):
log.debug('Certificate {0} does not exist (yet)'.format(cert_file))
log.debug('Certificate %s does not exist (yet)', cert_file)
renew = False
elif needs_renewal(name, renew):
log.debug('Certificate {0} will be renewed'.format(cert_file))
log.debug('Certificate %s will be renewed', cert_file)
cmd.append('--renew-by-default')
renew = True
if server:
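The switch from ``str.format`` to %-style arguments defers string interpolation until the record is actually emitted, so suppressed records cost nothing to format. A minimal illustration using only the stdlib ``logging`` module:

```python
import io
import logging


class Expensive:
    """Records whether its __str__ was ever invoked."""
    def __init__(self):
        self.rendered = False

    def __str__(self):
        self.rendered = True
        return 'rendered'


stream = io.StringIO()
log = logging.getLogger('lazy-demo')
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler(stream))
log.propagate = False

arg = Expensive()
log.debug('value: %s', arg)          # suppressed: no interpolation happens
rendered_after_debug = arg.rendered  # still False

log.info('value: %s', arg)           # emitted: %s now calls __str__
rendered_after_info = arg.rendered   # True
```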


@ -8,7 +8,7 @@ Manage groups on Solaris
*'group.info' is not available*), see :ref:`here
<module-provider-override>`.
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import logging


@ -7,7 +7,7 @@ Manage account locks on AIX systems
:depends: none
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libraries
import logging


@ -2,7 +2,7 @@
'''
Manage the information in the aliases file
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import os
@ -13,6 +13,7 @@ import tempfile
# Import salt libs
import salt.utils.files
import salt.utils.path
import salt.utils.stringutils
from salt.exceptions import SaltInvocationError
# Import third party libs
@ -51,6 +52,7 @@ def __parse_aliases():
return ret
with salt.utils.files.fopen(afn, 'r') as ifile:
for line in ifile:
line = salt.utils.stringutils.to_unicode(line)
match = __ALIAS_RE.match(line)
if match:
ret.append(match.groups())


@ -4,7 +4,7 @@ Support for Alternatives system
:codeauthor: Radek Rada <radek.rada@gmail.com>
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import os
@ -91,18 +91,14 @@ def show_link(name):
try:
with salt.utils.files.fopen(path, 'rb') as r_file:
contents = r_file.read()
if six.PY3:
contents = contents.decode(__salt_system_encoding__)
contents = salt.utils.stringutils.to_unicode(r_file.read())
return contents.splitlines(True)[1].rstrip('\n')
except OSError:
log.error(
'alternatives: {0} does not exist'.format(name)
)
log.error('alternatives: %s does not exist', name)
except (IOError, IndexError) as exc:
log.error(
'alternatives: unable to get master link for {0}. '
'Exception: {1}'.format(name, exc)
'alternatives: unable to get master link for %s. '
'Exception: %s', name, exc
)
return False
@ -122,9 +118,7 @@ def show_current(name):
try:
return _read_link(name)
except OSError:
log.error(
'alternative: {0} does not exist'.format(name)
)
log.error('alternative: %s does not exist', name)
return False
@ -176,7 +170,7 @@ def install(name, link, path, priority):
salt '*' alternatives.install editor /usr/bin/editor /usr/bin/emacs23 50
'''
cmd = [_get_cmd(), '--install', link, name, path, str(priority)]
cmd = [_get_cmd(), '--install', link, name, path, six.text_type(priority)]
out = __salt__['cmd.run_all'](cmd, python_shell=False)
if out['retcode'] > 0 and out['stderr'] != '':
return out['stderr']


@ -28,7 +28,7 @@ The timeout is how many seconds Salt should wait for
any Ansible module to respond.
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import os
import sys
import logging
@ -40,6 +40,7 @@ import salt.utils.json
from salt.exceptions import LoaderError, CommandExecutionError
import salt.utils.timed_subprocess
import salt.utils.yaml
from salt.ext import six
try:
import ansible
@ -165,7 +166,7 @@ class AnsibleModuleCaller(object):
try:
out = salt.utils.json.loads(proc_exc.stdout)
except ValueError as ex:
out = {'Error': (proc_exc.stderr and (proc_exc.stderr + '.') or str(ex))}
out = {'Error': (proc_exc.stderr and (proc_exc.stderr + '.') or six.text_type(ex))}
if proc_exc.stdout:
out['Given JSON output'] = proc_exc.stdout
return out
@ -250,7 +251,7 @@ def help(module=None, *args):
if docset:
doc.update(docset)
except Exception as err:
log.error("Error parsing doc section: {0}".format(err))
log.error("Error parsing doc section: %s", err)
if not args:
if 'description' in doc:
description = doc.get('description') or ''


@ -10,7 +10,7 @@ Support for Apache
'''
# Import python libs
from __future__ import absolute_import, generators, print_function, with_statement
from __future__ import absolute_import, generators, print_function, with_statement, unicode_literals
import re
import logging
@ -29,6 +29,7 @@ from salt.ext.six.moves.urllib.request import (
# pylint: enable=import-error,no-name-in-module
# Import salt libs
import salt.utils.data
import salt.utils.files
import salt.utils.path
@ -453,9 +454,9 @@ def config(name, config, edit=True):
configs.append(_parse_config(entry[key], key))
# Python auto-correct line endings
configstext = "\n".join(configs)
configstext = '\n'.join(salt.utils.data.decode(configs))
if edit:
with salt.utils.files.fopen(name, 'w') as configfile:
configfile.write('# This file is managed by Salt.\n')
configfile.write(configstext)
configfile.write(salt.utils.stringutils.to_str(configstext))
return configstext


@ -2,7 +2,7 @@
'''
Module for apcupsd
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import Python libs
import logging


@ -9,7 +9,7 @@ Support for Advanced Policy Firewall (APF)
'''
# Import Python Libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
try:
import iptc
IPTC_IMPORTED = True


@ -11,7 +11,7 @@ Support for apk
.. versionadded: 2017.7.0
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import copy
@ -510,9 +510,7 @@ def list_upgrades(refresh=True):
comment += call['stderr']
if 'stdout' in call:
comment += call['stdout']
raise CommandExecutionError(
'{0}'.format(comment)
)
raise CommandExecutionError(comment)
else:
out = call['stdout']


@ -5,19 +5,20 @@ Aptly Debian repository manager.
.. versionadded:: Oxygen
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import logging
import os
import re
# Import salt libs
from salt.ext import six
from salt.exceptions import SaltInvocationError
import salt.utils.json
import salt.utils.path
import salt.utils.stringutils as stringutils
import salt.utils.stringutils
_DEFAULT_CONFIG_PATH = '/etc/aptly.conf'
_LOG = logging.getLogger(__name__)
log = logging.getLogger(__name__)
# Define the module's virtual name
__virtualname__ = 'aptly'
@ -43,7 +44,7 @@ def _cmd_run(cmd):
cmd_ret = __salt__['cmd.run_all'](cmd, ignore_retcode=True)
if cmd_ret['retcode'] != 0:
_LOG.debug('Unable to execute command: %s\nError: %s', cmd,
log.debug('Unable to execute command: %s\nError: %s', cmd,
cmd_ret['stderr'])
return cmd_ret['stdout']
@ -72,7 +73,7 @@ def _format_repo_args(comment=None, component=None, distribution=None,
cached_uploaders_path = __salt__['cp.cache_file'](uploaders_file, saltenv)
if not cached_uploaders_path:
_LOG.error('Unable to get cached copy of file: %s', uploaders_file)
log.error('Unable to get cached copy of file: %s', uploaders_file)
return False
for setting in settings:
@ -94,11 +95,11 @@ def _validate_config(config_path):
:return: None
:rtype: None
'''
_LOG.debug('Checking configuration file: %s', config_path)
log.debug('Checking configuration file: %s', config_path)
if not os.path.isfile(config_path):
message = 'Unable to get configuration file: {}'.format(config_path)
_LOG.error(message)
log.error(message)
raise SaltInvocationError(message)
@ -150,7 +151,7 @@ def list_repos(config_path=_DEFAULT_CONFIG_PATH, with_packages=False):
cmd_ret = _cmd_run(cmd)
repos = [line.strip() for line in cmd_ret.splitlines()]
_LOG.debug('Found repositories: %s', len(repos))
log.debug('Found repositories: %s', len(repos))
for name in repos:
ret[name] = get_repo(name=name, config_path=config_path,
@ -170,11 +171,11 @@ def get_repo(name, config_path=_DEFAULT_CONFIG_PATH, with_packages=False):
:rtype: dict
'''
_validate_config(config_path)
with_packages = six.text_type(bool(with_packages)).lower()
ret = dict()
cmd = ['repo', 'show', '-config={}'.format(config_path),
'-with-packages={}'.format(str(with_packages).lower()),
name]
'-with-packages={}'.format(with_packages), name]
cmd_ret = _cmd_run(cmd)
@ -185,15 +186,16 @@ def get_repo(name, config_path=_DEFAULT_CONFIG_PATH, with_packages=False):
items = line.split(':')
key = items[0].lower().replace('default', '').strip()
key = ' '.join(key.split()).replace(' ', '_')
ret[key] = stringutils.to_none(stringutils.to_num(items[1].strip()))
ret[key] = salt.utils.stringutils.to_none(
salt.utils.stringutils.to_num(items[1].strip()))
except (AttributeError, IndexError):
# If the line doesn't have the separator or is otherwise invalid, skip it.
_LOG.debug('Skipping line: %s', line)
log.debug('Skipping line: %s', line)
if ret:
_LOG.debug('Found repository: %s', name)
log.debug('Found repository: %s', name)
else:
_LOG.debug('Unable to find repository: %s', name)
log.debug('Unable to find repository: %s', name)
return ret
@ -226,7 +228,7 @@ def new_repo(name, config_path=_DEFAULT_CONFIG_PATH, comment=None, component=Non
current_repo = __salt__['aptly.get_repo'](name=name)
if current_repo:
_LOG.debug('Repository already exists: %s', name)
log.debug('Repository already exists: %s', name)
return True
cmd = ['repo', 'create', '-config={}'.format(config_path)]
@ -243,9 +245,9 @@ def new_repo(name, config_path=_DEFAULT_CONFIG_PATH, comment=None, component=Non
repo = __salt__['aptly.get_repo'](name=name)
if repo:
_LOG.debug('Created repo: %s', name)
log.debug('Created repo: %s', name)
return True
_LOG.error('Unable to create repo: %s', name)
log.error('Unable to create repo: %s', name)
return False
@ -287,7 +289,7 @@ def set_repo(name, config_path=_DEFAULT_CONFIG_PATH, comment=None, component=Non
current_settings = __salt__['aptly.get_repo'](name=name)
if not current_settings:
_LOG.error('Unable to get repo: %s', name)
log.error('Unable to get repo: %s', name)
return False
# Discard any additional settings that get_repo gives
@ -298,7 +300,7 @@ def set_repo(name, config_path=_DEFAULT_CONFIG_PATH, comment=None, component=Non
# Check the existing repo settings to see if they already have the desired values.
if settings == current_settings:
_LOG.debug('Settings already have the desired values for repository: %s', name)
log.debug('Settings already have the desired values for repository: %s', name)
return True
cmd = ['repo', 'edit', '-config={}'.format(config_path)]
@ -318,9 +320,9 @@ def set_repo(name, config_path=_DEFAULT_CONFIG_PATH, comment=None, component=Non
failed_settings.update({setting: settings[setting]})
if failed_settings:
_LOG.error('Unable to change settings for the repository: %s', name)
log.error('Unable to change settings for the repository: %s', name)
return False
_LOG.debug('Settings successfully changed to the desired values for repository: %s', name)
log.debug('Settings successfully changed to the desired values for repository: %s', name)
return True
@ -343,23 +345,24 @@ def delete_repo(name, config_path=_DEFAULT_CONFIG_PATH, force=False):
salt '*' aptly.delete_repo name="test-repo"
'''
_validate_config(config_path)
force = six.text_type(bool(force)).lower()
current_repo = __salt__['aptly.get_repo'](name=name)
if not current_repo:
_LOG.debug('Repository already absent: %s', name)
log.debug('Repository already absent: %s', name)
return True
cmd = ['repo', 'drop', '-config={}'.format(config_path),
'-force={}'.format(str(force).lower()), name]
'-force={}'.format(force), name]
_cmd_run(cmd)
repo = __salt__['aptly.get_repo'](name=name)
if repo:
_LOG.error('Unable to remove repo: %s', name)
log.error('Unable to remove repo: %s', name)
return False
_LOG.debug('Removed repo: %s', name)
log.debug('Removed repo: %s', name)
return True
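The `delete_repo` hunk hoists `six.text_type(bool(force)).lower()` out of the command list so the aptly CLI receives literal `true`/`false`. A sketch of that lowering with a hypothetical `bool_flag` helper (`six.text_type` is just `str` on Python 3, so `str` is used directly):

```python
def bool_flag(name, value):
    """Render any Python truthy/falsy value as an aptly-style CLI flag."""
    return '-{}={}'.format(name, str(bool(value)).lower())

cmd = ['repo', 'drop', '-config=/etc/aptly.conf',
       bool_flag('force', True), 'test-repo']
print(cmd[3])  # -force=true
```

Without the `bool(...)` round-trip, a caller passing `force="False"` would render as `-force=false` only by accident of `.lower()`; coercing to `bool` first makes any truthy input behave consistently.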
@ -385,7 +388,7 @@ def list_mirrors(config_path=_DEFAULT_CONFIG_PATH):
cmd_ret = _cmd_run(cmd)
ret = [line.strip() for line in cmd_ret.splitlines()]
_LOG.debug('Found mirrors: %s', len(ret))
log.debug('Found mirrors: %s', len(ret))
return ret
@ -411,7 +414,7 @@ def list_published(config_path=_DEFAULT_CONFIG_PATH):
cmd_ret = _cmd_run(cmd)
ret = [line.strip() for line in cmd_ret.splitlines()]
_LOG.debug('Found published repositories: %s', len(ret))
log.debug('Found published repositories: %s', len(ret))
return ret
@ -443,7 +446,7 @@ def list_snapshots(config_path=_DEFAULT_CONFIG_PATH, sort_by_time=False):
cmd_ret = _cmd_run(cmd)
ret = [line.strip() for line in cmd_ret.splitlines()]
_LOG.debug('Found snapshots: %s', len(ret))
log.debug('Found snapshots: %s', len(ret))
return ret
@ -464,13 +467,13 @@ def cleanup_db(config_path=_DEFAULT_CONFIG_PATH, dry_run=False):
salt '*' aptly.cleanup_db
'''
_validate_config(config_path)
dry_run = six.text_type(bool(dry_run)).lower()
ret = {'deleted_keys': list(),
'deleted_files': list()}
cmd = ['db', 'cleanup', '-config={}'.format(config_path),
'-dry-run={}'.format(str(dry_run).lower()),
'-verbose=true']
'-dry-run={}'.format(dry_run), '-verbose=true']
cmd_ret = _cmd_run(cmd)
@ -493,6 +496,6 @@ def cleanup_db(config_path=_DEFAULT_CONFIG_PATH, dry_run=False):
if match:
current_block = match.group('package_type')
_LOG.debug('Package keys identified for deletion: %s', len(ret['deleted_keys']))
_LOG.debug('Package files identified for deletion: %s', len(ret['deleted_files']))
log.debug('Package keys identified for deletion: %s', len(ret['deleted_keys']))
log.debug('Package files identified for deletion: %s', len(ret['deleted_files']))
return ret
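Throughout this file the diff converts eager `_LOG.debug('...: {0}'.format(x))` calls to lazy `log.debug('...: %s', x)`. The difference is that %-style arguments are stored on the `LogRecord` and interpolated only when a handler actually emits the record, so a disabled DEBUG logger never pays the formatting cost. A small sketch using only the stdlib:

```python
import logging

class ListHandler(logging.Handler):
    """Collect the fully formatted messages a logger emits."""
    def __init__(self):
        super().__init__()
        self.messages = []

    def emit(self, record):
        # Interpolation of the %s arguments happens here, in getMessage(),
        # not at the log.debug() call site.
        self.messages.append(record.getMessage())

log = logging.getLogger('aptly.sketch')
log.setLevel(logging.DEBUG)
handler = ListHandler()
log.addHandler(handler)

deleted_keys = ['Pamd64 pkg1', 'Pamd64 pkg2']  # illustrative values
log.debug('Package keys identified for deletion: %s', len(deleted_keys))
print(handler.messages[0])
```

This prints `Package keys identified for deletion: 2`; with the level raised above DEBUG, the arguments would never be formatted at all.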

View File

@ -14,7 +14,7 @@ Support for APT (Advanced Packaging Tool)
For repository management, the ``python-apt`` package must be installed.
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import copy
@ -44,6 +44,7 @@ import salt.utils.json
import salt.utils.path
import salt.utils.pkg
import salt.utils.pkg.deb
import salt.utils.stringutils
import salt.utils.systemd
import salt.utils.versions
import salt.utils.yaml
@ -210,7 +211,7 @@ def _warn_software_properties(repo):
log.warning('The \'python-software-properties\' package is not installed. '
'For more accurate support of PPA repositories, you should '
'install this package.')
log.warning('Best guess at ppa format: {0}'.format(repo))
log.warning('Best guess at ppa format: %s', repo)
def latest_version(*names, **kwargs):
@ -395,9 +396,7 @@ def refresh_db(cache_valid_time=0, failhard=False):
if 'stderr' in call:
comment += call['stderr']
raise CommandExecutionError(
'{0}'.format(comment)
)
raise CommandExecutionError(comment)
else:
out = call['stdout']
@ -669,9 +668,8 @@ def install(name=None,
deb_info = None
if deb_info is None:
log.error(
'pkg.install: Unable to get deb information for {0}. '
'Version comparisons will be unavailable.'
.format(pkg_source)
'pkg.install: Unable to get deb information for %s. '
'Version comparisons will be unavailable.', pkg_source
)
pkg_params_items.append([pkg_source])
else:
@ -763,7 +761,7 @@ def install(name=None,
downgrade.append(pkgstr)
if fromrepo and not sources:
log.info('Targeting repo \'{0}\''.format(fromrepo))
log.info('Targeting repo \'%s\'', fromrepo)
cmds = []
all_pkgs = []
@ -1536,7 +1534,8 @@ def version_cmp(pkg1, pkg2, ignore_epoch=False):
salt '*' pkg.version_cmp '0.2.4-0ubuntu1' '0.2.4.1-0ubuntu1'
'''
normalize = lambda x: str(x).split(':', 1)[-1] if ignore_epoch else str(x)
normalize = lambda x: six.text_type(x).split(':', 1)[-1] \
if ignore_epoch else six.text_type(x)
# both apt_pkg.version_compare and _cmd_quote need string arguments.
pkg1 = normalize(pkg1)
pkg2 = normalize(pkg2)
@ -1555,7 +1554,7 @@ def version_cmp(pkg1, pkg2, ignore_epoch=False):
try:
ret = apt_pkg.version_compare(pkg1, pkg2)
except TypeError:
ret = apt_pkg.version_compare(str(pkg1), str(pkg2))
ret = apt_pkg.version_compare(six.text_type(pkg1), six.text_type(pkg2))
return 1 if ret > 0 else -1 if ret < 0 else 0
except Exception:
# Try to use shell version in case of errors w/python bindings
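The `version_cmp` hunk above rewrites the `normalize` lambda to use `six.text_type`. What it does is strip an optional Debian epoch prefix when `ignore_epoch` is set; a standalone sketch (function form instead of a lambda, for readability):

```python
def normalize(version, ignore_epoch=False):
    """Strip a leading ``epoch:`` from a Debian version when requested."""
    text = str(version)  # six.text_type on Py2, str on Py3
    return text.split(':', 1)[-1] if ignore_epoch else text

print(normalize('1:0.2.4-0ubuntu1', ignore_epoch=True))  # 0.2.4-0ubuntu1
```

`split(':', 1)[-1]` also leaves epoch-less versions untouched, since the split then returns a single-element list.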
@ -1602,8 +1601,10 @@ def _consolidate_repo_sources(sources):
for repo in repos:
repo.uri = repo.uri.rstrip('/')
# future lint: disable=blacklisted-function
key = str((getattr(repo, 'architectures', []),
repo.disabled, repo.type, repo.uri, repo.dist))
# future lint: enable=blacklisted-function
if key in consolidated:
combined = consolidated[key]
combined_comps = set(repo.comps).union(set(combined.comps))
@ -1917,7 +1918,7 @@ def _convert_if_int(value):
:rtype: bool|int|str
'''
try:
value = int(str(value))
value = int(str(value)) # future lint: disable=blacklisted-function
except ValueError:
pass
return value
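`_convert_if_int` above keeps the `int(str(value))` round-trip (with a lint pragma) because it intentionally accepts anything stringifiable. A self-contained sketch of the same fall-through behavior:

```python
def convert_if_int(value):
    """Return ``value`` as an int when it parses cleanly, else unchanged."""
    try:
        return int(str(value))
    except ValueError:
        return value

# '1.5' is NOT converted: int('1.5') raises ValueError, so the original
# string is returned untouched.
print(convert_if_int('42'), convert_if_int('1.5'), convert_if_int('main'))
```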
@ -2356,8 +2357,7 @@ def mod_repo(repo, saltenv='base', **kwargs):
**kwargs)
if ret['retcode'] != 0:
raise CommandExecutionError(
'Error: key retrieval failed: {0}'
.format(ret['stdout'])
'Error: key retrieval failed: {0}'.format(ret['stdout'])
)
elif 'key_url' in kwargs:
@ -2659,7 +2659,8 @@ def set_selections(path=None, selection=None, clear=False, saltenv='base'):
if path:
path = __salt__['cp.cache_file'](path, saltenv)
with salt.utils.files.fopen(path, 'r') as ifile:
content = ifile.readlines()
content = [salt.utils.stringutils.to_unicode(x)
for x in ifile.readlines()]
selection = _parse_selections(content)
if selection:
@ -2696,8 +2697,8 @@ def set_selections(path=None, selection=None, clear=False, saltenv='base'):
output_loglevel='trace')
if result['retcode'] != 0:
log.error(
'failed to set state {0} for package '
'{1}'.format(_state, _pkg)
'failed to set state %s for package %s',
_state, _pkg
)
else:
ret[_pkg] = {'old': sel_revmap.get(_pkg),
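The `set_selections` hunk decodes each line read from the selections file via `salt.utils.stringutils.to_unicode` before parsing. A sketch of that decode-on-read pattern, with `to_unicode` approximated by `bytes.decode` and illustrative dpkg-selections input:

```python
raw_lines = [b'vim\tinstall\n', b'nano\thold\n']  # as read in binary mode

# Decode bytes to text first, so the parser below always sees str.
content = [line.decode('utf-8') if isinstance(line, bytes) else line
           for line in raw_lines]

selections = {}
for line in content:
    pkg, state = line.split()
    selections.setdefault(state, []).append(pkg)

print(selections)
```

With the sample input this yields `{'install': ['vim'], 'hold': ['nano']}`; without the decode step, `line.split()` would return `bytes` keys on Python 3 and the comparison logic downstream would silently mismatch.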

View File

@ -4,7 +4,7 @@ A module to wrap (non-Windows) archive calls
.. versionadded:: 2014.1.0
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import contextlib # For < 2.7 compat
import copy
import errno
@ -38,6 +38,7 @@ import salt.utils.decorators.path
import salt.utils.files
import salt.utils.path
import salt.utils.platform
import salt.utils.stringutils
import salt.utils.templates
if salt.utils.platform.is_windows():
@ -454,7 +455,7 @@ def _expand_sources(sources):
if isinstance(sources, six.string_types):
sources = [x.strip() for x in sources.split(',')]
elif isinstance(sources, (float, six.integer_types)):
sources = [str(sources)]
sources = [six.text_type(sources)]
return [path
for source in sources
for path in _glob(source)]
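`_expand_sources` above normalizes three input shapes (comma-separated string, bare number, sequence) before globbing. A sketch of just the normalization, with the globbing step omitted and `six.text_type` replaced by `str`:

```python
def expand_sources(sources):
    """Normalize archive sources to a list of strings (globbing omitted)."""
    if isinstance(sources, str):
        return [x.strip() for x in sources.split(',')]
    if isinstance(sources, (float, int)):
        # A YAML loader may hand a bare version like 3.2 through as a float.
        return [str(sources)]
    return list(sources)

print(expand_sources('a.tar, b.tar'), expand_sources(3.2))
```

The float branch is the reason for the `six.text_type` change: `str(3.2)` gives `'3.2'` identically on Py2 and Py3, where the old `str` call returned bytes on Py2.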
@ -914,7 +915,7 @@ def cmd_unzip(zip_file,
if isinstance(excludes, six.string_types):
excludes = [x.strip() for x in excludes.split(',')]
elif isinstance(excludes, (float, six.integer_types)):
excludes = [str(excludes)]
excludes = [six.text_type(excludes)]
cmd = ['unzip']
if password:
@ -1059,7 +1060,7 @@ def unzip(zip_file,
if isinstance(excludes, six.string_types):
excludes = [x.strip() for x in excludes.split(',')]
elif isinstance(excludes, (float, six.integer_types)):
excludes = [str(excludes)]
excludes = [six.text_type(excludes)]
cleaned_files.extend([x for x in files if x not in excludes])
for target in cleaned_files:
@ -1311,7 +1312,7 @@ def _render_filenames(filenames, zip_file, saltenv, template):
# write out path to temp file
tmp_path_fn = salt.utils.files.mkstemp()
with salt.utils.files.fopen(tmp_path_fn, 'w+') as fp_:
fp_.write(contents)
fp_.write(salt.utils.stringutils.to_str(contents))
data = salt.utils.templates.TEMPLATE_REGISTRY[template](
tmp_path_fn,
to_str=True,

View File

@ -4,13 +4,14 @@ Module for fetching artifacts from Artifactory
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import os
import base64
import logging
# Import Salt libs
import salt.utils.files
import salt.utils.stringutils
import salt.ext.six.moves.http_client # pylint: disable=import-error,redefined-builtin,no-name-in-module
from salt.ext.six.moves import urllib # pylint: disable=no-name-in-module
from salt.ext.six.moves.urllib.error import HTTPError, URLError # pylint: disable=no-name-in-module
@ -442,7 +443,7 @@ def __save_artifact(artifact_url, target_file, headers):
}
if os.path.isfile(target_file):
log.debug("File {0} already exists, checking checksum...".format(target_file))
log.debug("File %s already exists, checking checksum...", target_file)
checksum_url = artifact_url + ".sha1"
checksum_success, artifact_sum, checksum_comment = __download(checksum_url, headers)
@ -466,13 +467,13 @@ def __save_artifact(artifact_url, target_file, headers):
result['comment'] = checksum_comment
return result
log.debug('Downloading: {url} -> {target_file}'.format(url=artifact_url, target_file=target_file))
log.debug('Downloading: %s -> %s', artifact_url, target_file)
try:
request = urllib.request.Request(artifact_url, None, headers)
f = urllib.request.urlopen(request)
with salt.utils.files.fopen(target_file, "wb") as local_file:
local_file.write(f.read())
local_file.write(salt.utils.stringutils.to_bytes(f.read()))
result['status'] = True
result['comment'] = __append_comment(('Artifact downloaded from URL: {0}'.format(artifact_url)), result['comment'])
result['changes']['downloaded_file'] = target_file
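The artifactory download hunk wraps the response body in `salt.utils.stringutils.to_bytes` before writing, because the target file is opened in binary mode. A sketch of that guard with `to_bytes` approximated by `str.encode` and an in-memory sink standing in for `fopen(target_file, 'wb')`:

```python
import io

payload = 'artifact-body'  # may arrive as str or bytes depending on the source

# Coerce to bytes before handing to a binary sink.
data = payload.encode('utf-8') if isinstance(payload, str) else payload

buf = io.BytesIO()  # stand-in for the binary file handle
buf.write(data)
print(buf.getvalue())  # b'artifact-body'
```

Writing a `str` to a binary handle raises `TypeError` on Python 3, so the coercion has to happen on the write path rather than relying on the HTTP layer's return type.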
@ -497,7 +498,7 @@ def __get_classifier_url(classifier):
def __download(request_url, headers):
log.debug('Downloading content from {0}'.format(request_url))
log.debug('Downloading content from %s', request_url)
success = False
content = None

View File

@ -9,7 +9,7 @@ easily tag jobs.
.. versionchanged:: 2017.7.0
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import re
@ -21,8 +21,10 @@ import datetime
from salt.ext.six.moves import map
# pylint: enable=import-error,redefined-builtin
from salt.exceptions import CommandNotFoundError
from salt.ext import six
# Import salt libs
import salt.utils.data
import salt.utils.path
import salt.utils.platform
@ -131,7 +133,7 @@ def atq(tag=None):
job_tag = tmp.groups()[0]
if __grains__['os'] in BSD:
job = str(job)
job = six.text_type(job)
else:
job = int(job)
@ -171,7 +173,7 @@ def atrm(*args):
return {'jobs': {'removed': [], 'tag': None}}
# Convert all to strings
args = [str(arg) for arg in args]
args = salt.utils.data.stringify(args)
if args[0] == 'all':
if len(args) > 1:
@ -182,7 +184,7 @@ def atrm(*args):
ret = {'jobs': {'removed': opts, 'tag': None}}
else:
opts = list(list(map(str, [i['job'] for i in atq()['jobs']
if str(i['job']) in args])))
if six.text_type(i['job']) in args])))
ret = {'jobs': {'removed': opts, 'tag': None}}
# Shim to produce output similar to what __virtual__() should do
@ -245,7 +247,7 @@ def at(*args, **kwargs): # pylint: disable=C0103
output = output.split()[1]
if __grains__['os'] in BSD:
return atq(str(output))
return atq(six.text_type(output))
else:
return atq(int(output))
@ -264,7 +266,7 @@ def atc(jobid):
'''
# Shim to produce output similar to what __virtual__() should do
# but __salt__ isn't available in __virtual__()
output = _cmd('at', '-c', str(jobid))
output = _cmd('at', '-c', six.text_type(jobid))
if output is None:
return '\'at.atc\' is not available.'
@ -288,7 +290,7 @@ def _atq(**kwargs):
day = kwargs.get('day', None)
month = kwargs.get('month', None)
year = kwargs.get('year', None)
if year and len(str(year)) == 2:
if year and len(six.text_type(year)) == 2:
year = '20{0}'.format(year)
jobinfo = atq()['jobs']

View File

@ -12,7 +12,7 @@ Wrapper for at(1) on Solaris-like systems
.. versionadded:: 2017.7.0
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import re
@ -23,6 +23,7 @@ import logging
# Import 3rd-party libs
# pylint: disable=import-error,redefined-builtin
from salt.ext.six.moves import map
from salt.ext import six
# Import salt libs
import salt.utils.files
@ -95,7 +96,7 @@ def atq(tag=None):
specs.append(tmp[5])
# make sure job is str
job = str(job)
job = six.text_type(job)
# search for any tags
atjob_file = '/var/spool/cron/atjobs/{job}'.format(
@ -104,6 +105,7 @@ def atq(tag=None):
if __salt__['file.file_exists'](atjob_file):
with salt.utils.files.fopen(atjob_file, 'r') as atjob:
for line in atjob:
line = salt.utils.stringutils.to_unicode(line)
tmp = job_kw_regex.match(line)
if tmp:
job_tag = tmp.groups()[0]
@ -205,7 +207,7 @@ def at(*args, **kwargs): # pylint: disable=C0103
return {'jobs': [], 'error': res['stderr']}
else:
jobid = res['stderr'].splitlines()[1]
jobid = str(jobid.split()[1])
jobid = six.text_type(jobid.split()[1])
return atq(jobid)
@ -227,7 +229,8 @@ def atc(jobid):
)
if __salt__['file.file_exists'](atjob_file):
with salt.utils.files.fopen(atjob_file, 'r') as rfh:
return "".join(rfh.readlines())
return ''.join([salt.utils.stringutils.to_unicode(x)
for x in rfh.readlines()])
else:
return {'error': 'invalid job id \'{0}\''.format(jobid)}
@ -246,7 +249,7 @@ def _atq(**kwargs):
day = kwargs.get('day', None)
month = kwargs.get('month', None)
year = kwargs.get('year', None)
if year and len(str(year)) == 2:
if year and len(six.text_type(year)) == 2:
year = '20{0}'.format(year)
jobinfo = atq()['jobs']

View File

@ -23,7 +23,7 @@ This module requires the ``augeas`` Python module.
For affected Debian/Ubuntu hosts, installing ``libpython2.7`` has been
known to resolve the issue.
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import os
@ -95,8 +95,8 @@ def _lstrip_word(word, prefix):
from the beginning of the string
'''
if str(word).startswith(prefix):
return str(word)[len(prefix):]
if six.text_type(word).startswith(prefix):
return six.text_type(word)[len(prefix):]
return word
@ -231,7 +231,7 @@ def execute(context=None, lens=None, commands=(), load_path=None):
path = make_path(parts[0])
args = {'path': path}
except ValueError as err:
log.error(str(err))
log.error(err)
# if command.split fails arg will not be set
if 'arg' not in locals():
arg = command
@ -239,7 +239,7 @@ def execute(context=None, lens=None, commands=(), load_path=None):
'see debug log for details: {0}'.format(arg)
return ret
log.debug('{0}: {1}'.format(method, args))
log.debug('%s: %s', method, args)
func = getattr(aug, method)
func(**args)
@ -248,7 +248,7 @@ def execute(context=None, lens=None, commands=(), load_path=None):
aug.save()
ret['retval'] = True
except IOError as err:
ret['error'] = str(err)
ret['error'] = six.text_type(err)
if lens and not lens.endswith('.lns'):
ret['error'] += '\nLenses are normally configured as "name.lns". ' \
@ -293,7 +293,7 @@ def get(path, value='', load_path=None):
try:
_match = aug.match(path)
except RuntimeError as err:
return {'error': str(err)}
return {'error': six.text_type(err)}
if _match:
ret[path] = aug.get(path)
@ -341,7 +341,7 @@ def setvalue(*args):
%wheel ALL = PASSWD : ALL , NOPASSWD : /usr/bin/apt-get , /usr/bin/aptitude
'''
load_path = None
load_paths = [x for x in args if str(x).startswith('load_path=')]
load_paths = [x for x in args if six.text_type(x).startswith('load_path=')]
if load_paths:
if len(load_paths) > 1:
raise SaltInvocationError(
@ -356,9 +356,9 @@ def setvalue(*args):
tuples = [
x for x in args
if not str(x).startswith('prefix=') and
not str(x).startswith('load_path=')]
prefix = [x for x in args if str(x).startswith('prefix=')]
if not six.text_type(x).startswith('prefix=') and
not six.text_type(x).startswith('load_path=')]
prefix = [x for x in args if six.text_type(x).startswith('prefix=')]
if prefix:
if len(prefix) > 1:
raise SaltInvocationError(
@ -376,7 +376,7 @@ def setvalue(*args):
if prefix:
target_path = os.path.join(prefix.rstrip('/'), path.lstrip('/'))
try:
aug.set(target_path, str(value))
aug.set(target_path, six.text_type(value))
except ValueError as err:
ret['error'] = 'Multiple values: {0}'.format(err)
@ -384,7 +384,7 @@ def setvalue(*args):
aug.save()
ret['retval'] = True
except IOError as err:
ret['error'] = str(err)
ret['error'] = six.text_type(err)
return ret
@ -462,7 +462,7 @@ def remove(path, load_path=None):
else:
ret['retval'] = True
except (RuntimeError, IOError) as err:
ret['error'] = str(err)
ret['error'] = six.text_type(err)
ret['count'] = count

View File

@ -4,7 +4,7 @@ Support for the Amazon Simple Queue Service.
'''
# Import Python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import logging
# Import salt libs
@ -103,7 +103,7 @@ def receive_message(queue, region, num=1, opts=None, user=None):
queues = list_queues(region, opts, user)
url_map = _parse_queue_list(queues)
if queue not in url_map:
log.info('"{0}" queue does not exist.'.format(queue))
log.info('"%s" queue does not exist.', queue)
return ret
out = _run_aws('receive-message', region, opts, user, queue=url_map[queue],
@ -144,7 +144,7 @@ def delete_message(queue, region, receipthandle, opts=None, user=None):
queues = list_queues(region, opts, user)
url_map = _parse_queue_list(queues)
if queue not in url_map:
log.info('"{0}" queue does not exist.'.format(queue))
log.info('"%s" queue does not exist.', queue)
return False
out = _run_aws('delete-message', region, opts, user,

View File

@ -364,7 +364,7 @@ def make_src_pkg(dest_dir, spec, sources, env=None, template=None, saltenv='base
__salt__['cmd.run'](cmd, cwd=abspath_debname)
cmd = 'rm -f {0}'.format(os.path.basename(spec_pathfile))
__salt__['cmd.run'](cmd, cwd=abspath_debname)
cmd = 'debuild -S -uc -us'
cmd = 'debuild -S -uc -us -sa'
__salt__['cmd.run'](cmd, cwd=abspath_debname, python_shell=True)
cmd = 'rm -fR {0}'.format(abspath_debname)

View File

@ -1714,7 +1714,7 @@ def _regex_to_static(src, regex):
except Exception as ex:
raise CommandExecutionError("{0}: '{1}'".format(_get_error_message(ex), regex))
return src and src.group() or regex
return src and src.group().rstrip('\r') or regex
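The one-line `_regex_to_static` change above strips a trailing carriage return that a permissive regex can capture from CRLF-terminated files, which otherwise leaks `\r` into the replacement text. A standalone sketch of the helper's contract (names are illustrative; the real helper receives a precomputed match object):

```python
import re

def regex_to_static(src, regex):
    """Return the matched text for ``regex`` in ``src`` with any trailing
    CR stripped, or the pattern itself when nothing matches."""
    match = re.search(regex, src)
    return match and match.group().rstrip('\r') or regex

# On a CRLF line the old code kept the '\r' inside the match.
print(repr(regex_to_static('listen_port=80\r\n', r'listen_port=\d+\r?')))
```

This prints `'listen_port=80'`; before the fix the same call would have yielded `'listen_port=80\r'`, corrupting any line rewritten from it.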
def _assert_occurrence(src, probe, target, amount=1):

View File

@ -112,11 +112,11 @@ def config(group=None, neighbor=None, **kwargs):
{
'PEERS-GROUP-NAME':{
'type' : u'external',
'description' : u'Here we should have a nice description',
'apply_groups' : [u'BGP-PREFIX-LIMIT'],
'import_policy' : u'PUBLIC-PEER-IN',
'export_policy' : u'PUBLIC-PEER-OUT',
'type' : 'external',
'description' : 'Here we should have a nice description',
'apply_groups' : ['BGP-PREFIX-LIMIT'],
'import_policy' : 'PUBLIC-PEER-IN',
'export_policy' : 'PUBLIC-PEER-OUT',
'remove_private': True,
'multipath' : True,
'multihop_ttl' : 30,
@ -232,23 +232,23 @@ def neighbors(neighbor=None, **kwargs):
'up' : True,
'local_as' : 13335,
'remote_as' : 8121,
'local_address' : u'172.101.76.1',
'local_address' : '172.101.76.1',
'local_address_configured' : True,
'local_port' : 179,
'remote_address' : u'192.247.78.0',
'router_id' : u'192.168.0.1',
'remote_address' : '192.247.78.0',
'router_id' : '192.168.0.1',
'remote_port' : 58380,
'multihop' : False,
'import_policy' : u'4-NTT-TRANSIT-IN',
'export_policy' : u'4-NTT-TRANSIT-OUT',
'import_policy' : '4-NTT-TRANSIT-IN',
'export_policy' : '4-NTT-TRANSIT-OUT',
'input_messages' : 123,
'output_messages' : 13,
'input_updates' : 123,
'output_updates' : 5,
'messages_queued_out' : 23,
'connection_state' : u'Established',
'previous_connection_state' : u'EstabSync',
'last_event' : u'RecvKeepAlive',
'connection_state' : 'Established',
'previous_connection_state' : 'EstabSync',
'last_event' : 'RecvKeepAlive',
'suppress_4byte_as' : False,
'local_as_prepend' : False,
'holdtime' : 90,

View File

@ -273,7 +273,7 @@ def facts(**kwargs): # pylint: disable=unused-argument
.. code-block:: python
{
'os_version': u'13.3R6.5',
'os_version': '13.3R6.5',
'uptime': 10117140,
'interface_list': [
'lc-0/0/0',
@ -286,11 +286,11 @@ def facts(**kwargs): # pylint: disable=unused-argument
'gr-0/0/10',
'ip-0/0/10'
],
'vendor': u'Juniper',
'serial_number': u'JN131356FBFA',
'model': u'MX480',
'hostname': u're0.edge05.syd01',
'fqdn': u're0.edge05.syd01'
'vendor': 'Juniper',
'serial_number': 'JN131356FBFA',
'model': 'MX480',
'hostname': 're0.edge05.syd01',
'fqdn': 're0.edge05.syd01'
}
'''
@ -510,14 +510,14 @@ def cli(*commands, **kwargs): # pylint: disable=unused-argument
.. code-block:: python
{
u'show version and haiku': u'Hostname: re0.edge01.arn01
'show version and haiku': 'Hostname: re0.edge01.arn01
Model: mx480
Junos: 13.3R6.5
Help me, Obi-Wan
I just saw Episode Two
You're my only hope
',
u'show chassis fan' : u'Item Status RPM Measurement
'show chassis fan' : 'Item Status RPM Measurement
Top Rear Fan OK 3840 Spinning at intermediate-speed
Bottom Rear Fan OK 3840 Spinning at intermediate-speed
Top Middle Fan OK 3900 Spinning at intermediate-speed
@ -850,28 +850,28 @@ def ipaddrs(**kwargs): # pylint: disable=unused-argument
.. code-block:: python
{
u'FastEthernet8': {
u'ipv4': {
u'10.66.43.169': {
'FastEthernet8': {
'ipv4': {
'10.66.43.169': {
'prefix_length': 22
}
}
},
u'Loopback555': {
u'ipv4': {
u'192.168.1.1': {
'Loopback555': {
'ipv4': {
'192.168.1.1': {
'prefix_length': 24
}
},
u'ipv6': {
u'1::1': {
'ipv6': {
'1::1': {
'prefix_length': 64
},
u'2001:DB8:1::1': {
'2001:DB8:1::1': {
'prefix_length': 64
},
u'FE80::3': {
'prefix_length': u'N/A'
'FE80::3': {
'prefix_length': 'N/A'
}
}
}
@ -906,21 +906,21 @@ def interfaces(**kwargs): # pylint: disable=unused-argument
.. code-block:: python
{
u'Management1': {
'Management1': {
'is_up': False,
'is_enabled': False,
'description': u'',
'description': '',
'last_flapped': -1,
'speed': 1000,
'mac_address': u'dead:beef:dead',
'mac_address': 'dead:beef:dead',
},
u'Ethernet1':{
'Ethernet1':{
'is_up': True,
'is_enabled': True,
'description': u'foo',
'description': 'foo',
'last_flapped': 1429978575.1554043,
'speed': 1000,
'mac_address': u'beef:dead:beef',
'mac_address': 'beef:dead:beef',
}
}
'''
@ -957,17 +957,17 @@ def lldp(interface='', **kwargs): # pylint: disable=unused-argument
{
'TenGigE0/0/0/8': [
{
'parent_interface': u'Bundle-Ether8',
'interface_description': u'TenGigE0/0/0/8',
'remote_chassis_id': u'8c60.4f69.e96c',
'remote_system_name': u'switch',
'remote_port': u'Eth2/2/1',
'remote_port_description': u'Ethernet2/2/1',
'remote_system_description': u'Cisco Nexus Operating System (NX-OS) Software 7.1(0)N1(1a)
'parent_interface': 'Bundle-Ether8',
'interface_description': 'TenGigE0/0/0/8',
'remote_chassis_id': '8c60.4f69.e96c',
'remote_system_name': 'switch',
'remote_port': 'Eth2/2/1',
'remote_port_description': 'Ethernet2/2/1',
'remote_system_description': 'Cisco Nexus Operating System (NX-OS) Software 7.1(0)N1(1a)
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2015, Cisco Systems, Inc. All rights reserved.',
'remote_system_capab': u'B, R',
'remote_system_enable_capab': u'B'
'remote_system_capab': 'B, R',
'remote_system_enable_capab': 'B'
}
]
}

View File

@ -176,12 +176,12 @@ def stats(peer=None, **kwargs): # pylint: disable=unused-argument
[
{
'remote' : u'188.114.101.4',
'referenceid' : u'188.114.100.1',
'remote' : '188.114.101.4',
'referenceid' : '188.114.100.1',
'synchronized' : True,
'stratum' : 4,
'type' : u'-',
'when' : u'107',
'type' : '-',
'when' : '107',
'hostpoll' : 256,
'reachability' : 377,
'delay' : 164.228,

View File

@ -18,7 +18,7 @@ necessary):
.. versionadded:: 2016.3.0
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import re
@ -62,9 +62,9 @@ def _normalize_args(args):
return shlex.split(args)
if isinstance(args, (tuple, list)):
return [str(arg) for arg in args]
return [six.text_type(arg) for arg in args]
else:
return [str(args)]
return [six.text_type(args)]
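`_normalize_args` above accepts a shell-style string, a sequence, or a scalar and always returns a list of strings for the `prlctl` call. A self-contained sketch (`six.text_type` is `str` on Python 3):

```python
import shlex

def normalize_args(args):
    """Normalize prlctl arguments to a flat list of strings."""
    if isinstance(args, str):
        return shlex.split(args)       # respects quoting: 'a "b c"' -> ['a', 'b c']
    if isinstance(args, (tuple, list)):
        return [str(arg) for arg in args]
    return [str(args)]

print(normalize_args('--info "my vm"'))  # ['--info', 'my vm']
```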
def _find_guids(guid_string):
@ -462,7 +462,7 @@ def snapshot_id_to_name(name, snap_id, strict=False, runas=None):
name = salt.utils.locales.sdecode(name)
if not re.match(GUID_REGEX, snap_id):
raise SaltInvocationError(
u'Snapshot ID "{0}" is not a GUID'.format(salt.utils.locales.sdecode(snap_id))
'Snapshot ID "{0}" is not a GUID'.format(salt.utils.locales.sdecode(snap_id))
)
# Get the snapshot information of the snapshot having the requested ID
@ -471,7 +471,7 @@ def snapshot_id_to_name(name, snap_id, strict=False, runas=None):
# Parallels desktop returned no information for snap_id
if not len(info):
raise SaltInvocationError(
u'No snapshots for VM "{0}" have ID "{1}"'.format(name, snap_id)
'No snapshots for VM "{0}" have ID "{1}"'.format(name, snap_id)
)
# Try to interpret the information
@ -479,8 +479,7 @@ def snapshot_id_to_name(name, snap_id, strict=False, runas=None):
data = salt.utils.yaml.safe_load(info)
except salt.utils.yaml.YAMLError as err:
log.warning(
'Could not interpret snapshot data returned from prlctl: '
'{0}'.format(err)
'Could not interpret snapshot data returned from prlctl: %s', err
)
data = {}
@ -492,16 +491,16 @@ def snapshot_id_to_name(name, snap_id, strict=False, runas=None):
snap_name = ''
else:
log.warning(
u'Could not interpret snapshot data returned from prlctl: '
u'data is not formed as a dictionary: {0}'.format(data)
'Could not interpret snapshot data returned from prlctl: '
'data is not formed as a dictionary: %s', data
)
snap_name = ''
# Raise or return the result
if not snap_name and strict:
raise SaltInvocationError(
u'Could not find a snapshot name for snapshot ID "{0}" of VM '
u'"{1}"'.format(snap_id, name)
'Could not find a snapshot name for snapshot ID "{0}" of VM '
'"{1}"'.format(snap_id, name)
)
return salt.utils.locales.sdecode(snap_name)
@ -550,13 +549,13 @@ def snapshot_name_to_id(name, snap_name, strict=False, runas=None):
# non-singular names
if len(named_ids) == 0:
raise SaltInvocationError(
u'No snapshots for VM "{0}" have name "{1}"'.format(name, snap_name)
'No snapshots for VM "{0}" have name "{1}"'.format(name, snap_name)
)
elif len(named_ids) == 1:
return named_ids[0]
else:
multi_msg = (u'Multiple snapshots for VM "{0}" have name '
u'"{1}"'.format(name, snap_name))
multi_msg = ('Multiple snapshots for VM "{0}" have name '
'"{1}"'.format(name, snap_name))
if strict:
raise SaltInvocationError(multi_msg)
else:
@ -643,7 +642,7 @@ def list_snapshots(name, snap_name=None, tree=False, names=False, runas=None):
ret = '{0:<38} {1}\n'.format('Snapshot ID', 'Snapshot Name')
for snap_id in snap_ids:
snap_name = snapshot_id_to_name(name, snap_id, runas=runas)
ret += (u'{{{0}}} {1}\n'.format(snap_id, salt.utils.locales.sdecode(snap_name)))
ret += ('{{{0}}} {1}\n'.format(snap_id, salt.utils.locales.sdecode(snap_name)))
return ret
# Return information directly from parallels desktop

View File

@ -47,7 +47,7 @@ def __virtual__():
def __init__(self):
if HAS_DOCKER:
__context__['client'] = docker.from_env()
__context__['server_name'] = __grains__['id']
__context__['server_name'] = __grains__['id']
def swarm_tokens():

View File

@ -527,7 +527,7 @@ def destroy(name):
output_loglevel='info')
if ret['retcode'] == 0:
_erase_vm_info(name)
return u'Destroyed VM {0}'.format(name)
return 'Destroyed VM {0}'.format(name)
return False

View File

@ -106,10 +106,11 @@ list of hosts associated with that vCenter Server:
However, some functions should be used against ESXi hosts, not vCenter Servers.
Functionality such as getting a host's coredump network configuration should be
performed against a host and not a vCenter server. If the authentication information
you're using is against a vCenter server and not an ESXi host, you can provide the
host name that is associated with the vCenter server in the command, as a list, using
the ``host_names`` or ``esxi_host`` kwarg. For example:
performed against a host and not a vCenter server. If the authentication
information you're using is against a vCenter server and not an ESXi host, you
can provide the host name that is associated with the vCenter server in the
command, as a list, using the ``host_names`` or ``esxi_host`` kwarg. For
example:
.. code-block:: bash

View File

@ -3598,9 +3598,9 @@ def _checkAllAdmxPolicies(policy_class,
full_names = {}
if policy_filedata:
log.debug('POLICY CLASS {0} has file data'.format(policy_class))
policy_filedata_split = re.sub(salt.utils.to_bytes(r'\]{0}$'.format(chr(0))),
policy_filedata_split = re.sub(salt.utils.stringutils.to_bytes(r'\]{0}$'.format(chr(0))),
b'',
re.sub(salt.utils.to_bytes(r'^\[{0}'.format(chr(0))),
re.sub(salt.utils.stringutils.to_bytes(r'^\[{0}'.format(chr(0))),
b'',
re.sub(re.escape(module_policy_data.reg_pol_header.encode('utf-16-le')), b'', policy_filedata))
).split(']['.encode('utf-16-le'))
@ -3916,7 +3916,7 @@ def _checkAllAdmxPolicies(policy_class,
admx_policy,
elements_item,
check_deleted=False)
) + salt.utils.to_bytes(r'(?!\*\*delvals\.)'),
) + salt.utils.stringutils.to_bytes(r'(?!\*\*delvals\.)'),
policy_filedata):
configured_value = _getDataFromRegPolData(_processValueItem(child_item,
child_key,
@ -4118,8 +4118,8 @@ def _regexSearchKeyValueCombo(policy_data, policy_regpath, policy_regkey):
for a policy_regpath and policy_regkey combo
'''
if policy_data:
specialValueRegex = salt.utils.to_bytes(r'(\*\*Del\.|\*\*DelVals\.){0,1}')
_thisSearch = b''.join([salt.utils.to_bytes(r'\['),
specialValueRegex = salt.utils.stringutils.to_bytes(r'(\*\*Del\.|\*\*DelVals\.){0,1}')
_thisSearch = b''.join([salt.utils.stringutils.to_bytes(r'\['),
re.escape(policy_regpath),
b'\00;',
specialValueRegex,
@ -4235,7 +4235,7 @@ def _policyFileReplaceOrAppendList(string_list, policy_data):
if not policy_data:
policy_data = b''
# we are going to clean off the special pre-fixes, so we get only the valuename
specialValueRegex = salt.utils.to_bytes(r'(\*\*Del\.|\*\*DelVals\.){0,1}')
specialValueRegex = salt.utils.stringutils.to_bytes(r'(\*\*Del\.|\*\*DelVals\.){0,1}')
for this_string in string_list:
list_item_key = this_string.split(b'\00;')[0].lstrip(b'[')
list_item_value_name = re.sub(specialValueRegex,
@ -4263,7 +4263,7 @@ def _policyFileReplaceOrAppend(this_string, policy_data, append_only=False):
# we are going to clean off the special pre-fixes, so we get only the valuename
if not policy_data:
policy_data = b''
specialValueRegex = salt.utils.to_bytes(r'(\*\*Del\.|\*\*DelVals\.){0,1}')
specialValueRegex = salt.utils.stringutils.to_bytes(r'(\*\*Del\.|\*\*DelVals\.){0,1}')
item_key = None
item_value_name = None
data_to_replace = None

View File

@ -158,7 +158,7 @@ def latest_version(*names, **kwargs):
# check, whether latest available version
# is newer than latest installed version
if compare_versions(ver1=six.text_type(latest_available),
oper=six.text_type('>'),
oper='>',
ver2=six.text_type(latest_installed)):
log.debug('Upgrade of {0} from {1} to {2} '
'is available'.format(name,
@ -1131,7 +1131,10 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
version_num = six.text_type(version_num)
if not version_num:
# following can be version number or latest
# following can be version number or latest or Not Found
version_num = _get_latest_pkg_version(pkginfo)
if version_num == 'latest' and 'latest' not in pkginfo:
version_num = _get_latest_pkg_version(pkginfo)
# Check if the version is already installed
@ -1140,9 +1143,8 @@ def install(name=None, refresh=False, pkgs=None, **kwargs):
# Desired version number already installed
ret[pkg_name] = {'current': version_num}
continue
# If version number not installed, is the version available?
elif version_num not in pkginfo:
elif version_num != 'latest' and version_num not in pkginfo:
log.error('Version {0} not found for package '
'{1}'.format(version_num, pkg_name))
ret[pkg_name] = {'not found': version_num}

View File

@ -195,7 +195,10 @@ class PyWinUpdater(object):
# if this update is already downloaded, it doesn't need to be in
# the download_collection, so skip it unless the user mandates a re-download.
if self.skipDownloaded and update.IsDownloaded:
log.debug(u'Skipped update {0} - already downloaded'.format(update.title))
log.debug(
'Skipped update %s - already downloaded',
update.title
)
continue
# check this update's categories against the ones desired.
@ -206,7 +209,7 @@ class PyWinUpdater(object):
if self.categories is None or category.Name in self.categories:
# adds it to the list to be downloaded.
self.download_collection.Add(update)
log.debug(u'added update {0}'.format(update.title))
log.debug('added update %s', update.title)
# every update has 2 categories. this prevents the update
# from being added twice.
break
@ -294,10 +297,10 @@ class PyWinUpdater(object):
try:
for update in self.search_results.Updates:
if not update.EulaAccepted:
log.debug(u'Accepting EULA: {0}'.format(update.Title))
log.debug('Accepting EULA: %s', update.Title)
update.AcceptEula()
except Exception as exc:
log.info('Accepting Eula failed: {0}'.format(exc))
log.info('Accepting Eula failed: %s', exc)
return exc
# if the collection is empty, no point in starting the install process.
@ -309,7 +312,7 @@ class PyWinUpdater(object):
log.info('Installation of updates complete')
return True
except Exception as exc:
log.info('Installation failed: {0}'.format(exc))
log.info('Installation failed: %s', exc)
return exc
else:
log.info('no new updates.')
@ -371,7 +374,7 @@ class PyWinUpdater(object):
for update in self.download_collection:
if update.InstallationBehavior.CanRequestUserInput:
log.debug(u'Skipped update {0}'.format(update.title))
log.debug('Skipped update %s', update.title)
continue
# More fields can be added from https://msdn.microsoft.com/en-us/library/windows/desktop/aa386099(v=vs.85).aspx
update_com_fields = ['Categories', 'Deadline', 'Description',
@ -401,7 +404,7 @@ class PyWinUpdater(object):
'UpdateID': v.UpdateID}
update_dict[f] = v
updates.append(update_dict)
log.debug(u'added update {0}'.format(update.title))
log.debug('added update %s', update.title)
return updates
def GetSearchResults(self, fields=None):
@ -670,8 +673,8 @@ def download_updates(skips=None, retries=5, categories=None):
try:
comment = quidditch.GetDownloadResults()
except Exception as exc:
comment = u'could not get results, but updates were installed. {0}'.format(exc)
return u'Windows is up to date. \n{0}'.format(comment)
comment = 'could not get results, but updates were installed. {0}'.format(exc)
return 'Windows is up to date. \n{0}'.format(comment)
def install_updates(skips=None, retries=5, categories=None):

View File

@ -462,12 +462,12 @@ def getUserSid(username):
username = _to_unicode(username)
domain = win32api.GetComputerName()
if username.find(u'\\') != -1:
domain = username.split(u'\\')[0]
username = username.split(u'\\')[-1]
if username.find('\\') != -1:
domain = username.split('\\')[0]
username = username.split('\\')[-1]
domain = domain.upper()
return win32security.ConvertSidToStringSid(
win32security.LookupAccountName(None, domain + u'\\' + username)[0])
win32security.LookupAccountName(None, domain + '\\' + username)[0])
def setpassword(name, password):
@ -843,10 +843,13 @@ def _get_userprofile_from_registry(user, sid):
'''
profile_dir = __salt__['reg.read_value'](
'HKEY_LOCAL_MACHINE',
u'SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\ProfileList\\{0}'.format(sid),
'SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\ProfileList\\{0}'.format(sid),
'ProfileImagePath'
)['vdata']
log.debug(u'user {0} with sid={2} profile is located at "{1}"'.format(user, profile_dir, sid))
log.debug(
'user %s with sid=%s profile is located at "%s"',
user, sid, profile_dir
)
return profile_dir

View File

@ -61,8 +61,8 @@ BASE_STATUS = {
'outlog_by_level': None,
}
_URL_VERSIONS = {
1: u'http://downloads.buildout.org/1/bootstrap.py',
2: u'http://downloads.buildout.org/2/bootstrap.py',
1: 'http://downloads.buildout.org/1/bootstrap.py',
2: 'http://downloads.buildout.org/2/bootstrap.py',
}
DEFAULT_VER = 2
_logger = logging.getLogger(__name__)
@ -296,7 +296,7 @@ def _Popen(command,
directory = os.path.abspath(directory)
if isinstance(command, list):
command = ' '.join(command)
LOG.debug(u'Running {0}'.format(command))
LOG.debug('Running {0}'.format(command)) # future lint: disable=str-format-in-logging
if not loglevel:
loglevel = 'debug'
ret = __salt__['cmd.run_all'](
@ -501,7 +501,7 @@ def upgrade_bootstrap(directory='.',
else:
buildout_ver = _get_buildout_ver(directory)
booturl = _get_bootstrap_url(directory)
LOG.debug('Using {0}'.format(booturl))
LOG.debug('Using {0}'.format(booturl)) # future lint: disable=str-format-in-logging
# try to download an up-to-date bootstrap
# set defaulttimeout
# and add possible content
@ -823,24 +823,24 @@ def run_buildout(directory='.',
installed_cfg = os.path.join(directory, '.installed.cfg')
argv = []
if verbose:
LOG.debug(u'Buildout is running in verbose mode!')
LOG.debug('Buildout is running in verbose mode!')
argv.append('-vvvvvvv')
if not newest and os.path.exists(installed_cfg):
LOG.debug(u'Buildout is running in non newest mode!')
LOG.debug('Buildout is running in non newest mode!')
argv.append('-N')
if newest:
LOG.debug(u'Buildout is running in newest mode!')
LOG.debug('Buildout is running in newest mode!')
argv.append('-n')
if offline:
LOG.debug(u'Buildout is running in offline mode!')
LOG.debug('Buildout is running in offline mode!')
argv.append('-o')
if debug:
LOG.debug(u'Buildout is running in debug mode!')
LOG.debug('Buildout is running in debug mode!')
argv.append('-D')
cmds, outputs = [], []
if parts:
for part in parts:
LOG.info(u'Installing single part: {0}'.format(part))
LOG.info('Installing single part: {0}'.format(part)) # future lint: disable=str-format-in-logging
cmd = '{0} -c {1} {2} install {3}'.format(
bcmd, config, ' '.join(argv), part)
cmds.append(cmd)
@ -854,7 +854,7 @@ def run_buildout(directory='.',
use_vt=use_vt)
)
else:
LOG.info(u'Installing all buildout parts')
LOG.info('Installing all buildout parts')
cmd = '{0} -c {1} {2}'.format(
bcmd, config, ' '.join(argv))
cmds.append(cmd)

View File

@ -5,18 +5,18 @@ data in a compatible format) via an HTML email or HTML file.
.. versionadded:: 2017.7.0
Similar results can be achieved by using smtp returner with a custom template,
Similar results can be achieved by using the smtp returner with a custom template,
except an attempt at writing such a template for the complex data structure
returned by highstate function had proven to be a challenge, not to mention
that smtp module doesn't support sending HTML mail at the moment.
that the smtp module doesn't support sending HTML mail at the moment.
The main goal of this returner was producing an easy to read email similar
The main goal of this returner was to produce an easy to read email similar
to the output of highstate outputter used by the CLI.
This returner could be very useful during scheduled executions,
but could also be useful for communicating the results of a manual execution.
Returner configuration is controlled in a standart fashion either via
Returner configuration is controlled in a standard fashion either via
highstate group or an alternatively named group.
.. code-block:: bash
@ -29,7 +29,7 @@ To use the alternative configuration, append '--return_config config-name'
salt '*' state.highstate --return highstate --return_config simple
Here is an example of what configuration might look like:
Here is an example of what the configuration might look like:
.. code-block:: yaml
@ -49,18 +49,18 @@ Here is an example of what configuration might look like:
The *report_failures*, *report_changes*, and *report_everything* flags provide
filtering of the results. If you want an email to be sent every time, then
*reprot_everything* is your choice. If you want to be notified only when
*report_everything* is your choice. If you want to be notified only when
changes were successfully made use *report_changes*. And *report_failures* will
generate an email if there were failures.
The configuration allows you to run salt module function in case of
The configuration allows you to run a salt module function in case of
success (*success_function*) or failure (*failure_function*).
Any salt function, including ones defined in _module folder of your salt
repo could be used here. Their output will be displayed under the 'extra'
Any salt function, including ones defined in the _module folder of your salt
repo, could be used here and its output will be displayed under the 'extra'
heading of the email.
Supported values for *report_format* are html, json, and yaml. The later two
Supported values for *report_format* are html, json, and yaml. The latter two
are typically used for debugging purposes, but could be used for applying
a template at some later stage.
@ -70,7 +70,7 @@ the only other applicable option is *file_output*.
In case of smtp delivery, smtp_* options demonstrated by the example above
could be used to customize the email.
As you might have noticed success and failure subject contain {id} and {host}
As you might have noticed, the success and failure subjects contain {id} and {host}
values. Any other grain name could be used. As opposed to using
{{grains['id']}}, which will be rendered by the master and contain master's
values at the time of pillar generation, these will contain minion values at

View File

@ -1959,6 +1959,7 @@ class State(object):
# run the state call in parallel, but only if not in a prereq
ret = self.call_parallel(cdata, low)
else:
self.format_slots(cdata)
ret = self.states[cdata['full']](*cdata['args'],
**cdata['kwargs'])
self.states.inject_globals = {}
@ -2068,6 +2069,42 @@ class State(object):
low['retry']['splay'])])
return ret
def __eval_slot(self, slot):
log.debug('Evaluating slot: %s', slot)
fmt = slot.split(':', 2)
if len(fmt) != 3:
log.warning('Malformed slot: %s', slot)
return slot
if fmt[1] != 'salt':
log.warning('Malformed slot: %s', slot)
log.warning('Only execution modules are currently supported in slots. This means a slot '
'should start with "__slot__:salt:"')
return slot
fun, args, kwargs = salt.utils.args.parse_function(fmt[2])
if not fun or fun not in self.functions:
log.warning('Malformed slot: %s', slot)
log.warning('Execution module should be specified in a function call format: '
'test.arg(\'arg\', kw=\'kwarg\')')
return slot
log.debug('Calling slot: %s(%s, %s)', fun, args, kwargs)
return self.functions[fun](*args, **kwargs)
def format_slots(self, cdata):
'''
Read in the arguments from the low-level slot syntax to make a last-minute
runtime call to gather relevant data for the specific routine
'''
# __slot__:salt.cmd.run(foo, bar, baz=qux)
ctx = (('args', enumerate(cdata['args'])),
('kwargs', cdata['kwargs'].items()))
for atype, avalues in ctx:
for ind, arg in avalues:
arg = sdecode(arg)
if not isinstance(arg, six.string_types) or not arg.startswith('__slot__:'):
# Not a slot, skip it
continue
cdata[atype][ind] = self.__eval_slot(arg)
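The slot machinery above rewrites any string argument of the form `__slot__:salt:mod.fun(...)` into the return value of that execution-module call. A rough standalone sketch of the idea — ignoring kwargs, quoting, and nested brackets, which `parse_function` handles in the real code, and using a hypothetical `test.echo` function as a stand-in for an execution module:

```python
# Minimal sketch (not Salt's actual API) of resolving a slot string:
# look the named function up in a registry, call it, and substitute
# the return value; malformed slots are kept as literal strings.

def eval_slot(slot, functions):
    parts = slot.split(':', 2)
    if len(parts) != 3 or parts[0] != '__slot__' or parts[1] != 'salt':
        return slot  # malformed slot: keep the literal string, as above
    fun, _, argstr = parts[2].partition('(')
    if fun not in functions:
        return slot  # unknown function: keep the literal string
    args = [a.strip() for a in argstr.rstrip(')').split(',') if a.strip()]
    return functions[fun](*args)

# 'test.echo' is a hypothetical stand-in for an execution module function
funcs = {'test.echo': lambda *a: '/'.join(a)}
print(eval_slot('__slot__:salt:test.echo(foo, bar)', funcs))  # foo/bar
print(eval_slot('not-a-slot', funcs))                         # not-a-slot
```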
def verify_retry_data(self, retry_data):
'''
verifies the specified retry data

View File

@ -26,7 +26,7 @@ See also the module documentation
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import logging
log = logging.getLogger(__name__)

View File

@ -26,6 +26,7 @@ file from the default location, set the following in your minion config:
aliases.file: /my/alias/file
'''
from __future__ import absolute_import, print_function, unicode_literals
def present(name, target):

View File

@ -26,6 +26,7 @@ Control the alternatives system
- path: {{ my_hadoop_conf }}
'''
from __future__ import absolute_import, print_function, unicode_literals
# Define a function alias in order not to shadow built-in's
__func_alias__ = {

View File

@ -35,7 +35,7 @@ state:
- state: installed
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import sys
try:
import ansible

View File

@ -37,15 +37,14 @@ the above word between angle brackets (<>).
- FollowSymlinks
AllowOverride: All
'''
from __future__ import with_statement, print_function
from __future__ import absolute_import
from __future__ import absolute_import, with_statement, print_function, unicode_literals
# Import python libs
import os.path
import os
# Import Salt libs
import salt.utils.files
import salt.utils.stringutils
def __virtual__():
@ -62,7 +61,7 @@ def configfile(name, config):
current_configs = ''
if os.path.exists(name):
with salt.utils.files.fopen(name) as config_file:
current_configs = config_file.read()
current_configs = salt.utils.stringutils.to_unicode(config_file.read())
if configs == current_configs.strip():
ret['result'] = True
@ -79,7 +78,7 @@ def configfile(name, config):
try:
with salt.utils.files.fopen(name, 'w') as config_file:
print(configs, file=config_file)
print(salt.utils.stringutils.to_str(configs), file=config_file)
ret['changes'] = {
'old': current_configs,
'new': configs

View File

@ -16,7 +16,7 @@ Enable and disable apache confs.
apache_conf.disabled:
- name: security
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
from salt.ext import six
# Import salt libs

View File

@ -18,10 +18,10 @@ Enable and disable apache modules.
'''
# Import Python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import salt libs
from salt.ext.six import string_types
from salt.ext import six
def __virtual__():
@ -52,14 +52,14 @@ def enabled(name):
ret['result'] = None
return ret
status = __salt__['apache.a2enmod'](name)['Status']
if isinstance(status, string_types) and 'enabled' in status:
if isinstance(status, six.string_types) and 'enabled' in status:
ret['result'] = True
ret['changes']['old'] = None
ret['changes']['new'] = name
else:
ret['result'] = False
ret['comment'] = 'Failed to enable {0} Apache module'.format(name)
if isinstance(status, string_types):
if isinstance(status, six.string_types):
ret['comment'] = ret['comment'] + ' ({0})'.format(status)
return ret
else:
@ -88,14 +88,14 @@ def disabled(name):
ret['result'] = None
return ret
status = __salt__['apache.a2dismod'](name)['Status']
if isinstance(status, string_types) and 'disabled' in status:
if isinstance(status, six.string_types) and 'disabled' in status:
ret['result'] = True
ret['changes']['old'] = name
ret['changes']['new'] = None
else:
ret['result'] = False
ret['comment'] = 'Failed to disable {0} Apache module'.format(name)
if isinstance(status, string_types):
if isinstance(status, six.string_types):
ret['comment'] = ret['comment'] + ' ({0})'.format(status)
return ret
else:

View File

@ -16,10 +16,10 @@ Enable and disable apache sites.
apache_site.disabled:
- name: default
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import salt libs
from salt.ext.six import string_types
from salt.ext import six
def __virtual__():
@ -48,14 +48,14 @@ def enabled(name):
ret['result'] = None
return ret
status = __salt__['apache.a2ensite'](name)['Status']
if isinstance(status, string_types) and 'enabled' in status:
if isinstance(status, six.string_types) and 'enabled' in status:
ret['result'] = True
ret['changes']['old'] = None
ret['changes']['new'] = name
else:
ret['result'] = False
ret['comment'] = 'Failed to enable {0} Apache site'.format(name)
if isinstance(status, string_types):
if isinstance(status, six.string_types):
ret['comment'] = ret['comment'] + ' ({0})'.format(status)
return ret
else:
@ -82,14 +82,14 @@ def disabled(name):
ret['result'] = None
return ret
status = __salt__['apache.a2dissite'](name)['Status']
if isinstance(status, string_types) and 'disabled' in status:
if isinstance(status, six.string_types) and 'disabled' in status:
ret['result'] = True
ret['changes']['old'] = name
ret['changes']['new'] = None
else:
ret['result'] = False
ret['comment'] = 'Failed to disable {0} Apache site'.format(name)
if isinstance(status, string_types):
if isinstance(status, six.string_types):
ret['comment'] = ret['comment'] + ' ({0})'.format(status)
return ret
else:

View File

@ -3,7 +3,7 @@
Package management operations specific to APT- and DEB-based systems
====================================================================
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import logging

View File

@ -6,7 +6,7 @@ Extract an archive
'''
# Import Python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import errno
import logging
import os
@ -821,7 +821,7 @@ def extracted(name,
return ret
if options is not None and not isinstance(options, six.string_types):
options = str(options)
options = six.text_type(options)
strip_components = None
if options and archive_format == 'tar':
@ -962,7 +962,7 @@ def extracted(name,
ret['comment'] = msg
return ret
else:
log.debug('file.cached: {0}'.format(result))
log.debug('file.cached: %s', result)
if result['result']:
# Get the path of the file in the minion cache
@ -1233,7 +1233,7 @@ def extracted(name,
__states__['file.directory'](name, user=user, makedirs=True)
created_destdir = True
log.debug('Extracting {0} to {1}'.format(cached, name))
log.debug('Extracting %s to %s', cached, name)
try:
if archive_format == 'zip':
if use_cmd_unzip:

View File

@ -5,8 +5,9 @@ This state downloads artifacts from artifactory.
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import logging
from salt.ext import six
log = logging.getLogger(__name__)
@ -87,15 +88,15 @@ def downloaded(name, artifact, target_dir='/tmp', target_file=None, use_literal_
fetch_result = __fetch_from_artifactory(artifact, target_dir, target_file, use_literal_group_id)
except Exception as exc:
ret['result'] = False
ret['comment'] = str(exc)
ret['comment'] = six.text_type(exc)
return ret
log.debug("fetch_result=%s", str(fetch_result))
log.debug('fetch_result = %s', fetch_result)
ret['result'] = fetch_result['status']
ret['comment'] = fetch_result['comment']
ret['changes'] = fetch_result['changes']
log.debug("ret=%s", str(ret))
log.debug('ret = %s', ret)
return ret

View File

@ -5,7 +5,7 @@ Configuration disposable regularly scheduled tasks for at.
The at state can add disposable, regularly scheduled tasks for your system.
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import Python libs
import logging

View File

@ -27,7 +27,7 @@ Augeas_ can be used to manage configuration files.
known to resolve the issue.
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import re
@ -38,6 +38,8 @@ import difflib
# Import Salt libs
import salt.utils.args
import salt.utils.files
import salt.utils.stringutils
from salt.ext import six
from salt.modules.augeas_cfg import METHOD_MAP
@ -94,13 +96,13 @@ def _check_filepath(changes):
raise ValueError(error)
filename = filename_
except (ValueError, IndexError) as err:
log.error(str(err))
log.error(err)
if 'error' not in locals():
error = 'Invalid formatted command, ' \
'see debug log for details: {0}' \
.format(change_)
else:
error = str(err)
error = six.text_type(err)
raise ValueError(error)
filename = _workout_filename(filename)
@ -273,7 +275,7 @@ def change(name, context=None, changes=None, lens=None,
try:
filename = _check_filepath(changes)
except ValueError as err:
ret['comment'] = 'Error: {0}'.format(str(err))
ret['comment'] = 'Error: {0}'.format(err)
return ret
else:
filename = re.sub('^/files|/$', '', context)
@ -287,10 +289,10 @@ def change(name, context=None, changes=None, lens=None,
return ret
old_file = []
if filename is not None:
if os.path.isfile(filename):
with salt.utils.files.fopen(filename, 'r') as file_:
old_file = file_.readlines()
if filename is not None and os.path.isfile(filename):
with salt.utils.files.fopen(filename, 'r') as file_:
old_file = [salt.utils.stringutils.to_unicode(x)
for x in file_.readlines()]
result = __salt__['augeas.execute'](
context=context, lens=lens,
@ -301,10 +303,12 @@ def change(name, context=None, changes=None, lens=None,
ret['comment'] = 'Error: {0}'.format(result['error'])
return ret
if old_file:
if filename is not None and os.path.isfile(filename):
with salt.utils.files.fopen(filename, 'r') as file_:
new_file = [salt.utils.stringutils.to_unicode(x)
for x in file_.readlines()]
diff = ''.join(
difflib.unified_diff(old_file, file_.readlines(), n=0))
difflib.unified_diff(old_file, new_file, n=0))
if diff:
ret['comment'] = 'Changes have been saved'

View File

@ -15,6 +15,7 @@ information.
aws_sqs.exists:
- region: eu-west-1
'''
from __future__ import absolute_import, print_function, unicode_literals
def __virtual__():

View File

@ -24,7 +24,7 @@ for this the :mod:`module.wait <salt.states.module.wait>` state can be used:
fetch_out_of_band:
module.run:
git.fetch:
- git.fetch:
- cwd: /path/to/my/repo
- user: myuser
- opts: '--all'
@ -35,7 +35,7 @@ Another example:
mine.send:
module.run:
network.ip_addrs:
- network.ip_addrs:
- interface: eth0
And more complex example:
@ -44,7 +44,7 @@ And more complex example:
eventsviewer:
module.run:
task.create_task:
- task.create_task:
- name: events-viewer
- user_name: System
- action_type: Execute
@ -158,7 +158,7 @@ functions at once the following way:
call_something:
module.run:
git.fetch:
- git.fetch:
- cwd: /path/to/my/repo
- user: myuser
- opts: '--all'

View File

@ -203,7 +203,8 @@ class AESReqServerMixin(object):
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return {'enc': 'clear',
'load': {'ret': 'full'}}
@ -234,7 +235,8 @@ class AESReqServerMixin(object):
eload = {'result': False,
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return {'enc': 'clear',
'load': {'ret': False}}
@ -254,7 +256,8 @@ class AESReqServerMixin(object):
'id': load['id'],
'act': 'denied',
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return {'enc': 'clear',
'load': {'ret': False}}
@ -266,7 +269,8 @@ class AESReqServerMixin(object):
eload = {'result': False,
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return {'enc': 'clear',
'load': {'ret': False}}
@ -295,7 +299,8 @@ class AESReqServerMixin(object):
'act': key_act,
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return ret
elif os.path.isfile(pubfn_pend):
@ -316,7 +321,8 @@ class AESReqServerMixin(object):
'act': 'reject',
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return ret
elif not auto_sign:
@ -338,7 +344,8 @@ class AESReqServerMixin(object):
'id': load['id'],
'act': 'denied',
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return {'enc': 'clear',
'load': {'ret': False}}
else:
@ -351,7 +358,8 @@ class AESReqServerMixin(object):
'act': 'pend',
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return {'enc': 'clear',
'load': {'ret': True}}
else:
@ -372,7 +380,8 @@ class AESReqServerMixin(object):
eload = {'result': False,
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return {'enc': 'clear',
'load': {'ret': False}}
else:
@ -384,7 +393,8 @@ class AESReqServerMixin(object):
eload = {'result': False,
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return {'enc': 'clear',
'load': {'ret': False}}
@ -478,5 +488,6 @@ class AESReqServerMixin(object):
'act': 'accept',
'id': load['id'],
'pub': load['pub']}
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
if self.opts.get('auth_events') is True:
self.event.fire_event(eload, salt.utils.event.tagify(prefix='auth'))
return ret

View File

@ -410,9 +410,7 @@ class AsyncZeroMQPubChannel(salt.transport.mixins.auth.AESPubClientMixin, salt.t
self._monitor.stop()
self._monitor = None
if hasattr(self, '_stream'):
# TODO: Optionally call stream.close() on newer pyzmq? Its broken on some
self._stream.io_loop.remove_handler(self._stream.socket)
self._stream.socket.close(0)
self._stream.close(0)
elif hasattr(self, '_socket'):
self._socket.close(0)
if hasattr(self, 'context') and self.context.closed is False:
@ -968,14 +966,9 @@ class AsyncReqMessageClient(object):
# TODO: timeout all in-flight sessions, or error
def destroy(self):
if hasattr(self, 'stream') and self.stream is not None:
# TODO: Optionally call stream.close() on newer pyzmq? It is broken on some.
if self.stream.socket:
self.stream.socket.close()
self.stream.io_loop.remove_handler(self.stream.socket)
# set this to None, more hacks for messed up pyzmq
self.stream.socket = None
self.stream.close()
self.socket = None
self.stream = None
self.socket.close()
if self.context.closed is False:
self.context.term()

View File

@ -104,9 +104,9 @@
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import copy
import logging
from copy import copy
# Import Salt libs
from salt.utils.odict import OrderedDict
@ -215,10 +215,10 @@ def aggregate(obj_a, obj_b, level=False, map_class=Map, sequence_class=Sequence)
if isinstance(obj_a, dict) and isinstance(obj_b, dict):
if isinstance(obj_a, Aggregate) and isinstance(obj_b, Aggregate):
# deep merging is more or less a.update(obj_b)
response = copy(obj_a)
response = copy.copy(obj_a)
else:
# introspection on obj_b keys only
response = copy(obj_b)
response = copy.copy(obj_b)
for key, value in six.iteritems(obj_b):
if key in obj_a:
@ -234,7 +234,7 @@ def aggregate(obj_a, obj_b, level=False, map_class=Map, sequence_class=Sequence)
response.append(value)
return response
response = copy(obj_b)
response = copy.copy(obj_b)
if isinstance(obj_a, Aggregate) or isinstance(obj_b, Aggregate):
log.info('only one value marked as aggregate. keep `obj_b` value')

View File

@ -4,7 +4,7 @@ Functions used for CLI argument handling
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import copy
import fnmatch
import inspect
@ -69,9 +69,9 @@ def condition_input(args, kwargs):
'''
ret = []
for arg in args:
if (six.PY3 and isinstance(arg, six.integer_types) and salt.utils.jid.is_jid(str(arg))) or \
if (six.PY3 and isinstance(arg, six.integer_types) and salt.utils.jid.is_jid(six.text_type(arg))) or \
(six.PY2 and isinstance(arg, long)): # pylint: disable=incompatible-py3-code
ret.append(str(arg))
ret.append(six.text_type(arg))
else:
ret.append(arg)
if isinstance(kwargs, dict) and kwargs:
@ -342,7 +342,7 @@ def split_input(val):
try:
return [x.strip() for x in val.split(',')]
except AttributeError:
return [x.strip() for x in str(val).split(',')]
return [x.strip() for x in six.text_type(val).split(',')]
def test_mode(**kwargs):
@ -501,3 +501,62 @@ def format_call(fun,
# Lets pack the current extra kwargs as template context
ret.setdefault('context', {}).update(extra)
return ret
def parse_function(s):
'''
Parse a python-like function call syntax.
For example: module.function(arg, arg, kw=arg, kw=arg)
This function only parses out the function name and the argument list, taking care of
quoting and bracketing. It doesn't check identifiers or other syntax for validity.
Returns a tuple of three values: function name string, arguments list and keyword arguments
dictionary.
'''
sh = shlex.shlex(s, posix=True)
sh.escapedquotes = '"\''
word = []
args = []
kwargs = {}
brackets = []
key = None
token = None
for token in sh:
if token == '(':
break
word.append(token)
if not word or token != '(':
return None, None, None
fname = ''.join(word)
word = []
good = False
for token in sh:
if token in '[{(':
word.append(token)
brackets.append(token)
elif (token == ',' or token == ')') and not brackets:
if key:
kwargs[key] = ''.join(word)
elif word:
args.append(''.join(word))
if token == ')':
good = True
break
key = None
word = []
elif token in ']})':
if not brackets or token != {'[': ']', '{': '}', '(': ')'}[brackets.pop()]:
break
word.append(token)
elif token == '=' and not brackets:
key = ''.join(word)
word = []
continue
else:
word.append(token)
if good:
return fname, args, kwargs
else:
return None, None, None
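For orientation, the shape of the result can be sketched with a much simpler re-parse — unlike the bracket-aware `shlex` tokenizer above, this assumes no commas or `=` characters inside quoted arguments:

```python
import shlex

def parse_call(s):
    # Simplified take on parse_function: split "mod.fun(a, b, k=v)"
    # into (name, args, kwargs), using shlex so quotes are stripped.
    name, _, rest = s.partition('(')
    if not rest.endswith(')'):
        return None, None, None  # same failure contract as parse_function
    args, kwargs = [], {}
    for tok in rest[:-1].split(','):
        tok = tok.strip()
        if not tok:
            continue
        key, eq, val = tok.partition('=')
        if eq:
            kwargs[key.strip()] = ' '.join(shlex.split(val))
        else:
            args.append(' '.join(shlex.split(tok)))
    return name, args, kwargs

print(parse_call("test.arg('hello world', kw='x')"))
# ('test.arg', ['hello world'], {'kw': 'x'})
```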

View File

@ -3,7 +3,7 @@
Helpers/utils for working with tornado async stuff
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import tornado.ioloop
import tornado.concurrent
@ -94,10 +94,8 @@ class SyncWrapper(object):
# their associated io_loop is closed to allow for proper
# cleanup.
self.async.close()
self.io_loop.close()
# Other things should be deallocated after the io_loop closes.
# See Issue #26889.
del self.async
self.io_loop.close()
del self.io_loop
elif hasattr(self, 'io_loop'):
self.io_loop.close()

View File

@ -5,7 +5,7 @@ atomic way
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import os
import tempfile
import sys

View File

@ -8,7 +8,7 @@ This is a base library used by a number of AWS services.
:depends: requests
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import Python libs
import sys
@ -37,7 +37,7 @@ from salt.ext.six.moves import map, range, zip
from salt.ext.six.moves.urllib.parse import urlencode, urlparse
# pylint: enable=import-error,redefined-builtin,no-name-in-module
LOG = logging.getLogger(__name__)
log = logging.getLogger(__name__)
DEFAULT_LOCATION = 'us-east-1'
DEFAULT_AWS_API_VERSION = '2014-10-01'
AWS_RETRY_CODES = [
@ -195,7 +195,7 @@ def assumed_creds(prov_dict, role_arn, location=None):
verify=True)
if result.status_code >= 400:
LOG.info('AssumeRole response: {0}'.format(result.content))
log.info('AssumeRole response: %s', result.content)
result.raise_for_status()
resp = result.json()
@ -410,12 +410,12 @@ def query(params=None, setname=None, requesturl=None, location=None,
'like https://some.aws.endpoint/?args').format(
requesturl
)
LOG.error(endpoint_err)
log.error(endpoint_err)
if return_url is True:
return {'error': endpoint_err}, requesturl
return {'error': endpoint_err}
LOG.debug('Using AWS endpoint: {0}'.format(endpoint))
log.debug('Using AWS endpoint: %s', endpoint)
method = 'GET'
aws_api_version = prov_dict.get(
@ -443,21 +443,14 @@ def query(params=None, setname=None, requesturl=None, location=None,
attempts = 5
while attempts > 0:
LOG.debug('AWS Request: {0}'.format(requesturl))
LOG.trace('AWS Request Parameters: {0}'.format(params_with_headers))
log.debug('AWS Request: %s', requesturl)
log.trace('AWS Request Parameters: %s', params_with_headers)
try:
result = requests.get(requesturl, headers=headers, params=params_with_headers)
LOG.debug(
'AWS Response Status Code: {0}'.format(
result.status_code
)
)
LOG.trace(
'AWS Response Text: {0}'.format(
result.text.encode(
result.encoding if result.encoding else 'utf-8'
)
)
log.debug('AWS Response Status Code: %s', result.status_code)
log.trace(
'AWS Response Text: %s',
result.text.encode(result.encoding if result.encoding else 'utf-8')
)
result.raise_for_status()
break
@ -469,29 +462,26 @@ def query(params=None, setname=None, requesturl=None, location=None,
err_code = data.get('Errors', {}).get('Error', {}).get('Code', '')
if attempts > 0 and err_code and err_code in AWS_RETRY_CODES:
attempts -= 1
LOG.error(
'AWS Response Status Code and Error: [{0} {1}] {2}; '
'Attempts remaining: {3}'.format(
exc.response.status_code, exc, data, attempts
)
log.error(
'AWS Response Status Code and Error: [%s %s] %s; '
'Attempts remaining: %s',
exc.response.status_code, exc, data, attempts
)
# Wait a bit before continuing to prevent throttling
time.sleep(2)
continue
LOG.error(
'AWS Response Status Code and Error: [{0} {1}] {2}'.format(
exc.response.status_code, exc, data
)
log.error(
'AWS Response Status Code and Error: [%s %s] %s',
exc.response.status_code, exc, data
)
if return_url is True:
return {'error': data}, requesturl
return {'error': data}
else:
LOG.error(
'AWS Response Status Code and Error: [{0} {1}] {2}'.format(
exc.response.status_code, exc, data
)
log.error(
'AWS Response Status Code and Error: [%s %s] %s',
exc.response.status_code, exc, data
)
if return_url is True:
return {'error': data}, requesturl
@ -536,7 +526,7 @@ def get_region_from_metadata():
global __Location__
if __Location__ == 'do-not-get-from-metadata':
LOG.debug('Previously failed to get AWS region from metadata. Not trying again.')
log.debug('Previously failed to get AWS region from metadata. Not trying again.')
return None
# Cached region
@ -550,7 +540,7 @@ def get_region_from_metadata():
proxies={'http': ''}, timeout=AWS_METADATA_TIMEOUT,
)
except requests.exceptions.RequestException:
LOG.warning('Failed to get AWS region from instance metadata.', exc_info=True)
log.warning('Failed to get AWS region from instance metadata.', exc_info=True)
# Do not try again
__Location__ = 'do-not-get-from-metadata'
return None
@ -560,7 +550,7 @@ def get_region_from_metadata():
__Location__ = region
return __Location__
except (ValueError, KeyError):
LOG.warning('Failed to decode JSON from instance metadata.')
log.warning('Failed to decode JSON from instance metadata.')
return None
return None

View File

@ -35,7 +35,7 @@ Example Usage:
'''
# Import Python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import hashlib
import logging
import sys
@ -178,8 +178,10 @@ def get_connection(service, module=None, region=None, key=None, keyid=None,
conn = __utils__['boto.get_connection']('ec2', profile='custom_profile')
'''
module = module or service
module, submodule = ('boto.' + module).rsplit('.', 1)
# future lint: disable=blacklisted-function
module = str(module or service)
module, submodule = (str('boto.') + module).rsplit(str('.'), 1)
# future lint: enable=blacklisted-function
svc_mod = getattr(__import__(module, fromlist=[submodule]), submodule)

View File

@ -35,7 +35,7 @@ Example Usage:
'''
# Import Python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import hashlib
import logging
import sys
@ -120,7 +120,7 @@ def _get_profile(service, region, key, keyid, profile):
if not region:
region = 'us-east-1'
log.info('Assuming default region {0}'.format(region))
log.info('Assuming default region %s', region)
if not key and _option(service + '.key'):
key = _option(service + '.key')
@ -260,8 +260,8 @@ def get_error(e):
aws['status'] = e.status
if hasattr(e, 'reason'):
aws['reason'] = e.reason
if str(e) != '':
aws['message'] = str(e)
if six.text_type(e) != '':
aws['message'] = six.text_type(e)
if hasattr(e, 'error_code') and e.error_code is not None:
aws['code'] = e.error_code
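The `str` → `six.text_type` swaps above pin down the "text" type across Python 2 and 3. Without the six dependency, the same alias can be sketched as follows (a minimal compat shim, not the real six implementation):

```python
import sys

# Equivalent of six.text_type: unicode on Python 2, str on Python 3.
if sys.version_info[0] >= 3:
    text_type = str
else:
    text_type = unicode  # noqa: F821  (defined on Python 2 only)

# Converting an exception to text, as in get_error() above.
err = ValueError('bad input')
message = text_type(err)
```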

View File

@ -21,6 +21,8 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from __future__ import absolute_import, print_function, unicode_literals
def __virtual__():
return True

View File

@ -3,7 +3,7 @@
In-memory caching used by Salt
'''
# Import Python libs
from __future__ import absolute_import, print_function
from __future__ import absolute_import, print_function, unicode_literals
import os
import re
import time
@ -17,6 +17,7 @@ except ImportError:
# Import salt libs
import salt.config
import salt.payload
import salt.utils.data
import salt.utils.dictupdate
import salt.utils.files
@ -37,7 +38,7 @@ class CacheFactory(object):
'''
@classmethod
def factory(cls, backend, ttl, *args, **kwargs):
log.info('Factory backend: {0}'.format(backend))
log.info('Factory backend: %s', backend)
if backend == 'memory':
return CacheDict(ttl, *args, **kwargs)
elif backend == 'disk':
@ -142,7 +143,7 @@ class CacheDisk(CacheDict):
if not HAS_MSGPACK or not os.path.exists(self._path):
return
with salt.utils.files.fopen(self._path, 'rb') as fp_:
cache = msgpack.load(fp_, encoding=__salt_system_encoding__)
cache = salt.utils.data.decode(msgpack.load(fp_, encoding=__salt_system_encoding__))
if "CacheDisk_cachetime" in cache: # new format
self._dict = cache["CacheDisk_data"]
self._key_cache_time = cache["CacheDisk_cachetime"]
@ -152,7 +153,7 @@ class CacheDisk(CacheDict):
for key in self._dict:
self._key_cache_time[key] = timestamp
if log.isEnabledFor(logging.DEBUG):
log.debug('Disk cache retrieved: {0}'.format(cache))
log.debug('Disk cache retrieved: %s', cache)
def _write(self):
'''
@ -295,7 +296,7 @@ class ContextCache(object):
Retrieve a context cache from disk
'''
with salt.utils.files.fopen(self.cache_path, 'rb') as cache:
return self.serial.load(cache)
return salt.utils.data.decode(self.serial.load(cache))
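`salt.utils.data.decode` recursively converts the bytes keys and values that `msgpack.load`/`serial.load` can return into text. A rough stdlib approximation of that behavior (a hypothetical helper, not the actual Salt implementation):

```python
def decode(data, encoding='utf-8'):
    """Recursively decode bytes inside dicts, lists, and tuples to str."""
    if isinstance(data, bytes):
        return data.decode(encoding)
    if isinstance(data, dict):
        return {decode(k, encoding): decode(v, encoding)
                for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return type(data)(decode(item, encoding) for item in data)
    return data


# Shape resembling a deserialized msgpack cache payload.
raw = {b'CacheDisk_data': {b'key': [b'a', b'b']}, b'ts': 12345}
clean = decode(raw)
```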
def context_cache(func):

View File

@ -4,7 +4,7 @@ Utility functions for salt.cloud
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import errno
import os
import stat
@ -48,9 +48,11 @@ import salt.loader
import salt.template
import salt.utils.compat
import salt.utils.crypt
import salt.utils.data
import salt.utils.event
import salt.utils.files
import salt.utils.platform
import salt.utils.stringutils
import salt.utils.versions
import salt.utils.vt
import salt.utils.yaml
@ -107,15 +109,15 @@ def __render_script(path, vm_=None, opts=None, minion=''):
'''
Return the rendered script
'''
log.info('Rendering deploy script: {0}'.format(path))
log.info('Rendering deploy script: %s', path)
try:
with salt.utils.files.fopen(path, 'r') as fp_:
template = Template(fp_.read())
return str(template.render(opts=opts, vm=vm_, minion=minion))
template = Template(salt.utils.stringutils.to_unicode(fp_.read()))
return six.text_type(template.render(opts=opts, vm=vm_, minion=minion))
except AttributeError:
# Specified renderer was not found
with salt.utils.files.fopen(path, 'r') as fp_:
return fp_.read()
return six.text_type(fp_.read())
def os_script(os_, vm_=None, opts=None, minion=''):
@ -162,9 +164,9 @@ def gen_keys(keysize=2048):
priv_path = os.path.join(tdir, 'minion.pem')
pub_path = os.path.join(tdir, 'minion.pub')
with salt.utils.files.fopen(priv_path) as fp_:
priv = fp_.read()
priv = salt.utils.stringutils.to_unicode(fp_.read())
with salt.utils.files.fopen(pub_path) as fp_:
pub = fp_.read()
pub = salt.utils.stringutils.to_unicode(fp_.read())
shutil.rmtree(tdir)
return priv, pub
@ -182,7 +184,7 @@ def accept_key(pki_dir, pub, id_):
key = os.path.join(pki_dir, 'minions', id_)
with salt.utils.files.fopen(key, 'w+') as fp_:
fp_.write(pub)
fp_.write(salt.utils.stringutils.to_str(pub))
oldkey = os.path.join(pki_dir, 'minions_pre', id_)
if os.path.isfile(oldkey):
@ -198,7 +200,7 @@ def remove_key(pki_dir, id_):
key = os.path.join(pki_dir, 'minions', id_)
if os.path.isfile(key):
os.remove(key)
log.debug('Deleted \'{0}\''.format(key))
log.debug('Deleted \'%s\'', key)
def rename_key(pki_dir, id_, new_id):
@ -370,7 +372,7 @@ def bootstrap(vm_, opts=None):
# If we haven't generated any keys yet, do so now.
if 'pub_key' not in vm_ and 'priv_key' not in vm_:
log.debug('Generating keys for \'{0[name]}\''.format(vm_))
log.debug('Generating keys for \'%s\'', vm_['name'])
vm_['priv_key'], vm_['pub_key'] = gen_keys(
salt.config.get_cloud_config_value(
@ -550,7 +552,7 @@ def bootstrap(vm_, opts=None):
if inline_script_config and deploy_config is False:
inline_script_deployed = run_inline_script(**inline_script_kwargs)
if inline_script_deployed is not False:
log.info('Inline script(s) ha(s|ve) run on {0}'.format(vm_['name']))
log.info('Inline script(s) ha(s|ve) run on %s', vm_['name'])
ret['deployed'] = False
return ret
else:
@ -562,16 +564,16 @@ def bootstrap(vm_, opts=None):
if inline_script_config:
inline_script_deployed = run_inline_script(**inline_script_kwargs)
if inline_script_deployed is not False:
log.info('Inline script(s) ha(s|ve) run on {0}'.format(vm_['name']))
log.info('Inline script(s) ha(s|ve) run on %s', vm_['name'])
if deployed is not False:
ret['deployed'] = True
if deployed is not True:
ret.update(deployed)
log.info('Salt installed on {0}'.format(vm_['name']))
log.info('Salt installed on %s', vm_['name'])
return ret
log.error('Failed to start Salt on host {0}'.format(vm_['name']))
log.error('Failed to start Salt on host %s', vm_['name'])
return {
'Error': {
'Not Deployed': 'Failed to start Salt on host {0}'.format(
@ -617,7 +619,7 @@ def wait_for_fun(fun, timeout=900, **kwargs):
Wait until a function finishes, or times out
'''
start = time.time()
log.debug('Attempting function {0}'.format(fun))
log.debug('Attempting function %s', fun)
trycount = 0
while True:
trycount += 1
@ -626,15 +628,11 @@ def wait_for_fun(fun, timeout=900, **kwargs):
if not isinstance(response, bool):
return response
except Exception as exc:
log.debug('Caught exception in wait_for_fun: {0}'.format(exc))
log.debug('Caught exception in wait_for_fun: %s', exc)
time.sleep(1)
log.debug(
'Retrying function {0} on (try {1})'.format(
fun, trycount
)
)
log.debug('Retrying function %s on (try %s)', fun, trycount)
if time.time() - start > timeout:
log.error('Function timed out: {0}'.format(timeout))
log.error('Function timed out: %s', timeout)
return False
@ -661,17 +659,12 @@ def wait_for_port(host, port=22, timeout=900, gateway=None):
test_ssh_host = ssh_gateway
test_ssh_port = ssh_gateway_port
log.debug(
'Attempting connection to host {0} on port {1} '
'via gateway {2} on port {3}'.format(
host, port, ssh_gateway, ssh_gateway_port
)
'Attempting connection to host %s on port %s '
'via gateway %s on port %s',
host, port, ssh_gateway, ssh_gateway_port
)
else:
log.debug(
'Attempting connection to host {0} on port {1}'.format(
host, port
)
)
log.debug('Attempting connection to host %s on port %s', host, port)
trycount = 0
while True:
trycount += 1
@ -691,33 +684,19 @@ def wait_for_port(host, port=22, timeout=900, gateway=None):
sock.close()
break
except socket.error as exc:
log.debug('Caught exception in wait_for_port: {0}'.format(exc))
log.debug('Caught exception in wait_for_port: %s', exc)
time.sleep(1)
if time.time() - start > timeout:
log.error('Port connection timed out: {0}'.format(timeout))
log.error('Port connection timed out: %s', timeout)
return False
if not gateway:
log.debug(
'Retrying connection to host {0} on port {1} '
'(try {2})'.format(
test_ssh_host, test_ssh_port, trycount
)
)
else:
log.debug(
'Retrying connection to Gateway {0} on port {1} '
'(try {2})'.format(
test_ssh_host, test_ssh_port, trycount
)
)
log.debug(
'Retrying connection to %s %s on port %s (try %s)',
'gateway' if gateway else 'host', test_ssh_host, test_ssh_port, trycount
)
if not gateway:
return True
# Let the user know that his gateway is good!
log.debug(
'Gateway {0} on port {1} is reachable.'.format(
test_ssh_host, test_ssh_port
)
)
log.debug('Gateway %s on port %s is reachable.', test_ssh_host, test_ssh_port)
# Now we need to test the host via the gateway.
# We will use netcat on the gateway to test the port
@ -756,7 +735,7 @@ def wait_for_port(host, port=22, timeout=900, gateway=None):
' '.join(ssh_args), gateway['ssh_gateway_user'], ssh_gateway,
ssh_gateway_port, pipes.quote(command)
)
log.debug('SSH command: \'{0}\''.format(cmd))
log.debug('SSH command: \'%s\'', cmd)
kwargs = {'display_ssh_output': False,
'password': gateway.get('ssh_gateway_password', None)}
@ -774,7 +753,7 @@ def wait_for_port(host, port=22, timeout=900, gateway=None):
gateway_retries -= 1
log.error(
'Gateway usage seems to be broken, '
'password error ? Tries left: {0}'.format(gateway_retries))
'password error? Tries left: %s', gateway_retries)
if not gateway_retries:
raise SaltCloudExecutionFailure(
'SSH gateway is reachable but we can not login')
@ -787,14 +766,12 @@ def wait_for_port(host, port=22, timeout=900, gateway=None):
return True
time.sleep(1)
if time.time() - start > timeout:
log.error('Port connection timed out: {0}'.format(timeout))
log.error('Port connection timed out: %s', timeout)
return False
log.debug(
'Retrying connection to host {0} on port {1} '
'via gateway {2} on port {3}. (try {4})'.format(
host, port, ssh_gateway, ssh_gateway_port,
trycount
)
'Retrying connection to host %s on port %s '
'via gateway %s on port %s. (try %s)',
host, port, ssh_gateway, ssh_gateway_port, trycount
)
@ -804,10 +781,8 @@ def wait_for_winexesvc(host, port, username, password, timeout=900):
'''
start = time.time()
log.debug(
'Attempting winexe connection to host {0} on port {1}'.format(
host,
port
)
'Attempting winexe connection to host %s on port %s',
host, port
)
creds = "-U '{0}%{1}' //{2}".format(
username,
@ -831,20 +806,16 @@ def wait_for_winexesvc(host, port, username, password, timeout=900):
if ret_code == 0:
log.debug('winexe connected...')
return True
log.debug('Return code was {0}'.format(ret_code))
log.debug('Return code was %s', ret_code)
except socket.error as exc:
log.debug('Caught exception in wait_for_winexesvc: {0}'.format(exc))
log.debug('Caught exception in wait_for_winexesvc: %s', exc)
if time.time() - start > timeout:
log.error('winexe connection timed out: {0}'.format(timeout))
log.error('winexe connection timed out: %s', timeout)
return False
log.debug(
'Retrying winexe connection to host {0} on port {1} '
'(try {2})'.format(
host,
port,
try_count
)
'Retrying winexe connection to host %s on port %s (try %s)',
host, port, try_count
)
time.sleep(1)
@ -855,9 +826,8 @@ def wait_for_winrm(host, port, username, password, timeout=900, use_ssl=True, ve
'''
start = time.time()
log.debug(
'Attempting WinRM connection to host {0} on port {1}'.format(
host, port
)
'Attempting WinRM connection to host %s on port %s',
host, port
)
transport = 'ssl'
if not use_ssl:
@ -875,23 +845,21 @@ def wait_for_winrm(host, port, username, password, timeout=900, use_ssl=True, ve
s = winrm.Session(**winrm_kwargs)
if hasattr(s.protocol, 'set_timeout'):
s.protocol.set_timeout(15)
log.trace('WinRM endpoint url: {0}'.format(s.url))
log.trace('WinRM endpoint url: %s', s.url)
r = s.run_cmd('sc query winrm')
if r.status_code == 0:
log.debug('WinRM session connected...')
return s
log.debug('Return code was {0}'.format(r.status_code))
log.debug('Return code was %s', r.status_code)
except WinRMTransportError as exc:
log.debug('Caught exception in wait_for_winrm: {0}'.format(exc))
log.debug('Caught exception in wait_for_winrm: %s', exc)
if time.time() - start > timeout:
log.error('WinRM connection timed out: {0}'.format(timeout))
log.error('WinRM connection timed out: %s', timeout)
return None
log.debug(
'Retrying WinRM connection to host {0} on port {1} '
'(try {2})'.format(
host, port, trycount
)
'Retrying WinRM connection to host %s on port %s (try %s)',
host, port, trycount
)
time.sleep(1)
@ -958,16 +926,15 @@ def wait_for_passwd(host, port=22, ssh_timeout=15, username='root',
)
)
kwargs['key_filename'] = key_filename
log.debug('Using {0} as the key_filename'.format(key_filename))
log.debug('Using %s as the key_filename', key_filename)
elif password:
kwargs['password'] = password
log.debug('Using password authentication')
trycount += 1
log.debug(
'Attempting to authenticate as {0} (try {1} of {2})'.format(
username, trycount, maxtries
)
'Attempting to authenticate as %s (try %s of %s)',
username, trycount, maxtries
)
status = root_cmd('date', tty=False, sudo=False, **kwargs)
@ -977,11 +944,7 @@ def wait_for_passwd(host, port=22, ssh_timeout=15, username='root',
time.sleep(trysleep)
continue
log.error(
'Authentication failed: status code {0}'.format(
status
)
)
log.error('Authentication failed: status code %s', status)
return False
if connectfail is False:
return True
@ -1033,8 +996,8 @@ def deploy_windows(host,
return False
starttime = time.mktime(time.localtime())
log.debug('Deploying {0} at {1} (Windows)'.format(host, starttime))
log.trace('HAS_WINRM: {0}, use_winrm: {1}'.format(HAS_WINRM, use_winrm))
log.debug('Deploying %s at %s (Windows)', host, starttime)
log.trace('HAS_WINRM: %s, use_winrm: %s', HAS_WINRM, use_winrm)
port_available = wait_for_port(host=host, port=port, timeout=port_timeout * 60)
@ -1057,12 +1020,8 @@ def deploy_windows(host,
timeout=port_timeout * 60)
if port_available and service_available:
log.debug('SMB port {0} on {1} is available'.format(port, host))
log.debug(
'Logging into {0}:{1} as {2}'.format(
host, port, username
)
)
log.debug('SMB port %s on %s is available', port, host)
log.debug('Logging into %s:%s as %s', host, port, username)
newtimeout = timeout - (time.mktime(time.localtime()) - starttime)
smb_conn = salt.utils.smb.get_conn(host, username, password)
@ -1094,12 +1053,12 @@ def deploy_windows(host,
if master_sign_pub_file:
# Read master-sign.pub file
log.debug("Copying master_sign.pub file from {0} to minion".format(master_sign_pub_file))
log.debug("Copying master_sign.pub file from %s to minion", master_sign_pub_file)
try:
with salt.utils.files.fopen(master_sign_pub_file, 'rb') as master_sign_fh:
smb_conn.putFile('C$', 'salt\\conf\\pki\\minion\\master_sign.pub', master_sign_fh.read)
except Exception as e:
log.debug("Exception copying master_sign.pub file {0} to minion".format(master_sign_pub_file))
log.debug("Exception copying master_sign.pub file %s to minion", master_sign_pub_file)
# Copy over win_installer
# win_installer refers to a file such as:
@ -1274,15 +1233,15 @@ def deploy_script(host,
gateway = kwargs['gateway']
starttime = time.localtime()
log.debug('Deploying {0} at {1}'.format(
host,
time.strftime('%Y-%m-%d %H:%M:%S', starttime))
log.debug(
'Deploying %s at %s',
host, time.strftime('%Y-%m-%d %H:%M:%S', starttime)
)
known_hosts_file = kwargs.get('known_hosts_file', '/dev/null')
hard_timeout = opts.get('hard_timeout', None)
if wait_for_port(host=host, port=port, gateway=gateway):
log.debug('SSH port {0} on {1} is available'.format(port, host))
log.debug('SSH port %s on %s is available', port, host)
if wait_for_passwd(host, port=port, username=username,
password=password, key_filename=key_filename,
ssh_timeout=ssh_timeout,
@ -1290,11 +1249,7 @@ def deploy_script(host,
gateway=gateway, known_hosts_file=known_hosts_file,
maxtries=maxtries, hard_timeout=hard_timeout):
log.debug(
'Logging into {0}:{1} as {2}'.format(
host, port, username
)
)
log.debug('Logging into %s:%s as %s', host, port, username)
ssh_kwargs = {
'hostname': host,
'port': port,
@ -1309,7 +1264,7 @@ def deploy_script(host,
ssh_kwargs['ssh_gateway_key'] = gateway['ssh_gateway_key']
ssh_kwargs['ssh_gateway_user'] = gateway['ssh_gateway_user']
if key_filename:
log.debug('Using {0} as the key_filename'.format(key_filename))
log.debug('Using %s as the key_filename', key_filename)
ssh_kwargs['key_filename'] = key_filename
elif password and kwargs.get('has_ssh_agent', False) is False:
ssh_kwargs['password'] = password
@ -1348,10 +1303,9 @@ def deploy_script(host,
remote_file = file_map[map_item]
if not os.path.exists(map_item):
log.error(
'The local file "{0}" does not exist, and will not be '
'copied to "{1}" on the target system'.format(
local_file, remote_file
)
'The local file "%s" does not exist, and will not be '
'copied to "%s" on the target system',
local_file, remote_file
)
file_map_fail.append({local_file: remote_file})
continue
@ -1592,13 +1546,13 @@ def deploy_script(host,
deploy_command
)
)
log.debug('Executed command \'{0}\''.format(deploy_command))
log.debug('Executed command \'%s\'', deploy_command)
# Remove the deploy script
if not keep_tmp:
root_cmd('rm -f \'{0}/deploy.sh\''.format(tmp_dir),
tty, sudo, **ssh_kwargs)
log.debug('Removed {0}/deploy.sh'.format(tmp_dir))
log.debug('Removed %s/deploy.sh', tmp_dir)
if script_env:
root_cmd(
'rm -f \'{0}/environ-deploy-wrapper.sh\''.format(
@ -1606,51 +1560,45 @@ def deploy_script(host,
),
tty, sudo, **ssh_kwargs
)
log.debug(
'Removed {0}/environ-deploy-wrapper.sh'.format(
tmp_dir
)
)
log.debug('Removed %s/environ-deploy-wrapper.sh', tmp_dir)
if keep_tmp:
log.debug(
'Not removing deployment files from {0}/'.format(tmp_dir)
)
log.debug('Not removing deployment files from %s/', tmp_dir)
else:
# Remove minion configuration
if minion_pub:
root_cmd('rm -f \'{0}/minion.pub\''.format(tmp_dir),
tty, sudo, **ssh_kwargs)
log.debug('Removed {0}/minion.pub'.format(tmp_dir))
log.debug('Removed %s/minion.pub', tmp_dir)
if minion_pem:
root_cmd('rm -f \'{0}/minion.pem\''.format(tmp_dir),
tty, sudo, **ssh_kwargs)
log.debug('Removed {0}/minion.pem'.format(tmp_dir))
log.debug('Removed %s/minion.pem', tmp_dir)
if minion_conf:
root_cmd('rm -f \'{0}/grains\''.format(tmp_dir),
tty, sudo, **ssh_kwargs)
log.debug('Removed {0}/grains'.format(tmp_dir))
log.debug('Removed %s/grains', tmp_dir)
root_cmd('rm -f \'{0}/minion\''.format(tmp_dir),
tty, sudo, **ssh_kwargs)
log.debug('Removed {0}/minion'.format(tmp_dir))
log.debug('Removed %s/minion', tmp_dir)
if master_sign_pub_file:
root_cmd('rm -f {0}/master_sign.pub'.format(tmp_dir),
tty, sudo, **ssh_kwargs)
log.debug('Removed {0}/master_sign.pub'.format(tmp_dir))
log.debug('Removed %s/master_sign.pub', tmp_dir)
# Remove master configuration
if master_pub:
root_cmd('rm -f \'{0}/master.pub\''.format(tmp_dir),
tty, sudo, **ssh_kwargs)
log.debug('Removed {0}/master.pub'.format(tmp_dir))
log.debug('Removed %s/master.pub', tmp_dir)
if master_pem:
root_cmd('rm -f \'{0}/master.pem\''.format(tmp_dir),
tty, sudo, **ssh_kwargs)
log.debug('Removed {0}/master.pem'.format(tmp_dir))
log.debug('Removed %s/master.pem', tmp_dir)
if master_conf:
root_cmd('rm -f \'{0}/master\''.format(tmp_dir),
tty, sudo, **ssh_kwargs)
log.debug('Removed {0}/master'.format(tmp_dir))
log.debug('Removed %s/master', tmp_dir)
# Remove pre-seed keys directory
if preseed_minion_keys is not None:
@ -1659,9 +1607,7 @@ def deploy_script(host,
preseed_minion_keys_tempdir
), tty, sudo, **ssh_kwargs
)
log.debug(
'Removed {0}'.format(preseed_minion_keys_tempdir)
)
log.debug('Removed %s', preseed_minion_keys_tempdir)
if start_action and not parallel:
queuereturn = queue.get()
@ -1673,19 +1619,14 @@ def deploy_script(host,
# )
# for line in output:
# print(line)
log.info(
'Executing {0} on the salt-minion'.format(
start_action
)
)
log.info('Executing %s on the salt-minion', start_action)
root_cmd(
'salt-call {0}'.format(start_action),
tty, sudo, **ssh_kwargs
)
log.info(
'Finished executing {0} on the salt-minion'.format(
start_action
)
'Finished executing %s on the salt-minion',
start_action
)
# Fire deploy action
fire_event(
@ -1737,12 +1678,12 @@ def run_inline_script(host,
gateway = kwargs['gateway']
starttime = time.mktime(time.localtime())
log.debug('Deploying {0} at {1}'.format(host, starttime))
log.debug('Deploying %s at %s', host, starttime)
known_hosts_file = kwargs.get('known_hosts_file', '/dev/null')
if wait_for_port(host=host, port=port, gateway=gateway):
log.debug('SSH port {0} on {1} is available'.format(port, host))
log.debug('SSH port %s on %s is available', port, host)
newtimeout = timeout - (time.mktime(time.localtime()) - starttime)
if wait_for_passwd(host, port=port, username=username,
password=password, key_filename=key_filename,
@ -1750,11 +1691,7 @@ def run_inline_script(host,
display_ssh_output=display_ssh_output,
gateway=gateway, known_hosts_file=known_hosts_file):
log.debug(
'Logging into {0}:{1} as {2}'.format(
host, port, username
)
)
log.debug('Logging into %s:%s as %s', host, port, username)
newtimeout = timeout - (time.mktime(time.localtime()) - starttime)
ssh_kwargs = {
'hostname': host,
@ -1770,7 +1707,7 @@ def run_inline_script(host,
ssh_kwargs['ssh_gateway_key'] = gateway['ssh_gateway_key']
ssh_kwargs['ssh_gateway_user'] = gateway['ssh_gateway_user']
if key_filename:
log.debug('Using {0} as the key_filename'.format(key_filename))
log.debug('Using %s as the key_filename', key_filename)
ssh_kwargs['key_filename'] = key_filename
elif password and 'has_ssh_agent' in kwargs and kwargs['has_ssh_agent'] is False:
ssh_kwargs['password'] = password
@ -1781,11 +1718,11 @@ def run_inline_script(host,
allow_failure=True, **ssh_kwargs) and inline_script:
log.debug('Found inline script to execute.')
for cmd_line in inline_script:
log.info("Executing inline command: " + str(cmd_line))
log.info('Executing inline command: %s', cmd_line)
ret = root_cmd('sh -c "( {0} )"'.format(cmd_line),
tty, sudo, allow_failure=True, **ssh_kwargs)
if ret:
log.info("[" + str(cmd_line) + "] Output: " + str(ret))
log.info('[%s] Output: %s', cmd_line, ret)
# TODO: ensure we send the correct return value
return True
@ -1897,7 +1834,7 @@ def _exec_ssh_cmd(cmd, error_msg=None, allow_failure=False, **kwargs):
return proc.exitstatus
except salt.utils.vt.TerminalException as err:
trace = traceback.format_exc()
log.error(error_msg.format(cmd, err, trace))
log.error(error_msg.format(cmd, err, trace)) # pylint: disable=str-format-in-logging
finally:
proc.close(terminate=True, kill=True)
# Signal an error
@ -1921,7 +1858,7 @@ def scp_file(dest_path, contents=None, kwargs=None, local_file=None):
if exc.errno != errno.EBADF:
raise exc
log.debug('Uploading {0} to {1}'.format(dest_path, kwargs['hostname']))
log.debug('Uploading %s to %s', dest_path, kwargs['hostname'])
ssh_args = [
# Don't add new hosts to the host key database
@ -2007,7 +1944,7 @@ def scp_file(dest_path, contents=None, kwargs=None, local_file=None):
)
)
log.debug('SCP command: \'{0}\''.format(cmd))
log.debug('SCP command: \'%s\'', cmd)
retcode = _exec_ssh_cmd(cmd,
error_msg='Failed to upload file \'{0}\': {1}\n{2}',
password_retries=3,
@ -2062,7 +1999,7 @@ def sftp_file(dest_path, contents=None, kwargs=None, local_file=None):
if os.path.isdir(local_file):
put_args = ['-r']
log.debug('Uploading {0} to {1} (sftp)'.format(dest_path, kwargs.get('hostname')))
log.debug('Uploading %s to %s (sftp)', dest_path, kwargs.get('hostname'))
ssh_args = [
# Don't add new hosts to the host key database
@ -2137,7 +2074,7 @@ def sftp_file(dest_path, contents=None, kwargs=None, local_file=None):
cmd = 'echo "put {0} {1} {2}" | sftp {3} {4[username]}@{5}'.format(
' '.join(put_args), file_to_upload, dest_path, ' '.join(ssh_args), kwargs, ipaddr
)
log.debug('SFTP command: \'{0}\''.format(cmd))
log.debug('SFTP command: \'%s\'', cmd)
retcode = _exec_ssh_cmd(cmd,
error_msg='Failed to upload file \'{0}\': {1}\n{2}',
password_retries=3,
@ -2183,13 +2120,7 @@ def win_cmd(command, **kwargs):
proc.communicate()
return proc.returncode
except Exception as err:
log.error(
'Failed to execute command \'{0}\': {1}\n'.format(
logging_command,
err
),
exc_info=True
)
log.exception('Failed to execute command \'%s\'', logging_command)
# Signal an error
return 1
@ -2198,9 +2129,7 @@ def winrm_cmd(session, command, flags, **kwargs):
'''
Wrapper for commands to be run against Windows boxes using WinRM.
'''
log.debug('Executing WinRM command: {0} {1}'.format(
command, flags
))
log.debug('Executing WinRM command: %s %s', command, flags)
r = session.run_cmd(command, flags)
return r.status_code
@ -2220,7 +2149,7 @@ def root_cmd(command, tty, sudo, allow_failure=False, **kwargs):
logging_command = 'sudo -S "XXX-REDACTED-XXX" {0}'.format(command)
command = 'sudo -S {0}'.format(command)
log.debug('Using sudo to run command {0}'.format(logging_command))
log.debug('Using sudo to run command %s', logging_command)
ssh_args = []
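The sudo branch above builds two strings: the real command, and a loggable copy with the secret redacted so the password never reaches the logs. The pattern, reduced to its essentials (function name and redaction marker taken from the diff; the rest is illustrative):

```python
def build_sudo_command(command, password=None):
    """Return (real_command, loggable_command) with any secret redacted."""
    if password:
        # The password itself is piped to sudo -S, never embedded in the
        # command line; only the loggable copy carries a placeholder.
        logging_command = 'sudo -S "XXX-REDACTED-XXX" {0}'.format(command)
        command = 'sudo -S {0}'.format(command)
    else:
        logging_command = 'sudo {0}'.format(command)
        command = 'sudo {0}'.format(command)
    return command, logging_command


real, loggable = build_sudo_command('systemctl restart salt-minion',
                                    password='hunter2')
```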
@ -2291,9 +2220,8 @@ def root_cmd(command, tty, sudo, allow_failure=False, **kwargs):
)
])
log.info(
'Using SSH gateway {0}@{1}:{2}'.format(
ssh_gateway_user, ssh_gateway, ssh_gateway_port
)
'Using SSH gateway %s@%s:%s',
ssh_gateway_user, ssh_gateway, ssh_gateway_port
)
if 'port' in kwargs:
@ -2311,7 +2239,7 @@ def root_cmd(command, tty, sudo, allow_failure=False, **kwargs):
logging_command = 'timeout {0} {1}'.format(hard_timeout, logging_command)
cmd = 'timeout {0} {1}'.format(hard_timeout, cmd)
log.debug('SSH command: \'{0}\''.format(logging_command))
log.debug('SSH command: \'%s\'', logging_command)
retcode = _exec_ssh_cmd(cmd, allow_failure=allow_failure, **kwargs)
return retcode
@ -2325,11 +2253,7 @@ def check_auth(name, sock_dir=None, queue=None, timeout=300):
event = salt.utils.event.SaltEvent('master', sock_dir, listen=True)
starttime = time.mktime(time.localtime())
newtimeout = timeout
log.debug(
'In check_auth, waiting for {0} to become available'.format(
name
)
)
log.debug('In check_auth, waiting for %s to become available', name)
while newtimeout > 0:
newtimeout = timeout - (time.mktime(time.localtime()) - starttime)
ret = event.get_event(full=True)
@ -2338,7 +2262,7 @@ def check_auth(name, sock_dir=None, queue=None, timeout=300):
if ret['tag'] == 'minion_start' and ret['data']['id'] == name:
queue.put(name)
newtimeout = 0
log.debug('Minion {0} is ready to receive commands'.format(name))
log.debug('Minion %s is ready to receive commands', name)
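The recurring change throughout these hunks replaces eager `str.format()` calls with printf-style logging arguments. A minimal standalone sketch of why that matters (the `Probe` class is illustrative, not Salt code): with lazy arguments, the message is only interpolated if the record actually passes the level filter.

```python
import logging

log = logging.getLogger('demo')
log.setLevel(logging.CRITICAL)  # DEBUG records will be filtered out

formatted = []

class Probe(object):
    def __str__(self):
        formatted.append(1)
        return 'probe'

# Eager: str.format() builds the message before logging checks the level.
log.debug('value: {0}'.format(Probe()))

# Lazy: DEBUG is disabled, so Probe.__str__ is never invoked.
log.debug('value: %s', Probe())
```

Only the eager call pays the formatting cost, which is the motivation for this changeset's wholesale conversion.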
def ip_to_int(ip):
@ -2408,14 +2332,11 @@ def remove_sshkey(host, known_hosts=None):
if known_hosts is not None:
log.debug(
'Removing ssh key for {0} from known hosts file {1}'.format(
host, known_hosts
)
'Removing ssh key for %s from known hosts file %s',
host, known_hosts
)
else:
log.debug(
'Removing ssh key for {0} from known hosts file'.format(host)
)
log.debug('Removing ssh key for %s from known hosts file', host)
cmd = 'ssh-keygen -R {0}'.format(host)
subprocess.call(cmd, shell=True)
@ -2458,18 +2379,14 @@ def wait_for_ip(update_callback,
duration = timeout
while True:
log.debug(
'Waiting for VM IP. Giving up in 00:{0:02d}:{1:02d}.'.format(
int(timeout // 60),
int(timeout % 60)
)
'Waiting for VM IP. Giving up in 00:%02d:%02d.',
int(timeout // 60), int(timeout % 60)
)
data = update_callback(*update_args, **update_kwargs)
if data is False:
log.debug(
'\'update_callback\' has returned \'False\', which is '
'considered a failure. Remaining Failures: {0}.'.format(
max_failures
)
'considered a failure. Remaining Failures: %s.', max_failures
)
max_failures -= 1
if max_failures <= 0:
@ -2495,7 +2412,7 @@ def wait_for_ip(update_callback,
if interval > timeout:
interval = timeout + 1
log.info('Interval multiplier in effect; interval is '
'now {0}s.'.format(interval))
'now %ss.', interval)
def list_nodes_select(nodes, selection, call=None):
@ -2520,7 +2437,7 @@ def list_nodes_select(nodes, selection, call=None):
pairs = {}
data = nodes[node]
for key in data:
if str(key) in selection:
if six.text_type(key) in selection:
value = data[key]
pairs[key] = value
ret[node] = pairs
@ -2536,13 +2453,13 @@ def lock_file(filename, interval=.5, timeout=15):
Note that these locks are only recognized by Salt Cloud, and not other
programs or platforms.
'''
log.trace('Attempting to obtain lock for {0}'.format(filename))
log.trace('Attempting to obtain lock for %s', filename)
lock = filename + '.lock'
start = time.time()
while True:
if os.path.exists(lock):
if time.time() - start >= timeout:
log.warning('Unable to obtain lock for {0}'.format(filename))
log.warning('Unable to obtain lock for %s', filename)
return False
time.sleep(interval)
else:
@ -2559,12 +2476,12 @@ def unlock_file(filename):
Note that these locks are only recognized by Salt Cloud, and not other
programs or platforms.
'''
log.trace('Removing lock for {0}'.format(filename))
log.trace('Removing lock for %s', filename)
lock = filename + '.lock'
try:
os.remove(lock)
except OSError as exc:
log.trace('Unable to remove lock for {0}: {1}'.format(filename, exc))
log.trace('Unable to remove lock for %s: %s', filename, exc)
def cachedir_index_add(minion_id, profile, driver, provider, base=None):
@ -2589,7 +2506,7 @@ def cachedir_index_add(minion_id, profile, driver, provider, base=None):
if os.path.exists(index_file):
mode = 'rb' if six.PY3 else 'r'
with salt.utils.files.fopen(index_file, mode) as fh_:
index = msgpack.load(fh_)
index = salt.utils.data.decode(msgpack.load(fh_))
else:
index = {}
@ -2623,7 +2540,7 @@ def cachedir_index_del(minion_id, base=None):
if os.path.exists(index_file):
mode = 'rb' if six.PY3 else 'r'
with salt.utils.files.fopen(index_file, mode) as fh_:
index = msgpack.load(fh_)
index = salt.utils.data.decode(msgpack.load(fh_))
else:
return
@ -2721,7 +2638,7 @@ def change_minion_cachedir(
path = os.path.join(base, cachedir, fname)
with salt.utils.files.fopen(path, 'r') as fh_:
cache_data = msgpack.load(fh_)
cache_data = salt.utils.data.decode(msgpack.load(fh_))
cache_data.update(data)
@ -2763,7 +2680,7 @@ def delete_minion_cachedir(minion_id, provider, opts, base=None):
fname = '{0}.p'.format(minion_id)
for cachedir in 'requested', 'active':
path = os.path.join(base, cachedir, driver, provider, fname)
log.debug('path: {0}'.format(path))
log.debug('path: %s', path)
if os.path.exists(path):
os.remove(path)
@ -2799,7 +2716,7 @@ def list_cache_nodes_full(opts=None, provider=None, base=None):
fpath = os.path.join(min_dir, fname)
minion_id = fname[:-2] # strip '.p' from end of msgpack filename
with salt.utils.files.fopen(fpath, 'r') as fh_:
minions[driver][prov][minion_id] = msgpack.load(fh_)
minions[driver][prov][minion_id] = salt.utils.data.decode(msgpack.load(fh_))
return minions
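Each `msgpack.load()` result in this file is now wrapped in `salt.utils.data.decode()`, because on Python 2 msgpack deserializes strings as bytes. A hypothetical minimal decoder with the same shape (recursive bytes-to-text conversion, for illustration only; not Salt's actual implementation):

```python
def decode(data, encoding='utf-8'):
    # Recursively turn bytes into text inside common containers,
    # leaving everything else untouched.
    if isinstance(data, bytes):
        return data.decode(encoding)
    if isinstance(data, dict):
        return {decode(k, encoding): decode(v, encoding)
                for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return type(data)(decode(item, encoding) for item in data)
    return data

print(decode({b'id': b'web01', b'ips': [b'10.0.0.1', 443]}))
# {'id': 'web01', 'ips': ['10.0.0.1', 443]}
```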
@ -2861,7 +2778,7 @@ def update_bootstrap(config, url=None):
script_name = os.path.basename(url)
elif os.path.exists(url):
with salt.utils.files.fopen(url) as fic:
script_content = fic.read()
script_content = salt.utils.stringutils.to_unicode(fic.read())
script_name = os.path.basename(url)
# in last case, assuming we got a script content
else:
@ -2933,28 +2850,20 @@ def update_bootstrap(config, url=None):
try:
os.makedirs(entry)
except (OSError, IOError) as err:
log.info(
'Failed to create directory \'{0}\''.format(entry)
)
log.info('Failed to create directory \'%s\'', entry)
continue
if not is_writeable(entry):
log.debug(
'The \'{0}\' is not writeable. Continuing...'.format(
entry
)
)
log.debug('The \'%s\' is not writeable. Continuing...', entry)
continue
deploy_path = os.path.join(entry, script_name)
try:
finished_full.append(deploy_path)
with salt.utils.files.fopen(deploy_path, 'w') as fp_:
fp_.write(script_content)
fp_.write(salt.utils.stringutils.to_str(script_content))
except (OSError, IOError) as err:
log.debug(
'Failed to write the updated script: {0}'.format(err)
)
log.debug('Failed to write the updated script: %s', err)
continue
return {'Success': {'Files updated': finished_full}}
@ -3083,9 +2992,9 @@ def diff_node_cache(prov_dir, node, new_data, opts):
with salt.utils.files.fopen(path, 'r') as fh_:
try:
cache_data = msgpack.load(fh_)
cache_data = salt.utils.data.decode(msgpack.load(fh_))
except ValueError:
log.warning('Cache for {0} was corrupt: Deleting'.format(node))
log.warning('Cache for %s was corrupt: Deleting', node)
cache_data = {}
# Perform a simple diff between the old and the new data, and if it differs,
@ -3209,7 +3118,7 @@ def store_password_in_keyring(credential_id, username, password=None):
try:
_save_password_in_keyring(credential_id, username, password)
except keyring.errors.PasswordSetError as exc:
log.debug('Problem saving password in the keyring: {0}'.format(exc))
log.debug('Problem saving password in the keyring: %s', exc)
except ImportError:
log.error('Tried to store password in keyring, but no keyring module is installed')
return False
@ -3246,9 +3155,10 @@ def run_func_until_ret_arg(fun, kwargs, fun_call=None,
for k, v in six.iteritems(d0):
r_set[k] = v
status = _unwrap_dict(r_set, argument_being_watched)
log.debug('Function: {0}, Watched arg: {1}, Response: {2}'.format(str(fun).split(' ')[1],
argument_being_watched,
status))
log.debug(
'Function: %s, Watched arg: %s, Response: %s',
six.text_type(fun).split(' ')[1], argument_being_watched, status
)
time.sleep(5)
return True
@ -3290,22 +3200,16 @@ def check_key_path_and_mode(provider, key_path):
'''
if not os.path.exists(key_path):
log.error(
'The key file \'{0}\' used in the \'{1}\' provider configuration '
'does not exist.\n'.format(
key_path,
provider
)
'The key file \'%s\' used in the \'%s\' provider configuration '
'does not exist.\n', key_path, provider
)
return False
key_mode = stat.S_IMODE(os.stat(key_path).st_mode)
if key_mode not in (0o400, 0o600):
log.error(
'The key file \'{0}\' used in the \'{1}\' provider configuration '
'needs to be set to mode 0400 or 0600.\n'.format(
key_path,
provider
)
'The key file \'%s\' used in the \'%s\' provider configuration '
'needs to be set to mode 0400 or 0600.\n', key_path, provider
)
return False
@ -3355,6 +3259,6 @@ def userdata_template(opts, vm_, userdata):
'Templated userdata resulted in non-string result (%s), '
'converting to string', templated
)
templated = str(templated)
templated = six.text_type(templated)
return templated

View File

@ -4,7 +4,7 @@ Functions used for CLI color themes.
'''
# Import Python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import logging
import os
@ -20,23 +20,24 @@ def get_color_theme(theme):
Return the color theme to use
'''
# Keep the heavy lifting out of the module space
import salt.utils.data
import salt.utils.files
import salt.utils.yaml
if not os.path.isfile(theme):
log.warning('The named theme {0} if not available'.format(theme))
log.warning('The named theme %s is not available', theme)
try:
with salt.utils.files.fopen(theme, 'rb') as fp_:
colors = salt.utils.yaml.safe_load(fp_)
colors = salt.utils.data.decode(salt.utils.yaml.safe_load(fp_))
ret = {}
for color in colors:
ret[color] = '\033[{0}m'.format(colors[color])
if not isinstance(colors, dict):
log.warning('The theme file {0} is not a dict'.format(theme))
log.warning('The theme file %s is not a dict', theme)
return {}
return ret
except Exception:
log.warning('Failed to read the color theme {0}'.format(theme))
log.warning('Failed to read the color theme %s', theme)
return {}

View File

@ -4,7 +4,7 @@ Compatibility functions for utils
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import sys
import copy
import types

View File

@ -5,7 +5,7 @@ changes in a way that can be easily reported in a state.
'''
# Import Python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import Salt libs
from salt.ext import six

View File

@ -3,7 +3,7 @@
Functions dealing with encryption
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import Python libs
import hashlib

View File

@ -353,7 +353,7 @@ def subdict_match(data,
try:
return re.match(pattern.lower(), six.text_type(target).lower())
except Exception:
log.error('Invalid regex \'{0}\' in match'.format(pattern))
log.error('Invalid regex \'%s\' in match', pattern)
return False
elif exact_match:
return six.text_type(target).lower() == pattern.lower()
@ -402,8 +402,8 @@ def subdict_match(data,
splits = expr.split(delimiter)
key = delimiter.join(splits[:idx])
matchstr = delimiter.join(splits[idx:])
log.debug('Attempting to match \'{0}\' in \'{1}\' using delimiter '
'\'{2}\''.format(matchstr, key, delimiter))
log.debug("Attempting to match '%s' in '%s' using delimiter '%s'",
matchstr, key, delimiter)
match = traverse_dict_and_list(data, key, {}, delimiter=delimiter)
if match == {}:
continue
@ -671,3 +671,18 @@ def simple_types_filter(data):
return simpledict
return data
def stringify(data):
'''
Given an iterable, returns its items as a list, with any non-string items
converted to unicode strings.
'''
ret = []
for item in data:
if six.PY2 and isinstance(item, str):
item = salt.utils.stringutils.to_unicode(item)
elif not isinstance(item, six.string_types):
item = six.text_type(item)
ret.append(item)
return ret
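On Python 3, where `six.string_types` is just `str` and `six.text_type` is `str`, the new `stringify` helper added above reduces to the following standalone sketch (a rewrite for illustration, not Salt's code):

```python
def stringify(data):
    # Non-string items are converted to their text representation;
    # strings pass through unchanged.
    return [item if isinstance(item, str) else str(item) for item in data]

print(stringify([1, 'a', None, 2.5]))
# ['1', 'a', 'None', '2.5']
```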

View File

@ -3,7 +3,7 @@
Convenience functions for dealing with datetime classes
'''
from __future__ import absolute_import, division
from __future__ import absolute_import, division, print_function, unicode_literals
# Import Python libs
import datetime

View File

@ -2,7 +2,7 @@
'''
Print a stacktrace when sent a SIGUSR1 for debugging
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import os
@ -15,6 +15,7 @@ import inspect
# Import salt libs
import salt.utils.files
import salt.utils.stringutils
def _makepretty(printout, stack):
@ -40,7 +41,7 @@ def _handle_sigusr1(sig, stack):
filename = 'salt-debug-{0}.log'.format(int(time.time()))
destfile = os.path.join(tempfile.gettempdir(), filename)
with salt.utils.files.fopen(destfile, 'w') as output:
_makepretty(output, stack)
_makepretty(output, salt.utils.stringutils.to_str(stack))
def _handle_sigusr2(sig, stack):

View File

@ -4,9 +4,10 @@ Helpful decorators for module writing
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import inspect
import logging
import sys
import time
from functools import wraps
from collections import defaultdict
@ -50,9 +51,8 @@ class Depends(object):
'''
log.trace(
'Depends decorator instantiated with dep list of {0}'.format(
dependencies
)
'Depends decorator instantiated with dep list of %s',
dependencies
)
self.dependencies = dependencies
self.fallback_function = kwargs.get('fallback_function')
@ -76,8 +76,10 @@ class Depends(object):
self.dependency_dict[kind][dep][(mod_name, fun_name)] = \
(frame, self.fallback_function)
except Exception as exc:
log.error('Exception encountered when attempting to inspect frame in '
'dependency decorator: {0}'.format(exc))
log.error(
'Exception encountered when attempting to inspect frame in '
'dependency decorator: %s', exc
)
return function
@classmethod
@ -93,30 +95,21 @@ class Depends(object):
# check if dependency is loaded
if dependency is True:
log.trace(
'Dependency for {0}.{1} exists, not unloading'.format(
mod_name,
func_name
)
'Dependency for %s.%s exists, not unloading',
mod_name, func_name
)
continue
# check if you have the dependency
if dependency in frame.f_globals \
or dependency in frame.f_locals:
log.trace(
'Dependency ({0}) already loaded inside {1}, '
'skipping'.format(
dependency,
mod_name
)
'Dependency (%s) already loaded inside %s, skipping',
dependency, mod_name
)
continue
log.trace(
'Unloading {0}.{1} because dependency ({2}) is not '
'imported'.format(
mod_name,
func_name,
dependency
)
'Unloading %s.%s because dependency (%s) is not imported',
mod_name, func_name, dependency
)
# if not, unload the function
if frame:
@ -138,7 +131,7 @@ class Depends(object):
del functions[mod_key]
except AttributeError:
# we already did???
log.trace('{0} already removed, skipping'.format(mod_key))
log.trace('%s already removed, skipping', mod_key)
continue
@ -158,13 +151,10 @@ def timing(function):
mod_name = function.__module__[16:]
else:
mod_name = function.__module__
log.profile(
'Function {0}.{1} took {2:.20f} seconds to execute'.format(
mod_name,
function.__name__,
end_time - start_time
)
fstr = 'Function %s.%s took %.{0}f seconds to execute'.format(
sys.float_info.dig
)
log.profile(fstr, mod_name, function.__name__, end_time - start_time)
return ret
return wrapped
@ -185,7 +175,7 @@ def memoize(func):
str_args = []
for arg in args:
if not isinstance(arg, six.string_types):
str_args.append(str(arg))
str_args.append(six.text_type(arg))
else:
str_args.append(arg)
@ -258,14 +248,17 @@ class _DeprecationDecorator(object):
try:
return self._function(*args, **kwargs)
except TypeError as error:
error = str(error).replace(self._function, self._orig_f_name) # Hide hidden functions
log.error('Function "{f_name}" was not properly called: {error}'.format(f_name=self._orig_f_name,
error=error))
error = six.text_type(error).replace(self._function.__name__, self._orig_f_name)  # Hide hidden functions
log.error(
'Function "%s" was not properly called: %s',
self._orig_f_name, error
)
return self._function.__doc__
except Exception as error:
log.error('Unhandled exception occurred in '
'function "{f_name}: {error}'.format(f_name=self._function.__name__,
error=error))
log.error(
'Unhandled exception occurred in function "%s": %s',
self._function.__name__, error
)
raise error
else:
raise CommandExecutionError("Function is deprecated, but the successor function was not found.")

View File

@ -2,7 +2,7 @@
'''
Jinja-specific decorators
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import Python libs
import logging
@ -77,7 +77,7 @@ class JinjaGlobal(object):
'''
name = self.name or function.__name__
if name not in self.salt_jinja_globals:
log.debug('Marking "{0}" as a jinja global'.format(name))
log.debug('Marking \'%s\' as a jinja global', name)
self.salt_jinja_globals[name] = function
return function

View File

@ -2,7 +2,7 @@
'''
Decorators for salt.utils.path
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import Salt libs
import salt.utils.path

View File

@ -4,7 +4,7 @@ A decorator which returns a function with the same signature of the function
which is being wrapped.
'''
# Import Python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import inspect
from functools import wraps

View File

@ -11,9 +11,10 @@
Added the ability to recursively compare dictionaries
'''
from __future__ import absolute_import
from copy import deepcopy
from __future__ import absolute_import, print_function, unicode_literals
import copy
from collections import Mapping
from salt.ext import six
def diff(current_dict, past_dict):
@ -21,15 +22,13 @@ def diff(current_dict, past_dict):
class DictDiffer(object):
"""
'''
Calculate the difference between two dictionaries as:
(1) items added
(2) items removed
(3) keys same in both but changed values
(4) keys same in both and unchanged values
"""
'''
def __init__(self, current_dict, past_dict):
self.current_dict, self.past_dict = current_dict, past_dict
self.set_current, self.set_past = set(list(current_dict)), set(list(past_dict))
@ -51,8 +50,8 @@ class DictDiffer(object):
def deep_diff(old, new, ignore=None):
ignore = ignore or []
res = {}
old = deepcopy(old)
new = deepcopy(new)
old = copy.deepcopy(old)
new = copy.deepcopy(new)
stack = [(old, new, False)]
while len(stack) > 0:
@ -223,7 +222,7 @@ class RecursiveDictDiffer(DictDiffer):
old_value = diff_dict[p]['old']
if diff_dict[p]['old'] == cls.NONE_VALUE:
old_value = 'nothing'
elif isinstance(diff_dict[p]['old'], str):
elif isinstance(diff_dict[p]['old'], six.string_types):
old_value = '\'{0}\''.format(diff_dict[p]['old'])
elif isinstance(diff_dict[p]['old'], list):
old_value = '\'{0}\''.format(
@ -231,7 +230,7 @@ class RecursiveDictDiffer(DictDiffer):
new_value = diff_dict[p]['new']
if diff_dict[p]['new'] == cls.NONE_VALUE:
new_value = 'nothing'
elif isinstance(diff_dict[p]['new'], str):
elif isinstance(diff_dict[p]['new'], six.string_types):
new_value = '\'{0}\''.format(diff_dict[p]['new'])
elif isinstance(diff_dict[p]['new'], list):
new_value = '\'{0}\''.format(', '.join(diff_dict[p]['new']))

View File

@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
from __future__ import absolute_import, print_function, unicode_literals
import sys

View File

@ -5,7 +5,7 @@ http://stackoverflow.com/a/3233356
'''
# Import python libs
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import collections
# Import 3rd-party libs
@ -121,8 +121,10 @@ def merge(obj_a, obj_b, strategy='smart', renderer='yaml', merge_lists=False):
# we just do not want to log an error
merged = merge_recurse(obj_a, obj_b)
else:
log.warning('Unknown merging strategy \'{0}\', '
'fallback to recurse'.format(strategy))
log.warning(
'Unknown merging strategy \'%s\', fallback to recurse',
strategy
)
merged = merge_recurse(obj_a, obj_b)
return merged
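For context, the `merge_recurse` fallback chosen above behaves roughly like a recursive dictionary merge; this is a hypothetical minimal sketch under that assumption, not Salt's implementation:

```python
def merge_recurse(obj_a, obj_b):
    # Values from obj_b win; dicts present on both sides are merged
    # recursively instead of being replaced wholesale.
    merged = dict(obj_a)
    for key, val in obj_b.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_recurse(merged[key], val)
        else:
            merged[key] = val
    return merged

print(merge_recurse({'a': {'x': 1}}, {'a': {'y': 2}, 'b': 3}))
# {'a': {'x': 1, 'y': 2}, 'b': 3}
```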

View File

@ -10,7 +10,7 @@ dns.srv_data('my1.example.com', 389, prio=10, weight=100)
dns.srv_name('ldap/tcp', 'example.com')
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
# Import Python libs
import base64
@ -28,11 +28,13 @@ import string
import salt.utils.files
import salt.utils.network
import salt.utils.path
import salt.utils.stringutils
import salt.modules.cmdmod
from salt._compat import ipaddress
from salt.utils.odict import OrderedDict
# Import 3rd-party libs
from salt.ext import six
from salt.ext.six.moves import map, zip # pylint: disable=redefined-builtin
@ -239,14 +241,15 @@ def _lookup_dig(name, rdtype, timeout=None, servers=None, secure=None):
if secure:
cmd += '+dnssec +adflag '
cmd = __salt__['cmd.run_all'](cmd + str(name), python_shell=False, output_loglevel='quiet')
cmd = __salt__['cmd.run_all'](cmd + six.text_type(name), python_shell=False, output_loglevel='quiet')
if 'ignoring invalid type' in cmd['stderr']:
raise ValueError('Invalid DNS type {}'.format(rdtype))
elif cmd['retcode'] != 0:
log.warning('dig returned ({0}): {1}'.format(
log.warning(
'dig returned (%s): %s',
cmd['retcode'], cmd['stderr'].strip(string.whitespace + ';')
))
)
return False
elif not cmd['stdout']:
return []
@ -288,9 +291,7 @@ def _lookup_drill(name, rdtype, timeout=None, servers=None, secure=None):
python_shell=False, output_loglevel='quiet')
if cmd['retcode'] != 0:
log.warning('drill returned ({0}): {1}'.format(
cmd['retcode'], cmd['stderr']
))
log.warning('drill returned (%s): %s', cmd['retcode'], cmd['stderr'])
return False
lookup_res = iter(cmd['stdout'].splitlines())
@ -373,9 +374,7 @@ def _lookup_host(name, rdtype, timeout=None, server=None):
if 'invalid type' in cmd['stderr']:
raise ValueError('Invalid DNS type {}'.format(rdtype))
elif cmd['retcode'] != 0:
log.warning('host returned ({0}): {1}'.format(
cmd['retcode'], cmd['stderr']
))
log.warning('host returned (%s): %s', cmd['retcode'], cmd['stderr'])
return False
elif 'has no' in cmd['stdout']:
return []
@ -413,7 +412,7 @@ def _lookup_dnspython(name, rdtype, timeout=None, servers=None, secure=None):
resolver.ednsflags += dns.flags.DO
try:
res = [str(rr.to_text().strip(string.whitespace + '"'))
res = [six.text_type(rr.to_text().strip(string.whitespace + '"'))
for rr in resolver.query(name, rdtype, raise_on_no_answer=False)]
return res
except dns.rdatatype.UnknownRdatatype:
@ -434,7 +433,7 @@ def _lookup_nslookup(name, rdtype, timeout=None, server=None):
:param server: server to query
:return: [] of records or False if error
'''
cmd = 'nslookup -query={0} {1}'.format(rdtype, str(name))
cmd = 'nslookup -query={0} {1}'.format(rdtype, name)
if timeout is not None:
cmd += ' -timeout={0}'.format(int(timeout))
@ -444,9 +443,11 @@ def _lookup_nslookup(name, rdtype, timeout=None, server=None):
cmd = __salt__['cmd.run_all'](cmd, python_shell=False, output_loglevel='quiet')
if cmd['retcode'] != 0:
log.warning('nslookup returned ({0}): {1}'.format(
cmd['retcode'], cmd['stdout'].splitlines()[-1].strip(string.whitespace + ';')
))
log.warning(
'nslookup returned (%s): %s',
cmd['retcode'],
cmd['stdout'].splitlines()[-1].strip(string.whitespace + ';')
)
return False
lookup_res = iter(cmd['stdout'].splitlines())
@ -543,9 +544,9 @@ def lookup(
resolver = next((rcb for rname, rcb, rtest in query_methods if rname == method and rtest))
except StopIteration:
log.error(
'Unable to lookup {1}/{2}: Resolver method {0} invalid, unsupported or unable to perform query'.format(
method, rdtype, name
))
'Unable to lookup %s/%s: Resolver method %s invalid, unsupported '
'or unable to perform query', method, rdtype, name
)
return False
res_kwargs = {
@ -692,7 +693,7 @@ def caa_rec(rdatas):
rschema = OrderedDict((
('flags', lambda flag: ['critical'] if int(flag) > 0 else []),
('tag', lambda tag: RFC.validate(tag, RFC.COO_TAGS)),
('value', lambda val: str(val).strip('"'))
('value', lambda val: six.text_type(val).strip('"'))
))
res = _data2rec_group(rschema, rdatas, 'tag')
@ -743,7 +744,10 @@ def ptr_name(rdata):
try:
return ipaddress.ip_address(rdata).reverse_pointer
except ValueError:
log.error('Unable to generate PTR record; {0} is not a valid IP address'.format(rdata))
log.error(
'Unable to generate PTR record; %s is not a valid IP address',
rdata
)
return False
@ -954,7 +958,7 @@ def services(services_file='/etc/services'):
res = {}
with salt.utils.files.fopen(services_file, 'r') as svc_defs:
for svc_def in svc_defs.readlines():
svc_def = svc_def.strip()
svc_def = salt.utils.stringutils.to_unicode(svc_def.strip())
if not len(svc_def) or svc_def.startswith('#'):
continue
elif '#' in svc_def:
@ -1018,7 +1022,7 @@ def parse_resolv(src='/etc/resolv.conf'):
with salt.utils.files.fopen(src) as src_file:
# pylint: disable=too-many-nested-blocks
for line in src_file:
line = line.strip().split()
line = salt.utils.stringutils.to_unicode(line).strip().split()
try:
(directive, arg) = (line[0].lower(), line[1:])
@ -1032,7 +1036,7 @@ def parse_resolv(src='/etc/resolv.conf'):
if ip_addr not in nameservers:
nameservers.append(ip_addr)
except ValueError as exc:
log.error('{0}: {1}'.format(src, exc))
log.error('%s: %s', src, exc)
elif directive == 'domain':
domain = arg[0]
elif directive == 'search':
@ -1046,13 +1050,13 @@ def parse_resolv(src='/etc/resolv.conf'):
try:
ip_net = ipaddress.ip_network(ip_raw)
except ValueError as exc:
log.error('{0}: {1}'.format(src, exc))
log.error('%s: %s', src, exc)
else:
if '/' not in ip_raw:
# No netmask has been provided, guess
# the "natural" one
if ip_net.version == 4:
ip_addr = str(ip_net.network_address)
ip_addr = six.text_type(ip_net.network_address)
# pylint: disable=protected-access
mask = salt.utils.network.natural_ipv4_netmask(ip_addr)
ip_net = ipaddress.ip_network(
@ -1077,8 +1081,10 @@ def parse_resolv(src='/etc/resolv.conf'):
# The domain and search keywords are mutually exclusive. If more
# than one instance of these keywords is present, the last instance
# will override.
log.debug('{0}: The domain and search keywords are mutually '
'exclusive.'.format(src))
log.debug(
'%s: The domain and search keywords are mutually exclusive.',
src
)
return {
'nameservers': nameservers,

View File

@ -3,28 +3,44 @@
Functions for analyzing/parsing docstrings
'''
from __future__ import absolute_import
from __future__ import absolute_import, print_function, unicode_literals
import logging
import re
import salt.utils.data
from salt.ext import six
log = logging.getLogger(__name__)
def strip_rst(docs):
'''
Strip/replace reStructuredText directives in docstrings
'''
for func, docstring in six.iteritems(docs):
log.debug('Stripping docstring for %s', func)
if not docstring:
continue
docstring_new = re.sub(r' *.. code-block:: \S+\n{1,2}',
'', docstring)
docstring_new = re.sub('.. note::',
'Note:', docstring_new)
docstring_new = re.sub('.. warning::',
'Warning:', docstring_new)
docstring_new = re.sub('.. versionadded::',
'New in version', docstring_new)
docstring_new = re.sub('.. versionchanged::',
'Changed in version', docstring_new)
docstring_new = docstring if six.PY3 else salt.utils.data.encode(docstring)
for regex, repl in (
(r' *.. code-block:: \S+\n{1,2}', ''),
('.. note::', 'Note:'),
('.. warning::', 'Warning:'),
('.. versionadded::', 'New in version'),
('.. versionchanged::', 'Changed in version')):
if six.PY2:
regex = salt.utils.data.encode(regex)
repl = salt.utils.data.encode(repl)
try:
docstring_new = re.sub(regex, repl, docstring_new)
except Exception:
log.debug(
'Exception encountered while matching regex %r to '
'docstring for function %s', regex, func,
exc_info=True
)
if six.PY2:
docstring_new = salt.utils.data.decode(docstring_new)
if docstring != docstring_new:
docs[func] = docstring_new
return docs
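The rewritten `strip_rst` loop can be exercised standalone; here is a Python 3 sketch using the same `(regex, replacement)` table as above, minus the PY2-only encode/decode branches:

```python
import re

def strip_rst(docs):
    # Same substitution table as the rewritten loop above.
    for func, docstring in docs.items():
        if not docstring:
            continue
        for regex, repl in (
                (r' *.. code-block:: \S+\n{1,2}', ''),
                ('.. note::', 'Note:'),
                ('.. warning::', 'Warning:'),
                ('.. versionadded::', 'New in version'),
                ('.. versionchanged::', 'Changed in version')):
            docstring = re.sub(regex, repl, docstring)
        docs[func] = docstring
    return docs

print(strip_rst({'f': '.. versionadded:: 2018.3.0\n\n.. note:: details'})['f'])
# New in version 2018.3.0
#
# Note: details
```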

View File

@ -1028,7 +1028,7 @@ class GitProvider(object):
except IndexError:
dirs = []
self._linkdir_walk.append((
salt.utils.path_join(self.linkdir, *parts[:idx + 1]),
salt.utils.path.join(self.linkdir, *parts[:idx + 1]),
dirs,
[]
))
@ -2974,7 +2974,7 @@ class GitPillar(GitBase):
Ensure that the mountpoint is present in the correct location and
points at the correct path
'''
lcachelink = salt.utils.path_join(repo.linkdir, repo._mountpoint)
lcachelink = salt.utils.path.join(repo.linkdir, repo._mountpoint)
wipe_linkdir = False
create_link = False
try:
@ -3019,7 +3019,7 @@ class GitPillar(GitBase):
# is remove the symlink and let it be created
# below.
try:
if salt.utils.is_windows() \
if salt.utils.platform.is_windows() \
and not ldest.startswith('\\\\') \
and os.path.isdir(ldest):
# On Windows, symlinks to directories

View File

@ -4,12 +4,13 @@ Connection library for Amazon IAM
:depends: requests
'''
from __future__ import absolute_import
from __future__ import absolute_import, unicode_literals
# Import Python libs
import logging
import time
import pprint
import salt.utils.data
from salt.ext.six.moves import range
from salt.ext import six
@ -42,12 +43,12 @@ def _retry_get_url(url, num_retries=10, timeout=5):
pass
log.warning(
'Caught exception reading from URL. Retry no. {0}'.format(i)
'Caught exception reading from URL. Retry no. %s', i
)
log.warning(pprint.pformat(exc))
time.sleep(2 ** i)
log.error(
'Failed to read from URL for {0} times. Giving up.'.format(num_retries)
'Failed to read from URL for %s times. Giving up.', num_retries
)
return ''
@ -56,8 +57,11 @@ def _convert_key_to_str(key):
'''
Stolen completely from boto.providers
'''
if isinstance(key, six.text_type):
# the secret key must be bytes and not unicode to work
# properly with hmac.new (see http://bugs.python.org/issue5285)
return str(key)
return key
# IMPORTANT: on PY2, the secret key must be str and not unicode to work
# properly with hmac.new (see http://bugs.python.org/issue5285)
#
# pylint: disable=incompatible-py3-code
return salt.utils.data.encode(key) \
if six.PY2 and isinstance(key, unicode) \
else key
# pylint: enable=incompatible-py3-code

Some files were not shown because too many files have changed in this diff.