Fixes #40278
When we run a cloud.action function from the CLI, the various __pub_* keys
and values populate the "kwargs" argument. Then, when we call out to a
cloud driver function that doesn't accept a "kwargs" argument (or that now
receives too many arguments), we get an error.
If the cloud function only takes "name" and "call", we should not pass in
"kwargs" as well.
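A minimal sketch of the idea (hypothetical names, not the actual salt.cloud code): inspect the driver function's signature and drop keyword arguments, such as "kwargs", that the function does not accept.

```python
import inspect


def call_compatible(func, **kwargs):
    """Pass only the keyword arguments that ``func`` actually accepts.

    Illustrative sketch of the fix, not the actual Salt code; uses
    ``inspect.getfullargspec`` (Python 3).
    """
    spec = inspect.getfullargspec(func)
    if spec.varkw:
        # Function accepts **kwargs itself, so pass everything through.
        return func(**kwargs)
    accepted = {k: v for k, v in kwargs.items() if k in spec.args}
    return func(**accepted)


def show_instance(name, call=None):
    # Stand-in for a cloud driver function that only takes name and call.
    return (name, call)


# Previously, "kwargs" would have been passed through and raised TypeError.
result = call_compatible(show_instance, name='vm1', call='action', kwargs={})
assert result == ('vm1', 'action')
```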
Fixes #40437
Using 'dict.keys()' in Python 3 returns a view object instead of a list
like in Python 2, so we cannot get an indexed item from the dict's keys.
We could wrap the view in a list, but that is slower and somewhat sloppy,
since PY3 is trying to help us out and be efficient with dict views. We can
get a key name by using next(iter(dict)), which works in both Python 2 and
3. The order in which the key names are retrieved doesn't matter, as long
as we use them all.
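The difference can be seen in a few lines of plain Python (illustrative, not Salt code):

```python
d = {'alpha': 1}

# Python 3: d.keys() returns a view, which cannot be indexed.
try:
    first = d.keys()[0]
except TypeError:
    first = None  # 'dict_keys' object is not subscriptable

# Portable alternative: next(iter(...)) yields a key on both Python 2 and 3
# without building an intermediate list.
first = next(iter(d))
assert first == 'alpha'
```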
This fixes a regression I introduced the other day in #40481, when I
backported auth fixes to 2016.3: I changed how the client was
instantiated and ended up passing an unsupported kwarg to the wrapped
function.
os.makedirs() raises OSError if the directory passed as an argument
already exists. We do check that this is not the case right before the
call, but there is still a tiny window in which the directory can be
created concurrently, between the isdir() check and the makedirs() call.
In some unlucky cases under heavy I/O, the following stack trace is produced:
The minion function caused an exception: Traceback (most recent call last):
...
File "/usr/lib/python2.7/site-packages/salt/fileclient.py", line 165, in cache_file
return self.get_url(path, '', True, saltenv, cachedir=cachedir)
...
File "/usr/lib/python2.7/site-packages/salt/fileclient.py", line 126, in _cache_loc
os.makedirs(destdir)
File "/usr/lib64/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 17] File exists: <PATH>
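A race-tolerant pattern that avoids this traceback is to attempt the creation and swallow only the EEXIST error (a sketch of the general fix, not the actual fileclient.py change):

```python
import errno
import os
import tempfile


def makedirs_safe(path):
    """Create ``path``, tolerating concurrent creation by another process.

    Illustrative sketch: if another process wins the race, makedirs()
    raises OSError with errno EEXIST, which we ignore; any other error
    (e.g. EACCES) still propagates.
    """
    try:
        os.makedirs(path)
    except OSError as exc:
        if exc.errno != errno.EEXIST or not os.path.isdir(path):
            raise


# Demo: a second, "concurrent" creation no longer raises.
target = os.path.join(tempfile.mkdtemp(), 'cachedir')
makedirs_safe(target)
makedirs_safe(target)  # simulates the losing side of the race
assert os.path.isdir(target)
```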
This will force a re-download if the hash specified by source_hash
doesn't match the hash of the cached file; if the hashes match, the
download is skipped. This permits keep=True to prevent repeated
downloads of the source file, if and only if the source_hash also
remains the same.
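The skip logic can be sketched roughly as follows; needs_download is a hypothetical helper, not the actual file-state code, and sha256 is assumed as the hash type:

```python
import hashlib
import os
import tempfile


def needs_download(cached_path, source_hash):
    """Return True if the cached file is missing or its SHA-256 digest
    differs from ``source_hash`` (hypothetical helper illustrating the
    keep=True behavior described above)."""
    if not os.path.isfile(cached_path):
        return True
    with open(cached_path, 'rb') as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return digest != source_hash


# Demo: a matching hash skips the download; a mismatch forces it.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as fh:
    fh.write(b'payload')
good = hashlib.sha256(b'payload').hexdigest()
assert not needs_download(path, good)    # hash matches: skip download
assert needs_download(path, 'deadbeef')  # mismatch: re-download
```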
First of all, the opts will always contain this key, as it has a default
value in salt/config/__init__.py. Second, the default hash type changed
to sha256 in 2016.11 anyway, so we shouldn't refer to a specific hash
type when it will no longer be accurate after a merge-forward.
1. The trim_output argument was ignored for archives extracted using
tarfile.
2. file.get_source_sum would fail if "source" is a list.
3. The checksum was being updated before we checked to see if it
matched, effectively keeping us from detecting changes to the hash.
4. When source_hash_update is True, and the archive is extracted, and
files are later removed from the extraction dir, the state would not
re-extract the archive unless the source_hash had changed.
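Fix 3 above is an ordering bug: if the stored checksum is overwritten before the comparison, the comparison can never detect a change. A minimal sketch with hypothetical names, not the actual archive state code:

```python
def detect_change(state, new_hash):
    """Compare the incoming hash against the stored one BEFORE updating
    it; doing the update first would mask every change (sketch of fix 3,
    with hypothetical names)."""
    changed = state.get('source_hash') != new_hash
    state['source_hash'] = new_hash  # persist only after comparing
    return changed


state = {'source_hash': 'abc123'}
assert detect_change(state, 'def456') is True   # hash changed: detected
assert detect_change(state, 'def456') is False  # unchanged on second run
```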