This changes the default behavior of yaml load. For example, with the
following state (t.sls), the rendered contents of /tmp/testa and
/tmp/testb end up different, even though they should be identical!
$ cat t
{%- load_yaml as vars %}
toaddr:
- test@test.com
{%- endload -%}
{{ vars.toaddr }}
$ cat t.sls
/tmp/testa:
  file.managed:
    - source: salt://t
    - user: root
    - group: root
    - mode: "0755"
    - template: jinja

sys-power/acpid:
  pkg.installed:
    - refresh: False

/tmp/testb:
  file.managed:
    - source: salt://t
    - user: root
    - group: root
    - mode: "0755"
    - template: jinja
$ touch /tmp/test{a,b}
$ salt-call state.sls t
local:
----------
          ID: /tmp/testa
    Function: file.managed
      Result: None
     Comment: The file /tmp/testa is set to be changed
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -0,0 +1 @@
                  +['test@test.com']
----------
          ID: /tmp/testb
    Function: file.managed
      Result: None
     Comment: The file /tmp/testb is set to be changed
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -0,0 +1 @@
                  +[u'test@test.com']
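The flip from str to unicode between two renders of the same template is
consistent with a shared PyYAML loader class being mutated between
renders. A minimal Python 2 sketch of that mechanism (an assumption
about the cause, not Salt's actual code):

    # Python 2 sketch -- assumes the cause is a mutated shared loader;
    # illustrative only, not Salt's actual code.
    import yaml

    doc = 'toaddr:\n- test@test.com'

    print(yaml.safe_load(doc)['toaddr'])   # ['test@test.com'] -- plain str

    # Registering a unicode constructor on the shared SafeLoader class
    # changes every later load in the same process, so two renders of
    # the same template can disagree:
    yaml.SafeLoader.add_constructor(
        'tag:yaml.org,2002:str',
        lambda loader, node: unicode(loader.construct_scalar(node)))

    print(yaml.safe_load(doc)['toaddr'])   # [u'test@test.com'] -- now unicode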
This commit extends the cmd_pattern with the unknown arguments left over
from the argument-parsing process. Without this, it is not possible to
customize the openscap XCCDF execution with additional parameters like
--remediate.
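For illustration, a minimal argparse sketch of that mechanism (the
option names and pattern string are assumptions, not the module's
actual values):

    # Illustrative only -- option names and the pattern string are
    # assumptions, not the openscap module's actual values.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--profile')
    # parse_known_args() hands back the arguments it did not recognize
    args, unknown = parser.parse_known_args(
        ['--profile', 'Default', '--remediate'])

    cmd_pattern = 'oscap xccdf eval --profile {0} {1} {2}'
    cmd = cmd_pattern.format(args.profile, ' '.join(unknown), 'policy.xml')
    print(cmd)  # oscap xccdf eval --profile Default --remediate policy.xml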
This fixes the unnecessary re-downloading reported in #38971 in 2017.7
without using the new fileclient capabilities added in develop. It
includes a helper function in the `file.cached` state that will need to
be removed once we merge forward into develop.
The code used to have a salt.state.HighState instance call the state.
That method pre-dated availability of __states__ to use for executing
the state function. The HighState instance handles exception catching
and produces a list as output if there were errors which arose before
the state was executed. Running the state function using __states__ does
not give you any such protection. This commit removes the type check on
the return data, as it will never be a list when run via __states__, and
wraps the state function in a try/except to catch any exceptions that
may be raised by invoking the file.cached state.
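In condensed form, the new pattern looks roughly like this (argument
names are assumptions; `__states__` is injected by Salt's loader at
runtime and `ret` is the calling state's in-progress return dict):

    # Condensed sketch of the pattern described above, not the exact code.
    try:
        result = __states__['file.cached'](source,
                                           source_hash=source_hash,
                                           skip_verify=skip_verify,
                                           saltenv=__env__)
    except Exception as exc:
        # __states__ gives no HighState-style protection, so catch here
        result = {'result': False,
                  'comment': 'Failed to cache {0}: {1}'.format(source, exc)}

    # result is always a dict here, never a list, so no type check is needed
    if not result['result']:
        ret['result'] = result['result']
        ret['comment'] = result['comment']
        return ret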
The file.managed state, which is used by the archive.extracted state to
download the source archive, at some point recently was modified to
clear the file from the minion cache. This caused unnecessary
re-downloading on subsequent runs, which slows down states considerably
when dealing with larger archives.
This commit makes the following changes to improve this:
1. The fileclient now accepts a `source_hash` argument, which causes
   the client's get_url function to skip downloading http(s) and ftp
   files if the file is already cached and its hash matches the passed
   hash (see the sketch after this list). This argument has also been
   added to the `cp.get_url` and `cp.cache_file` functions.
2. When running `file.source_list`, we no longer try to download the
   file if it is an http(s) or ftp URL.
3. Where `cp.cache_file` is used, we pass the `source_hash` if it is
available.
4. A `cache_source` argument has been added to the `file.managed` state,
defaulting to `True`. This is now used to control whether or not the
source file is cleared from the minion cache when the state
completes.
5. Two new states (`file.cached` and `file.not_cached`) have been added
   to manage files in the minion cache.
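A rough sketch of the hash check behind item 1 (simplified; the helper
name is made up, and it assumes `salt.utils.get_hash` as present in
2017.7):

    # Simplified sketch -- the helper name is made up and the real
    # fileclient logic differs in detail.
    import os
    import salt.utils

    def _cached_copy_valid(dest, source_hash):
        '''Return True if dest is already cached with the expected hash.'''
        if not source_hash or not os.path.isfile(dest):
            return False
        try:
            hash_type, hash_value = source_hash.split('=', 1)
        except ValueError:
            # Bare hash with no "type=" prefix; assume the default type
            hash_type, hash_value = 'sha256', source_hash
        return salt.utils.get_hash(dest, form=hash_type) == hash_value

    # get_url can then return the cached path early for http(s)/ftp URLs:
    # if _cached_copy_valid(dest, source_hash):
    #     return dest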
In addition, the `archive.extracted` state has been modified in the
following ways:
1. For consistency with `file.managed`, a `cache_source` argument has
   been added. This also deprecates `keep`. If `keep` is used,
   `cache_source` assumes its value, and a warning is added to the
   state return to let the user know to update their SLS (a sketch of
   this shim follows the list).
2. The variable name `cached_source` (used internally in the
`archive.extracted` state) has been renamed to `cached` to reduce
confusion with the new `cache_source` argument.
3. The new `file.cached` and `file.not_cached` states are now used to
manage the source tarball instead of `file.managed`. This improves
disk usage and reduces unnecessary complexity in the state as we no
longer keep a copy of the archive in a separate location within the
cachedir. We now only use the copy downloaded using `cp.cache_file`
within the `file.cached` state. This change has also necessitated a
new home for hash files tracked by the `source_hash_update` argument,
in a subdirectory of the minion cachedir called `archive_hash`.
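A sketch of the `keep` deprecation shim from item 1 above (variable
names and warning text are assumptions, not the exact implementation):

    # Sketch only -- names and wording are assumptions.
    if keep is not None:
        cache_source = bool(keep)
        ret.setdefault('warnings', []).append(
            "The 'keep' argument has been deprecated in favor of "
            "'cache_source'. Please update your SLS to use cache_source.")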