Merge pull request #19029 from basepi/merge-forward

Merge forward from 2014.7 to develop
This commit is contained in:
Thomas S Hatch 2014-12-17 14:48:01 -07:00
commit 45365c095e
52 changed files with 1225 additions and 383 deletions

View File

@ -209,16 +209,15 @@ Linux/Unix
.. code-block:: yaml
salt-minion:
pkg:
- installed
pkg.installed:
- name: salt-minion
- version: 2014.1.7-3.el6
- order: last
service:
- running
service.running:
- name: salt-minion
- require:
- pkg: salt-minion
cmd:
- wait
cmd.wait:
- name: echo service salt-minion restart | at now + 1 minute
- watch:
- pkg: salt-minion
@ -230,10 +229,9 @@ distro the minion is running, in case they differ from the example below.
.. code-block:: yaml
at:
pkg:
- installed
service:
- running
pkg.installed:
- name: at
service.running:
- name: atd
- enable: True
@ -258,8 +256,7 @@ adding the following state:
.. code-block:: yaml
schedule-start:
cmd:
- run
cmd.run:
- name: 'start powershell "Restart-Service -Name salt-minion"'
- order: last
@ -291,4 +288,4 @@ master without requiring the minion to be running.
More information about salting the Salt master can be found in the salt-formula
for salt itself:
https://github.com/saltstack-formulas/salt-formula

View File

@ -57,16 +57,17 @@ As an example, a state written thusly:
.. code-block:: yaml
apache:
pkg:
- installed
service:
- running
pkg.installed:
- name: httpd
service.running:
- name: httpd
- watch:
- file: /etc/httpd/conf.d/httpd.conf
- file: apache_conf
- pkg: apache
/etc/httpd/conf.d/httpd.conf:
file:
- managed
apache_conf:
file.managed:
- name: /etc/httpd/conf.d/httpd.conf
- source: salt://apache/httpd.conf
Will have High Data which, when represented in JSON, looks like this:
@ -76,41 +77,50 @@ Will have High Data which looks like this represented in json:
{
"apache": {
"pkg": [
{
"name": "httpd"
},
"installed",
{
"order": 10000
}
],
"service": [
"running",
{
"name": "httpd"
},
{
"watch": [
{
"file": "/etc/httpd/conf.d/httpd.conf"
"file": "apache_conf"
},
{
"pkg": "apache"
}
]
},
"running",
{
"order": 10001
}
],
"__sls__": "apache",
"__sls__": "blah",
"__env__": "base"
},
"/etc/httpd/conf.d/httpd.conf": {
"apache_conf": {
"file": [
"managed",
{
"name": "/etc/httpd/conf.d/httpd.conf"
},
{
"source": "salt://apache/httpd.conf"
},
"managed",
{
"order": 10002
}
],
"__sls__": "apache",
"__sls__": "blah",
"__env__": "base"
}
}
@ -121,19 +131,19 @@ The subsequent Low Data will look like this:
[
{
"name": "apache",
"name": "httpd",
"state": "pkg",
"__id__": "apache",
"fun": "installed",
"__env__": "base",
"__sls__": "apache",
"__sls__": "blah",
"order": 10000
},
{
"name": "apache",
"name": "httpd",
"watch": [
{
"file": "/etc/httpd/conf.d/httpd.conf"
"file": "apache_conf"
},
{
"pkg": "apache"
@ -143,22 +153,21 @@ The subsequent Low Data will look like this:
"__id__": "apache",
"fun": "running",
"__env__": "base",
"__sls__": "apache",
"__sls__": "blah",
"order": 10001
},
{
"name": "/etc/httpd/conf.d/httpd.conf",
"source": "salt://apache/httpd.conf",
"state": "file",
"__id__": "/etc/httpd/conf.d/httpd.conf",
"__id__": "apache_conf",
"fun": "managed",
"__env__": "base",
"__sls__": "apache",
"__sls__": "blah",
"order": 10002
}
]
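The High Data to Low Data transformation above can be sketched in a few lines of plain Python. This is a simplified illustration of the compilation step, mirroring only the structures shown above; it is not Salt's actual state compiler:

```python
# Simplified sketch of compiling High Data into Low Data "chunks".
# Mirrors the JSON structures shown above; not Salt's real compiler.

def compile_low(high):
    low = []
    for state_id, body in high.items():
        # __sls__ and __env__ metadata are copied onto every chunk.
        meta = {k: v for k, v in body.items() if k.startswith("__")}
        for module, args in body.items():
            if module.startswith("__"):
                continue
            chunk = {"state": module, "__id__": state_id, "name": state_id}
            for arg in args:
                if isinstance(arg, str):
                    chunk["fun"] = arg   # the state function, e.g. "installed"
                else:
                    chunk.update(arg)    # keyword args, including "order"
            chunk.update(meta)
            low.append(chunk)
    return low

high = {
    "apache": {
        "pkg": [{"name": "httpd"}, "installed", {"order": 10000}],
        "__sls__": "blah",
        "__env__": "base",
    },
}
print(compile_low(high))
```

Each resulting chunk is a flat dictionary that maps directly to a single state function call, which is exactly the shape of the Low Data list.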
This tutorial discusses the Low Data evaluation and the state runtime.
Ordering Layers
@ -235,8 +244,8 @@ ordering can be explicitly overridden using the `order` flag in states:
.. code-block:: yaml
apache:
pkg:
- installed
pkg.installed:
- name: httpd
- order: 1
This order flag will override the definition order; this makes it very
@ -342,4 +351,4 @@ the first instance of a failure.
In the end, using requisites creates very tight and fine-grained states.
Not using requisites makes for full sequence runs which, while slightly
easier to write, give much less control over the executions.

View File

@ -103,8 +103,8 @@ declaration that will restart Apache whenever the Apache configuration file,
- file: mywebsite
mywebsite:
file:
- managed
file.managed:
- name: /var/www/mysite
.. seealso:: watch_in and require_in
@ -168,10 +168,10 @@ For example, the following state declaration calls the :mod:`installed
.. code-block:: yaml
httpd:
pkg.installed
pkg.installed: []
The function can be declared inline with the state as a shortcut, but
the actual data structure is better referenced in this form:
The function can be declared inline with the state as a shortcut.
The actual data structure is compiled to this form:
.. code-block:: yaml
@ -203,10 +203,8 @@ VALID:
.. code-block:: yaml
httpd:
pkg:
- installed
service:
- running
pkg.installed: []
service.running: []
Occurs as the only index in the :ref:`state-declaration` list.
@ -280,8 +278,7 @@ easier to specify ``mywebsite`` than to specify
- file: mywebsite
apache2:
service:
- running
service.running:
- watch:
- file: mywebsite

View File

@ -113,19 +113,17 @@ Here is an example of a Salt State:
.. code-block:: yaml
vim:
pkg:
- installed
pkg.installed: []
salt:
pkg:
- latest
pkg.latest:
- name: salt
service.running:
- require:
- file: /etc/salt/minion
- pkg: salt
- names:
- salt-master
- salt-minion
- require:
- pkg: salt
- watch:
- file: /etc/salt/minion
@ -196,15 +194,15 @@ the following state file which we'll call ``pep8.sls``:
.. code-block:: yaml
python-pip:
cmd:
- run
cmd.run:
- name: |
easy_install --script-dir=/usr/bin -U pip
- cwd: /
- name: easy_install --script-dir=/usr/bin -U pip
pep8:
pip.installed
requires:
- cmd: python-pip
pip.installed:
- require:
- cmd: python-pip
The above example installs `pip`_ using ``easy_install`` from `setuptools`_ and
@ -276,16 +274,16 @@ The modified state file would now be:
.. code-block:: yaml
python-pip:
cmd:
- run
cmd.run:
- name: |
easy_install --script-dir=/usr/bin -U pip
- cwd: /
- name: easy_install --script-dir=/usr/bin -U pip
- reload_modules: true
pep8:
pip.installed
requires:
- cmd: python-pip
pip.installed:
- require:
- cmd: python-pip
Let's run it, once:

View File

@ -61,8 +61,7 @@ These requisite statements are applied to a specific state declaration:
.. code-block:: yaml
httpd:
pkg:
- installed
pkg.installed: []
file.managed:
- name: /etc/httpd/conf/httpd.conf
- source: salt://httpd/httpd.conf
@ -91,10 +90,8 @@ the discrete states are split or grouped into separate sls files:
- network
httpd:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- require:
- pkg: httpd
- sls: network
@ -121,8 +118,7 @@ more requisites. Both requisite types can also be separately declared:
.. code-block:: yaml
httpd:
pkg:
- installed
pkg.installed: []
service.running:
- enable: True
- watch:
@ -136,10 +132,8 @@ more requisites. Both requisite types can also be separately declared:
- source: salt://httpd/httpd.conf
- require:
- pkg: httpd
user:
- present
group:
- present
user.present: []
group.present: []
In this example, the httpd service is only going to be started if the package,
user, group, and file are executed successfully.

View File

@ -20,7 +20,7 @@ the targeting state. The following example demonstrates a direct requisite:
.. code-block:: yaml
vim:
pkg.installed
pkg.installed: []
/etc/vimrc:
file.managed:
@ -258,15 +258,13 @@ The ``onfail`` requisite is applied in the same way as ``require`` as ``watch``:
.. code-block:: yaml
primary_mount:
mount:
- mounted
mount.mounted:
- name: /mnt/share
- device: 10.0.0.45:/share
- fstype: nfs
backup_mount:
mount:
- mounted
mount.mounted:
- name: /mnt/share
- device: 192.168.40.34:/share
- fstype: nfs
@ -338,10 +336,8 @@ Using ``require``
.. code-block:: yaml
httpd:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- require:
- pkg: httpd
@ -350,12 +346,10 @@ Using ``require_in``
.. code-block:: yaml
httpd:
pkg:
- installed
pkg.installed:
- require_in:
- service: httpd
service:
- running
service.running: []
The ``require_in`` statement is particularly useful when assigning a require
in a separate sls file. For instance it may be common for httpd to require
@ -367,10 +361,8 @@ http.sls
.. code-block:: yaml
httpd:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- require:
- pkg: httpd
@ -382,8 +374,7 @@ php.sls
- http
php:
pkg:
- installed
pkg.installed:
- require_in:
- service: httpd
@ -395,8 +386,7 @@ mod_python.sls
- http
mod_python:
pkg:
- installed
pkg.installed:
- require_in:
- service: httpd

View File

@ -90,8 +90,11 @@ be used as often as possible.
.. note::
Formulas should never be referenced from the main repository, and should
be forked to a repo where unintended changes will not take place.
Formula repositories on the saltstack-formulas GitHub organization should
not be pointed to directly from systems that automatically fetch new
updates, such as GitFS or similar tooling. Instead, formula repositories
should be forked on GitHub or cloned locally, where unintended, automatic
changes will not take place.
Structuring Pillar Files
@ -145,13 +148,13 @@ for variable definitions.
Each SLS file within the ``/srv/pillar/`` directory should correspond to the
states which it matches.
This would mean that the apache pillar file should contain data relevant to
apache. Structuring files in this way once again ensures modularity, and
This would mean that the ``apache`` pillar file should contain data relevant to
Apache. Structuring files in this way once again ensures modularity, and
creates a consistent understanding throughout our Salt environment. Users can
expect that pillar variables found in an Apache state will live inside of an
Apache pillar:
/srv/salt/pillar/apache.sls
``/srv/salt/pillar/apache.sls``:
.. code-block:: yaml
@ -178,7 +181,7 @@ lead to extensive flexibility.
Although it is possible to set variables locally, this is generally not
preferred:
/srv/salt/apache/conf.sls
``/srv/salt/apache/conf.sls``:
.. code-block:: yaml
@ -189,8 +192,7 @@ preferred:
- apache
apache_conf:
file:
- managed
file.managed:
- name: {{ name }}
- source: {{ tmpl }}
- template: jinja
@ -203,7 +205,7 @@ When generating this information it can be easily transitioned to the pillar
where data can be overwritten, modified, and applied to multiple states, or
locations within a single state:
/srv/pillar/apache.sls
``/srv/pillar/apache.sls``:
.. code-block:: yaml
@ -213,7 +215,7 @@ locations within a single state:
config:
tmpl: salt://apache/files/httpd.conf
/srv/salt/apache/conf.sls
``/srv/salt/apache/conf.sls``:
.. code-block:: yaml
@ -223,8 +225,7 @@ locations within a single state:
- apache
apache_conf:
file:
- managed
file.managed:
- name: {{ salt['pillar.get']('apache:lookup:name') }}
- source: {{ salt['pillar.get']('apache:lookup:config:tmpl') }}
- template: jinja
@ -244,20 +245,17 @@ state could be re-used, and what it relies on to operate. Below are several
examples which will iteratively explain how a user can go from a state which
is not very modular to one that is:
/srv/salt/apache/init.sls:
``/srv/salt/apache/init.sls``:
.. code-block:: yaml
httpd:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- enable: True
/etc/httpd/httpd.conf:
file:
- managed
file.managed:
- source: salt://apache/files/httpd.conf
- template: jinja
- watch_in:
@ -280,22 +278,19 @@ conf file.
Our second revision begins to address the referencing by using ``- name``, as
opposed to direct ID references:
/srv/salt/apache/init.sls:
``/srv/salt/apache/init.sls``:
.. code-block:: yaml
apache:
pkg:
- installed
pkg.installed:
- name: httpd
service:
service.running:
- name: httpd
- enable: True
- running
apache_conf:
file:
- managed
file.managed:
- name: /etc/httpd/httpd.conf
- source: salt://apache/files/httpd.conf
- template: jinja
@ -317,7 +312,7 @@ Starting with the addition of a map.jinja file (as noted in the
:ref:`Formula documentation <conventions-formula>`), and
modification of static values:
/srv/salt/apache/map.jinja:
``/srv/salt/apache/map.jinja``:
.. code-block:: yaml
@ -343,24 +338,21 @@ modification of static values:
config:
tmpl: salt://apache/files/httpd.conf
/srv/salt/apache/init.sls:
``/srv/salt/apache/init.sls``:
.. code-block:: yaml
{% from "apache/map.jinja" import apache with context %}
apache:
pkg:
- installed
pkg.installed:
- name: {{ apache.server }}
service:
service.running:
- name: {{ apache.service }}
- enable: True
- running
apache_conf:
file:
- managed
file.managed:
- name: {{ apache.conf }}
- source: {{ salt['pillar.get']('apache:lookup:config:tmpl') }}
- template: jinja
@ -376,7 +368,7 @@ configuration file, but the default apache conf. With the current state setup
this is not possible. To attain this level of modularity this state will need
to be broken into two states.
/srv/salt/apache/map.jinja:
``/srv/salt/apache/map.jinja``:
.. code-block:: yaml
@ -393,7 +385,7 @@ to be broken into two states.
},
}, merge=salt['pillar.get']('apache:lookup')) %}
/srv/pillar/apache.sls:
``/srv/pillar/apache.sls``:
.. code-block:: yaml
@ -403,22 +395,20 @@ to be broken into two states.
tmpl: salt://apache/files/httpd.conf
/srv/salt/apache/init.sls:
``/srv/salt/apache/init.sls``:
.. code-block:: yaml
{% from "apache/map.jinja" import apache with context %}
apache:
pkg:
- installed
pkg.installed:
- name: {{ apache.server }}
service:
service.running:
- name: {{ apache.service }}
- enable: True
- running
/srv/salt/apache/conf.sls:
``/srv/salt/apache/conf.sls``:
.. code-block:: yaml
@ -428,8 +418,7 @@ to be broken into two states.
- apache
apache_conf:
file:
- managed
file.managed:
- name: {{ apache.conf }}
- source: {{ salt['pillar.get']('apache:lookup:config:tmpl') }}
- template: jinja
@ -457,16 +446,15 @@ those servers which require this secure data have access to it. In this
example a user can go from an insecure configuration to one which is only
accessible by the appropriate hosts:
/srv/salt/mysql/testerdb.sls:
``/srv/salt/mysql/testerdb.sls``:
.. code-block:: yaml
testdb:
mysql_database:
- present:
mysql_database.present:
- name: testerdb
/srv/salt/mysql/user.sls:
``/srv/salt/mysql/user.sls``:
.. code-block:: yaml
@ -474,8 +462,7 @@ accessible by the appropriate hosts:
- mysql.testerdb
testdb_user:
mysql_user:
- present
mysql_user.present:
- name: frank
- password: "test3rdb"
- host: localhost
@ -504,7 +491,7 @@ portable it may result in more work later!
Fixing this issue is relatively simple, the content just needs to be moved to
the associated pillar:
/srv/pillar/mysql.sls
``/srv/pillar/mysql.sls``:
.. code-block:: yaml
@ -515,16 +502,15 @@ the associated pillar:
user: frank
host: localhost
/srv/salt/mysql/testerdb.sls:
``/srv/salt/mysql/testerdb.sls``:
.. code-block:: yaml
testdb:
mysql_database:
- present:
mysql_database.present:
- name: {{ salt['pillar.get']('mysql:lookup:name') }}
/srv/salt/mysql/user.sls:
``/srv/salt/mysql/user.sls``:
.. code-block:: yaml
@ -532,8 +518,7 @@ the associated pillar:
- mysql.testerdb
testdb_user:
mysql_user:
- present
mysql_user.present:
- name: {{ salt['pillar.get']('mysql:lookup:user') }}
- password: {{ salt['pillar.get']('mysql:lookup:password') }}
- host: {{ salt['pillar.get']('mysql:lookup:host') }}

View File

@ -117,8 +117,7 @@ package until after the EPEL repository has also been installed:
- epel
python26:
pkg:
- installed
pkg.installed:
- require:
- pkg: epel
@ -220,19 +219,513 @@ on GitHub.
manage which repositories they are subscribed to on GitHub's watching page:
https://github.com/watching.
Abstracting platform-specific data
----------------------------------
Style
-----
It is useful to have a single source for platform-specific or other static
information that can be reused throughout a Formula. Such a file should be
named :file:`map.jinja` and live alongside the state files.
Maintainability, readability, and reusability are all marks of a good Salt sls
file. This section contains several suggestions and examples.
The following is an example from the MySQL Formula. It is a simple dictionary
that serves as a lookup table (sometimes called a hash map or associative array).
.. code-block:: yaml
# Deploy the stable master branch unless version overridden by passing
# Pillar at the CLI or via the Reactor.
deploy_myapp:
git.latest:
- name: git@github.com/myco/myapp.git
- version: {{ salt.pillar.get('myapp:version', 'master') }}
Use a descriptive State ID
``````````````````````````
The ID of a state is used as a unique identifier that may be referenced via
other states in :ref:`requisites <requisites>`. It must be unique across the
whole state tree (:ref:`it is a key in a dictionary <id-declaration>`, after
all).
In addition a state ID should be descriptive and serve as a high-level hint of
what it will do, or manage, or change. For example, ``deploy_webapp``, or
``apache``, or ``reload_firewall``.
Use ``module.function`` notation
````````````````````````````````
So-called "short-declaration" notation is preferred for referencing state
modules and state functions. It provides a consistent pattern of
``module.function`` shared between Salt States, the Reactor, Overstate, Salt
Mine, the Scheduler, as well as with the CLI.
.. code-block:: yaml
# Do
apache:
pkg.installed:
- name: httpd
# Don't
apache:
pkg:
- installed
- name: httpd
Salt's state compiler will transform "short-decs" into the longer format
:ref:`when compiling the human-friendly highstate structure into the
machine-friendly lowstate structure <state-layers>`.
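As a rough illustration of that transformation, the expansion of a short-dec into the longer two-key form might be sketched like this (a toy illustrating the documented behavior, not Salt's own compiler code):

```python
# Toy expansion of "module.function" short-dec keys into the long
# form; a sketch of the documented behavior, not Salt's own code.

def expand_short_dec(state):
    expanded = {}
    for state_id, body in state.items():
        expanded[state_id] = {}
        for key, args in body.items():
            if "." in key:
                module, fun = key.split(".", 1)
                # The function name joins the argument list.
                expanded[state_id][module] = [fun] + list(args)
            else:
                expanded[state_id][key] = args
    return expanded

short = {"apache": {"pkg.installed": [{"name": "httpd"}]}}
print(expand_short_dec(short))
# {'apache': {'pkg': ['installed', {'name': 'httpd'}]}}
```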
Specify the ``name`` parameter
``````````````````````````````
Use a unique and permanent identifier for the state ID and reserve ``name`` for
data with variability.
The :ref:`name declaration <name-declaration>` is a required parameter for all
state functions. The state ID will implicitly be used as ``name`` if it is not
explicitly set in the state.
In many state functions the ``name`` parameter is used for data that varies
such as OS-specific package names, OS-specific file system paths, repository
addresses, etc. Any time the ID of a state changes all references to that ID
must also be changed. Use a permanent ID when writing a state the first time to
future-proof that state and allow for easier refactors down the road.
Comment state files
```````````````````
YAML allows comments at varying indentation levels. It is a good practice to
comment state files. Use vertical whitespace to visually separate different
concepts or actions.
.. code-block:: yaml
# Start with a high-level description of the current sls file.
# Explain the scope of what it will do or manage.
# Comment individual states as necessary.
update_a_config_file:
# Provide details on why an unusual choice was made. For example:
#
# This template is fetched from a third-party and does not fit our
# company norm of using Jinja. This must be processed using Mako.
file.managed:
- name: /path/to/file.cfg
- source: salt://path/to/file.cfg.template
- template: mako
# Provide a description or explanation that did not fit within the state
# ID. For example:
#
# Update the application's last-deployed timestamp.
# This is a workaround until Bob configures Jenkins to automate RPM
# builds of the app.
cmd.run:
# FIXME: Joe needs this to run on Windows by next quarter. Switch these
# from shell commands to Salt's file.managed and file.replace state
# modules.
- name: |
touch /path/to/file_last_updated
sed -e 's/foo/bar/g' /path/to/file_environment
- onchanges:
- file: a_config_file
Be careful to use Jinja comments for commenting Jinja code and YAML comments
for commenting YAML code.
.. code-block:: jinja
# BAD EXAMPLE
# The Jinja in this YAML comment is still executed!
# {% set apache_is_installed = 'apache' in salt.pkg.list_pkgs() %}
# GOOD EXAMPLE
# The Jinja in this Jinja comment will not be executed.
{# {% set apache_is_installed = 'apache' in salt.pkg.list_pkgs() %} #}
Easy on the Jinja!
------------------
Jinja templating provides vast flexibility and power when building Salt sls
files. It can also create an unmaintainable tangle of logic and data. Speaking
broadly, Jinja is best used when kept apart from the states (as much as is
possible).
Below are guidelines and examples of how Jinja can be used effectively.
Know the evaluation and execution order
```````````````````````````````````````
High-level knowledge of how Salt states are compiled and run is useful when
writing states.
The default :conf_minion:`renderer` setting in Salt is Jinja piped to YAML.
Each is a separate step. Each step is not aware of the previous or following
step. Jinja is not YAML aware, YAML is not Jinja aware; they cannot share
variables or interact.
* Whatever the Jinja step produces must be valid YAML.
* Whatever the YAML step produces must be a valid :ref:`highstate data
structure <states-highstate-example>`. (This is also true of the final step
for :ref:`any of the alternate renderers <all-salt.renderers>` in Salt.)
* Highstate can be thought of as a human-friendly data structure; easy to write
and easy to read.
* Salt's state compiler validates the highstate and compiles it to low state.
* Low state can be thought of as a machine-friendly data structure. It is a
list of dictionaries that each map directly to a function call.
* Salt's state system finally starts and executes on each "chunk" in the low
state. Remember that requisites are evaluated at runtime.
* The return for each function call is added to the "running" dictionary which
is the final output at the end of the state run.
The full evaluation and execution order::
Jinja -> YAML -> Highstate -> low state -> execution
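The separation of the first two steps can be demonstrated with standard-library stand-ins. Here ``string.Template`` plays the role of Jinja and ``json`` plays the role of YAML; the names and data are invented for the sketch, but the principle is the same: the templating step only produces text, and only the parsing step produces a data structure.

```python
import json
from string import Template

# Step 1: the "Jinja" step. Its output is nothing but text.
sls_source = Template('{"$pkg_id": {"pkg.installed": []}}')
rendered_text = sls_source.substitute(pkg_id="httpd")

# Step 2: the "YAML" step. Only now does a data structure exist.
# The parser knows nothing about the template variables used above.
high_data = json.loads(rendered_text)

print(high_data)  # {'httpd': {'pkg.installed': []}}
```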
Avoid changing the underlying system with Jinja
```````````````````````````````````````````````
Avoid calling commands from Jinja that change the underlying system. Commands
run via Jinja do not respect Salt's dry-run mode (``test=True``)! This is
usually in conflict with the idempotent nature of Salt states unless the
command being run is also idempotent.
Inspect the local system
````````````````````````
A common use for Jinja in Salt states is to gather information about the
underlying system. The ``grains`` dictionary available in the Jinja context is
a great example of common data points that Salt itself has already gathered.
Less common values are often found by running commands. For example:
.. code-block:: jinja
{% set is_selinux_enabled = salt.cmd.run('sestatus') == '1' %}
This is usually best done with a variable assignment in order to separate the
data from the state that will make use of the data.
Gather external data
````````````````````
One of the most common uses for Jinja is to pull external data into the state
file. External data can come from anywhere like API calls or database queries,
but it most commonly comes from flat files on the file system or Pillar data
from the Salt Master. For example:
.. code-block:: jinja
{% set some_data = salt.pillar.get('some_data', {'sane default': True}) %}
{# or #}
{% load_json 'path/to/file.json' as some_data %}
{# or #}
{% load_text 'path/to/ssh_key.pub' as ssh_pub_key %}
{# or #}
{% from 'path/to/other_file.jinja' import some_data with context %}
This is usually best done with a variable assignment in order to separate the
data from the state that will make use of the data.
Light conditionals and looping
``````````````````````````````
Jinja is extremely powerful for programmatically generating Salt states. It is
also easy to overuse. As a rule of thumb, if it is hard to read it will be hard
to maintain!
Separate Jinja control-flow statements from the states as much as is possible
to create readable states. Limit Jinja within states to simple variable
lookups.
Below is a simple example of a readable loop:
.. code-block:: yaml
{% for user in salt.pillar.get('list_of_users', []) %}
{# Ensure unique state IDs when looping. #}
{{ user.name }}-{{ loop.index }}:
user.present:
- name: {{ user.name }}
- shell: {{ user.shell }}
{% endfor %}
Avoid putting Jinja conditionals within Salt states where possible.
Readability suffers and the correct YAML indentation is difficult to see in the
surrounding visual noise. Parameterization (discussed below) and variables are
both useful techniques to avoid this. For example:
.. code-block:: yaml
{# ---- Bad example ---- #}
apache:
pkg.installed:
{% if grains.os_family == 'RedHat' %}
- name: httpd
{% elif grains.os_family == 'Debian' %}
- name: apache2
{% endif %}
{# ---- Better example ---- #}
{% if grains.os_family == 'RedHat' %}
{% set name = 'httpd' %}
{% elif grains.os_family == 'Debian' %}
{% set name = 'apache2' %}
{% endif %}
apache:
pkg.installed:
- name: {{ name }}
{# ---- Good example ---- #}
{% set name = {
'RedHat': 'httpd',
'Debian': 'apache2',
}.get(grains.os_family) %}
apache:
pkg.installed:
- name: {{ name }}
Dictionaries are useful to effectively "namespace" a collection of variables.
This is useful with parameterization (discussed below). Dictionaries are also
easily combined and merged. And they can be directly serialized into YAML which
is often easier than trying to create valid YAML through templating. For
example:
.. code-block:: yaml
{# ---- Bad example ---- #}
haproxy_conf:
file.managed:
- name: /etc/haproxy/haproxy.cfg
- template: jinja
{% if 'external_loadbalancer' in grains.roles %}
- source: salt://haproxy/external_haproxy.cfg
{% elif 'internal_loadbalancer' in grains.roles %}
- source: salt://haproxy/internal_haproxy.cfg
{% endif %}
- context:
{% if 'external_loadbalancer' in grains.roles %}
ssl_termination: True
{% elif 'internal_loadbalancer' in grains.roles %}
ssl_termination: False
{% endif %}
{# ---- Better example ---- #}
{% load_yaml as haproxy_defaults %}
common_settings:
bind_port: 80
internal_loadbalancer:
source: salt://haproxy/internal_haproxy.cfg
settings:
bind_port: 8080
ssl_termination: False
external_loadbalancer:
source: salt://haproxy/external_haproxy.cfg
settings:
ssl_termination: True
{% endload %}
{% if 'external_loadbalancer' in grains.roles %}
{% set haproxy = haproxy_defaults['external_loadbalancer'] %}
{% elif 'internal_loadbalancer' in grains.roles %}
{% set haproxy = haproxy_defaults['internal_loadbalancer'] %}
{% endif %}
{% do haproxy.settings.update(haproxy_defaults.common_settings) %}
haproxy_conf:
file.managed:
- name: /etc/haproxy/haproxy.cfg
- template: jinja
- source: {{ haproxy.source }}
- context: {{ haproxy.settings | yaml() }}
There is still room for improvement in the above example. For example,
extracting into an external file or replacing the if-elif conditional with a
function call to filter the correct data more succinctly. However, the state
itself is simple and legible, the data is separate and also simple and legible.
And those suggested improvements can be made at some future date without
altering the state at all!
Avoid heavy logic and programming
`````````````````````````````````
Jinja is not Python. It was made by Python programmers and shares many
semantics and some syntax but it does not allow for arbitrary Python function
calls or Python imports. Jinja is a fast and efficient templating language but
the syntax can be verbose and visually noisy.
Once Jinja use within an sls file becomes slightly complicated -- long chains
of if-elif-elif-else statements, nested conditionals, complicated dictionary
merges, wanting to use sets -- instead consider using a different Salt
renderer, such as the Python renderer. As a rule of thumb, if it is hard to
read it will be hard to maintain -- switch to a format that is easier to read.
Using alternate renderers is very simple to do using Salt's "she-bang" syntax
at the top of the file. The Python renderer must simply return the correct
:ref:`highstate data structure <states-highstate-example>`. The following
example is a state tree of two sls files, one simple and one complicated.
``/srv/salt/top.sls``:
.. code-block:: yaml
base:
'*':
- common_configuration
- roles_configuration
``/srv/salt/common_configuration.sls``:
.. code-block:: yaml
common_users:
user.present:
- names: [larry, curly, moe]
``/srv/salt/roles_configuration``:
.. code-block:: python
#!py
def run():
list_of_roles = set()
# This example has the minion id in the form 'web-03-dev'.
# Easily access the grains dictionary:
try:
app, instance_number, environment = __grains__['id'].split('-')
instance_number = int(instance_number)
except ValueError:
app, instance_number, environment = ['Unknown', 0, 'dev']
list_of_roles.add(app)
if app == 'web' and environment == 'dev':
list_of_roles.add('primary')
list_of_roles.add('secondary')
elif app == 'web' and environment == 'staging':
if instance_number == 0:
list_of_roles.add('primary')
else:
list_of_roles.add('secondary')
# Easily cross-call Salt execution modules:
if __salt__['myutils.query_valid_ec2_instance']():
list_of_roles.add('is_ec2_instance')
return {
'set_roles_grains': {
'grains.present': [
{'name': 'roles'},
{'value': list(list_of_roles)},
],
},
}
Jinja Macros
````````````
In Salt sls files Jinja macros are useful for one thing and one thing only:
creating mini templates that can be reused and rendered on demand. Do not fall
into the trap of thinking of macros as functions; Jinja is not Python (see
above).
Macros are useful for creating reusable, parameterized states. For example:
.. code-block:: yaml
{% macro user_state(state_id, user_name, shell='/bin/bash', groups=[]) %}
{{ state_id }}:
user.present:
- name: {{ user_name }}
- shell: {{ shell }}
- groups: {{ groups | json() }}
{% endmacro %}
{% for user_info in salt.pillar.get('my_users', []) %}
{{ user_state('user_number_' ~ loop.index, **user_info) }}
{% endfor %}
Macros are also useful for creating one-off "serializers" that can accept a
data structure and write that out as a domain-specific configuration file. For
example, the following macro could be used to write a php.ini config file:
``/srv/salt/php.sls``:
.. code-block:: yaml
php_ini:
file.managed:
- name: /etc/php.ini
- source: salt://php.ini.tmpl
- template: jinja
- context:
php_ini_settings: {{ salt.pillar.get('php_ini', {}) | json() }}
``/srv/pillar/php.sls``:
.. code-block:: yaml
PHP:
engine: 'On'
short_open_tag: 'Off'
error_reporting: 'E_ALL & ~E_DEPRECATED & ~E_STRICT'
``/srv/salt/php.ini.tmpl``:
.. code-block:: jinja
{% macro php_ini_serializer(data) %}
{% for section_name, name_val_pairs in data.items() %}
[{{ section_name }}]
{% for name, val in name_val_pairs.items() %}
{{ name }} = "{{ val }}"
{% endfor %}
{% endfor %}
{% endmacro %}
; File managed by Salt at <{{ source }}>.
; Your changes will be overwritten.
{{ php_ini_serializer(php_ini_settings) }}
Abstracting static defaults into a lookup table
-----------------------------------------------
Separating the data that a state uses from the state itself increases the
flexibility and reusability of a state.
An obvious and common example of this is platform-specific package names and
file system paths. Another example is sane defaults for an application, or
common settings within a company or organization. Organizing such data as a
dictionary (aka hash map, lookup table, associative array) often provides a
lightweight namespacing and allows for quick and easy lookups. In addition,
using a dictionary allows for easily merging and overriding static values
within a lookup table with dynamic values fetched from Pillar.
A strong convention in Salt Formulas is to place platform-specific data, such
as package names and file system paths, into a file named :file:`map.jinja`
that is placed alongside the state files.
The following is an example from the MySQL Formula.
The :py:func:`grains.filter_by <salt.modules.grains.filter_by>` function
performs a lookup on that table using the ``os_family`` grain (by default).
The result is that the ``mysql`` variable is assigned to a *subset* of
the lookup table for the current platform. This allows states to reference, for
example, the name of a package without worrying about the underlying OS. The
syntax for referencing a value is a normal dictionary lookup in Jinja, such as
@ -274,11 +767,9 @@ state file using the following syntax:
{% from "mysql/map.jinja" import mysql with context %}
mysql-server:
pkg:
- installed
pkg.installed:
- name: {{ mysql.server }}
service:
- running
service.running:
- name: {{ mysql.service }}
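A corresponding ``map.jinja`` might look like the following sketch (the
package and service names shown are illustrative, not the MySQL Formula's
actual values):

.. code-block:: jinja

    {% set mysql = salt['grains.filter_by']({
        'Debian': {
            'server': 'mysql-server',
            'service': 'mysql',
            'config': '/etc/mysql/my.cnf',
        },
        'RedHat': {
            'server': 'mysql-server',
            'service': 'mysqld',
            'config': '/etc/my.cnf',
        },
    }, merge=salt['pillar.get']('mysql:lookup')) %}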
Collecting common values
@ -321,11 +812,13 @@ different from the base must be specified of the alternates:
Overriding values in the lookup table
`````````````````````````````````````
Any value in the lookup table may be overridden using Pillar. This is a
simple pattern which once again increases the flexibility and reusability of
state files.

The ``merge`` argument in :py:func:`filter_by <salt.modules.grains.filter_by>`
specifies the location of a dictionary in Pillar that can be used to override
values returned from the lookup table. If the value exists in Pillar it will
take precedence.
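For example, if ``map.jinja`` passes ``merge=salt['pillar.get']('mysql:lookup')``
to ``filter_by``, a Pillar entry such as the following sketch (the
``mysql:lookup`` path is an assumed convention, not a fixed name) would
override the ``config`` value from the lookup table for that minion:

.. code-block:: yaml

    mysql:
      lookup:
        config: /srv/mysql/my.cnf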
This is useful when software or configuration files are installed to
non-standard locations or on unsupported platforms. For example, the following
@ -369,6 +862,159 @@ Pillar would replace the ``config`` value from the call above.
zap: "The word of the day is \"salty\"."
zip: "\"The quick brown fox . . .\""
The :py:func:`filter_by <salt.modules.grains.filter_by>` function performs a
simple dictionary lookup but also allows for fetching data from Pillar and
overriding data stored in the lookup table. That same workflow can be easily
performed without using ``filter_by``; other dictionaries besides data from
Pillar can also be used.
.. code-block:: jinja
{% set lookup_table = {...} %}
{% do lookup_table.update(salt.pillar.get('my:custom:data', {})) %}
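Spelled out with an inline table, the same pattern might look like this
sketch (the keys and the Pillar path are illustrative):

.. code-block:: jinja

    {% set lookup_table = {
        'pkg': 'httpd',
        'service': 'httpd',
        'conf_dir': '/etc/httpd/conf.d',
    } %}
    {% do lookup_table.update(salt.pillar.get('my:custom:data', {})) %}

After the ``update`` call, ``{{ lookup_table.pkg }}`` resolves to either the
static default or the Pillar-supplied override.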
When to use lookup tables
`````````````````````````
The ``map.jinja`` file is only a convention within Salt Formulas. This greater
pattern is useful for a wide variety of data in a wide variety of workflows.
This pattern is not limited to pulling data from a single file or data source.
This pattern is useful in States, Pillar, the Reactor, and Overstate as well.
Working with a data structure instead of, say, a config file allows the data to
be cobbled together from multiple sources (local files, remote Pillar, database
queries, etc), combined, overridden, and searched.
Below are a few examples of what lookup tables may be useful for and how they
may be used and represented.
Platform-specific information
.............................
An obvious pattern and one used heavily in Salt Formulas is extracting
platform-specific information such as package names and file system paths in
a file named ``map.jinja``. The pattern is explained in detail above.
Sane defaults
.............
Application settings can be a good fit for this pattern. Store default
settings along with the states themselves and keep overrides and sensitive
settings in Pillar. Combine both into a single dictionary and then write the
application config or settings file.
The example below stores most of the Apache Tomcat ``server.xml`` file
alongside the Tomcat states and then allows values to be updated or augmented
via Pillar. (This example uses the BadgerFish format for transforming JSON to
XML.)
``/srv/salt/tomcat/defaults.yaml``:
.. code-block:: yaml
Server:
'@port': '8005'
'@shutdown': SHUTDOWN
GlobalNamingResources:
Resource:
'@auth': Container
'@description': User database that can be updated and saved
'@factory': org.apache.catalina.users.MemoryUserDatabaseFactory
'@name': UserDatabase
'@pathname': conf/tomcat-users.xml
'@type': org.apache.catalina.UserDatabase
# <...snip...>
``/srv/pillar/tomcat.sls``:
.. code-block:: yaml
appX:
server_xml_overrides:
Server:
Service:
'@name': Catalina
Connector:
'@port': '8009'
'@protocol': AJP/1.3
'@redirectPort': '8443'
# <...snip...>
``/srv/salt/tomcat/server_xml.sls``:
.. code-block:: yaml
{% import_yaml 'tomcat/defaults.yaml' as server_xml_defaults %}
{% set server_xml_final_values = salt.pillar.get(
'appX:server_xml_overrides',
default=server_xml_defaults,
merge=True)
%}
appX_server_xml:
file.serialize:
- name: /etc/tomcat/server.xml
- dataset: {{ server_xml_final_values | json() }}
- formatter: xml_badgerfish
The :py:func:`file.serialize <salt.states.file.serialize>` state can provide a
shorthand for creating some files from data structures. There are also many
examples within Salt Formulas of creating one-off "serializers" (often as Jinja
macros) that reformat a data structure to a specific config file format. For
example, `Nginx vhosts`__ or the `php.ini`__ file.

.. __: https://github.com/saltstack-formulas/nginx-formula/blob/5cad4512/nginx/ng/vhosts_config.sls
.. __: https://github.com/saltstack-formulas/php-formula/blob/82e2cd3a/php/ng/files/php.ini
Environment specific information
................................
A single state can be reused when it is parameterized as described in the
section below, by separating the data the state will use from the state that
performs the work. This can be the difference between deploying *Application X*
and *Application Y*, or the difference between production and development. For
example:
``/srv/salt/app/deploy.sls``:
.. code-block:: yaml
{# Load the map file. #}
{% import_yaml 'app/defaults.yaml' as app_defaults %}
{# Extract the relevant subset for the app configured on the current
machine (configured via a grain in this example). #}
{% set app = app_defaults.get(salt.grains.get('role')) %}
{# Allow values from Pillar to (optionally) update values from the lookup
table. #}
{% do app.update(salt.pillar.get('myapp', {})) %}
deploy_application:
git.latest:
- name: {{ app.repo_url }}
- version: {{ app.version }}
- target: {{ app.deploy_dir }}
myco/myapp/deployed:
event.send:
- data:
version: {{ app.version }}
- onchanges:
- git: deploy_application
``/srv/salt/app/defaults.yaml``:
.. code-block:: yaml
appX:
repo_url: git@github.com:myco/appX.git
deploy_dir: /var/www/appX
version: master
appY:
repo_url: git@github.com:myco/appY.git
deploy_dir: /var/www/appY
version: v1.2.3.4
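With this layout, Pillar can pin a different version per environment without
touching the state itself; the ``myapp`` key matches the ``pillar.get`` call
above, and the value shown is illustrative:

.. code-block:: yaml

    myapp:
      version: v2.0.0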
Single-purpose SLS files
------------------------
@ -394,11 +1040,9 @@ skips platform-specific options for brevity. See the full
# apache/init.sls
apache:
pkg:
- installed
pkg.installed:
[...]
service:
- running
service.running:
[...]
# apache/mod_wsgi.sls
@ -406,8 +1050,7 @@ skips platform-specific options for brevity. See the full
- apache
mod_wsgi:
pkg:
- installed
pkg.installed:
[...]
- require:
- pkg: apache
@ -417,8 +1060,7 @@ skips platform-specific options for brevity. See the full
- apache
apache_conf:
file:
- managed
file.managed:
[...]
- watch_in:
- service: apache
@ -509,8 +1151,7 @@ thousands of function calls across a large state tree.
{% set settings = salt['pillar.get']('apache', {}) %}
mod_status:
file:
- managed
file.managed:
- name: {{ apache.conf_dir }}
- source: {{ settings.get('mod_status_conf', 'salt://apache/mod_status.conf') }}
- template: {{ settings.get('template_engine', 'jinja') }}

View File

@ -91,15 +91,14 @@ the minion:
.. code-block:: yaml
update_zmq:
pkg:
- latest
pkg.latest:
- pkgs:
- zeromq
- python-zmq
- order: last
cmd:
- wait
- name: echo service salt-minion restart | at now + 1 minute
cmd.wait:
- name: |
echo service salt-minion restart | at now + 1 minute
- watch:
- pkg: update_zmq

View File

@ -96,8 +96,7 @@ to add them to the pool of load balanced servers.
.. code-block:: yaml
haproxy_config:
file:
- managed
file.managed:
- name: /etc/haproxy/config
- source: salt://haproxy_config
- template: jinja

View File

@ -98,15 +98,13 @@ files, and more via the shared pillar :ref:`dict <python2:typesmapping>`:
.. code-block:: yaml
apache:
pkg:
- installed
pkg.installed:
- name: {{ pillar['apache'] }}
.. code-block:: yaml
git:
pkg:
- installed
pkg.installed:
- name: {{ pillar['git'] }}
Finally, the above states can utilize the values provided to them via Pillar.

View File

@ -53,20 +53,16 @@ to set up the libvirt pki keys.
.. code-block:: yaml
libvirt:
pkg:
- installed
file:
- managed
pkg.installed: []
file.managed:
- name: /etc/sysconfig/libvirtd
- contents: 'LIBVIRTD_ARGS="--listen"'
- require:
- pkg: libvirt
libvirt:
- keys
libvirt.keys:
- require:
- pkg: libvirt
service:
- running
service.running:
- name: libvirtd
- require:
- pkg: libvirt
@ -76,12 +72,10 @@ to set up the libvirt pki keys.
- file: libvirt
libvirt-python:
pkg:
- installed
pkg.installed: []
libguestfs:
pkg:
- installed
pkg.installed:
- pkgs:
- libguestfs
- libguestfs-tools

View File

@ -247,8 +247,7 @@ A simple formula:
.. code-block:: yaml
vim:
pkg:
- installed
pkg.installed: []
/etc/vimrc:
file.managed:
@ -266,8 +265,7 @@ Can be easily transformed into a powerful, parameterized formula:
.. code-block:: jinja
vim:
pkg:
- installed
pkg.installed:
- name: {{ pillar['pkgs']['vim'] }}
/etc/vimrc:

View File

@ -67,10 +67,8 @@ A typical SLS file will often look like this in YAML:
.. code-block:: yaml
apache:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- require:
- pkg: apache
@ -107,10 +105,8 @@ and a user and group may need to be set up.
.. code-block:: yaml
apache:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- watch:
- pkg: apache
- file: /etc/httpd/conf/httpd.conf
@ -455,11 +451,9 @@ a MooseFS distributed filesystem chunkserver:
- pkg: mfs-chunkserver
mfs-chunkserver:
pkg:
- installed
pkg.installed: []
mfschunkserver:
service:
- running
service.running:
- require:
{% for mnt in salt['cmd.run']('ls /dev/data/moose*') %}
- mount: /mnt/moose{{ mnt[-1] }}

View File

@ -23,10 +23,8 @@ You can specify multiple :ref:`state-declaration` under an
:emphasize-lines: 4,5
apache:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- require:
- pkg: apache
@ -47,10 +45,8 @@ installed and running. Include the following at the bottom of your
:emphasize-lines: 7,11
apache:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- require:
- pkg: apache
@ -121,15 +117,12 @@ Verify that Apache is now serving your custom HTML.
:emphasize-lines: 1,2,3,4,11,12
/etc/httpd/extra/httpd-vhosts.conf:
file:
- managed
file.managed:
- source: salt://webserver/httpd-vhosts.conf
apache:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- watch:
- file: /etc/httpd/extra/httpd-vhosts.conf
- require:

View File

@ -88,8 +88,7 @@ The Salt module functions are also made available in the template context as
.. code-block:: jinja
moe:
user:
- present
user.present:
- gid: {{ salt['file.group_to_gid']('some_group_that_exists') }}
Note that for the above example to work, ``some_group_that_exists`` must exist

View File

@ -519,7 +519,7 @@ Now, to beef up the vim SLS formula, a ``vimrc`` can be added:
.. code-block:: yaml
vim:
pkg.installed
pkg.installed: []
/etc/vimrc:
file.managed:
@ -553,10 +553,8 @@ make an nginx subdirectory and add an init.sls file:
.. code-block:: yaml
nginx:
pkg:
- installed
service:
- running
pkg.installed: []
service.running:
- require:
- pkg: nginx

View File

@ -66,7 +66,7 @@ class AsyncClientMixin(object):
client = None
tag_prefix = None
def _proc_function(self, fun, low, user, tag, jid, fire_event=True):
def _proc_function(self, fun, low, user, tag, jid):
'''
Run this method in a multiprocess target to execute the function in a
multiprocess and fire the return data on the event bus
@ -76,14 +76,13 @@ class AsyncClientMixin(object):
'jid': jid,
'user': user,
}
if fire_event:
event = salt.utils.event.get_event(
'master',
self.opts['sock_dir'],
self.opts['transport'],
opts=self.opts,
listen=False)
event.fire_event(data, tagify('new', base=tag))
event = salt.utils.event.get_event(
'master',
self.opts['sock_dir'],
self.opts['transport'],
opts=self.opts,
listen=False)
event.fire_event(data, tagify('new', base=tag))
try:
data['return'] = self.low(fun, low)
@ -98,13 +97,12 @@ class AsyncClientMixin(object):
data['success'] = False
data['user'] = user
if fire_event:
event.fire_event(data, tagify('ret', base=tag))
# if we fired an event, make sure to delete the event object.
# This will ensure that we call destroy, which will do the 0MQ linger
del event
event.fire_event(data, tagify('ret', base=tag))
# if we fired an event, make sure to delete the event object.
# This will ensure that we call destroy, which will do the 0MQ linger
del event
def async(self, fun, low, user='UNKNOWN', fire_event=True):
def async(self, fun, low, user='UNKNOWN'):
'''
Execute the function in a multiprocess and return the event tag to use
to watch for the return
@ -114,7 +112,6 @@ class AsyncClientMixin(object):
proc = multiprocessing.Process(
target=self._proc_function,
args=(fun, low, user, tag, jid),
kwargs={'fire_event': fire_event})
args=(fun, low, user, tag, jid))
proc.start()
return {'tag': tag, 'jid': jid}

View File

@ -161,11 +161,9 @@ def filter_by(lookup_dict, grain='os_family', merge=None, default='default'):
}), default='Debian' %}
myapache:
pkg:
- installed
pkg.installed:
- name: {{ apache.pkg }}
service:
- running
service.running:
- name: {{ apache.srv }}
Values in the lookup table may be overridden by values in Pillar. An

View File

@ -221,6 +221,8 @@ VALID_OPTS = {
'publish_session': int,
'reactor': list,
'reactor_refresh_interval': int,
'reactor_worker_threads': int,
'reactor_worker_hwm': int,
'serial': str,
'search': str,
'search_index_interval': int,

View File

@ -145,7 +145,7 @@ def clean_pub_auth(opts):
if not os.path.exists(auth_cache):
return
else:
for (dirpath, dirnames, filenames) in os.walkpath(auth_cache):
for (dirpath, dirnames, filenames) in os.walk(auth_cache):
for auth_file in filenames:
auth_file_path = os.path.join(dirpath, auth_file)
if not os.path.isfile(auth_file_path):

View File

@ -822,6 +822,8 @@ def _windows_platform_data():
grains['virtual'] = 'Xen'
if 'HVM domU' in systeminfo.Model:
grains['virtual_subtype'] = 'HVM domU'
elif 'OpenStack' in systeminfo.Model:
grains['virtual'] = 'OpenStack'
return grains

View File

@ -9,8 +9,7 @@ so it can be used to maintain services using the ``provider`` argument:
.. code-block:: yaml
myservice:
service:
- running
service.running:
- provider: daemontools
'''
from __future__ import absolute_import

View File

@ -371,11 +371,9 @@ def filter_by(lookup_dict, grain='os_family', merge=None, default='default', bas
}, default='Debian') %}
myapache:
pkg:
- installed
pkg.installed:
- name: {{ apache.pkg }}
service:
- running
service.running:
- name: {{ apache.srv }}
Values in the lookup table may be overridden by values in Pillar. An

View File

@ -109,9 +109,8 @@ def add(name,
if not isinstance(gid, int):
raise SaltInvocationError('gid must be an integer')
_dscl('/Users/{0} UniqueID {1!r}'.format(_cmd_quote(name), _cmd_quote(uid)))
_dscl('/Users/{0} PrimaryGroupID {1!r}'.format(_cmd_quote(name),
_cmd_quote(gid)))
_dscl('/Users/{0} UniqueID {1!r}'.format(_cmd_quote(name), uid))
_dscl('/Users/{0} PrimaryGroupID {1!r}'.format(_cmd_quote(name), gid))
_dscl('/Users/{0} UserShell {1!r}'.format(_cmd_quote(name),
_cmd_quote(shell)))
_dscl('/Users/{0} NFSHomeDirectory {1!r}'.format(_cmd_quote(name),
@ -196,7 +195,7 @@ def chuid(name, uid):
return True
_dscl(
'/Users/{0} UniqueID {1!r} {2!r}'.format(_cmd_quote(name),
_cmd_quote(pre_info['uid']),
pre_info['uid'],
uid),
ctype='change'
)
@ -225,8 +224,9 @@ def chgid(name, gid):
return True
_dscl(
'/Users/{0} PrimaryGroupID {1!r} {2!r}'.format(
_cmd_quote(name), _cmd_quote(pre_info['gid']),
_cmd_quote(gid)),
_cmd_quote(name),
pre_info['gid'],
gid),
ctype='change'
)
# dscl buffers changes, sleep 1 second before checking if new value
@ -308,8 +308,7 @@ def chfullname(name, fullname):
if fullname == pre_info['fullname']:
return True
_dscl(
'/Users/{0} RealName {1!r}'.format(_cmd_quote(name),
_cmd_quote(fullname)),
'/Users/{0} RealName {1!r}'.format(_cmd_quote(name), fullname),
# use a "create" command, because a "change" command would fail if
# current fullname is an empty string. The "create" will just overwrite
# this field.

View File

@ -49,7 +49,8 @@ def list_():
'''
ret = {}
for line in (__salt__['cmd.run_stdout']
('mdadm --detail --scan', python_shell=False).splitlines()):
(['mdadm', '--detail', '--scan'],
python_shell=False).splitlines()):
if ' ' not in line:
continue
comps = line.split()
@ -124,12 +125,13 @@ def destroy(device):
except CommandExecutionError:
return False
stop_cmd = 'mdadm --stop {0}'.format(device)
zero_cmd = 'mdadm --zero-superblock {0}'
stop_cmd = ['mdadm', '--stop', device]
zero_cmd = ['mdadm', '--zero-superblock']
if __salt__['cmd.retcode'](stop_cmd):
for number in details['members']:
__salt__['cmd.retcode'](zero_cmd.format(number['device']))
zero_cmd.append(number['device'])
__salt__['cmd.retcode'](zero_cmd)
# Remove entry from config file:
if __grains__.get('os_family') == 'Debian':

View File

@ -58,7 +58,13 @@ def _active_mountinfo(ret):
for line in ifile:
comps = line.split()
device = comps[2].split(':')
device_name = comps[8]
# each line can have any number of
# optional parameters, we use the
# location of the separator field to
# determine the location of the elements
# after it.
_sep = comps.index('-')
device_name = comps[_sep + 2]
device_uuid = None
if device_name:
device_uuid = blkid_info.get(device_name, {}).get('UUID')
@ -69,10 +75,10 @@ def _active_mountinfo(ret):
'minor': device[1],
'root': comps[3],
'opts': comps[5].split(','),
'fstype': comps[7],
'fstype': comps[_sep + 1],
'device': device_name,
'alt_device': _list.get(comps[4], None),
'superopts': comps[9].split(','),
'superopts': comps[_sep + 3].split(','),
'device_uuid': device_uuid}
return ret

View File

@ -95,7 +95,7 @@ def list_(show_all=False, return_yaml=True):
else:
return schedule
else:
return None
return {'schedule': {}}
def purge(**kwargs):

View File

@ -491,6 +491,19 @@ def highstate(test=None,
kwargs.get('terse'):
ret = _filter_running(ret)
# Not 100% if this should be fatal or not,
# but I'm guessing it likely should not be.
cumask = os.umask(077)
try:
if salt.utils.is_windows():
# Make sure cache file isn't read-only
__salt__['cmd.run'](['attrib', '-R', cache_file], python_shell=False)
with salt.utils.fopen(cache_file, 'w+b') as fp_:
serial.dump(ret, fp_)
except (IOError, OSError):
msg = 'Unable to write to "state.highstate" cache file {0}'
log.error(msg.format(cache_file))
os.umask(cumask)
_set_retcode(ret)
# Work around Windows multiprocessing bug, set __opts__['test'] back to
# value from before this function was run.
@ -645,7 +658,7 @@ def sls(mods,
try:
if salt.utils.is_windows():
# Make sure cache file isn't read-only
__salt__['cmd.run']('attrib -R "{0}"'.format(cache_file))
__salt__['cmd.run'](['attrib', '-R', cache_file], python_shell=False)
with salt.utils.fopen(cache_file, 'w+b') as fp_:
serial.dump(ret, fp_)
except (IOError, OSError):

View File

@ -1442,7 +1442,7 @@ def group_info(name):
'default packages': [],
'description': ''
}
cmd_template = 'repoquery --plugins --group --grouppkgs={0} --list {1!r}'
cmd_template = 'repoquery --plugins --group --grouppkgs={0} --list {1}'
cmd = cmd_template.format('all', _cmd_quote(name))
out = __salt__['cmd.run_stdout'](cmd, output_loglevel='trace')
@ -1466,7 +1466,7 @@ def group_info(name):
# considered to be conditional packages.
ret['conditional packages'] = sorted(all_pkgs)
cmd = 'repoquery --plugins --group --info {0!r}'.format(_cmd_quote(name))
cmd = 'repoquery --plugins --group --info {0}'.format(_cmd_quote(name))
out = __salt__['cmd.run_stdout'](
cmd, output_loglevel='trace'
)

View File

@ -216,13 +216,13 @@ def list_pkgs(versions_as_list=False, **kwargs):
__salt__['pkg_resource.stringify'](ret)
return ret
cmd = ('rpm', '-qa', '--queryformat', '%{NAME}_|-%{VERSION}_|-%{RELEASE}\\n')
cmd = ['rpm', '-qa', '--queryformat', '%{NAME}_|-%{VERSION}_|-%{RELEASE}\\n']
ret = {}
out = __salt__['cmd.run'](
cmd,
output_loglevel='trace',
python_shell=False
)
cmd,
output_loglevel='trace',
python_shell=False
)
for line in out.splitlines():
name, pkgver, rel = line.split('_|-')
if rel:
@ -603,7 +603,7 @@ def install(name=None,
old = list_pkgs()
downgrades = []
if fromrepo:
fromrepoopt = ('--force', '--force-resolution', '--from', fromrepo)
fromrepoopt = ['--force', '--force-resolution', '--from', fromrepo]
log.info('Targeting repo {0!r}'.format(fromrepo))
else:
fromrepoopt = ''
@ -611,17 +611,17 @@ def install(name=None,
# the maximal length of the command line is not broken
while targets:
cmd = ['zypper', '--non-interactive', 'install', '--name',
'--auto-agree-with-licenses']
'--auto-agree-with-licenses']
if fromrepo:
cmd.extend(fromrepoopt)
cmd.extend(targets[:500])
targets = targets[500:]
out = __salt__['cmd.run'](
cmd,
output_loglevel='trace',
python_shell=False
)
cmd,
output_loglevel='trace',
python_shell=False
)
for line in out.splitlines():
match = re.match(
"^The selected package '([^']+)'.+has lower version",
@ -632,7 +632,7 @@ def install(name=None,
while downgrades:
cmd = ['zypper', '--non-interactive', 'install', '--name',
'--auto-agree-with-licenses', '--force']
'--auto-agree-with-licenses', '--force']
if fromrepo:
cmd.extend(fromrepoopt)
cmd.extend(downgrades[:500])

View File

@ -71,8 +71,10 @@ Now you can include your ciphers in your pillar data like so:
'''
from __future__ import absolute_import
import os
import re
import salt.utils
import salt.syspaths
try:
import gnupg
HAS_GPG = True
@ -86,7 +88,7 @@ from salt.exceptions import SaltRenderError
log = logging.getLogger(__name__)
GPG_HEADER = re.compile(r'-----BEGIN PGP MESSAGE-----')
DEFAULT_GPG_KEYDIR = '/etc/salt/gpgkeys'
DEFAULT_GPG_KEYDIR = os.path.join(salt.syspaths.CONFIG_DIR, 'gpgkeys')
def decrypt_ciphertext(c, gpg):

View File

@ -49,8 +49,7 @@ def extracted(name,
.. code-block:: yaml
graylog2-server:
archive:
- extracted
archive.extracted:
- name: /opt/
- source: https://github.com/downloads/Graylog2/graylog2-server/graylog2-server-0.9.6p1.tar.lzma
- source_hash: md5=499ae16dcae71eeb7c3a30c75ea7a1a6
@ -61,8 +60,7 @@ def extracted(name,
.. code-block:: yaml
graylog2-server:
archive:
- extracted
archive.extracted:
- name: /opt/
- source: https://github.com/downloads/Graylog2/graylog2-server/graylog2-server-0.9.6p1.tar.gz
- source_hash: md5=499ae16dcae71eeb7c3a30c75ea7a1a6

View File

@ -12,8 +12,7 @@ A state module to manage blockdevices
- read-only: True
master-data:
blockdev:
- tuned:
blockdev.tuned:
- name: /dev/vg/master-data
- read-only: True
- read-ahead: 1024

View File

@ -147,17 +147,14 @@ executed when the state it is watching changes. Example:
.. code-block:: yaml
/usr/local/bin/postinstall.sh:
cmd:
- wait
cmd.wait:
- watch:
- pkg: mycustompkg
file:
- managed
file.managed:
- source: salt://utils/scripts/postinstall.sh
mycustompkg:
pkg:
- installed
pkg.installed:
- require:
- file: /usr/local/bin/postinstall.sh

View File

@ -174,8 +174,7 @@ def started(name):
.. code-block:: yaml
mycluster:
glusterfs:
- started
glusterfs.started: []
'''
ret = {'name': name,
'changes': {},

View File

@ -57,8 +57,10 @@ def options_present(name, sections=None):
current_value = __salt__['ini.get_option'](name,
section,
key)
if current_value == sections[section][key]:
# Test if the change is necessary
if current_value == str(sections[section][key]):
continue
ret['changes'] = __salt__['ini.set_option'](name,
sections)
if 'error' in ret['changes']:

View File

@ -37,6 +37,8 @@ from salt.ext.six import string_types
import logging
import salt.ext.six as six
log = logging.getLogger(__name__)
from salt._compat import string_types
from salt.exceptions import SaltInvocationError
def mounted(name,
@ -161,16 +163,32 @@ def mounted(name,
comment_option = opt.split('=')[0]
if comment_option == 'comment':
opt = comment_option
if opt not in active[real_name]['opts'] and opt not in mount_invisible_options:
if opt not in active[real_name]['opts'] and opt not in active[real_name]['superopts'] and opt not in mount_invisible_options:
if __opts__['test']:
ret['result'] = None
ret['comment'] = "Remount would be forced because options changed"
return ret
else:
ret['changes']['umount'] = "Forced remount because " \
+ "options changed"
remount_result = __salt__['mount.remount'](real_name, device, mkmnt=mkmnt, fstype=fstype, opts=opts, user=user)
ret['result'] = remount_result
# nfs requires umounting and mounting if options change
# add others to the list that require similar functionality
if fstype in ['nfs']:
ret['changes']['umount'] = "Forced unmount and mount because " \
+ "options changed"
unmount_result = __salt__['mount.umount'](real_name)
if unmount_result is True:
mount_result = __salt__['mount.mount'](real_name, device, mkmnt=mkmnt, fstype=fstype, opts=opts)
ret['result'] = mount_result
else:
raise SaltInvocationError('Unable to unmount {0}: {1}.'.format(real_name, unmount_result))
else:
ret['changes']['umount'] = "Forced remount because " \
+ "options changed"
remount_result = __salt__['mount.remount'](real_name, device, mkmnt=mkmnt, fstype=fstype, opts=opts)
ret['result'] = remount_result
# Cleanup after the remount, so we
# don't write remount into fstab
if 'remount' in opts:
opts.remove('remount')
if real_device not in device_list:
# name matches but device doesn't - need to umount
if __opts__['test']:

View File

@ -45,13 +45,11 @@ def installed(name,
.. code-block:: yaml
coffee-script:
npm:
- installed
npm.installed:
- user: someuser
coffee-script@1.0.1:
npm:
- installed
npm.installed: []
name
The package to install

View File

@ -10,8 +10,7 @@ typically rather simple:
.. code-block:: yaml
pkgng_clients:
pkgng:
- update_packaging_site
pkgng.update_packaging_site:
- name: "http://192.168.0.2"
'''

View File

@ -9,8 +9,7 @@ only addition/deletion of licenses is supported.
.. code-block:: yaml
key:
powerpath:
- license_present
powerpath.license_present: []
'''

View File

@ -10,8 +10,7 @@ Example:
.. code-block:: yaml
some_plugin:
rabbitmq_plugin:
- enabled
rabbitmq_plugin.enabled: []
'''
from __future__ import absolute_import

View File

@ -15,8 +15,7 @@ configuration could look like:
.. code-block:: yaml
rvm:
group:
- present
group.present: []
user.present:
- gid: rvm
- home: /home/rvm
@ -25,7 +24,7 @@ configuration could look like:
rvm-deps:
pkg.installed:
- names:
- pkgs:
- bash
- coreutils
- gzip
@ -38,7 +37,7 @@ configuration could look like:
mri-deps:
pkg.installed:
- names:
- pkgs:
- build-essential
- openssl
- libreadline6
@ -65,7 +64,7 @@ configuration could look like:
jruby-deps:
pkg.installed:
- names:
- pkgs:
- curl
- g++
- openjdk-6-jre-headless

View File

@ -9,16 +9,14 @@ rc scripts, services can be defined as running or dead.
.. code-block:: yaml
httpd:
service:
- running
service.running: []
The service can also be set to be started at runtime via the enable option:
.. code-block:: yaml
openvpn:
service:
- running
service.running:
- enable: True
By default if a service is triggered to refresh due to a watch statement the
@ -28,8 +26,7 @@ service, then set the reload value to True:
.. code-block:: yaml
redis:
service:
- running
service.running:
- enable: True
- reload: True
- watch:

View File

@ -15,27 +15,23 @@ to use a YAML 'explicit key', as demonstrated in the second example below.
.. code-block:: yaml
AAAAB3NzaC1kc3MAAACBAL0sQ9fJ5bYTEyY==:
ssh_auth:
- present
ssh_auth.present:
- user: root
- enc: ssh-dss
? AAAAB3NzaC1kc3MAAACBAL0sQ9fJ5bYTEyY==...
:
ssh_auth:
- present
ssh_auth.present:
- user: root
- enc: ssh-dss
thatch:
ssh_auth:
- present
ssh_auth.present:
- user: root
- source: salt://ssh_keys/thatch.id_rsa.pub
sshkeys:
ssh_auth:
- present
ssh_auth.present:
- user: root
- enc: ssh-rsa
- options:

View File

@ -6,8 +6,7 @@ Interaction with the Supervisor daemon
.. code-block:: yaml
wsgi_server:
supervisord:
- running
supervisord.running:
- require:
- pkg: supervisor
- watch:

View File

@ -187,21 +187,18 @@ def wait(name, url='http://localhost:8080/manager', timeout=180):
.. code-block:: yaml
tomcat-service:
service:
- running
service.running:
- name: tomcat
- enable: True
wait-for-tomcatmanager:
tomcat:
- wait
tomcat.wait:
- timeout: 300
- require:
- service: tomcat-service
jenkins:
tomcat:
- war_deployed
tomcat.war_deployed:
- name: /ran
- war: salt://jenkins-1.2.4.war
- require:

View File

@ -11,12 +11,10 @@ description.
.. code-block:: yaml
ERIK-WORKSTATION:
system:
- computer_name
system.computer_name: []
This is Erik's computer, don't touch!:
system:
- computer_desc
system.computer_desc: []
'''
from __future__ import absolute_import

View File

@ -74,6 +74,7 @@ import salt.utils.cache
import salt.utils.dicttrim
import salt.utils.process
import salt.utils.zeromq
from salt._compat import string_types
log = logging.getLogger(__name__)
# The SUB_EVENT set is for functions that require events fired based on
@ -624,6 +625,7 @@ class EventReturn(multiprocessing.Process):
Return an EventReturn instance
'''
multiprocessing.Process.__init__(self)
salt.state.Compiler.__init__(self, opts)
self.opts = opts
self.event_return_queue = self.opts['event_return_queue']
@ -631,25 +633,107 @@ class EventReturn(multiprocessing.Process):
local_minion_opts['file_client'] = 'local'
self.minion = salt.minion.MasterMinion(local_minion_opts)
def render_reaction(self, glob_ref, tag, data):
'''
Execute the render system against a single reaction file and return
the data structure
'''
react = {}
if glob_ref.startswith('salt://'):
glob_ref = self.minion.functions['cp.cache_file'](glob_ref)
for fn_ in glob.glob(glob_ref):
try:
react.update(self.render_template(
fn_,
tag=tag,
data=data))
except Exception:
log.error('Failed to render "{0}"'.format(fn_))
return react
def list_reactors(self, tag):
'''
Take in the tag from an event and return a list of the reactors to
process
'''
log.debug('Gathering reactors for tag {0}'.format(tag))
reactors = []
if isinstance(self.opts['reactor'], string_types):
try:
with salt.utils.fopen(self.opts['reactor']) as fp_:
react_map = yaml.safe_load(fp_.read())
except (OSError, IOError):
log.error(
'Failed to read reactor map: "{0}"'.format(
self.opts['reactor']
)
)
except Exception:
log.error(
'Failed to parse YAML in reactor map: "{0}"'.format(
self.opts['reactor']
)
)
else:
react_map = self.opts['reactor']
for ropt in react_map:
if not isinstance(ropt, dict):
continue
if len(ropt) != 1:
continue
key = ropt.iterkeys().next()
val = ropt[key]
if fnmatch.fnmatch(tag, key):
if isinstance(val, string_types):
reactors.append(val)
elif isinstance(val, list):
reactors.extend(val)
return reactors
def reactions(self, tag, data, reactors):
'''
Render a list of reactor files and returns a reaction struct
'''
log.debug('Compiling reactions for tag {0}'.format(tag))
high = {}
chunks = []
for fn_ in reactors:
high.update(self.render_reaction(fn_, tag, data))
if high:
errors = self.verify_high(high)
if errors:
log.error(('Unable to render reactions for event {0} due to '
'errors ({1}) in one or more of the sls files ({2})').format(tag, errors, reactors))
return [] # We'll return nothing since there was an error
chunks = self.order_chunks(self.compile_high_data(high))
return chunks
def call_reactions(self, chunks):
'''
Execute the reaction state
'''
for chunk in chunks:
self.wrap.run(chunk)
def run(self):
'''
Spin up the multiprocess event returner
'''
salt.utils.appendproctitle(self.__class__.__name__)
self.event = get_event('master', opts=self.opts)
events = self.event.iter_events(full=True)
self.event.fire_event({}, 'salt/event_listen/start')
event_queue = []
try:
for event in events:
if self._filter(event):
event_queue.append(event)
if len(event_queue) >= self.event_return_queue:
self.minion.returners['{0}.event_return'.format(self.opts['event_return'])](event_queue)
event_queue = []
except KeyError:
log.error('Could not store return for events {0}. Returner {1} '
'not found.'.format(events, self.opts.get('event_return', None)))
# instantiate some classes inside our new process
self.event = SaltEvent('master', self.opts['sock_dir'])
self.wrap = ReactWrap(self.opts)
for data in self.event.iter_events(full=True):
reactors = self.list_reactors(data['tag'])
if not reactors:
continue
chunks = self.reactions(data['tag'], data['data'], reactors)
if chunks:
self.call_reactions(chunks)
    def _filter(self, event):
        '''
@@ -668,6 +752,65 @@ class EventReturn(multiprocessing.Process):
        return True
class ReactWrap(object):
    '''
    Create a wrapper that executes low data for the reaction system
    '''
    # class-wide cache of clients
    client_cache = None

    def __init__(self, opts):
        self.opts = opts
        if ReactWrap.client_cache is None:
            ReactWrap.client_cache = salt.utils.cache.CacheDict(opts['reactor_refresh_interval'])

        self.pool = salt.utils.process.ThreadPool(
            self.opts['reactor_worker_threads'],  # number of workers for runner/wheel
            queue_size=self.opts['reactor_worker_hwm']  # queue size for those workers
        )
    def run(self, low):
        '''
        Execute the specified function in the specified state by passing the
        LowData
        '''
        l_fun = getattr(self, low['state'])
        try:
            f_call = salt.utils.format_call(l_fun, low)
            l_fun(*f_call.get('args', ()), **f_call.get('kwargs', {}))
        except Exception:
            log.error(
                'Failed to execute {0}: {1}\n'.format(low['state'], l_fun),
                exc_info=True
            )
    def local(self, *args, **kwargs):
        '''
        Wrap LocalClient for running :ref:`execution modules <all-salt.modules>`
        '''
        if 'local' not in self.client_cache:
            self.client_cache['local'] = salt.client.LocalClient(self.opts['conf_file'])
        self.client_cache['local'].cmd_async(*args, **kwargs)

    cmd = local

    def runner(self, **kwargs):
        '''
        Wrap RunnerClient for executing :ref:`runner modules <all-salt.runners>`
        '''
        if 'runner' not in self.client_cache:
            self.client_cache['runner'] = salt.runner.RunnerClient(self.opts)
        self.pool.fire_async(self.client_cache['runner'].low, kwargs)

    def wheel(self, **kwargs):
        '''
        Wrap Wheel to enable executing :ref:`wheel modules <all-salt.wheel>`
        '''
        if 'wheel' not in self.client_cache:
            self.client_cache['wheel'] = salt.wheel.Wheel(self.opts)
        self.pool.fire_async(self.client_cache['wheel'].low, kwargs)
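`ReactWrap.run` above dispatches on the low data's `state` key by looking the name up as a method with `getattr`. A toy standalone sketch of that dispatch pattern (Python 3; the methods and recorded calls are illustrative stand-ins, not Salt's clients):

```python
class MiniWrap(object):
    '''Toy dispatcher in the spirit of ReactWrap.run: the reaction's
    ``state`` key names the client method that receives the low data.'''

    def __init__(self):
        self.calls = []

    def local(self, *args, **kwargs):
        # Stand-in for LocalClient.cmd_async
        self.calls.append(('local', args, kwargs))

    cmd = local  # 'cmd' is an alias for 'local', as in ReactWrap

    def runner(self, **kwargs):
        # Stand-in for RunnerClient.low fired on the thread pool
        self.calls.append(('runner', kwargs))

    def run(self, low):
        low = dict(low)                       # don't mutate the caller's dict
        fun = getattr(self, low.pop('state'))  # dispatch on the 'state' key
        fun(**low)


wrap = MiniWrap()
wrap.run({'state': 'local', 'tgt': 'web*', 'fun': 'test.ping'})
wrap.run({'state': 'runner', 'fun': 'jobs.lookup_jid', 'jid': '123'})
print(wrap.calls[0][0], wrap.calls[1][0])  # local runner
```

The real method additionally runs the callable through `salt.utils.format_call` so only arguments matching the target's signature are passed; the sketch skips that filtering step.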
class StateFire(object):
    '''
    Evaluate the data from a state run and fire events on the master and minion

View File

@@ -10,6 +10,9 @@ import sys
import multiprocessing
import signal
import threading
import Queue
# Import salt libs
import salt.defaults.exitcodes
import salt.utils
@@ -123,6 +126,67 @@ def os_is_running(pid):
            return False


class ThreadPool(object):
    '''
    A very basic threadpool implementation.

    This was written instead of using the multiprocessing ThreadPool because
    we want to set a max queue size and we want to daemonize threads (neither
    is exposed in the stdlib version).

    Since there isn't much use for this class right now, the implementation
    only supports daemonized threads and will *not* return results.

    TODO: if this is found to be more generally useful it would be nice to pull
    in the majority of code from upstream or from http://bit.ly/1wTeJtM
    '''
    def __init__(self,
                 num_threads=None,
                 queue_size=0):
        # if no count passed, default to number of CPUs
        if num_threads is None:
            num_threads = multiprocessing.cpu_count()
        self.num_threads = num_threads

        # create a task queue of queue_size
        self._job_queue = Queue.Queue(queue_size)

        self._workers = []

        # create worker threads
        for idx in xrange(num_threads):
            thread = threading.Thread(target=self._thread_target)
            thread.daemon = True
            thread.start()
            self._workers.append(thread)

    # intentionally not called "apply_async" since we aren't keeping track of
    # the return at all; if we want to make this API compatible with the
    # multiprocessing ThreadPool we can in the future, and we won't have to
    # worry about a name collision
    def fire_async(self, func, args=None, kwargs=None):
        if args is None:
            args = []
        if kwargs is None:
            kwargs = {}
        try:
            self._job_queue.put_nowait((func, args, kwargs))
            return True
        except Queue.Full:
            return False

    def _thread_target(self):
        while True:
            # 1s timeout so that if the parent dies this thread will die within 1s
            try:
                func, args, kwargs = self._job_queue.get(timeout=1)
                self._job_queue.task_done()  # Mark the task as done once we get it
            except Queue.Empty:
                continue
            try:
                func(*args, **kwargs)
            except Exception as err:
                log.debug(err, exc_info=True)
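The same fire-and-forget design — daemon workers, a bounded queue, no return values — can be re-sketched in a few lines of modern Python 3 using the stdlib `queue` module (an illustration under those assumptions, not Salt's implementation):

```python
import queue
import threading


class MiniThreadPool(object):
    '''Minimal fire-and-forget pool: daemon workers, bounded queue,
    no results returned -- the same trade-offs as the class above.'''

    def __init__(self, num_threads=2, queue_size=0):
        self._jobs = queue.Queue(queue_size)
        for _ in range(num_threads):
            thread = threading.Thread(target=self._worker, daemon=True)
            thread.start()

    def fire_async(self, func, args=(), kwargs=None):
        '''Queue a job; return False instead of blocking when full.'''
        try:
            self._jobs.put_nowait((func, args, kwargs or {}))
            return True
        except queue.Full:
            return False

    def _worker(self):
        while True:
            # short timeout so a daemonized worker re-checks regularly
            try:
                func, args, kwargs = self._jobs.get(timeout=1)
            except queue.Empty:
                continue
            try:
                func(*args, **kwargs)
            finally:
                self._jobs.task_done()


results = []
pool = MiniThreadPool()
assert pool.fire_async(results.append, args=(42,))
pool._jobs.join()  # block until the worker has processed the job
print(results)     # [42]
```

Calling `task_done()` only after the job runs (in the `finally` block) is what lets `join()` double as a "queue drained and work finished" barrier — a small behavioral difference from the Salt class, which marks the task done as soon as it is dequeued.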
class ProcessManager(object):
    '''
    A class which will manage processes that should be running

View File

@@ -189,7 +189,7 @@ def download_unittest_reports(options):
        os.makedirs(xml_reports_path)

    cmds = (
        'salt {0} archive.tar zcvf /tmp/xml-test-reports.tar.gz \'*.xml\' cwd=/tmp/xml-unitests-output/',
        'salt {0} archive.tar zcvf /tmp/xml-test-reports.tar.gz \'*.xml\' cwd=/tmp/xml-unittests-output/',
        'salt {0} cp.push /tmp/xml-test-reports.tar.gz',
        'mv -f /var/cache/salt/master/minions/{1}/files/tmp/xml-test-reports.tar.gz {2} && '
        'tar zxvf {2}/xml-test-reports.tar.gz -C {2}/xml-test-reports && '

View File

@@ -79,9 +79,49 @@ class TestProcessManager(TestCase):
        process_manager.kill_children()


class TestThreadPool(TestCase):
    def test_basic(self):
        '''
        Make sure the threadpool can do things
        '''
        def incr_counter(counter):
            counter.value += 1
        counter = multiprocessing.Value('i', 0)

        pool = salt.utils.process.ThreadPool()
        sent = pool.fire_async(incr_counter, args=(counter,))
        self.assertTrue(sent)
        time.sleep(1)  # Sleep to let the threads do things
        self.assertEqual(counter.value, 1)
        self.assertEqual(pool._job_queue.qsize(), 0)

    def test_full_queue(self):
        '''
        Make sure that a full threadpool acts as we expect
        '''
        def incr_counter(counter):
            counter.value += 1
        counter = multiprocessing.Value('i', 0)

        # Create a pool with no workers and 1 queue size
        pool = salt.utils.process.ThreadPool(0, 1)
        # make sure we can put the one item in
        sent = pool.fire_async(incr_counter, args=(counter,))
        self.assertTrue(sent)
        # make sure we can't put more in
        sent = pool.fire_async(incr_counter, args=(counter,))
        self.assertFalse(sent)
        time.sleep(1)  # Sleep to let the threads do things
        # make sure no one updated the counter
        self.assertEqual(counter.value, 0)
        # make sure the queue is still full
        self.assertEqual(pool._job_queue.qsize(), 1)


if __name__ == '__main__':
    from integration import run_tests
    run_tests(
        [TestProcessManager],
        [TestProcessManager, TestThreadPool],
        needs_daemon=False
    )