.TH "SALT" "7" "June 25, 2014" "2014.1.0-8653-gc447bd0" "Salt" .SH NAME salt \- Salt Documentation . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .\" Man page generated from reStructeredText. . .SH INTRODUCTION TO SALT We’re not just talking about NaCl..SS The 30 second summary .sp Salt is: .INDENT 0.0 .IP \(bu 2 a configuration management system, capable of maintaining remote nodes in defined states (for example, ensuring that specific packages are installed and specific services are running) .IP \(bu 2 a distributed remote execution system used to execute commands and query data on remote nodes, either individually or by arbitrary selection criteria .UNINDENT .sp It was developed in order to bring the best solutions found in the world of remote execution together and make them better, faster, and more malleable. Salt accomplishes this through its ability to handle large loads of information, and not just dozens but hundreds and even thousands of individual servers quickly through a simple and manageable interface. .SS Simplicity .sp Providing versatility between massive scale deployments and smaller systems may seem daunting, but Salt is very simple to set up and maintain, regardless of the size of the project. The architecture of Salt is designed to work with any number of servers, from a handful of local network systems to international deployments across different datacenters. The topology is a simple server/client model with the needed functionality built into a single set of daemons. While the default configuration will work with little to no modification, Salt can be fine tuned to meet specific needs. .SS Parallel execution .sp The core functions of Salt: .INDENT 0.0 .IP \(bu 2 enable commands to remote systems to be called in parallel rather than serially .IP \(bu 2 use a secure and encrypted protocol .IP \(bu 2 use the smallest and fastest network payloads possible .IP \(bu 2 provide a simple programming interface .UNINDENT .sp Salt also introduces more granular controls to the realm of remote execution, allowing systems to be targeted not just by hostname, but also by system properties. .SS Building on proven technology .sp Salt takes advantage of a number of technologies and techniques. The networking layer is built with the excellent \fI\%ZeroMQ\fP networking library, so the Salt daemon includes a viable and transparent AMQ broker. Salt uses public keys for authentication with the master daemon, then uses faster \fI\%AES\fP encryption for payload communication; authentication and encryption are integral to Salt. Salt takes advantage of communication via \fI\%msgpack\fP, enabling fast and light network traffic. .SS Python client interface .sp In order to allow for simple expansion, Salt execution routines can be written as plain Python modules. The data collected from Salt executions can be sent back to the master server, or to any arbitrary program. 
Salt can be called from a simple Python API, or from the command line, so that Salt can be used to execute one\-off commands as well as operate as an integral part of a larger application. .SS Fast, flexible, scalable .sp The result is a system that can execute commands at high speed on target server groups ranging from one to very many servers. Salt is very fast, easy to set up, amazingly malleable and provides a single remote execution architecture that can manage the diverse requirements of any number of servers. The Salt infrastructure brings together the best of the remote execution world, amplifies its capabilities and expands its range, resulting in a system that is as versatile as it is practical, suitable for any network. .SS Open .sp Salt is developed under the \fI\%Apache 2.0 license\fP, and can be used for open and proprietary projects. Please submit your expansions back to the Salt project so that we can all benefit together as Salt grows. Please feel free to sprinkle Salt around your systems and let the deliciousness come forth. .SS Salt Community .sp Join the Salt! .sp There are many ways to participate in and communicate with the Salt community. .sp Salt has an active IRC channel and a mailing list. .SS Mailing List .sp Join the \fI\%salt-users mailing list\fP. It is the best place to ask questions about Salt and see what\(aqs going on with Salt development! The Salt mailing list is hosted by Google Groups. It is open to new members. .sp \fI\%https://groups.google.com/forum/#!forum/salt-users\fP .SS IRC .sp The \fB#salt\fP IRC channel is hosted on the popular \fI\%Freenode\fP network. You can use the \fI\%Freenode webchat client\fP right from your browser. .sp \fI\%Logs of the IRC channel activity\fP are being collected courtesy of Moritz Lenz. .sp If you wish to discuss the development of Salt itself, join us in \fB#salt\-devel\fP. .SS Follow on GitHub .sp The Salt code is developed via GitHub. Follow Salt for constant updates on what is happening in Salt development: .sp \fI\%https://github.com/saltstack/salt\fP .SS Blogs .sp SaltStack Inc. keeps a \fI\%blog\fP with recent news and advancements: .sp \fI\%http://www.saltstack.com/blog/\fP .sp Thomas Hatch also shares news and thoughts on Salt and related projects in his personal blog \fI\%The Red45\fP: .sp \fI\%http://red45.wordpress.com/\fP .SS Example Salt States .sp The official \fBsalt\-states\fP repository is: \fI\%https://github.com/saltstack/salt-states\fP .sp A few examples of salt states from the community: .INDENT 0.0 .IP \(bu 2 \fI\%https://github.com/blast-hardcheese/blast-salt-states\fP .IP \(bu 2 \fI\%https://github.com/kevingranade/kevingranade-salt-state\fP .IP \(bu 2 \fI\%https://github.com/uggedal/states\fP .IP \(bu 2 \fI\%https://github.com/mattmcclean/salt-openstack/tree/master/salt\fP .IP \(bu 2 \fI\%https://github.com/rentalita/ubuntu-setup/\fP .IP \(bu 2 \fI\%https://github.com/brutasse/states\fP .IP \(bu 2 \fI\%https://github.com/bclermont/states\fP .IP \(bu 2 \fI\%https://github.com/pcrews/salt-data\fP .UNINDENT .SS Follow on Ohloh .sp \fI\%https://www.ohloh.net/p/salt\fP .SS Other community links .INDENT 0.0 .IP \(bu 2 \fI\%Salt Stack Inc.\fP .IP \(bu 2 \fI\%Subreddit\fP .IP \(bu 2 \fI\%Google+\fP .IP \(bu 2 \fI\%YouTube\fP .IP \(bu 2 \fI\%Facebook\fP .IP \(bu 2 \fI\%Twitter\fP .IP \(bu 2 \fI\%Wikipedia page\fP .UNINDENT .SS Hack the Source .sp If you want to get involved with the development of source code or the documentation efforts, please review the \fBhacking section\fP!
.SH INSTALLATION .IP "See also" .sp \fBInstalling Salt for development\fP and contributing to the project. .RE .SS Quick Install .sp On most distributions, you can set up a \fBSalt Minion\fP with the \fI\%Salt Bootstrap\fP. .SS Platform\-specific Installation Instructions .sp These guides go into detail how to install Salt on a given platform. .SS Arch Linux .SS Installation .sp Salt is currently available via the Arch User Repository (AUR). There are currently stable and \-git packages available. .SS Stable Release .sp Install Salt stable releases from the Arch Linux AUR as follows: .sp .nf .ft C wget https://aur.archlinux.org/packages/sa/salt/salt.tar.gz tar xf salt.tar.gz cd salt/ makepkg \-is .ft P .fi .sp A few of Salt\(aqs dependencies are currently only found within the AUR, so it is necessary to download and run \fBmakepkg \-is\fP on these as well. As a reference, Salt currently relies on the following packages which are only available via the AUR: .INDENT 0.0 .IP \(bu 2 \fI\%https://aur.archlinux.org/packages/py/python2-msgpack/python2-msgpack.tar.gz\fP .IP \(bu 2 \fI\%https://aur.archlinux.org/packages/py/python2-psutil/python2-psutil.tar.gz\fP .UNINDENT .IP Note yaourt .sp If a tool such as \fI\%Yaourt\fP is used, the dependencies will be gathered and built automatically. .sp The command to install salt using the yaourt tool is: .sp .nf .ft C yaourt salt .ft P .fi .RE .SS Tracking develop .sp To install the bleeding edge version of Salt (\fBmay include bugs!\fP), use the \-git package. Installing the \-git package as follows: .sp .nf .ft C wget https://aur.archlinux.org/packages/sa/salt\-git/salt\-git.tar.gz tar xf salt\-git.tar.gz cd salt\-git/ makepkg \-is .ft P .fi .sp See the note above about Salt\(aqs dependencies. .SS Post\-installation tasks .sp \fBsystemd\fP .sp Activate the Salt Master and/or Minion via \fBsystemctl\fP as follows: .sp .nf .ft C systemctl enable salt\-master.service systemctl enable salt\-minion.service .ft P .fi .sp \fBStart the Master\fP .sp Once you\(aqve completed all of these steps you\(aqre ready to start your Salt Master. You should be able to start your Salt Master now using the command seen here: .sp .nf .ft C systemctl start salt\-master .ft P .fi .sp Now go to the \fBConfiguring Salt\fP page. .SS Debian Installation .sp Currently the latest packages for Debian Old Stable, Stable and Unstable (Squeeze, Wheezy and Sid) are published in our (saltstack.com) Debian repository. .SS Configure Apt .SS Squeeze (Old Stable) .sp For squeeze, you will need to enable the Debian backports repository as well as the debian.saltstack.com repository. 
To do so, add the following to \fB/etc/apt/sources.list\fP or a file in \fB/etc/apt/sources.list.d\fP: .sp .nf .ft C deb http://debian.saltstack.com/debian squeeze\-saltstack main deb http://backports.debian.org/debian\-backports squeeze\-backports main contrib non\-free .ft P .fi .SS Wheezy (Stable) .sp For wheezy, the following line is needed in either \fB/etc/apt/sources.list\fP or a file in \fB/etc/apt/sources.list.d\fP: .sp .nf .ft C deb http://debian.saltstack.com/debian wheezy\-saltstack main .ft P .fi .SS Jessie (Testing) .sp For jessie, the following line is needed in either \fB/etc/apt/sources.list\fP or a file in \fB/etc/apt/sources.list.d\fP: .sp .nf .ft C deb http://debian.saltstack.com/debian jessie\-saltstack main .ft P .fi .SS Sid (Unstable) .sp For sid, the following line is needed in either \fB/etc/apt/sources.list\fP or a file in \fB/etc/apt/sources.list.d\fP: .sp .nf .ft C deb http://debian.saltstack.com/debian unstable main .ft P .fi .SS Import the repository key. .sp You will need to import the key used for signing. .sp .nf .ft C wget \-q \-O\- "http://debian.saltstack.com/debian\-salt\-team\-joehealy.gpg.key" | apt\-key add \- .ft P .fi .IP Note You can optionally verify the key integrity with \fBsha512sum\fP using the public key signature shown here. E.g: .sp .nf .ft C echo "b702969447140d5553e31e9701be13ca11cc0a7ed5fe2b30acb8491567560ee62f834772b5095d735dfcecb2384a5c1a20045f52861c417f50b68dd5ff4660e6 debian\-salt\-team\-joehealy.gpg.key" | sha512sum \-c .ft P .fi .RE .SS Update the package database .sp .nf .ft C apt\-get update .ft P .fi .SS Install packages .sp Install the Salt master, minion, or syndic from the repository with the apt\-get command. These examples each install one daemon, but more than one package name may be given at a time: .sp .nf .ft C apt\-get install salt\-master .ft P .fi .sp .nf .ft C apt\-get install salt\-minion .ft P .fi .sp .nf .ft C apt\-get install salt\-syndic .ft P .fi .SS Post\-installation tasks .sp Now, go to the \fBConfiguring Salt\fP page. .SS Notes .sp 1. These packages will be backported from the packages intended to be uploaded into Debian unstable. This means that the packages will be built for unstable first and then backported over the next day or so. .sp 2. These packages will be tracking the released versions of salt rather than maintaining a stable fixed feature set. If a fixed version is what you desire, then either pinning or manual installation may be more appropriate for you. .sp 3. The version numbering and backporting process should provide clean upgrade paths between Debian versions. .sp If you have any questions regarding these, please email the mailing list or look for joehh on IRC. .SS Fedora .sp Beginning with version 0.9.4, Salt has been available in the primary Fedora repositories and \fI\%EPEL\fP. It is installable using yum. Fedora will have more up to date versions of Salt than other members of the Red Hat family, which makes it a great place to help improve Salt! .SS Installation .sp Salt can be installed using \fByum\fP and is available in the standard Fedora repositories. .SS Stable Release .sp Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions. 
.sp .nf .ft C yum install salt\-master yum install salt\-minion .ft P .fi .SS Post\-installation tasks .sp \fBMaster\fP .sp To have the Master start automatically at boot time: .sp .nf .ft C systemctl enable salt\-master.service .ft P .fi .sp To start the Master: .sp .nf .ft C systemctl start salt\-master.service .ft P .fi .sp \fBMinion\fP .sp To have the Minion start automatically at boot time: .sp .nf .ft C systemctl enable salt\-minion.service .ft P .fi .sp To start the Minion: .sp .nf .ft C systemctl start salt\-minion.service .ft P .fi .sp Now go to the \fBConfiguring Salt\fP page. .SS FreeBSD .sp Salt was added to the FreeBSD ports tree Dec 26th, 2011 by Christer Edwards <\fI\%christer.edwards@gmail.com\fP>. It has been tested on FreeBSD 7.4, 8.2, 9.0 and 9.1 releases. .sp Salt is dependent on the following additional ports. These will be installed as dependencies of the \fBsysutils/py\-salt\fP port. .sp .nf .ft C /devel/py\-yaml /devel/py\-pyzmq /devel/py\-Jinja2 /devel/py\-msgpack /security/py\-pycrypto /security/py\-m2crypto .ft P .fi .SS Installation .sp On FreeBSD 10 and later, to install Salt from the FreeBSD pkgng repo, use the command: .sp .nf .ft C pkg install py27\-salt .ft P .fi .sp On older versions of FreeBSD, to install Salt from the FreeBSD ports tree, use the command: .sp .nf .ft C make \-C /usr/ports/sysutils/py\-salt install clean .ft P .fi .SS Post\-installation tasks .sp \fBMaster\fP .sp Copy the sample configuration file: .sp .nf .ft C cp /usr/local/etc/salt/master.sample /usr/local/etc/salt/master .ft P .fi .sp \fBrc.conf\fP .sp Activate the Salt Master in \fB/etc/rc.conf\fP or \fB/etc/rc.conf.local\fP and add: .sp .nf .ft C + salt_master_enable="YES" .ft P .fi .sp \fBStart the Master\fP .sp Start the Salt Master as follows: .sp .nf .ft C service salt_master start .ft P .fi .sp \fBMinion\fP .sp Copy the sample configuration file: .sp .nf .ft C cp /usr/local/etc/salt/minion.sample /usr/local/etc/salt/minion .ft P .fi .sp \fBrc.conf\fP .sp Activate the Salt Minion in \fB/etc/rc.conf\fP or \fB/etc/rc.conf.local\fP and add: .sp .nf .ft C + salt_minion_enable="YES" + salt_minion_paths="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin" .ft P .fi .sp \fBStart the Minion\fP .sp Start the Salt Minion as follows: .sp .nf .ft C service salt_minion start .ft P .fi .sp Now go to the \fBConfiguring Salt\fP page. .SS Gentoo .sp Salt can be easily installed on Gentoo via Portage: .sp .nf .ft C emerge app\-admin/salt .ft P .fi .SS Post\-installation tasks .sp Now go to the \fBConfiguring Salt\fP page. .SS OS X .SS Dependency Installation .sp When installing via Homebrew, dependency resolution is handled for you. 
.sp .nf .ft C brew install saltstack .ft P .fi .sp When using MacPorts, zmq, swig, and pip may need to be installed this way: .sp .nf .ft C sudo port install py\-zmq sudo port install py27\-m2crypto sudo port install py27\-crypto sudo port install py27\-msgpack sudo port install swig\-python sudo port install py\-pip .ft P .fi .sp For installs using the OS X system Python, pip install needs to use \(aqsudo\(aq: .sp .nf .ft C sudo pip install salt .ft P .fi .SS Salt\-Master Customizations .sp To run salt\-master on OS X, the root user maxfiles limit must be increased: .sp .nf .ft C sudo launchctl limit maxfiles 4096 8192 .ft P .fi .sp Then, using sudo, add this configuration option to the /etc/salt/master file: .sp .nf .ft C max_open_files: 8192 .ft P .fi .sp Now the salt\-master should run without errors: .sp .nf .ft C sudo /usr/local/share/python/salt\-master \-\-log\-level=all .ft P .fi .SS Post\-installation tasks .sp Now go to the \fBConfiguring Salt\fP page. .SS RHEL / CentOS / Scientific Linux / Amazon Linux / Oracle Linux .SS Installation Using pip .sp Since Salt is on \fI\%PyPI\fP, it can be installed using pip, though most users prefer to install using RPMs (which can be installed from \fI\%EPEL\fP). Installation from pip is easy: .sp .nf .ft C pip install salt .ft P .fi .IP Warning If installing from pip (or from source using \fBsetup.py install\fP), be advised that the \fByum\-utils\fP package is needed for Salt to manage packages. Also, if the Python dependencies are not already installed, then you will need additional libraries/tools installed to build some of them. More information on this can be found \fIhere\fP. .RE .SS Installation from EPEL .sp Beginning with version 0.9.4, Salt has been available in \fI\%EPEL\fP. It is installable using yum. Salt should work properly with all mainstream derivatives of RHEL, including CentOS, Scientific Linux, Oracle Linux and Amazon Linux. Report any bugs or issues on the \fI\%issue tracker\fP. .sp On RHEL6, the proper Jinja package \(aqpython\-jinja2\(aq was moved from EPEL to the "RHEL Server Optional Channel". Verify this repository is enabled before installing salt on RHEL6. .SS Enabling EPEL on RHEL .sp If EPEL is not enabled on your system, you can use the following commands to enable it. .sp For RHEL 5: .sp .nf .ft C rpm \-Uvh http://mirror.pnl.gov/epel/5/i386/epel\-release\-5\-4.noarch.rpm .ft P .fi .sp For RHEL 6: .sp .nf .ft C rpm \-Uvh http://ftp.linux.ncsu.edu/pub/epel/6/i386/epel\-release\-6\-8.noarch.rpm .ft P .fi .SS Installing Stable Release .sp Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions. .sp On the salt\-master, run this: .sp .nf .ft C yum install salt\-master .ft P .fi .sp On each salt\-minion, run this: .sp .nf .ft C yum install salt\-minion .ft P .fi .SS Installing from \fBepel\-testing\fP .sp When a new Salt release is packaged, it is first admitted into the \fBepel\-testing\fP repository before being moved to the stable repo.
.sp To install from \fBepel\-testing\fP, use the \fBenablerepo\fP argument for yum: .sp .nf .ft C yum \-\-enablerepo=epel\-testing install salt\-minion .ft P .fi .SS Post\-installation tasks .sp \fBMaster\fP .sp To have the Master start automatically at boot time: .sp .nf .ft C chkconfig salt\-master on .ft P .fi .sp To start the Master: .sp .nf .ft C service salt\-master start .ft P .fi .sp \fBMinion\fP .sp To have the Minion start automatically at boot time: .sp .nf .ft C chkconfig salt\-minion on .ft P .fi .sp To start the Minion: .sp .nf .ft C service salt\-minion start .ft P .fi .sp Now go to the \fBConfiguring Salt\fP page. .SS Solaris .sp Salt was added to the OpenCSW package repository in September of 2012 by Romeo Theriault <\fI\%romeot@hawaii.edu\fP> at version 0.10.2 of Salt. It has mainly been tested on Solaris 10 (sparc), though it is built for and has been tested minimally on Solaris 10 (x86), Solaris 9 (sparc/x86) and 11 (sparc/x86). (Please let me know if you\(aqre using it on these platforms!) Most of the testing has also just focused on the minion, though it has been verified that the master starts up successfully on Solaris 10. .sp Comments and patches for better support on these platforms are very welcome. .sp As of version 0.10.4, Solaris is well supported under salt, with all of the following working well: .INDENT 0.0 .IP 1. 3 remote execution .IP 2. 3 grain detection .IP 3. 3 service control with SMF .IP 4. 3 \(aqpkg\(aq states with \(aqpkgadd\(aq and \(aqpkgutil\(aq modules .IP 5. 3 cron modules/states .IP 6. 3 user and group modules/states .IP 7. 3 shadow password management modules/states .UNINDENT .sp Salt is dependent on the following additional packages. These will automatically be installed as dependencies of the \fBpy_salt\fP package: .sp .nf .ft C py_yaml py_pyzmq py_jinja2 py_msgpack_python py_m2crypto py_crypto python .ft P .fi .SS Installation .sp To install Salt from the OpenCSW package repository, you first need to install \fI\%pkgutil\fP, assuming you don\(aqt already have it installed: .sp On Solaris 10: .sp .nf .ft C pkgadd \-d http://get.opencsw.org/now .ft P .fi .sp On Solaris 9: .sp .nf .ft C wget http://mirror.opencsw.org/opencsw/pkgutil.pkg pkgadd \-d pkgutil.pkg all .ft P .fi .sp Once pkgutil is installed, you\(aqll need to edit its config file \fB/etc/opt/csw/pkgutil.conf\fP to point it at the unstable catalog: .sp .nf .ft C \- #mirror=http://mirror.opencsw.org/opencsw/testing + mirror=http://mirror.opencsw.org/opencsw/unstable .ft P .fi .sp OK, time to install salt. .sp .nf .ft C # Update the catalog root> /opt/csw/bin/pkgutil \-U # Install salt root> /opt/csw/bin/pkgutil \-i \-y py_salt .ft P .fi .SS Minion Configuration .sp Now that salt is installed, you can find its configuration files in \fB/etc/opt/csw/salt/\fP. .sp You\(aqll want to edit the minion config file to set the name of your salt master server: .sp .nf .ft C \- #master: salt + master: your\-salt\-server .ft P .fi .sp If you would like to use \fI\%pkgutil\fP as the default package provider for your Solaris minions, you can do so using the \fBproviders\fP option in the minion config file.
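.sp
A minimal sketch of what this could look like in the minion config file (this assumes Salt\(aqs \fBpkgutil\fP module is the provider you want for the \fBpkg\fP virtual module; check the provider name against your Salt version):
.sp
.nf
.ft C
# /etc/opt/csw/salt/minion
# Use pkgutil as the package provider on this minion
providers:
  pkg: pkgutil
.ft P
.fi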
.sp You can now start the salt minion like so: .sp On Solaris 10: .sp .nf .ft C svcadm enable salt\-minion .ft P .fi .sp On Solaris 9: .sp .nf .ft C /etc/init.d/salt\-minion start .ft P .fi .sp You should now be able to log onto the salt master and check to see if the salt\-minion key is awaiting acceptance: .sp .nf .ft C salt\-key \-l un .ft P .fi .sp Accept the key: .sp .nf .ft C salt\-key \-a .ft P .fi .sp Run a simple test against the minion: .sp .nf .ft C salt \(aq<minion\-name>\(aq test.ping .ft P .fi .SS Troubleshooting .sp Logs are in \fB/var/log/salt\fP. .SS Ubuntu Installation .SS Add repository .sp The latest packages for Ubuntu are published in the saltstack PPA. If you have the \fBadd\-apt\-repository\fP utility, you can add the repository and import the key in one step: .sp .nf .ft C sudo add\-apt\-repository ppa:saltstack/salt .ft P .fi .IP "add\-apt\-repository: command not found?" .sp The \fBadd\-apt\-repository\fP command is not always present on Ubuntu systems. This can be fixed by installing \fIpython\-software\-properties\fP: .sp .nf .ft C sudo apt\-get install python\-software\-properties .ft P .fi .sp Note that since Ubuntu 12.10 (Quantal Quetzal), \fBadd\-apt\-repository\fP is found in the \fIsoftware\-properties\-common\fP package, and is part of the base install. Thus, \fBadd\-apt\-repository\fP can be used out\-of\-the\-box to add the PPA. .RE .sp Alternately, manually add the repository and import the PPA key with these commands: .sp .nf .ft C echo deb http://ppa.launchpad.net/saltstack/salt/ubuntu \(galsb_release \-sc\(ga main | sudo tee /etc/apt/sources.list.d/saltstack.list wget \-q \-O\- "http://keyserver.ubuntu.com:11371/pks/lookup?op=get&search=0x4759FA960E27C0A6" | sudo apt\-key add \- .ft P .fi .sp After adding the repository, update the package management database: .sp .nf .ft C sudo apt\-get update .ft P .fi .SS Install packages .sp Install the Salt master, minion, or syndic from the repository with the apt\-get command. These examples each install one daemon, but more than one package name may be given at a time: .sp .nf .ft C sudo apt\-get install salt\-master .ft P .fi .sp .nf .ft C sudo apt\-get install salt\-minion .ft P .fi .sp .nf .ft C sudo apt\-get install salt\-syndic .ft P .fi .SS Post\-installation tasks .sp Now go to the \fBConfiguring Salt\fP page. .SS Windows .sp Salt has full support for running the Salt Minion on Windows. .sp There are no plans for the foreseeable future to develop a Salt Master on Windows. For now, you must run your Salt Master on a supported operating system to control your Salt Minions on Windows. .sp Many of the standard Salt modules have been ported to work on Windows and many of the Salt States currently work on Windows, as well. .SS Windows Installer .sp Salt Minion Windows installers can be found below. The output of \fImd5sum\fP run against the downloaded installer should match the contents of the corresponding md5 file.
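.sp
For example, after downloading an installer and the corresponding md5 file (the local file names below are hypothetical), the two values can be compared by hand:
.sp
.nf
.ft C
# Checksum of the downloaded installer ...
md5sum Salt\-Minion\-2014.1.5\-AMD64\-Setup.exe
# ... should match the published value
cat Salt\-Minion\-2014.1.5\-AMD64\-Setup.exe.md5
.ft P
.fi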
.IP "Download here" .INDENT 0.0 .IP \(bu 2 2014.1.5 .IP \(bu 2 \fI\%Salt-Minion-2014.1.5-win32-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 \fI\%Salt-Minion-2014.1.5-AMD64-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 2014.1.4 .IP \(bu 2 \fI\%Salt-Minion-2014.1.4-win32-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 \fI\%Salt-Minion-2014.1.4-AMD64-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 2014.1.3\-1 (packaging bugfix) .IP \(bu 2 \fI\%Salt-Minion-2014.1.3-1-win32-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 \fI\%Salt-Minion-2014.1.3-1-AMD64-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 2014.1.3 .IP \(bu 2 \fI\%Salt-Minion-2014.1.3-win32-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 \fI\%Salt-Minion-2014.1.3-AMD64-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 2014.1.1 .IP \(bu 2 \fI\%Salt-Minion-2014.1.1-win32-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 \fI\%Salt-Minion-2014.1.1-AMD64-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 2014.1.0 .IP \(bu 2 \fI\%Salt-Minion-2014.1.0-win32-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 \fI\%Salt-Minion-2014.1.0-AMD64-Setup.exe\fP | \fI\%md5\fP .IP \(bu 2 0.17.5\-2 (bugfix release) .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.5-2-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.5-2-AMD64-Setup.exe\fP .IP \(bu 2 0.17.5 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.5-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.5-AMD64-Setup.exe\fP .IP \(bu 2 0.17.4 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.4-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.4-AMD64-Setup.exe\fP .IP \(bu 2 0.17.2 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.2-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.2-AMD64-Setup.exe\fP .IP \(bu 2 0.17.1.1 \- Windows Installer bugfix release .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.1.1-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.1.1-AMD64-Setup.exe\fP .IP \(bu 2 0.17.1 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.1-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.1-AMD64-Setup.exe\fP .IP \(bu 2 0.17.0 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.0-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.17.0-AMD64-Setup.exe\fP .IP \(bu 2 0.16.3 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.16.3-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.16.3-AMD64-Setup.exe\fP .IP \(bu 2 0.16.2 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.16.2-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.16.2-AMD64-Setup.exe\fP .IP \(bu 2 0.16.0 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.16.0-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.16.0-AMD64-Setup.exe\fP .IP \(bu 2 0.15.3 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.15.3-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.15.3-AMD64-Setup.exe\fP .IP \(bu 2 0.14.1 .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.14.1-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.14.1-AMD64-Setup.exe\fP .IP \(bu 2 0.14.0 .IP \(bu 2 
\fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.14.0-win32-Setup.exe\fP .IP \(bu 2 \fI\%https://docs.saltstack.com/downloads/Salt-Minion-0.14.0-AMD64-Setup.exe\fP .UNINDENT .RE .IP Note The executables above will install dependencies that the Salt minion requires. .RE .sp The 64bit installer has been tested on Windows 7 64bit and Windows Server 2008R2 64bit. The 32bit installer has been tested on Windows 2003 Server 32bit. Please file a bug report on our GitHub repo if issues for other platforms are found. .sp The installer asks for two pieces of information: the master hostname and the minion name. The installer will update the minion config with these options and then start the minion. .sp The \fIsalt\-minion\fP service will appear in the Windows Service Manager and can be started and stopped there or with the command line program \fIsc\fP like any other Windows service. .sp If the minion won\(aqt start, try installing the Microsoft Visual C++ 2008 x64 SP1 redistributable. Applying all Windows updates also helps salt\-minion run smoothly. .SS Silent Installer option .sp The installer can be run silently by providing the \fI/S\fP option at the command line. The options \fI/master\fP and \fI/minion\-name\fP allow for configuring the master hostname and minion name, respectively. Here\(aqs an example of using the silent installer: .sp .nf .ft C Salt\-Minion\-0.17.0\-Setup\-amd64.exe /S /master=yoursaltmaster /minion\-name=yourminionname .ft P .fi .SS Setting up a Windows build environment .INDENT 0.0 .IP 1. 4 Install the Microsoft Visual C++ 2008 SP1 Redistributable, \fI\%vcredist_x86\fP or \fI\%vcredist_x64\fP. .IP 2. 4 Install \fI\%msysgit\fP .IP 3. 4 Clone the Salt git repository from GitHub .sp .nf .ft C git clone git://github.com/saltstack/salt.git .ft P .fi .IP 4. 4 Install the latest point release of \fI\%Python 2.7\fP for the architecture you wish to target .IP 5. 4 Add C:\ePython27 and C:\ePython27\eScripts to your system path .IP 6. 4 Download and run the Setuptools bootstrap \- \fI\%ez_setup.py\fP .sp .nf .ft C python ez_setup.py .ft P .fi .IP 7. 4 Install Pip .sp .nf .ft C easy_install pip .ft P .fi .IP 8. 4 Install the latest point release of \fI\%OpenSSL for Windows\fP .INDENT 4.0 .IP 1. 3 During setup, choose the first option to install in Windows system directory .UNINDENT .IP 9. 4 Install the latest point release of \fI\%M2Crypto\fP .INDENT 4.0 .IP 1. 3 In general, be sure to download installers targeted at py2.7 for your chosen architecture .UNINDENT .IP 10. 4 Install the latest point release of \fI\%pycrypto\fP .IP 11. 4 Install the latest point release of \fI\%pywin32\fP .IP 12. 4 Install the latest point release of \fI\%Cython\fP .IP 13. 4 Install the latest point release of \fI\%jinja2\fP .IP 14. 4 Install the latest point release of \fI\%msgpack\fP .IP 15. 4 Install psutil .sp .nf .ft C easy_install psutil .ft P .fi .IP 16. 4 Install pyzmq .sp .nf .ft C easy_install pyzmq .ft P .fi .IP 17. 4 Install PyYAML .sp .nf .ft C easy_install pyyaml .ft P .fi .IP 18. 4 Install bbfreeze .sp .nf .ft C easy_install bbfreeze .ft P .fi .IP 19. 4 Install wmi .sp .nf .ft C pip install wmi .ft P .fi .IP 20. 4 Install esky .sp .nf .ft C pip install esky .ft P .fi .IP 21. 4 Install Salt .sp .nf .ft C cd salt python setup.py install .ft P .fi .IP 22.
4 Build a frozen binary distribution of Salt .sp .nf .ft C python setup.py bdist_esky .ft P .fi .UNINDENT .sp A zip file has been created in the \fBdist/\fP folder, containing a frozen copy of Python and the dependency libraries, along with Windows executables for each of the Salt scripts. .SS Building the installer .sp The Salt Windows installer is built with the open\-source NSIS compiler. The source for the installer is found in the pkg directory of the Salt repo here: \fI\%https://github.com/saltstack/salt/blob/develop/pkg/windows/installer/Salt-Minion-Setup.nsi\fP. To create the installer, extract the frozen archive from \fBdist/\fP into \fBpkg/windows/buildenv/\fP and run NSIS. .sp The NSIS installer can be found here: \fI\%http://nsis.sourceforge.net/Main_Page\fP .SS Testing the Salt minion .INDENT 0.0 .IP 1. 3 Create the directory C:\esalt (if it doesn\(aqt exist already) .IP 2. 3 Copy the example \fBconf\fP and \fBvar\fP directories from \fBpkg/windows/buildenv/\fP into C:\esalt .IP 3. 3 Edit C:\esalt\econf\eminion .sp .nf .ft C master: ipaddress or hostname of your salt\-master .ft P .fi .IP 4. 3 Start the salt\-minion .sp .nf .ft C cd C:\ePython27\eScripts python salt\-minion .ft P .fi .IP 5. 3 On the salt\-master accept the new minion\(aqs key .sp .nf .ft C sudo salt\-key \-A .ft P .fi .sp This accepts all unaccepted keys. If you\(aqre concerned about security, just accept the key for this specific minion. .IP 6. 3 Test that your minion is responding .sp On the salt\-master run: .sp .nf .ft C sudo salt \(aq*\(aq test.ping .ft P .fi .UNINDENT .sp You should get the following response: \fB{\(aqyour minion hostname\(aq: True}\fP .SS Single command bootstrap script .sp On a 64 bit Windows host, the following script makes an unattended install of salt, including all dependencies: .IP "Not up to date." .sp This script is not up to date. Please use the installer found above. .RE .sp .nf .ft C # (All in one line.) "PowerShell (New\-Object System.Net.WebClient).DownloadFile(\(aqhttp://csa\-net.dk/salt/bootstrap64.bat\(aq,\(aqC:\ebootstrap.bat\(aq);(New\-Object \-com Shell.Application).ShellExecute(\(aqC:\ebootstrap.bat\(aq);" .ft P .fi .sp You can execute the above command remotely from a Linux host using winexe: .sp .nf .ft C winexe \-U "administrator" //fqdn "PowerShell (New\-Object ......);" .ft P .fi .sp For more info check \fI\%http://csa-net.dk/salt\fP .SS Package management under Windows 2003 .sp On Windows Server 2003, you need to install the optional component "WMI Windows Installer Provider" to have a full list of installed packages. If you don\(aqt have this, salt\-minion can\(aqt report some installed software. .SS SUSE Installation .sp Beginning with openSUSE 13.1, Salt 0.16.4 has been available in the primary repositories. The devel:languages:python repo will have more up\-to\-date versions of salt; all package development will be done there. .SS Installation .sp Salt can be installed using \fBzypper\fP and is available in the standard openSUSE 13.1 repositories. .SS Stable Release .sp Salt is packaged separately for the minion and the master. It is necessary only to install the appropriate package for the role the machine will play. Typically, there will be one master and multiple minions.
.sp .nf .ft C zypper install salt\-master zypper install salt\-minion .ft P .fi .SS Post\-installation tasks openSUSE .sp \fBMaster\fP .sp To have the Master start automatically at boot time: .sp .nf .ft C systemctl enable salt\-master.service .ft P .fi .sp To start the Master: .sp .nf .ft C systemctl start salt\-master.service .ft P .fi .sp \fBMinion\fP .sp To have the Minion start automatically at boot time: .sp .nf .ft C systemctl enable salt\-minion.service .ft P .fi .sp To start the Minion: .sp .nf .ft C systemctl start salt\-minion.service .ft P .fi .SS Post\-installation tasks SLES .sp \fBMaster\fP .sp To have the Master start automatically at boot time: .sp .nf .ft C chkconfig salt\-master on .ft P .fi .sp To start the Master: .sp .nf .ft C rcsalt\-master start .ft P .fi .sp \fBMinion\fP .sp To have the Minion start automatically at boot time: .sp .nf .ft C chkconfig salt\-minion on .ft P .fi .sp To start the Minion: .sp .nf .ft C rcsalt\-minion start .ft P .fi .SS Unstable Release .SS openSUSE .sp For openSUSE Factory run the following as root: .sp .nf .ft C zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_Factory/devel:languages:python.repo zypper refresh zypper install salt salt\-minion salt\-master .ft P .fi .sp For openSUSE 13.1 run the following as root: .sp .nf .ft C zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.1/devel:languages:python.repo zypper refresh zypper install salt salt\-minion salt\-master .ft P .fi .sp For openSUSE 12.3 run the following as root: .sp .nf .ft C zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_12.3/devel:languages:python.repo zypper refresh zypper install salt salt\-minion salt\-master .ft P .fi .sp For openSUSE 12.2 run the following as root: .sp .nf .ft C zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_12.2/devel:languages:python.repo zypper refresh zypper install salt salt\-minion salt\-master .ft P .fi .sp For openSUSE 12.1 run the following as root: .sp .nf .ft C zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_12.1/devel:languages:python.repo zypper refresh zypper install salt salt\-minion salt\-master .ft P .fi .sp For bleeding edge python Factory run the following as root: .sp .nf .ft C zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/bleeding_edge_python_Factory/devel:languages:python.repo zypper refresh zypper install salt salt\-minion salt\-master .ft P .fi .SS Suse Linux Enterprise .sp For SLE 11 SP3 run the following as root: .sp .nf .ft C zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP3/devel:languages:python.repo zypper refresh zypper install salt salt\-minion salt\-master .ft P .fi .sp For SLE 11 SP2 run the following as root: .sp .nf .ft C zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/SLE_11_SP2/devel:languages:python.repo zypper refresh zypper install salt salt\-minion salt\-master .ft P .fi .sp Now go to the \fBConfiguring Salt\fP page. .SS Dependencies .sp Salt should run on any Unix\-like platform so long as the dependencies are met. 
.INDENT 0.0 .IP \(bu 2 \fI\%Python\fP >= 2.6, < 3.0 .IP \(bu 2 \fI\%msgpack-python\fP \- High\-performance message interchange format .IP \(bu 2 \fI\%YAML\fP \- Python YAML bindings .IP \(bu 2 \fI\%Jinja2\fP \- parsing Salt States (configurable in the master settings) .IP \(bu 2 \fI\%MarkupSafe\fP \- Implements an XML/HTML/XHTML Markup safe string for Python .IP \(bu 2 \fI\%apache-libcloud\fP \- Python lib for interacting with many of the popular cloud service providers using a unified API .IP \(bu 2 \fI\%Requests\fP \- HTTP library .UNINDENT .sp Depending on the chosen Salt transport, \fI\%ZeroMQ\fP or \fI\%RAET\fP, dependencies vary: .INDENT 0.0 .IP \(bu 2 ZeroMQ: .INDENT 2.0 .IP \(bu 2 \fI\%ZeroMQ\fP >= 3.2.0 .IP \(bu 2 \fI\%pyzmq\fP >= 2.2.0 \- ZeroMQ Python bindings .IP \(bu 2 \fI\%PyCrypto\fP \- The Python cryptography toolkit .IP \(bu 2 \fI\%M2Crypto\fP \- "Me Too Crypto" \- Python OpenSSL wrapper .UNINDENT .IP \(bu 2 RAET: .INDENT 2.0 .IP \(bu 2 \fI\%libnacl\fP \- Python bindings to \fI\%libsodium\fP .IP \(bu 2 \fI\%ioflo\fP \- The flo programming interface that raet and salt\-raet are built on .IP \(bu 2 \fI\%RAET\fP \- The world\(aqs most awesome UDP protocol .UNINDENT .UNINDENT .sp Salt defaults to the \fI\%ZeroMQ\fP transport, and the choice can be made at install time, for example: .sp .nf .ft C python setup.py install \-\-salt\-transport=raet .ft P .fi .sp This way, only the required dependencies are pulled by the setup script if need be. .sp If installing using pip, the \fB\-\-salt\-transport\fP install option can be provided like this: .sp .nf .ft C pip install \-\-install\-option="\-\-salt\-transport=raet" salt .ft P .fi .SS Optional Dependencies .INDENT 0.0 .IP \(bu 2 \fI\%mako\fP \- an optional parser for Salt States (configurable in the master settings) .IP \(bu 2 gcc \- dynamic \fI\%Cython\fP module compiling .UNINDENT .SS Upgrading Salt .sp When upgrading Salt, the master(s) should always be upgraded first. Backward compatibility for minions running newer versions of salt than their masters is not guaranteed. .sp Whenever possible, backward compatibility between new masters and old minions will be preserved. Generally, the only exception to this policy is in case of a security vulnerability. .SH TUTORIALS .SS Introduction .SS Salt Masterless Quickstart .sp Running a masterless salt\-minion lets you use Salt\(aqs configuration management for a single machine without calling out to a Salt master on another machine. .sp Since the Salt minion contains such extensive functionality, it can be useful to run it standalone. A standalone minion can be used to do a number of things: .INDENT 0.0 .IP \(bu 2 Stand up a master server via States (Salting a Salt Master) .IP \(bu 2 Use salt\-call commands on a system without connectivity to a master .IP \(bu 2 Masterless States, run states entirely from files local to the minion .UNINDENT .sp It is also useful for testing out state trees before deploying to a production setup. .SS Bootstrap Salt Minion .sp The \fI\%salt-bootstrap\fP script makes bootstrapping a server with Salt simple for any OS with a Bourne shell: .sp .nf .ft C wget \-O \- https://bootstrap.saltstack.com | sudo sh .ft P .fi .sp See the \fI\%salt-bootstrap\fP documentation for other one\-liners. When using \fI\%Vagrant\fP to test out salt, the \fI\%salty-vagrant\fP tool will provision the VM for you. .SS Telling Salt to Run Masterless .sp To instruct the minion to not look for a master, the \fBfile_client\fP configuration option needs to be set.
By default the \fBfile_client\fP is set to \fBremote\fP so that the minion knows that file server and pillar data are to be gathered from the master. When setting the \fBfile_client\fP option to \fBlocal\fP the minion is configured to not gather this data from the master. .sp .nf .ft C file_client: local .ft P .fi .sp Now the salt minion will not look for a master and will assume that the local system has all of the file and pillar resources. .SS Create State Tree .sp Following the successful installation of a salt\-minion, the next step is to create a state tree, which is where the SLS files that comprise the possible states of the minion are stored. .sp The following example walks through the steps necessary to create a state tree that ensures that the server has the Apache webserver installed. .INDENT 0.0 .IP 1. 3 Create the \fBtop.sls\fP file: .UNINDENT .sp \fB/srv/salt/top.sls:\fP .sp .nf .ft C base: \(aq*\(aq: \- webserver .ft P .fi .INDENT 0.0 .IP 2. 3 Create the webserver state tree: .UNINDENT .sp \fB/srv/salt/webserver.sls:\fP .sp .nf .ft C apache: # ID declaration pkg: # state declaration \- installed # function declaration .ft P .fi .sp The only thing left is to provision our minion using salt\-call and the highstate command. .SS Salt\-call .sp The salt\-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt\-call command checks into the master to retrieve file server and pillar data, but when running standalone salt\-call needs to be instructed to not check the master for this data: .sp .nf .ft C salt\-call \-\-local state.highstate .ft P .fi .sp The \fB\-\-local\fP flag tells the salt\-minion to look for the state tree in the local file system and not to contact a Salt Master for instructions. .sp To provide verbose output, use \fB\-l debug\fP: .sp .nf .ft C salt\-call \-\-local state.highstate \-l debug .ft P .fi .sp The minion first examines the \fBtop.sls\fP file and determines that it is a part of the group matched by \fB*\fP glob and that the \fBwebserver\fP SLS should be applied. .sp It then examines the \fBwebserver.sls\fP file and finds the \fBapache\fP state, which installs the Apache package. .sp The minion should now have Apache installed, and the next step is to begin learning how to write \fBmore complex states\fP. .SS Basics .SS Salt Bootstrap .sp The Salt Bootstrap script allows for a user to install the Salt Minion or Master on a variety of system distributions and versions. This shell script known as \fBbootstrap\-salt.sh\fP runs through a series of checks to determine the operating system type and version. It then installs the Salt binaries using the appropriate methods. The Salt Bootstrap script installs the minimum number of packages required to run Salt. This means that in the event you run the bootstrap to install via package, Git will not be installed. Installing the minimum number of packages helps ensure the script stays as lightweight as possible, assuming the user will install any other required packages after the Salt binaries are present on the system. 
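.sp
A cautious way to use it is the two\-step route: download the script, inspect it, and only then run it. A minimal sketch, assuming \fBcurl\fP is available (the one\-liner variants are shown later in this tutorial):
.sp
.nf
.ft C
# Fetch the bootstrap script
curl \-L \-o bootstrap\-salt.sh https://bootstrap.saltstack.com
# Read it before handing it to a root shell
less bootstrap\-salt.sh
# Install your distribution\(aqs stable packages (the default)
sudo sh bootstrap\-salt.sh
.ft P
.fi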
The script source is available on GitHub: \fI\%https://github.com/saltstack/salt-bootstrap\fP .SS Supported Operating Systems .INDENT 0.0 .IP \(bu 2 Amazon Linux 2012.09 .IP \(bu 2 Arch .IP \(bu 2 CentOS 5/6 .IP \(bu 2 Debian 6.x/7.x/8 (git installations only) .IP \(bu 2 Fedora 17/18 .IP \(bu 2 FreeBSD 9.1/9.2/10 .IP \(bu 2 Gentoo .IP \(bu 2 Linaro .IP \(bu 2 Linux Mint 13/14 .IP \(bu 2 OpenSUSE 12.x .IP \(bu 2 Oracle Linux 5/5 .IP \(bu 2 Red Hat 5/6 .IP \(bu 2 Red Hat Enterprise 5/6 .IP \(bu 2 Scientific Linux 5/6 .IP \(bu 2 SmartOS .IP \(bu 2 SuSE 11 SP1/11 SP2 .IP \(bu 2 Ubuntu 10.x/11.x/12.x/13.04/13.10 .IP \(bu 2 Elementary OS 0.2 .UNINDENT .IP Note In the event you do not see your distribution or version available, please review the develop branch on GitHub as it may contain updates that are not present in the stable release: \fI\%https://github.com/saltstack/salt-bootstrap/tree/develop\fP .RE .SS Example Usage .sp The Salt Bootstrap script has a wide variety of options that can be passed as well as several ways of obtaining the bootstrap script itself. The examples that follow include both two\-step installs (download the script, then run it) and \fIone\-liner\fP installs that pipe the script straight into a shell. .IP Note In every two\-step example, you would be well\-served to examine the downloaded file to ensure that it does what you expect. The one\-liner methods do not involve a verification step and assume that the delivered file is trustworthy. Any of the examples which use two lines can be made to run in a single\-line configuration with minor modifications. .RE
.sp For example, using \fBcurl\fP to install your distribution\(aqs stable packages: .sp .nf .ft C curl \-L https://bootstrap.saltstack.com | sudo sh .ft P .fi .sp Using \fBwget\fP to install your distribution\(aqs stable packages: .sp .nf .ft C wget \-O \- https://bootstrap.saltstack.com | sudo sh .ft P .fi .sp Installing the latest version available from git with \fBcurl\fP: .sp .nf .ft C curl \-L https://bootstrap.saltstack.com | sudo sh \-s \-\- git develop .ft P .fi .sp Install a specific version from git using \fBwget\fP: .sp .nf .ft C wget \-O \- https://bootstrap.saltstack.com | sh \-s \-\- \-P git v0.16.4 .ft P .fi .sp If you already have python installed, \fBpython 2.6\fP, then it\(aqs as easy as: .sp .nf .ft C python \-m urllib "https://bootstrap.saltstack.com" | sudo sh \-s \-\- git develop .ft P .fi .sp All python versions should support the following one liner: .sp .nf .ft C python \-c \(aqimport urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()\(aq | \e sudo sh \-s \-\- git develop .ft P .fi .sp On a FreeBSD base system you usually don\(aqt have either of the above binaries available. You \fBdo\fP have \fBfetch\fP available though: .sp .nf .ft C fetch \-o \- https://bootstrap.saltstack.com | sudo sh .ft P .fi .sp If all you want is to install a \fBsalt\-master\fP using latest git: .sp .nf .ft C curl \-L https://bootstrap.saltstack.com | sudo sh \-s \-\- \-M \-N git develop .ft P .fi .sp If you want to install a specific release version (based on the git tags): .sp .nf .ft C curl \-L https://bootstrap.saltstack.com | sudo sh \-s \-\- git v0.16.4 .ft P .fi .sp Downloading the develop branch (from here standard command line options may be passed): .sp .nf .ft C wget https://bootstrap.saltstack.com/develop .ft P .fi .SS Command Line Options .sp Here\(aqs a summary of the command line options: .sp .nf .ft C $ sh bootstrap\-salt.sh \-h Usage : bootstrap\-salt.sh [options] Installation types: \- stable (default) \- daily (ubuntu specific) \- git Examples: $ bootstrap\-salt.sh $ bootstrap\-salt.sh stable $ bootstrap\-salt.sh daily $ bootstrap\-salt.sh git $ bootstrap\-salt.sh git develop $ bootstrap\-salt.sh git v0.17.0 $ bootstrap\-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357 Options: \-h Display this message \-v Display script version \-n No colours. \-D Show debug output. \-c Temporary configuration directory \-g Salt repository URL. (default: git://github.com/saltstack/salt.git) \-k Temporary directory holding the minion keys which will pre\-seed the master. \-M Also install salt\-master \-S Also install salt\-syndic \-N Do not install salt\-minion \-X Do not start daemons after installation \-C Only run the configuration function. This option automatically bypasses any installation. \-P Allow pip based installations. On some distributions the required salt packages or its dependencies are not available as a package for that distribution. Using this flag allows the script to use pip as a last resort method. NOTE: This only works for functions which actually implement pip based installations. \-F Allow copied files to overwrite existing(config, init.d, etc) \-U If set, fully upgrade the system prior to bootstrapping salt \-K If set, keep the temporary files in the temporary directories specified with \-c and \-k. \-I If set, allow insecure connections while downloading any files. For example, pass \(aq\-\-no\-check\-certificate\(aq to \(aqwget\(aq or \(aq\-\-insecure\(aq to \(aqcurl\(aq \-A Pass the salt\-master DNS name or IP. 
This will be stored under ${BS_SALT_ETC_DIR}/minion.d/99\-master\-address.conf \-i Pass the salt\-minion id. This will be stored under ${BS_SALT_ETC_DIR}/minion_id \-L Install the Apache Libcloud package if possible(required for salt\-cloud) \-p Extra\-package to install while installing salt dependencies. One package per \-p flag. You\(aqre responsible for providing the proper package name. .ft P .fi .SS Standalone Minion .sp Since the Salt minion contains such extensive functionality, it can be useful to run it standalone. A standalone minion can be used to do a number of things: .INDENT 0.0 .IP \(bu 2 Use salt\-call commands on a system without connectivity to a master .IP \(bu 2 Masterless States, run states entirely from files local to the minion .UNINDENT .SS Telling Salt Call to Run Masterless .sp The salt\-call command is used to run module functions locally on a minion instead of executing them from the master. Normally the salt\-call command checks into the master to retrieve file server and pillar data, but when running standalone, salt\-call needs to be instructed to not check the master for this data. To instruct the minion to not look for a master when running salt\-call, the \fBfile_client\fP configuration option needs to be set. By default the \fBfile_client\fP is set to \fBremote\fP so that the minion knows that file server and pillar data are to be gathered from the master. When setting the \fBfile_client\fP option to \fBlocal\fP, the minion is configured to not gather this data from the master. .sp .nf .ft C file_client: local .ft P .fi .sp Now the salt\-call command will not look for a master and will assume that the local system has all of the file and pillar resources. .SS Running States Masterless .sp The state system can be easily run without a Salt master, with all needed files local to the minion. To do this, the minion configuration file needs to be set up to know how to return file_roots information like the master. The file_roots setting defaults to /srv/salt for the base environment just like on the master: .sp .nf .ft C file_roots: base: \- /srv/salt .ft P .fi .sp Now set up the Salt State Tree, top file, and SLS modules in the same way that they would be set up on a master. Now, with the \fBfile_client\fP option set to \fBlocal\fP and an available state tree, calls to functions in the state module will use the information in the file_roots on the minion instead of checking in with the master. .sp Remember that when creating a state tree on a minion there are no syntax or path changes needed; SLS modules written to be used from a master do not need to be modified in any way to work with a minion. .sp This makes it easy to "script" deployments with Salt states without having to set up a master, and allows for these SLS modules to be easily moved into a Salt master as the deployment grows. .sp The declared state can now be executed with: .sp .nf .ft C salt\-call state.highstate .ft P .fi .sp Or the salt\-call command can be executed with the \fB\-\-local\fP flag, which makes it unnecessary to change the configuration file: .sp .nf .ft C salt\-call state.highstate \-\-local .ft P .fi .SS Opening the Firewall up for Salt .sp The Salt master communicates with the minions using an AES\-encrypted ZeroMQ connection. These communications are done over TCP ports 4505 and 4506, which need to be accessible on the master only. This document outlines suggested firewall rules for allowing these incoming connections to the master.
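.sp
Before adjusting any rules, it can be useful to confirm that the master is actually listening on these ports. A quick check on a Linux master (which command is available varies by platform):
.sp
.nf
.ft C
# Show listening TCP sockets for the publish (4505) and return (4506) ports
netstat \-tln | grep \-E \(aq4505|4506\(aq
.ft P
.fi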
.IP Note No firewall configuration needs to be done on Salt minions. These changes refer to the master only. .RE .SS RHEL 6 / CentOS 6 .sp The \fBlokkit\fP command packaged with some Linux distributions makes opening iptables firewall ports very simple via the command line. Just be careful to not lock out access to the server by neglecting to open the ssh port. .sp \fBlokkit example\fP: .sp .nf .ft C lokkit \-p 22:tcp \-p 4505:tcp \-p 4506:tcp .ft P .fi .sp The \fBsystem\-config\-firewall\-tui\fP command provides a text\-based interface to modifying the firewall. .sp \fBsystem\-config\-firewall\-tui\fP: .sp .nf .ft C system\-config\-firewall\-tui .ft P .fi .SS openSUSE .sp Salt installs firewall rules in \fI\%/etc/sysconfig/SuSEfirewall2.d/services/salt\fP. Enable with: .sp .nf .ft C SuSEfirewall2 open SuSEfirewall2 start .ft P .fi .sp If you have an older package of Salt where the above configuration file is not included, the \fBSuSEfirewall2\fP command makes opening iptables firewall ports very simple via the command line. .sp \fBSuSEfirewall example\fP: .sp .nf .ft C SuSEfirewall2 open EXT TCP 4505 SuSEfirewall2 open EXT TCP 4506 .ft P .fi .sp The firewall module in YaST2 provides a text\-based interface to modifying the firewall. .sp \fBYaST2\fP: .sp .nf .ft C yast2 firewall .ft P .fi .SS iptables .sp Different Linux distributions store their \fIiptables\fP (also known as \fI\%netfilter\fP) rules in different places, which makes it difficult to standardize firewall documentation. Included are some of the more common locations, but your mileage may vary. .sp \fBFedora / RHEL / CentOS\fP: .sp .nf .ft C /etc/sysconfig/iptables .ft P .fi .sp \fBArch Linux\fP: .sp .nf .ft C /etc/iptables/iptables.rules .ft P .fi .sp \fBDebian\fP .sp Follow these instructions: \fI\%https://wiki.debian.org/iptables\fP .sp Once you\(aqve found your firewall rules, you\(aqll need to add the two lines below to allow traffic on \fBtcp/4505\fP and \fBtcp/4506\fP: .sp .nf .ft C \-A INPUT \-m state \-\-state new \-m tcp \-p tcp \-\-dport 4505 \-j ACCEPT \-A INPUT \-m state \-\-state new \-m tcp \-p tcp \-\-dport 4506 \-j ACCEPT .ft P .fi .sp \fBUbuntu\fP .sp Salt installs firewall rules in \fI\%/etc/ufw/applications.d/salt.ufw\fP. Enable with: .sp .nf .ft C ufw allow salt .ft P .fi .SS pf.conf .sp The BSD\-family of operating systems uses \fI\%packet filter (pf)\fP. The following example describes the additions to \fBpf.conf\fP needed to access the Salt master. .sp .nf .ft C pass in on $int_if proto tcp from any to $int_if port 4505 pass in on $int_if proto tcp from any to $int_if port 4506 .ft P .fi .sp Once these additions have been made to the \fBpf.conf\fP the rules will need to be reloaded. This can be done using the \fBpfctl\fP command. .sp .nf .ft C pfctl \-vf /etc/pf.conf .ft P .fi .SS Whitelist communication to Master .sp There are situations where you want to selectively allow Minion traffic from specific hosts or networks into your Salt Master. The first scenario which comes to mind is to prevent unwanted traffic to your Master out of security concerns, but another scenario is to handle Minion upgrades when there are backwards incompatible changes between the installed Salt versions in your environment. 
.sp Here is an example \fILinux iptables\fP ruleset to be set on the Master: .sp .nf .ft C # Allow Minions from these networks \-I INPUT \-s 10.1.2.0/24 \-p tcp \-m multiport \-\-dports 4505,4506 \-j ACCEPT \-I INPUT \-s 10.1.3.0/24 \-p tcp \-m multiport \-\-dports 4505,4506 \-j ACCEPT # Allow Salt to communicate with Master on the loopback interface \-A INPUT \-i lo \-p tcp \-m multiport \-\-dports 4505,4506 \-j ACCEPT # Reject everything else \-A INPUT \-p tcp \-m multiport \-\-dports 4505,4506 \-j REJECT .ft P .fi .IP Note The important thing to note here is that the \fBsalt\fP command needs to communicate with the listening network socket of \fBsalt\-master\fP on the \fIloopback\fP interface. Without this you will see no outgoing Salt traffic from the master, even for a simple \fBsalt \(aq*\(aq test.ping\fP, because the \fBsalt\fP client never reached the \fBsalt\-master\fP to tell it to carry out the execution. .RE .SS Using cron with Salt .sp The Salt Minion can initiate its own highstate using the \fBsalt\-call\fP command. .sp .nf .ft C $ salt\-call state.highstate .ft P .fi .sp This will cause the minion to check in with the master and ensure it is in the correct \(aqstate\(aq. .SS Use cron to initiate a highstate .sp If you would like the Salt Minion to regularly check in with the master you can use the venerable cron to run the \fBsalt\-call\fP command. .sp .nf .ft C # PATH=/bin:/sbin:/usr/bin:/usr/sbin 00 00 * * * salt\-call state.highstate .ft P .fi .sp The above cron entry will run a highstate every day at midnight. .IP Note Be aware that you may need to ensure the PATH for cron includes any scripts or commands that need to be executed. .RE .SS Remote execution tutorial .sp \fBBefore continuing\fP make sure you have a working Salt installation by following the \fBinstallation\fP and the \fBconfiguration\fP instructions. .IP "Stuck?" .sp There are many ways to \fIget help from the Salt community\fP including our \fI\%mailing list\fP and our \fI\%IRC channel\fP #salt. .RE .SS Order your minions around .sp Now that you have a \fImaster\fP and at least one \fIminion\fP communicating with each other you can perform commands on the minion via the \fBsalt\fP command. Salt calls are comprised of three main components: .sp .nf .ft C salt \(aq<target>\(aq <function> [arguments] .ft P .fi .IP "See also" .sp \fBsalt manpage\fP .RE .SS target .sp The target component allows you to filter which minions should run the following function. The default filter is a glob on the minion id. For example: .sp .nf .ft C salt \(aq*\(aq test.ping salt \(aq*.example.org\(aq test.ping .ft P .fi .sp Targets can be based on minion system information using the Grains system: .sp .nf .ft C salt \-G \(aqos:Ubuntu\(aq test.ping .ft P .fi .IP "See also" .sp \fBGrains system\fP .RE .sp Targets can be filtered by regular expression: .sp .nf .ft C salt \-E \(aqvirtmach[0\-9]\(aq test.ping .ft P .fi .sp Targets can be explicitly specified in a list: .sp .nf .ft C salt \-L \(aqfoo,bar,baz,quo\(aq test.ping .ft P .fi .sp Or multiple target types can be combined in one command: .sp .nf .ft C salt \-C \(aqG@os:Ubuntu and webser* or E@database.*\(aq test.ping .ft P .fi .SS function .sp A function is some functionality provided by a module. Salt ships with a large collection of available functions. 
List all available functions on your minions: .sp .nf .ft C salt \(aq*\(aq sys.doc .ft P .fi .sp Here are some examples: .sp Show all currently available minions: .sp .nf .ft C salt \(aq*\(aq test.ping .ft P .fi .sp Run an arbitrary shell command: .sp .nf .ft C salt \(aq*\(aq cmd.run \(aquname \-a\(aq .ft P .fi .IP "See also" .sp \fBthe full list of modules\fP .RE .SS arguments .sp Space\-delimited arguments to the function: .sp .nf .ft C salt \(aq*\(aq cmd.exec_code python \(aqimport sys; print sys.version\(aq .ft P .fi .sp Optional, keyword arguments are also supported: .sp .nf .ft C salt \(aq*\(aq pip.install salt timeout=5 upgrade=True .ft P .fi .sp They are always in the form of \fBkwarg=argument\fP. .SS Pillar Walkthrough .IP Note This walkthrough assumes that the reader has already completed the initial Salt \fBwalkthrough\fP. .RE .sp Pillars are tree\-like structures of data defined on the Salt Master and passed through to minions. They allow confidential, targeted data to be securely sent only to the relevant minion. .IP Note Grains and Pillar are sometimes confused, just remember that Grains are data about a minion which is stored or generated from the minion. This is why information like the OS and CPU type are found in Grains. Pillar is information about a minion or many minions stored or generated on the Salt Master. .RE .sp Pillar data is useful for: .INDENT 0.0 .TP .B Highly Sensitive Data: Information transferred via pillar is guaranteed to only be presented to the minions that are targeted, making Pillar suitable for managing security information, such as cryptographic keys and passwords. .TP .B Minion Configuration: Minion modules such as the execution modules, states, and returners can often be configured via data stored in pillar. .TP .B Variables: Variables which need to be assigned to specific minions or groups of minions can be defined in pillar and then accessed inside sls formulas and template files. .TP .B Arbitrary Data: Pillar can contain any basic data structure, so a list of values, or a key/value store can be defined making it easy to iterate over a group of values in sls formulas .UNINDENT .sp Pillar is therefore one of the most important systems when using Salt. This walkthrough is designed to get a simple Pillar up and running in a few minutes and then to dive into the capabilities of Pillar and where the data is available. .SS Setting Up Pillar .sp The pillar is already running in Salt by default. To see the minion\(aqs pillar data: .sp .nf .ft C salt \(aq*\(aq pillar.items .ft P .fi .IP Note Prior to version 0.16.2, this function is named \fBpillar.data\fP. This function name is still supported for backwards compatibility. .RE .sp By default the contents of the master configuration file are loaded into pillar for all minions. This enables the master configuration file to be used for global configuration of minions. .sp Similar to the state tree, the pillar is comprised of sls files and has a top file. The default location for the pillar is in /srv/pillar. .IP Note The pillar location can be configured via the \fIpillar_roots\fP option inside the master configuration file. It must not be in a subdirectory of the state tree. 
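.sp
For reference, the default layout described above corresponds to a master configuration entry along these lines (a minimal sketch, using the default \fB/srv/pillar\fP location mentioned above):
.sp
.nf
.ft C
pillar_roots:
  base:
    \- /srv/pillar
.ft P
.fi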
.RE .sp To start setting up the pillar, the /srv/pillar directory needs to be present: .sp .nf .ft C mkdir /srv/pillar .ft P .fi .sp Now create a simple top file, following the same format as the top file used for states: .sp \fB/srv/pillar/top.sls\fP: .sp .nf .ft C base: \(aq*\(aq: \- data .ft P .fi .sp This top file associates the data.sls file to all minions. Now the \fB/srv/pillar/data.sls\fP file needs to be populated: .sp \fB/srv/pillar/data.sls\fP: .sp .nf .ft C info: some data .ft P .fi .sp Now that the file has been saved, the minions\(aq pillars will be updated: .sp .nf .ft C salt \(aq*\(aq pillar.items .ft P .fi .sp The key \fBinfo\fP should now appear in the returned pillar data. .SS More Complex Data .sp Unlike states, pillar files do not need to define \fBformulas\fP. This example sets up user data with a UID: .sp \fB/srv/pillar/users/init.sls\fP: .sp .nf .ft C users: thatch: 1000 shouse: 1001 utahdave: 1002 redbeard: 1003 .ft P .fi .IP Note The same directory lookups that exist in states exist in pillar, so the file \fBusers/init.sls\fP can be referenced with \fBusers\fP in the \fItop file\fP. .RE .sp The top file will need to be updated to include this sls file: .sp \fB/srv/pillar/top.sls\fP: .sp .nf .ft C base: \(aq*\(aq: \- data \- users .ft P .fi .sp Now the data will be available to the minions. To use the pillar data in a state, you can use Jinja: .sp \fB/srv/salt/users/init.sls\fP .sp .nf .ft C {% for user, uid in pillar.get(\(aqusers\(aq, {}).items() %} {{user}}: user.present: \- uid: {{uid}} {% endfor %} .ft P .fi .sp This approach allows for users to be safely defined in a pillar and then the user data is applied in an sls file. .SS Parameterizing States With Pillar .sp Pillar data can be accessed in state files to customise behaviour for each minion. All pillar (and grain) data applicable to each minion is substituted into the state files through templating before being run. Typical uses include setting directories appropriate for the minion and skipping states that don\(aqt apply. .sp A simple example is to set up a mapping of package names in pillar for separate Linux distributions: .sp \fB/srv/pillar/pkg/init.sls\fP: .sp .nf .ft C pkgs: {% if grains[\(aqos_family\(aq] == \(aqRedHat\(aq %} apache: httpd vim: vim\-enhanced {% elif grains[\(aqos_family\(aq] == \(aqDebian\(aq %} apache: apache2 vim: vim {% elif grains[\(aqos\(aq] == \(aqArch\(aq %} apache: apache vim: vim {% endif %} .ft P .fi .sp The new \fBpkg\fP sls needs to be added to the top file: .sp \fB/srv/pillar/top.sls\fP: .sp .nf .ft C base: \(aq*\(aq: \- data \- users \- pkg .ft P .fi .sp Now the minions will auto map values based on respective operating systems inside of the pillar, so sls files can be safely parameterized: .sp \fB/srv/salt/apache/init.sls\fP: .sp .nf .ft C apache: pkg.installed: \- name: {{ pillar[\(aqpkgs\(aq][\(aqapache\(aq] }} .ft P .fi .sp Or, if no pillar is available a default can be set as well: .IP Note The function \fBpillar.get\fP used in this example was added to Salt in version 0.14.0 .RE .sp \fB/srv/salt/apache/init.sls\fP: .sp .nf .ft C apache: pkg.installed: \- name: {{ salt[\(aqpillar.get\(aq](\(aqpkgs:apache\(aq, \(aqhttpd\(aq) }} .ft P .fi .sp In the above example, if the pillar value \fBpillar[\(aqpkgs\(aq][\(aqapache\(aq]\fP is not set in the minion\(aqs pillar, then the default of \fBhttpd\fP will be used. .IP Note Under the hood, pillar is just a Python dict, so Python dict methods such as \fIget\fP and \fIitems\fP can be used. 
.RE .SS Pillar Makes Simple States Grow Easily .sp One of the design goals of pillar is to make simple sls formulas easily grow into more flexible formulas without refactoring or complicating the states. .sp A simple formula: .sp \fB/srv/salt/edit/vim.sls\fP: .sp .nf .ft C vim: pkg: \- installed /etc/vimrc: file.managed: \- source: salt://edit/vimrc \- mode: 644 \- user: root \- group: root \- require: \- pkg: vim .ft P .fi .sp Can be easily transformed into a powerful, parameterized formula: .sp \fB/srv/salt/edit/vim.sls\fP: .sp .nf .ft C vim: pkg: \- installed \- name: {{ pillar[\(aqpkgs\(aq][\(aqvim\(aq] }} /etc/vimrc: file.managed: \- source: {{ pillar[\(aqvimrc\(aq] }} \- mode: 644 \- user: root \- group: root \- require: \- pkg: vim .ft P .fi .sp Where the vimrc source location can now be changed via pillar: .sp \fB/srv/pillar/edit/vim.sls\fP: .sp .nf .ft C {% if grains[\(aqid\(aq].startswith(\(aqdev\(aq) %} vimrc: salt://edit/dev_vimrc {% elif grains[\(aqid\(aq].startswith(\(aqqa\(aq) %} vimrc: salt://edit/qa_vimrc {% else %} vimrc: salt://edit/vimrc {% endif %} .ft P .fi .sp Ensuring that the right vimrc is sent out to the correct minions. .SS More On Pillar .sp Pillar data is generated on the Salt master and securely distributed to minions. Salt is not restricted to the pillar sls files when defining the pillar but can retrieve data from external sources. This can be useful when information about an infrastructure is stored in a separate location. .sp Reference information on pillar and the external pillar interface can be found in the Salt documentation: .sp \fBPillar\fP .SS States .SS How Do I Use Salt States? .sp Simplicity, Simplicity, Simplicity .sp Many of the most powerful and useful engineering solutions are founded on simple principles. Salt States strive to do just that: K.I.S.S. (Keep It Stupidly Simple) .sp The core of the Salt State system is the SLS, or \fBS\fPa\fBL\fPt \fBS\fPtate file. The SLS is a representation of the state in which a system should be in, and is set up to contain this data in a simple format. This is often called configuration management. .IP Note This is just the beginning of using states, make sure to read up on pillar \fBPillar\fP next. .RE .SS It is All Just Data .sp Before delving into the particulars, it will help to understand that the SLS file is just a data structure under the hood. While understanding that the SLS is just a data structure isn\(aqt critical for understanding and making use of Salt States, it should help bolster knowledge of where the real power is. .sp SLS files are therefore, in reality, just \fI\%dictionaries\fP, \fI\%lists\fP, \fI\%strings\fP, and \fI\%numbers\fP. By using this approach Salt can be much more flexible. As one writes more state files, it becomes clearer exactly what is being written. The result is a system that is easy to understand, yet grows with the needs of the admin or developer. .SS The Top File .sp The example SLS files in the below sections can be assigned to hosts using a file called \fBtop.sls\fP. This file is described in\-depth \fBhere\fP. .SS Default Data \- YAML .sp By default Salt represents the SLS data in what is one of the simplest serialization formats available \- \fI\%YAML\fP. .sp A typical SLS file will often look like this in YAML: .IP Note These demos use some generic service and package names, different distributions often use different names for packages and services. For instance \fIapache\fP should be replaced with \fIhttpd\fP on a Red Hat system. 
Salt uses the name of the init script, systemd name, upstart name, etc., based on the underlying service management system for the platform. To get a list of the available service names on a platform, execute the service.get_all salt function. .sp Information on how to make states work with multiple distributions is covered later in the tutorial. .RE .sp .nf .ft C apache: pkg: \- installed service: \- running \- require: \- pkg: apache .ft P .fi .sp This SLS data will ensure that the package named apache is installed, and that the apache service is running. The components can be explained in a simple way. .sp The first line is the ID for a set of data, and it is called the ID Declaration. This ID sets the name of the thing that needs to be manipulated. .sp The second and fourth lines are the start of the State Declarations, so they are using the pkg and service states respectively. The pkg state manages a software package to be installed via the system\(aqs native package manager, and the service state manages a system daemon. .sp The third and fifth lines are the function to run. This function defines what state the named package and service should be in. Here, the package is to be installed, and the service should be running. .sp Finally, on line six, is the word \fBrequire\fP. This is called a Requisite Statement, and it makes sure that the Apache service is only started after a successful installation of the apache package. .SS Adding Configs and Users .sp When setting up a service like an Apache web server, many more components may need to be added. The Apache configuration file will most likely be managed, and a user and group may need to be set up. .sp .nf .ft C apache: pkg: \- installed service: \- running \- watch: \- pkg: apache \- file: /etc/httpd/conf/httpd.conf \- user: apache user.present: \- uid: 87 \- gid: 87 \- home: /var/www/html \- shell: /bin/nologin \- require: \- group: apache group.present: \- gid: 87 \- require: \- pkg: apache /etc/httpd/conf/httpd.conf: file.managed: \- source: salt://apache/httpd.conf \- user: root \- group: root \- mode: 644 .ft P .fi .sp This SLS data greatly extends the first example, and includes a config file, a user, a group, and a new requisite statement: \fBwatch\fP. .sp Adding more states is easy. Since the new user and group states are under the Apache ID, the user and group will be the Apache user and group. The \fBrequire\fP statements will make sure that the user will only be made after the group, and that the group will be made only after the Apache package is installed. .sp Next, the \fBrequire\fP statement under service was changed to watch, and is now watching three states instead of just one. The watch statement does the same thing as require, making sure that the other states run before running the state with a watch, but it adds an extra component. The \fBwatch\fP statement will run the state\(aqs watcher function for any changes to the watched states. So if the package was updated, the config file changed, or the user uid was modified, then the service state\(aqs watcher will be run. The service state\(aqs watcher just restarts the service, so in this case, a change in the config file will also trigger a restart of the respective service. .SS Moving Beyond a Single SLS .sp When setting up Salt States in a scalable manner, more than one SLS will need to be used. The above examples were in a single SLS file, but two or more SLS files can be combined to build out a State Tree. 
The above example also references a file with a strange source \- \fBsalt://apache/httpd.conf\fP. That file will need to be available as well. .sp The SLS files are laid out in a directory structure on the Salt master; an SLS is just a file and files to download are just files. .sp The Apache example would be laid out in the root of the Salt file server like this: .sp .nf .ft C apache/init.sls apache/httpd.conf .ft P .fi .sp So the httpd.conf is just a file in the apache directory, and is referenced directly. .sp But when using more than one single SLS file, more components can be added to the toolkit. Consider this SSH example: .sp \fBssh/init.sls:\fP .sp .nf .ft C openssh\-client: pkg.installed /etc/ssh/ssh_config: file.managed: \- user: root \- group: root \- mode: 644 \- source: salt://ssh/ssh_config \- require: \- pkg: openssh\-client .ft P .fi .sp \fBssh/server.sls:\fP .sp .nf .ft C include: \- ssh openssh\-server: pkg.installed sshd: service.running: \- require: \- pkg: openssh\-client \- pkg: openssh\-server \- file: /etc/ssh/banner \- file: /etc/ssh/sshd_config /etc/ssh/sshd_config: file.managed: \- user: root \- group: root \- mode: 644 \- source: salt://ssh/sshd_config \- require: \- pkg: openssh\-server /etc/ssh/banner: file: \- managed \- user: root \- group: root \- mode: 644 \- source: salt://ssh/banner \- require: \- pkg: openssh\-server .ft P .fi .IP Note Notice that we use two similar ways of denoting that a file is managed by Salt. In the \fI/etc/ssh/sshd_config\fP state section above, we use the \fIfile.managed\fP state declaration whereas with the \fI/etc/ssh/banner\fP state section, we use the \fIfile\fP state declaration and add a \fImanaged\fP attribute to that state declaration. Both ways produce an identical result; the first way \-\- using \fIfile.managed\fP \-\- is merely a shortcut. .RE .sp Now our State Tree looks like this: .sp .nf .ft C apache/init.sls apache/httpd.conf ssh/init.sls ssh/server.sls ssh/banner ssh/ssh_config ssh/sshd_config .ft P .fi .sp This example now introduces the \fBinclude\fP statement. The include statement includes another SLS file so that components found in it can be required, watched or as will soon be demonstrated \- extended. .sp The include statement allows for states to be cross linked. When an SLS has an include statement it is literally extended to include the contents of the included SLS files. .sp Note that some of the SLS files are called init.sls, while others are not. More info on what this means can be found in the \fIStates Tutorial\fP. .SS Extending Included SLS Data .sp Sometimes SLS data needs to be extended. Perhaps the apache service needs to watch additional resources, or under certain circumstances a different file needs to be placed. .sp In these examples, the first will add a custom banner to ssh and the second will add more watchers to apache to include mod_python. .sp \fBssh/custom\-server.sls:\fP .sp .nf .ft C include: \- ssh.server extend: /etc/ssh/banner: file: \- source: salt://ssh/custom\-banner .ft P .fi .sp \fBpython/mod_python.sls:\fP .sp .nf .ft C include: \- apache extend: apache: service: \- watch: \- pkg: mod_python mod_python: pkg.installed .ft P .fi .sp The \fBcustom\-server.sls\fP file uses the extend statement to overwrite where the banner is being downloaded from, and therefore changing what file is being used to configure the banner. .sp In the new mod_python SLS the mod_python package is added, but more importantly the apache service was extended to also watch the mod_python package. 
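.sp
To make the effect of the extend concrete, here is a rough sketch of how the apache service state behaves once the include and extend in \fBmod_python.sls\fP are processed, assuming \fBapache/init.sls\fP contains the Apache example shown earlier:
.sp
.nf
.ft C
apache:
  service:
    \- running
    \- watch:
      \- pkg: apache
      \- file: /etc/httpd/conf/httpd.conf
      \- user: apache
      \- pkg: mod_python
.ft P
.fi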
.IP "Using extend with require or watch" .sp The \fBextend\fP statement works differently for \fBrequire\fP or \fBwatch\fP. It appends to, rather than replacing the requisite component. .RE .SS Understanding the Render System .sp Since SLS data is simply that (data), it does not need to be represented with YAML. Salt defaults to YAML because it is very straightforward and easy to learn and use. But the SLS files can be rendered from almost any imaginable medium, so long as a renderer module is provided. .sp The default rendering system is the \fByaml_jinja\fP renderer. The \fByaml_jinja\fP renderer will first pass the template through the \fI\%Jinja2\fP templating system, and then through the YAML parser. The benefit here is that full programming constructs are available when creating SLS files. .sp Other renderers available are \fByaml_mako\fP and \fByaml_wempy\fP which each use the \fI\%Mako\fP or \fI\%Wempy\fP templating system respectively rather than the jinja templating system, and more notably, the pure Python or \fBpy\fP, \fBpydsl\fP & \fBpyobjects\fP renderers. The \fBpy\fP renderer allows for SLS files to be written in pure Python, allowing for the utmost level of flexibility and power when preparing SLS data; while the \fBpydsl\fP renderer provides a flexible, domain\-specific language for authoring SLS data in Python; and the \fBpyobjects\fP renderer gives you a \fI\%"Pythonic"\fP interface to building state data. .IP Note The templating engines described above aren\(aqt just available in SLS files. They can also be used in \fBfile.managed\fP states, making file management much more dynamic and flexible. Some examples for using templates in managed files can be found in the documentation for the \fBfile states\fP, as well as the \fIMooseFS example\fP below. .RE .SS Getting to Know the Default \- yaml_jinja .sp The default renderer \- \fByaml_jinja\fP, allows for use of the jinja templating system. A guide to the Jinja templating system can be found here: \fI\%http://jinja.pocoo.org/docs\fP .sp When working with renderers a few very useful bits of data are passed in. In the case of templating engine based renderers, three critical components are available, \fBsalt\fP, \fBgrains\fP, and \fBpillar\fP. The \fBsalt\fP object allows for any Salt function to be called from within the template, and \fBgrains\fP allows for the Grains to be accessed from within the template. A few examples: .sp \fBapache/init.sls:\fP .sp .nf .ft C apache: pkg.installed: {% if grains[\(aqos\(aq] == \(aqRedHat\(aq%} \- name: httpd {% endif %} service.running: {% if grains[\(aqos\(aq] == \(aqRedHat\(aq%} \- name: httpd {% endif %} \- watch: \- pkg: apache \- file: /etc/httpd/conf/httpd.conf \- user: apache user.present: \- uid: 87 \- gid: 87 \- home: /var/www/html \- shell: /bin/nologin \- require: \- group: apache group.present: \- gid: 87 \- require: \- pkg: apache /etc/httpd/conf/httpd.conf: file.managed: \- source: salt://apache/httpd.conf \- user: root \- group: root \- mode: 644 .ft P .fi .sp This example is simple. If the \fBos\fP grain states that the operating system is Red Hat, then the name of the Apache package and service needs to be httpd. 
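.sp
For example, on a minion whose \fBos\fP grain is \fBRedHat\fP, the pkg and service portions of the above SLS would render to roughly the following before being parsed as YAML (a sketch of the rendered output, not a separate file):
.sp
.nf
.ft C
apache:
  pkg.installed:
    \- name: httpd
  service.running:
    \- name: httpd
    \- watch:
      \- pkg: apache
      \- file: /etc/httpd/conf/httpd.conf
      \- user: apache
.ft P
.fi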
.sp A more aggressive way to use Jinja can be found here, in a module to set up a MooseFS distributed filesystem chunkserver: .sp \fBmoosefs/chunk.sls:\fP .sp .nf .ft C include: \- moosefs {% for mnt in salt[\(aqcmd.run\(aq](\(aqls /dev/data/moose*\(aq).split() %} /mnt/moose{{ mnt[\-1] }}: mount.mounted: \- device: {{ mnt }} \- fstype: xfs \- mkmnt: True file.directory: \- user: mfs \- group: mfs \- require: \- user: mfs \- group: mfs {% endfor %} /etc/mfshdd.cfg: file.managed: \- source: salt://moosefs/mfshdd.cfg \- user: root \- group: root \- mode: 644 \- template: jinja \- require: \- pkg: mfs\-chunkserver /etc/mfschunkserver.cfg: file.managed: \- source: salt://moosefs/mfschunkserver.cfg \- user: root \- group: root \- mode: 644 \- template: jinja \- require: \- pkg: mfs\-chunkserver mfs\-chunkserver: pkg: \- installed mfschunkserver: service: \- running \- require: {% for mnt in salt[\(aqcmd.run\(aq](\(aqls /dev/data/moose*\(aq) %} \- mount: /mnt/moose{{ mnt[\-1] }} \- file: /mnt/moose{{ mnt[\-1] }} {% endfor %} \- file: /etc/mfschunkserver.cfg \- file: /etc/mfshdd.cfg \- file: /var/lib/mfs .ft P .fi .sp This example shows much more of the available power of Jinja. Multiple for loops are used to dynamically detect available hard drives and set them up to be mounted, and the \fBsalt\fP object is used multiple times to call shell commands to gather data. .SS Introducing the Python, PyDSL and the Pyobjects Renderers .sp Sometimes the chosen default renderer might not have enough logical power to accomplish the needed task. When this happens, the Python renderer can be used. Normally a YAML renderer should be used for the majority of SLS files, but an SLS file set to use another renderer can be easily added to the tree. .sp This example shows a very basic Python SLS file: .sp \fBpython/django.sls:\fP .sp .nf .ft C #!py def run(): \(aq\(aq\(aq Install the django package \(aq\(aq\(aq return {\(aqinclude\(aq: [\(aqpython\(aq], \(aqdjango\(aq: {\(aqpkg\(aq: [\(aqinstalled\(aq]}} .ft P .fi .sp This is a very simple example; the first line has an SLS shebang that tells Salt to not use the default renderer, but to use the \fBpy\fP renderer. Then the run function is defined; the return value from the run function must be a Salt\-friendly data structure, better known as a Salt \fBHighState data structure\fP. .sp Alternatively, using the \fBpydsl\fP renderer, the above example can be written more succinctly as: .sp .nf .ft C #!pydsl include(\(aqpython\(aq, delayed=True) state(\(aqdjango\(aq).pkg.installed() .ft P .fi .sp The \fBpyobjects\fP renderer provides an \fI\%"Pythonic"\fP object\-based approach for building the state data. The above example could be written as: .sp .nf .ft C #!pyobjects include(\(aqpython\(aq) Pkg.installed("django") .ft P .fi .sp These Python examples would look like this if they were written in YAML: .sp .nf .ft C include: \- python django: pkg.installed .ft P .fi .sp This example clearly illustrates two things: one, using the YAML renderer by default is a wise decision; and two, unbridled power can be obtained where needed by using a pure Python SLS. .SS Running and Debugging Salt States .sp Once the rules in an SLS are ready, they should be tested to ensure they work properly. To invoke these rules, simply execute \fBsalt \(aq*\(aq state.highstate\fP on the command line. If you get back only hostnames with a \fB:\fP after, but no return, chances are there is a problem with one or more of the sls files. 
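.sp
In that failure case the output might look something like the following, with nothing listed under each minion id (the ids shown here are hypothetical):
.sp
.nf
.ft C
minion1:
minion2:
.ft P
.fi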
On the minion, use the \fBsalt\-call\fP command: \fBsalt\-call state.highstate \-l debug\fP to examine the output for errors. This should help troubleshoot the issue. The minions can also be started in the foreground in debug mode: \fBsalt\-minion \-l debug\fP. .SS Next Reading .sp With an understanding of states, the next recommendation is to become familiar with Salt\(aqs pillar interface: .INDENT 0.0 .INDENT 3.5 \fBPillar Walkthrough\fP .UNINDENT .UNINDENT .SS States tutorial, part 1 \- Basic Usage .sp The purpose of this tutorial is to demonstrate how quickly you can configure a system to be managed by Salt States. For detailed information about the state system please refer to the full \fBstates reference\fP. .sp This tutorial will walk you through using Salt to configure a minion to run the Apache HTTP server and to ensure the server is running. .sp \fBBefore continuing\fP make sure you have a working Salt installation by following the \fBinstallation\fP and the \fBconfiguration\fP instructions. .IP "Stuck?" .sp There are many ways to \fIget help from the Salt community\fP including our \fI\%mailing list\fP and our \fI\%IRC channel\fP #salt. .RE .SS Setting up the Salt State Tree .sp States are stored in text files on the master and transferred to the minions on demand via the master\(aqs File Server. The collection of state files make up the \fBState Tree\fP. .sp To start using a central state system in Salt, the Salt File Server must first be set up. Edit the master config file (\fBfile_roots\fP) and uncomment the following lines: .sp .nf .ft C file_roots: base: \- /srv/salt .ft P .fi .IP Note If you are deploying on FreeBSD via ports, the \fBfile_roots\fP path defaults to \fB/usr/local/etc/salt/states\fP. .RE .sp Restart the Salt master in order to pick up this change: .sp .nf .ft C pkill salt\-master salt\-master \-d .ft P .fi .SS Preparing the Top File .sp On the master, in the directory uncommented in the previous step, (\fB/srv/salt\fP by default), create a new file called \fBtop.sls\fP and add the following: .sp .nf .ft C base: \(aq*\(aq: \- webserver .ft P .fi .sp The \fItop file\fP is separated into environments (discussed later). The default environment is \fBbase\fP. Under the \fBbase\fP environment a collection of minion matches is defined; for now simply specify all hosts (\fB*\fP). .IP "Targeting minions" .sp The expressions can use any of the targeting mechanisms used by Salt — minions can be matched by glob, PCRE regular expression, or by \fBgrains\fP. For example: .sp .nf .ft C base: \(aqos:Fedora\(aq: \- match: grain \- webserver .ft P .fi .RE .SS Create an \fBsls\fP file .sp In the same directory as the \fItop file\fP, create a file named \fBwebserver.sls\fP, containing the following: .sp .nf .ft C apache: # ID declaration pkg: # state declaration \- installed # function declaration .ft P .fi .sp The first line, called the \fIid\-declaration\fP, is an arbitrary identifier. In this case it defines the name of the package to be installed. .IP Note The package name for the Apache httpd web server may differ depending on OS or distro — for example, on Fedora it is \fBhttpd\fP but on Debian/Ubuntu it is \fBapache2\fP. .RE .sp The second line, called the \fIstate\-declaration\fP, defines which of the Salt States we are using. In this example, we are using the \fBpkg state\fP to ensure that a given package is installed. .sp The third line, called the \fIfunction\-declaration\fP, defines which function in the \fBpkg state\fP module to call. 
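.sp
The state and function declarations can also be combined into the dotted shorthand used elsewhere in this document; the following is equivalent to the example above:
.sp
.nf
.ft C
apache:
  pkg.installed
.ft P
.fi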
.IP "Renderers" .sp States \fBsls\fP files can be written in many formats. Salt requires only a simple data structure and is not concerned with how that data structure is built. Templating languages and \fI\%DSLs\fP are a dime\-a\-dozen and everyone has a favorite. .sp Building the expected data structure is the job of Salt \fBrenderers\fP and they are dead\-simple to write. .sp In this tutorial we will be using YAML in Jinja2 templates, which is the default format. The default can be changed by editing \fBrenderer\fP in the master configuration file. .RE .SS Install the package .sp Next, let\(aqs run the state we created. Open a terminal on the master and run: .sp .nf .ft C % salt \(aq*\(aq state.highstate .ft P .fi .sp Our master is instructing all targeted minions to run \fBstate.highstate\fP. When a minion executes a highstate call it will download the \fItop file\fP and attempt to match the expressions. When it does match an expression the modules listed for it will be downloaded, compiled, and executed. .sp Once completed, the minion will report back with a summary of all actions taken and all changes made. .IP Warning If you have created \fIcustom grain modules\fP, they will not be available in the top file until after the first \fIhighstate\fP. To make custom grains available on a minion\(aqs first highstate, it is recommended to use \fIthis example\fP to ensure that the custom grains are synced when the minion starts. .RE .IP "SLS File Namespace" .sp Note that in the \fIexample\fP above, the SLS file \fBwebserver.sls\fP was referred to simply as \fBwebserver\fP. The namespace for SLS files follows a few simple rules: .INDENT 0.0 .IP 1. 3 The \fB.sls\fP is discarded (i.e. \fBwebserver.sls\fP becomes \fBwebserver\fP). .IP 2. 3 .INDENT 3.0 .TP .B Subdirectories can be used for better organization. .INDENT 7.0 .IP a. 3 Each subdirectory is represented by a dot. .IP b. 3 \fBwebserver/dev.sls\fP is referred to as \fBwebserver.dev\fP. .UNINDENT .UNINDENT .IP 3. 3 A file called \fBinit.sls\fP in a subdirectory is referred to by the path of the directory. So, \fBwebserver/init.sls\fP is referred to as \fBwebserver\fP. .IP 4. 3 If both \fBwebserver.sls\fP and \fBwebserver/init.sls\fP happen to exist, \fBwebserver/init.sls\fP will be ignored and \fBwebserver.sls\fP will be the file referred to as \fBwebserver\fP. .UNINDENT .RE .IP "Troubleshooting Salt" .sp If the expected output isn\(aqt seen, the following tips can help to narrow down the problem. .INDENT 0.0 .TP .B Turn up logging Salt can be quite chatty when you change the logging setting to \fBdebug\fP: .sp .nf .ft C salt\-minion \-l debug .ft P .fi .TP .B Run the minion in the foreground By not starting the minion in daemon mode (\fI\-d\fP) one can view any output from the minion as it works: .sp .nf .ft C salt\-minion & .ft P .fi .UNINDENT .sp Increase the default timeout value when running \fBsalt\fP. For example, to change the default timeout to 60 seconds: .sp .nf .ft C salt \-t 60 .ft P .fi .sp For best results, combine all three: .sp .nf .ft C salt\-minion \-l debug & # On the minion salt \(aq*\(aq state.highstate \-t 60 # On the master .ft P .fi .RE .SS Next steps .sp This tutorial focused on getting a simple Salt States configuration working. \fBPart 2\fP will build on this example to cover more advanced \fBsls\fP syntax and will explore more of the states that ship with Salt. .SS States tutorial, part 2 \- More Complex States, Requisites .IP Note This tutorial builds on topics covered in \fBpart 1\fP. 
It is recommended that you begin there. .RE .sp In the \fBlast part\fP of the Salt States tutorial we covered the basics of installing a package. We will now modify our \fBwebserver.sls\fP file to have requirements, and use even more Salt States. .SS Call multiple States .sp You can specify multiple \fIstate\-declaration\fP under an \fIid\-declaration\fP. For example, a quick modification to our \fBwebserver.sls\fP to also start Apache if it is not running: .sp .nf .ft C apache: pkg: \- installed service: \- running \- require: \- pkg: apache .ft P .fi .sp Try stopping Apache before running \fBstate.highstate\fP once again and observe the output. .SS Expand the SLS module .sp As you have seen, SLS modules are appended with the file extension \fB.sls\fP and are referenced by name starting at the root of the state tree. An SLS module can also be defined as a directory. Demonstrate that now by creating a directory named \fBwebserver\fP and moving and renaming \fBwebserver.sls\fP to \fBwebserver/init.sls\fP. Your state directory should now look like this: .sp .nf .ft C |\- top.sls \(ga\- webserver/ \(ga\- init.sls .ft P .fi .IP "Organizing SLS modules" .sp You can place additional \fB.sls\fP files in a state file directory. This affords much cleaner organization of your state tree on the filesystem. For example, if we created a \fBwebserver/django.sls\fP file, that module would be referenced as \fBwebserver.django\fP. .sp In addition, States provide powerful includes and extending functionality which we will cover in \fBPart 3\fP. .RE .SS Require other states .sp We now have a working installation of Apache so let\(aqs add an HTML file to customize our website. It isn\(aqt exactly useful to have a website without a webserver so we don\(aqt want Salt to install our HTML file until Apache is installed and running. Include the following at the bottom of your \fBwebserver/init.sls\fP file: .sp .nf .ft C apache: pkg: \- installed service: \- running \- require: \- pkg: apache /var/www/index.html: # ID declaration file: # state declaration \- managed # function \- source: salt://webserver/index.html # function arg \- require: # requisite declaration \- pkg: apache # requisite reference .ft P .fi .sp \fBLine 9\fP is the \fIid\-declaration\fP. In this example it is the location we want to install our custom HTML file. (\fBNote:\fP the default location that Apache serves may differ from the above on your OS or distro. \fB/srv/www\fP could also be a likely place to look.) .sp \fBLine 10\fP is the \fIstate\-declaration\fP. This example uses the Salt \fBfile state\fP. .sp \fBLine 11\fP is the \fIfunction\-declaration\fP. The \fBmanaged function\fP will download a file from the master and install it in the location specified. .sp \fBLine 12\fP is a \fIfunction\-arg\-declaration\fP which, in this example, passes the \fBsource\fP argument to the \fBmanaged function\fP. .sp \fBLine 13\fP is a \fIrequisite\-declaration\fP. .sp \fBLine 14\fP is a \fIrequisite\-reference\fP which refers to a state and an ID. In this example, it is referring to the \fBID declaration\fP from our example in \fBpart 1\fP. This declaration tells Salt not to install the HTML file until Apache is installed. .sp Next, create the \fBindex.html\fP file and save it in the \fBwebserver\fP directory: .sp .nf .ft C
<html>
    <head><title>Salt rocks</title></head>
    <body>
        <h1>This file brought to you by Salt</h1>
    </body>
</html>
.ft P .fi .sp Last, call \fBstate.highstate\fP again and the minion will fetch and execute the highstate as well as our HTML file from the master using Salt\(aqs File Server: .sp .nf .ft C salt \(aq*\(aq state.highstate .ft P .fi .sp Verify that Apache is now serving your custom HTML. .IP "\fBrequire\fP vs. \fBwatch\fP" .sp There are two \fIrequisite\-declaration\fP, “require” and “watch”. Not every state supports “watch”. The \fBservice state\fP does support “watch” and will restart a service based on the watch condition. .sp For example, if you use Salt to install an Apache virtual host configuration file and want to restart Apache whenever that file is changed you could modify our Apache example from earlier as follows: .sp .nf .ft C /etc/httpd/extra/httpd\-vhosts.conf: file: \- managed \- source: salt://webserver/httpd\-vhosts.conf apache: pkg: \- installed service: \- running \- watch: \- file: /etc/httpd/extra/httpd\-vhosts.conf \- require: \- pkg: apache .ft P .fi .sp If the pkg and service names differ on your OS or distro of choice you can specify each one separately using a \fIname\-declaration\fP which explained in \fBPart 3\fP. .RE .SS Next steps .sp In \fBpart 3\fP we will discuss how to use includes, extends and templating to make a more complete State Tree configuration. .SS States tutorial, part 3 \- Templating, Includes, Extends .IP Note This tutorial builds on topics covered in \fBpart 1\fP and \fBpart 2\fP. It is recommended that you begin there. .RE .sp This part of the tutorial will cover more advanced templating and configuration techniques for \fBsls\fP files. .SS Templating SLS modules .sp SLS modules may require programming logic or inline execution. This is accomplished with module templating. The default module templating system used is \fI\%Jinja2\fP and may be configured by changing the \fBrenderer\fP value in the master config. .sp All states are passed through a templating system when they are initially read. To make use of the templating system, simply add some templating markup. An example of an sls module with templating markup may look like this: .sp .nf .ft C {% for usr in [\(aqmoe\(aq,\(aqlarry\(aq,\(aqcurly\(aq] %} {{ usr }}: user.present {% endfor %} .ft P .fi .sp This templated sls file once generated will look like this: .sp .nf .ft C moe: user.present larry: user.present curly: user.present .ft P .fi .sp Here\(aqs a more complex example: .sp .nf .ft C {% for usr in \(aqmoe\(aq,\(aqlarry\(aq,\(aqcurly\(aq %} {{ usr }}: group: \- present user: \- present \- gid_from_name: True \- require: \- group: {{ usr }} {% endfor %} .ft P .fi .SS Using Grains in SLS modules .sp Often times a state will need to behave differently on different systems. \fBSalt grains\fP objects are made available in the template context. The \fIgrains\fP can be used from within sls modules: .sp .nf .ft C apache: pkg.installed: {% if grains[\(aqos\(aq] == \(aqRedHat\(aq %} \- name: httpd {% elif grains[\(aqos\(aq] == \(aqUbuntu\(aq %} \- name: apache2 {% endif %} .ft P .fi .SS Calling Salt modules from templates .sp All of the Salt modules loaded by the minion are available within the templating system. This allows data to be gathered in real time on the target system. It also allows for shell commands to be run easily from within the sls modules. 
.sp The Salt module functions are also made available in the template context as \fBsalt:\fP .sp .nf .ft C moe: user: \- present \- gid: {{ salt[\(aqfile.group_to_gid\(aq](\(aqsome_group_that_exists\(aq) }} .ft P .fi .sp Note that for the above example to work, \fBsome_group_that_exists\fP must exist before the state file is processed by the templating engine. .sp Below is an example that uses the \fBnetwork.hw_addr\fP function to retrieve the MAC address for eth0: .sp .nf .ft C salt[\(aqnetwork.hw_addr\(aq](\(aqeth0\(aq) .ft P .fi .SS Advanced SLS module syntax .sp Lastly, we will cover some incredibly useful techniques for more complex State trees. .SS Include declaration .sp A previous example showed how to spread a Salt tree across several files. Similarly, \fBrequisites\fP span multiple files by using an \fIinclude\-declaration\fP. For example: .sp \fBpython/python\-libs.sls:\fP .sp .nf .ft C python\-dateutil: pkg.installed .ft P .fi .sp \fBpython/django.sls:\fP .sp .nf .ft C include: \- python.python\-libs django: pkg.installed: \- require: \- pkg: python\-dateutil .ft P .fi .SS Extend declaration .sp You can modify previous declarations by using an \fIextend\-declaration\fP. For example the following modifies the Apache tree to also restart Apache when the vhosts file is changed: .sp \fBapache/apache.sls:\fP .sp .nf .ft C apache: pkg.installed .ft P .fi .sp \fBapache/mywebsite.sls:\fP .sp .nf .ft C include: \- apache.apache extend: apache: service: \- running \- watch: \- file: /etc/httpd/extra/httpd\-vhosts.conf /etc/httpd/extra/httpd\-vhosts.conf: file.managed: \- source: salt://apache/httpd\-vhosts.conf .ft P .fi .IP "Using extend with require or watch" .sp The \fBextend\fP statement works differently for \fBrequire\fP or \fBwatch\fP. It appends to, rather than replacing the requisite component. .RE .SS Name declaration .sp You can override the \fIid\-declaration\fP by using a \fIname\-declaration\fP. For example, the previous example is a bit more maintainable if rewritten as follows: .sp \fBapache/mywebsite.sls:\fP .sp .nf .ft C include: \- apache.apache extend: apache: service: \- running \- watch: \- file: mywebsite mywebsite: file.managed: \- name: /etc/httpd/extra/httpd\-vhosts.conf \- source: salt://apache/httpd\-vhosts.conf .ft P .fi .SS Names declaration .sp Even more powerful is using a \fInames\-declaration\fP to override the \fIid\-declaration\fP for multiple states at once. This often can remove the need for looping in a template. For example, the first example in this tutorial can be rewritten without the loop: .sp .nf .ft C stooges: user.present: \- names: \- moe \- larry \- curly .ft P .fi .SS Next steps .sp In \fBpart 4\fP we will discuss how to use salt\(aqs \fBfile_roots\fP to set up a workflow in which states can be "promoted" from dev, to QA, to production. .SS States tutorial, part 4 .IP Note This tutorial builds on topics covered in \fBpart 1\fP, \fBpart 2\fP and \fBpart 3\fP. It is recommended that you begin there. .RE .sp This part of the tutorial will show how to use salt\(aqs \fBfile_roots\fP to set up a workflow in which states can be "promoted" from dev, to QA, to production. 
.SS Salt fileserver path inheritance .sp Salt\(aqs fileserver allows for more than one root directory per environment, like in the below example, which uses both a local directory and a secondary location shared to the salt master via NFS: .sp .nf .ft C # In the master config file (/etc/salt/master) file_roots: base: \- /srv/salt \- /mnt/salt\-nfs/base .ft P .fi .sp Salt\(aqs fileserver collapses the list of root directories into a single virtual environment containing all files from each root. If the same file exists at the same relative path in more than one root, then the top\-most match "wins". For example, if \fB/srv/salt/foo.txt\fP and \fB/mnt/salt\-nfs/base/foo.txt\fP both exist, then \fBsalt://foo.txt\fP will point to \fB/srv/salt/foo.txt\fP. .SS Environment configuration .sp Configure a multiple\-environment setup like so: .sp .nf .ft C file_roots: base: \- /srv/salt/prod qa: \- /srv/salt/qa \- /srv/salt/prod dev: \- /srv/salt/dev \- /srv/salt/qa \- /srv/salt/prod .ft P .fi .sp Given the path inheritance described above, files within \fB/srv/salt/prod\fP would be available in all environments. Files within \fB/srv/salt/qa\fP would be available in both \fBqa\fP, and \fBdev\fP. Finally, the files within \fB/srv/salt/dev\fP would only be available within the \fBdev\fP environment. .sp Based on the order in which the roots are defined, new files/states can be placed within \fB/srv/salt/dev\fP, and pushed out to the dev hosts for testing. .sp Those files/states can then be moved to the same relative path within \fB/srv/salt/qa\fP, and they are now available only in the \fBdev\fP and \fBqa\fP environments, allowing them to be pushed to QA hosts and tested. .sp Finally, if moved to the same relative path within \fB/srv/salt/prod\fP, the files are now available in all three environments. .SS Practical Example .sp As an example, consider a simple website, installed to \fB/var/www/foobarcom\fP. Below is a top.sls that can be used to deploy the website: .sp \fB/srv/salt/prod/top.sls:\fP .sp .nf .ft C base: \(aqweb*prod*\(aq: \- webserver.foobarcom qa: \(aqweb*qa*\(aq: \- webserver.foobarcom dev: \(aqweb*dev*\(aq: \- webserver.foobarcom .ft P .fi .sp Using pillar, roles can be assigned to the hosts: .sp \fB/srv/pillar/top.sls:\fP .sp .nf .ft C base: \(aqweb*prod*\(aq: \- webserver.prod \(aqweb*qa*\(aq: \- webserver.qa \(aqweb*dev*\(aq: \- webserver.dev .ft P .fi .sp \fB/srv/pillar/webserver/prod.sls:\fP .sp .nf .ft C webserver_role: prod .ft P .fi .sp \fB/srv/pillar/webserver/qa.sls:\fP .sp .nf .ft C webserver_role: qa .ft P .fi .sp \fB/srv/pillar/webserver/dev.sls:\fP .sp .nf .ft C webserver_role: dev .ft P .fi .sp And finally, the SLS to deploy the website: .sp \fB/srv/salt/prod/webserver/foobarcom.sls:\fP .sp .nf .ft C {% if pillar.get(\(aqwebserver_role\(aq, \(aq\(aq) %} /var/www/foobarcom: file.recurse: \- source: salt://webserver/src/foobarcom \- env: {{ pillar[\(aqwebserver_role\(aq] }} \- user: www \- group: www \- dir_mode: 755 \- file_mode: 644 {% endif %} .ft P .fi .sp Given the above SLS, the source for the website should initially be placed in \fB/srv/salt/dev/webserver/src/foobarcom\fP. .sp First, let\(aqs deploy to dev. 
Given the configuration in the top file, this can be done using \fBstate.highstate\fP: .sp .nf .ft C salt \-\-pillar \(aqwebserver_role:dev\(aq state.highstate .ft P .fi .sp However, in the event that it is not desirable to apply all states configured in the top file (which could be likely in more complex setups), it is possible to apply just the states for the \fBfoobarcom\fP website, using \fBstate.sls\fP: .sp .nf .ft C salt \-\-pillar \(aqwebserver_role:dev\(aq state.sls webserver.foobarcom .ft P .fi .sp Once the site has been tested in dev, then the files can be moved from \fB/srv/salt/dev/webserver/src/foobarcom\fP to \fB/srv/salt/qa/webserver/src/foobarcom\fP, and deployed using the following: .sp .nf .ft C salt \-\-pillar \(aqwebserver_role:qa\(aq state.sls webserver.foobarcom .ft P .fi .sp Finally, once the site has been tested in qa, then the files can be moved from \fB/srv/salt/qa/webserver/src/foobarcom\fP to \fB/srv/salt/prod/webserver/src/foobarcom\fP, and deployed using the following: .sp .nf .ft C salt \-\-pillar \(aqwebserver_role:prod\(aq state.sls webserver.foobarcom .ft P .fi .sp Thanks to Salt\(aqs fileserver inheritance, even though the files have been moved to within \fB/srv/salt/prod\fP, they are still available from the same \fBsalt://\fP URI in both the qa and dev environments. .SS Continue Learning .sp The best way to continue learning about Salt States is to read through the \fBreference documentation\fP and to look through examples of existing state trees. Many pre\-configured state trees can be found on Github in the \fI\%saltstack-formulas\fP collection of repositories. .sp If you have any questions, suggestions, or just want to chat with other people who are using Salt, we have a very \fIactive community\fP and we\(aqd love to hear from you. .sp In addition, by continuing to \fBpart 5\fP, you can learn about the powerful orchestration of which Salt is capable. .SS States Tutorial, Part 5 \- Orchestration with Salt .IP Note This tutorial builds on some of the topics covered in the earlier \fBStates Walkthrough\fP pages. It is recommended to start with \fBPart 1\fP if you are not familiar with how to use states. .RE .sp Orchestration can be accomplished in two distinct ways: .INDENT 0.0 .IP 1. 3 The \fIOverState System\fP. Added in version 0.11.0, this Salt \fBRunner\fP allows for SLS files to be organized into stages, and to require that one or more stages successfully execute before another stage will run. .IP 2. 3 The \fIOrchestrate Runner\fP. Added in version 0.17.0, this Salt \fBRunner\fP can use the full suite of \fBrequisites\fP available in states, and can also execute states/functions using salt\-ssh. This runner was designed with the eventual goal of replacing the \fIOverState\fP. .UNINDENT .SS The OverState System .sp Often, servers need to be set up and configured in a specific order, and systems should only be set up if systems earlier in the sequence have been set up without any issues. .sp The OverState system can be used to orchestrate deployment in a smooth and reliable way across multiple systems in small to large environments. .SS The OverState SLS .sp The OverState system is managed by an SLS file named \fBoverstate.sls\fP, located in the root of a Salt fileserver environment. .sp The overstate.sls configures an unordered list of stages, each stage defines the minions on which to execute the state, and can define what sls files to run, execute a \fBstate.highstate\fP, or execute a function. 
Here\(aqs a sample \fBoverstate.sls\fP: .sp .nf .ft C mysql: match: \(aqdb*\(aq sls: \- mysql.server \- drbd webservers: match: \(aqweb*\(aq require: \- mysql all: match: \(aq*\(aq require: \- mysql \- webservers .ft P .fi .IP Note The \fBmatch\fP argument uses \fIcompound matching\fP .RE .sp Given the above setup, the OverState will be carried out as follows: .INDENT 0.0 .IP 1. 3 The \fBmysql\fP stage will be executed first because it is required by the \fBwebservers\fP and \fBall\fP stages. It will execute \fBstate.sls\fP once for each of the two listed SLS targets (\fBmysql.server\fP and \fBdrbd\fP). These states will be executed on all minions whose minion ID starts with "db". .IP 2. 3 The \fBwebservers\fP stage will then be executed, but only if the \fBmysql\fP stage executes without any failures. The \fBwebservers\fP stage will execute a \fBstate.highstate\fP on all minions whose minion IDs start with "web". .IP 3. 3 Finally, the \fBall\fP stage will execute, running \fBstate.highstate\fP on all systems, if and only if the \fBmysql\fP and \fBwebservers\fP stages completed without any failures. .UNINDENT .sp Any failure in the above steps would cause the requires to fail, preventing the dependent stages from executing. .SS Using Functions with OverState .sp In the above example, you\(aqll notice that the stages lacking an \fBsls\fP entry run a \fBstate.highstate\fP. As mentioned earlier, it is also possible to execute other functions in a stage. This functionality was added in version 0.15.0. .sp Running a function is easy: .sp .nf .ft C http: function: pkg.install: \- httpd .ft P .fi .sp The list of function arguments are defined after the declared function. So, the above stage would run \fBpkg.install http\fP. Requisites only function properly if the given function supports returning a custom return code. .SS Executing an OverState .sp Since the OverState is a \fBRunner\fP, it is executed using the \fBsalt\-run\fP command. The runner function for the OverState is \fBstate.over\fP. .sp .nf .ft C salt\-run state.over .ft P .fi .sp The function will by default look in the root of the \fBbase\fP environment (as defined in \fBfile_roots\fP) for a file called \fBoverstate.sls\fP, and then execute the stages defined within that file. .sp Different environments and paths can be used as well, by adding them as positional arguments: .sp .nf .ft C salt\-run state.over dev /root/other\-overstate.sls .ft P .fi .sp The above would run an OverState using the \fBdev\fP fileserver environment, with the stages defined in \fB/root/other\-overstate.sls\fP. .IP Warning Since these are positional arguments, when defining the path to the overstate file the environment must also be specified, even if it is the \fBbase\fP environment. .RE .IP Note Remember, salt\-run is always executed on the master. .RE .SS The Orchestrate Runner .sp New in version 0.17.0. .sp As noted above in the introduction, the Orchestrate Runner (originally called the state.sls runner) offers all the functionality of the OverState, but with a couple advantages: .INDENT 0.0 .IP \(bu 2 All \fBrequisites\fP available in states can be used. .IP \(bu 2 The states/functions can be executed using salt\-ssh. .UNINDENT .sp The Orchestrate Runner was added with the intent to eventually deprecate the OverState system, however the OverState will still be maintained for the foreseeable future. 
.SS Configuration Syntax .sp The configuration differs slightly from that of the OverState, and more closely resembles the configuration schema used for states. .sp To execute a state, use \fBsalt.state\fP: .sp .nf .ft C install_nginx: salt.state: \- tgt: \(aqweb*\(aq \- sls: \- nginx .ft P .fi .sp To execute a function, use \fBsalt.function\fP: .sp .nf .ft C cmd.run: salt.function: \- tgt: \(aq*\(aq \- arg: \- rm \-rf /tmp/foo .ft P .fi .SS Triggering a Highstate .sp Whereas with the OverState, a Highstate is run by simply omitting an \fBsls\fP or \fBfunction\fP argument, with the Orchestrate Runner the Highstate must explicitly be requested by using \fBhighstate: True\fP: .sp .nf .ft C webserver_setup: salt.state: \- tgt: \(aqweb*\(aq \- highstate: True .ft P .fi .SS Executing the Orchestrate Runner .sp The Orchestrate Runner can be executed using the \fBstate.orchestrate\fP runner function. \fBstate.orch\fP also works, for those that would like to type less. .sp Assuming that your \fBbase\fP environment is located at \fB/srv/salt\fP, and you have placed a configuration file in \fB/srv/salt/orchestration/webserver.sls\fP, then either of the following could be used: .sp .nf .ft C salt\-run state.orchestrate orchestration.webserver salt\-run state.orch orchestration.webserver .ft P .fi .sp Changed in version 2014.1.1. .SS More Complex Orchestration .sp Many states/functions can be configured in a single file, which, when combined with the full suite of \fBrequisites\fP, can be used to easily configure complex orchestration tasks. Additionally, the states/functions will be executed in the order in which they are defined, unless prevented from doing so by any \fBrequisites\fP, as is the default in SLS files since 0.17.0. .sp .nf .ft C cmd.run: salt.function: \- tgt: 10.0.0.0/24 \- tgt_type: ipcidr \- arg: \- bootstrap storage_setup: salt.state: \- tgt: \(aqrole:storage\(aq \- tgt_type: grain \- sls: ceph \- require: \- salt: webserver_setup webserver_setup: salt.state: \- tgt: \(aqweb*\(aq \- highstate: True .ft P .fi .sp Given the above setup, the orchestration will be carried out as follows: .INDENT 0.0 .IP 1. 3 The shell command \fBbootstrap\fP will be executed on all minions in the 10.0.0.0/24 subnet. .IP 2. 3 A Highstate will be run on all minions whose ID starts with "web", since the \fBstorage_setup\fP state requires it. .IP 3. 3 Finally, the \fBceph\fP SLS target will be executed on all minions which have a grain called \fBrole\fP with a value of \fBstorage\fP. .UNINDENT .SS Advanced Topics .SS SaltStack Walk\-through .IP Note Welcome to SaltStack! I am excited that you are interested in Salt and starting down the path to better infrastructure management. I developed (and am continuing to develop) Salt with the goal of making the best software available to manage computers of almost any kind. I hope you enjoy working with Salt and that the software can solve your real world needs! .INDENT 0.0 .IP \(bu 2 Thomas S Hatch .IP \(bu 2 Salt creator and Chief Developer .IP \(bu 2 CTO of SaltStack, Inc. .UNINDENT .RE .SS Getting Started .SS What is Salt? .sp Salt is a different approach to infrastructure management, founded on the idea that high\-speed communication with large numbers of systems can open up new capabilities. This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure. .sp The backbone of Salt is the remote execution engine, which creates a high\-speed, secure and bi\-directional communication net for groups of systems. 
On top of this communication system, Salt provides an extremely fast, flexible and easy\-to\-use configuration management system called \fBSalt States\fP. .SS Installing Salt .sp SaltStack has been made to be very easy to install and get started. Setting up Salt should be as easy as installing Salt via distribution packages on Linux or via the Windows installer. The \fBinstallation documents\fP cover platform\-specific installation in depth. .SS Starting Salt .sp Salt functions on a master/minion topology. A master server acts as a central control bus for the clients, which are called \fBminions\fP. The minions connect back to the master. .SS Setting Up the Salt Master .sp Turning on the Salt Master is easy \-\- just turn it on! The default configuration is suitable for the vast majority of installations. The Salt Master can be controlled by the local Linux/Unix service manager: .sp On Systemd based platforms (OpenSuse, Fedora): .sp .nf .ft C systemctl start salt\-master .ft P .fi .sp On Upstart based systems (Ubuntu, older Fedora/RHEL): .sp .nf .ft C service salt\-master start .ft P .fi .sp On SysV Init systems (Debian, Gentoo etc.): .sp .nf .ft C /etc/init.d/salt\-master start .ft P .fi .sp Alternatively, the Master can be started directly on the command\-line: .sp .nf .ft C salt\-master \-d .ft P .fi .sp The Salt Master can also be started in the foreground in debug mode, thus greatly increasing the command output: .sp .nf .ft C salt\-master \-l debug .ft P .fi .sp The Salt Master needs to bind to two TCP network ports on the system. These ports are \fB4505\fP and \fB4506\fP. For more in depth information on firewalling these ports, the firewall tutorial is available \fBhere\fP. .SS Setting up a Salt Minion .IP Note The Salt Minion can operate with or without a Salt Master. This walk\-through assumes that the minion will be connected to the master, for information on how to run a master\-less minion please see the master\-less quick\-start guide: .sp \fBMasterless Minion Quickstart\fP .RE .sp The Salt Minion only needs to be aware of one piece of information to run, the network location of the master. .sp By default the minion will look for the DNS name \fBsalt\fP for the master, making the easiest approach to set internal DNS to resolve the name \fBsalt\fP back to the Salt Master IP. .sp Otherwise, the minion configuration file will need to be edited so that the configuration option \fBmaster\fP points to the DNS name or the IP of the Salt Master: .IP Note The default location of the configuration files is \fB/etc/salt\fP. Most platforms adhere to this convention, but platforms such as FreeBSD and Microsoft Windows place this file in different locations. .RE .sp \fB/etc/salt/minion:\fP .sp .nf .ft C master: saltmaster.example.com .ft P .fi .sp Now that the master can be found, start the minion in the same way as the master; with the platform init system or via the command line directly: .sp As a daemon: .sp .nf .ft C salt\-minion \-d .ft P .fi .sp In the foreground in debug mode: .sp .nf .ft C salt\-minion \-l debug .ft P .fi .sp When the minion is started, it will generate an \fBid\fP value, unless it has been generated on a previous run and cached in the configuration directory, which is \fB/etc/salt\fP by default. This is the name by which the minion will attempt to authenticate to the master. The following steps are attempted, in order to try to find a value that is not \fBlocalhost\fP: .INDENT 0.0 .IP 1. 3 The Python function \fBsocket.getfqdn()\fP is run .IP 2. 
3 \fB/etc/hostname\fP is checked (non\-Windows only) .IP 3. 3 \fB/etc/hosts\fP (\fB%WINDIR%\esystem32\edrivers\eetc\ehosts\fP on Windows hosts) is checked for hostnames that map to anything within \fB127.0.0.0/8\fP. .UNINDENT .sp If none of the above are able to produce an id which is not \fBlocalhost\fP, then a sorted list of IP addresses on the minion (excluding any within \fB127.0.0.0/8\fP) is inspected. The first publicly\-routable IP address is used, if there is one. Otherwise, the first privately\-routable IP address is used. .sp If all else fails, then \fBlocalhost\fP is used as a fallback. .IP Note Overriding the \fBid\fP .sp The minion id can be manually specified using the \fBid\fP parameter in the minion config file. If this configuration value is specified, it will override all other sources for the \fBid\fP. .RE .sp Now that the minion is started, it will generate cryptographic keys and attempt to connect to the master. The next step is to venture back to the master server and accept the new minion\(aqs public key. .SS Using salt\-key .sp Salt authenticates minions using public\-key encryption and authentication. For a minion to start accepting commands from the master, the minion keys need to be accepted by the master. .sp The \fBsalt\-key\fP command is used to manage all of the keys on the master. To list the keys that are on the master: .sp .nf .ft C salt\-key \-L .ft P .fi .sp The keys that have been rejected, accepted and pending acceptance are listed. The easiest way to accept the minion key is to accept all pending keys: .sp .nf .ft C salt\-key \-A .ft P .fi .IP Note Keys should be verified! The secure thing to do before accepting a key is to run \fBsalt\-key \-f minion\-id\fP to print the fingerprint of the minion\(aqs public key. This fingerprint can then be compared against the fingerprint generated on the minion. .sp On the master: .sp .nf .ft C # salt\-key \-f foo.domain.com Unaccepted Keys: foo.domain.com: 39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9 .ft P .fi .sp On the minion: .sp .nf .ft C # salt\-call key.finger \-\-local local: 39:f9:e4:8a:aa:74:8d:52:1a:ec:92:03:82:09:c8:f9 .ft P .fi .sp If they match, approve the key with \fBsalt\-key \-a foo.domain.com\fP. .RE .SS Sending the First Commands .sp Now that the minion is connected to the master and authenticated, the master can start to command the minion. .sp Salt commands allow for a vast set of functions to be executed and for specific minions and groups of minions to be targeted for execution. .sp The \fBsalt\fP command is comprised of command options, target specification, the function to execute, and arguments to the function. .sp A simple command to start with looks like this: .sp .nf .ft C salt \(aq*\(aq test.ping .ft P .fi .sp The \fB*\fP is the target, which specifies all minions. .sp \fBtest.ping\fP tells the minion to run the \fBtest.ping\fP function. .sp In the case of \fBtest.ping\fP, \fBtest\fP refers to a \fBexecution module\fP. \fBping\fP refers to the \fBping\fP function contained in the aforementioned \fBtest\fP module. .IP Note Execution modules are the workhorses of Salt. They do the work on the system to perform various tasks, such as manipulating files and restarting services. .RE .sp The result of running this command will be the master instructing all of the minions to execute \fBtest.ping\fP in parallel and return the result. .sp This is not an actual ICMP ping, but rather a simple function which returns \fBTrue\fP. 
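.sp
For example, against a single connected minion the output looks roughly like this (the minion ID \fBminion1\fP is illustrative and not part of the original walk\-through):
.sp
.nf
.ft C
salt \(aqminion1\(aq test.ping
minion1:
    True
.ft P
.fi
.sp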
Using \fBtest.ping\fP is a good way of confirming that a minion is connected. .IP Note Each minion registers itself with a unique minion ID. This ID defaults to the minion\(aqs hostname, but can be explicitly defined in the minion config as well by using the \fBid\fP parameter. .RE .sp Of course, there are hundreds of other modules that can be called just as \fBtest.ping\fP can. For example, the following would return disk usage on all targeted minions: .sp .nf .ft C salt \(aq*\(aq disk.percent .ft P .fi .SS Getting to Know the Functions .sp Salt comes with a vast library of functions available for execution, and Salt functions are self\-documenting. To see what functions are available on the minions execute the \fBsys.doc\fP function: .sp .nf .ft C salt \(aq*\(aq sys.doc .ft P .fi .sp This will display a very large list of available functions and documentation on them. .IP Note Module documentation is also available \fBon the web\fP. .RE .sp These functions cover everything from shelling out to package management to manipulating database servers. They comprise a powerful system management API which is the backbone to Salt configuration management and many other aspects of Salt. .IP Note Salt comes with many plugin systems. The functions that are available via the \fBsalt\fP command are called \fBExecution Modules\fP. .RE .SS Helpful Functions to Know .sp The \fBcmd\fP module contains functions to shell out on minions, such as \fBcmd.run\fP and \fBcmd.run_all\fP: .sp .nf .ft C salt \(aq*\(aq cmd.run \(aqls \-l /etc\(aq .ft P .fi .sp The \fBpkg\fP functions automatically map local system package managers to the same salt functions. This means that \fBpkg.install\fP will install packages via \fByum\fP on Red Hat based systems, \fBapt\fP on Debian systems, etc.: .sp .nf .ft C salt \(aq*\(aq pkg.install vim .ft P .fi .IP Note Some custom Linux spins and derivatives of other distributions are not properly detected by Salt. If the above command returns an error message saying that \fBpkg.install\fP is not available, then you may need to override the pkg provider. This process is explained \fBhere\fP. .RE .sp The \fBnetwork.interfaces\fP function will list all interfaces on a minion, along with their IP addresses, netmasks, MAC addresses, etc: .sp .nf .ft C salt \(aq*\(aq network.interfaces .ft P .fi .SS Changing the Output Format .sp The default output format used for most Salt commands is called the \fBnested\fP outputter, but there are several other outputters that can be used to change the way the output is displayed. For instance, the \fBpprint\fP outputter can be used to display the return data using Python\(aqs \fBpprint\fP module: .sp .nf .ft C root@saltmaster:~# salt myminion grains.item pythonpath \-\-out=pprint {\(aqmyminion\(aq: {\(aqpythonpath\(aq: [\(aq/usr/lib64/python2.7\(aq, \(aq/usr/lib/python2.7/plat\-linux2\(aq, \(aq/usr/lib64/python2.7/lib\-tk\(aq, \(aq/usr/lib/python2.7/lib\-tk\(aq, \(aq/usr/lib/python2.7/site\-packages\(aq, \(aq/usr/lib/python2.7/site\-packages/gst\-0.10\(aq, \(aq/usr/lib/python2.7/site\-packages/gtk\-2.0\(aq]}} .ft P .fi .sp The full list of Salt outputters, as well as example output, can be found \fIhere\fP. .SS \fBsalt\-call\fP .sp The examples so far have described running commands from the Master using the \fBsalt\fP command, but when troubleshooting it can be more beneficial to login to the minion directly and use \fBsalt\-call\fP. 
.sp Doing so allows you to see the minion log messages specific to the command you are running (which are \fInot\fP part of the return data you see when running the command from the Master using \fBsalt\fP), making it unnecessary to tail the minion log. More information on \fBsalt\-call\fP and how to use it can be found \fIhere\fP. .SS Grains .sp Salt uses a system called \fBGrains\fP to build up static data about minions. This data includes information about the operating system that is running, CPU architecture and much more. The grains system is used throughout Salt to deliver platform data to many components and to users. .sp Grains can also be statically set; this makes it easy to assign values to minions for grouping and managing. .sp A common practice is to assign grains to minions to specify the role or roles a minion might fill. These static grains can be set in the minion configuration file or via the \fBgrains.setval\fP function. .SS Targeting .sp Salt allows for minions to be targeted based on a wide range of criteria. The default targeting system uses glob expressions to match minions; hence, if there are minions named \fBlarry1\fP, \fBlarry2\fP, \fBcurly1\fP and \fBcurly2\fP, a glob of \fBlarry*\fP will match \fBlarry1\fP and \fBlarry2\fP, and a glob of \fB*1\fP will match \fBlarry1\fP and \fBcurly1\fP. .sp Many other targeting systems besides globs can be used; these systems include: .INDENT 0.0 .TP .B Regular Expressions Target using PCRE\-compliant regular expressions .TP .B Grains Target based on grains data: \fBTargeting with Grains\fP .TP .B Pillar Target based on pillar data: \fBTargeting with Pillar\fP .TP .B IP Target based on IP address/subnet/range .TP .B Compound Create logic to target based on multiple targets: \fBTargeting with Compound\fP .TP .B Nodegroup Target with nodegroups: \fBTargeting with Nodegroup\fP .UNINDENT .sp The concepts of targets are used on the command line with Salt, but also function in many other areas, including the state system and the systems used for ACLs and user permissions. .SS Passing in Arguments .sp Many of the functions available accept arguments which can be passed in on the command line: .sp .nf .ft C salt \(aq*\(aq pkg.install vim .ft P .fi .sp This example passes the argument \fBvim\fP to the pkg.install function. Since many functions can accept more complex input than just a string, the arguments are parsed through YAML, allowing for more complex data to be sent on the command line: .sp .nf .ft C salt \(aq*\(aq test.echo \(aqfoo: bar\(aq .ft P .fi .sp In this case Salt translates the string \(aqfoo: bar\(aq into the dictionary "{\(aqfoo\(aq: \(aqbar\(aq}". .IP Note Any line that contains a newline will not be parsed by YAML. .RE .SS Salt States .sp Now that the basics are covered, the time has come to evaluate \fBStates\fP. Salt \fBStates\fP, or the \fBState System\fP, is the component of Salt made for configuration management. .sp The state system is already available with a basic Salt setup; no additional configuration is required. States can be set up immediately. .IP Note Before diving into the state system, a brief overview of how states are constructed will make many of the concepts clearer. Salt states are based on data modeling and build on a low level data structure that is used to execute each state function. Then more logical layers are built on top of each other.
.sp The high layers of the state system which this tutorial will cover consists of everything that needs to be known to use states, the two high layers covered here are the \fIsls\fP layer and the highest layer \fIhighstate\fP. .sp Understanding the layers of data management in the State System will help with understanding states, but they never need to be used. Just as understanding how a compiler functions assists when learning a programming language, understanding what is going on under the hood of a configuration management system will also prove to be a valuable asset. .RE .SS The First SLS Formula .sp The state system is built on SLS formulas. These formulas are built out in files on Salt\(aqs file server. To make a very basic SLS formula open up a file under /srv/salt named vim.sls. The following state ensures that vim is installed on a system to which that state has been applied. .sp \fB/srv/salt/vim.sls:\fP .sp .nf .ft C vim: pkg.installed .ft P .fi .sp Now install vim on the minions by calling the SLS directly: .sp .nf .ft C salt \(aq*\(aq state.sls vim .ft P .fi .sp This command will invoke the state system and run the \fBvim\fP SLS. .sp Now, to beef up the vim SLS formula, a \fBvimrc\fP can be added: .sp \fB/srv/salt/vim.sls:\fP .sp .nf .ft C vim: pkg.installed /etc/vimrc: file.managed: \- source: salt://vimrc \- mode: 644 \- user: root \- group: root .ft P .fi .sp Now the desired \fBvimrc\fP needs to be copied into the Salt file server to \fB/srv/salt/vimrc\fP. In Salt, everything is a file, so no path redirection needs to be accounted for. The \fBvimrc\fP file is placed right next to the \fBvim.sls\fP file. The same command as above can be executed to all the vim SLS formulas and now include managing the file. .IP Note Salt does not need to be restarted/reloaded or have the master manipulated in any way when changing SLS formulas. They are instantly available. .RE .SS Adding Some Depth .sp Obviously maintaining SLS formulas right in a single directory at the root of the file server will not scale out to reasonably sized deployments. This is why more depth is required. Start by making an nginx formula a better way, make an nginx subdirectory and add an init.sls file: .sp \fB/srv/salt/nginx/init.sls:\fP .sp .nf .ft C nginx: pkg: \- installed service: \- running \- require: \- pkg: nginx .ft P .fi .sp A few concepts are introduced in this SLS formula. .sp First is the service statement which ensures that the \fBnginx\fP service is running. .sp Of course, the nginx service can\(aqt be started unless the package is installed \-\- hence the \fBrequire\fP statement which sets up a dependency between the two. .sp The \fBrequire\fP statement makes sure that the required component is executed before and that it results in success. .IP Note The \fIrequire\fP option belongs to a family of options called \fIrequisites\fP. Requisites are a powerful component of Salt States, for more information on how requisites work and what is available see: \fBRequisites\fP .sp Also evaluation ordering is available in Salt as well: \fBOrdering States\fP .RE .sp This new sls formula has a special name \-\- \fBinit.sls\fP. When an SLS formula is named \fBinit.sls\fP it inherits the name of the directory path that contains it. This formula can be referenced via the following command: .sp .nf .ft C salt \(aq*\(aq state.sls nginx .ft P .fi .IP Note Reminder! .sp Just as one could call the \fBtest.ping\fP or \fBdisk.usage\fP execution modules, \fBstate.sls\fP is simply another execution module. 
It simply takes the name of an SLS file as an argument. .RE .sp Now that subdirectories can be used, the \fBvim.sls\fP formula can be cleaned up. To make things more flexible, move the \fBvim.sls\fP and vimrc into a new subdirectory called \fBedit\fP and change the \fBvim.sls\fP file to reflect the change: .sp \fB/srv/salt/edit/vim.sls:\fP .sp .nf .ft C vim: pkg.installed /etc/vimrc: file.managed: \- source: salt://edit/vimrc \- mode: 644 \- user: root \- group: root .ft P .fi .sp Only the source path to the vimrc file has changed. Now the formula is referenced as \fBedit.vim\fP because it resides in the edit subdirectory. Now the edit subdirectory can contain formulas for emacs, nano, joe or any other editor that may need to be deployed. .SS Next Reading .sp Two walk\-throughs are specifically recommended at this point. First, a deeper run through States, followed by an explanation of Pillar. .INDENT 0.0 .IP 1. 3 \fBStarting States\fP .IP 2. 3 \fBPillar Walkthrough\fP .UNINDENT .sp An understanding of Pillar is extremely helpful in using States. .SS Getting Deeper Into States .sp Two more in\-depth States tutorials exist, which delve much more deeply into States functionality. .INDENT 0.0 .IP 1. 3 Thomas\(aq original states tutorial, \fBHow Do I Use Salt States?\fP, covers much more to get off the ground with States. .IP 2. 3 The \fBStates Tutorial\fP also provides a fantastic introduction. .UNINDENT .sp These tutorials include much more in\-depth information including templating SLS formulas etc. .SS So Much More! .sp This concludes the initial Salt walk\-through, but there are many more things still to learn! These documents will cover important core aspects of Salt: .INDENT 0.0 .IP \(bu 2 \fBPillar\fP .IP \(bu 2 \fBJob Management\fP .UNINDENT .sp A few more tutorials are also available: .INDENT 0.0 .IP \(bu 2 \fBRemote Execution Tutorial\fP .IP \(bu 2 \fBStandalone Minion\fP .UNINDENT .sp This still is only scratching the surface, many components such as the reactor and event systems, extending Salt, modular components and more are not covered here. For an overview of all Salt features and documentation, look at the \fBTable of Contents\fP. .SS MinionFS Backend Walkthrough .sp New in version 2014.1.0: (Hydrogen) .sp Sometimes, you might need to propagate files that are generated on a minion. Salt already has a feature to send files from a minion to the master: .sp .nf .ft C salt \(aqminion\-id\(aq cp.push /path/to/the/file .ft P .fi .sp This command will store the file, including its full path, under \fBcachedir\fP \fB/master/minions/minion\-id/files\fP. With the default \fBcachedir\fP the example file above would be stored as \fI/var/cache/salt/master/minions/minion\-id/files/path/to/the/file\fP. .IP Note This walkthrough assumes basic knowledge of Salt and \fBcp.push\fP. To get up to speed, check out the \fBwalkthrough\fP. .RE .sp Since it is not a good idea to expose the whole \fBcachedir\fP, MinionFS should be used to send these files to other minions. .SS Simple Configuration .sp To use the minionfs backend only two configuration changes are required on the master. The \fBfileserver_backend\fP option needs to contain a value of \fBminion\fP and \fBfile_recv\fP needs to be set to true: .sp .nf .ft C fileserver_backend: \- roots \- minion file_recv: True .ft P .fi .sp These changes require a restart of the master, then new requests for the \fBsalt://minion\-id/\fP protocol will send files that are pushed by \fBcp.push\fP from \fBminion\-id\fP to the master. 
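.sp
As a quick illustration of the resulting URL scheme, another minion could then fetch the pushed file with \fBcp.get_file\fP (the minion ID \fBanother\-minion\fP and the destination path below are placeholders, reusing the \fBcp.push\fP example above):
.sp
.nf
.ft C
salt \(aqanother\-minion\(aq cp.get_file salt://minion\-id/path/to/the/file /tmp/the\-file
.ft P
.fi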
.IP Note All of the files that are pushed to the master are going to be available to all of the minions. If this is not what you want, please remove \fBminion\fP from \fBfileserver_backend\fP in the master config file. .RE .IP Note Having directories with the same name as your minions in the root that can be accessed like \fBsalt://minion\-id/\fP might cause confusion. .RE .SS Commandline Example .sp Lets assume that we are going to generate SSH keys on a minion called \fBminion\-source\fP and put the public part in \fB~/.ssh/authorized_keys\fP of root user of a minion called \fBminion\-destination\fP. .sp First, lets make sure that \fB/root/.ssh\fP exists and has the right permissions: .sp .nf .ft C [root@salt\-master file]# salt \(aq*\(aq file.mkdir dir_path=/root/.ssh user=root group=root mode=700 minion\-source: None minion\-destination: None .ft P .fi .sp We create an RSA key pair without a passphrase [*]: .sp .nf .ft C [root@salt\-master file]# salt \(aqminion\-source\(aq cmd.run \(aqssh\-keygen \-N "" \-f /root/.ssh/id_rsa\(aq minion\-source: Generating public/private rsa key pair. Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: 9b:cd:1c:b9:c2:93:8e:ad:a3:52:a0:8b:0a:cc:d4:9b root@minion\-source The key\(aqs randomart image is: +\-\-[ RSA 2048]\-\-\-\-+ | | | | | | | o . | | o o S o | |= + . B o | |o+ E B = | |+ . .+ o | |o ...ooo | +\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-+ .ft P .fi .sp and we send the public part to the master to be available to all minions: .sp .nf .ft C [root@salt\-master file]# salt \(aqminion\-source\(aq cp.push /root/.ssh/id_rsa.pub minion\-source: True .ft P .fi .sp now it can be seen by everyone: .sp .nf .ft C [root@salt\-master file]# salt \(aqminion\-destination\(aq cp.list_master_dirs minion\-destination: \- . \- etc \- minion\-source/root \- minion\-source/root/.ssh .ft P .fi .sp Lets copy that as the only authorized key to \fBminion\-destination\fP: .sp .nf .ft C [root@salt\-master file]# salt \(aqminion\-destination\(aq cp.get_file salt://minion\-source/root/.ssh/id_rsa.pub /root/.ssh/authorized_keys minion\-destination: /root/.ssh/authorized_keys .ft P .fi .sp Or we can use a more elegant and salty way to add an SSH key: .sp .nf .ft C [root@salt\-master file]# salt \(aqminion\-destination\(aq ssh.set_auth_key_from_file user=root source=salt://minion\-source/root/.ssh/id_rsa.pub minion\-destination: new .ft P .fi .IP [*] 5 Yes, that was the actual key on my server, but the server is already destroyed. .SS Automatic Updates / Frozen Deployments .sp New in version 0.10.3.d. .sp Salt has support for the \fI\%Esky\fP application freezing and update tool. This tool allows one to build a complete zipfile out of the salt scripts and all their dependencies \- including shared objects / DLLs. .SS Getting Started .sp To build frozen applications, you\(aqll need a suitable build environment for each of your platforms. You should probably set up a virtualenv in order to limit the scope of Q/A. .sp This process does work on Windows. Follow the directions at \fI\%https://github.com/saltstack/salt-windows-install\fP for details on installing Salt in Windows. Only the 32\-bit Python and dependencies have been tested, but they have been tested on 64\-bit Windows. .sp You will need to install \fBesky\fP and \fBbbfreeze\fP from PyPI in order to enable the \fBbdist_esky\fP command in \fBsetup.py\fP. 
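.sp
For example, assuming the virtualenv mentioned above is active, both tools can be pulled from PyPI with a plain pip install (a minimal sketch; no versions are pinned here):
.sp
.nf
.ft C
pip install esky bbfreeze
.ft P
.fi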
.SS Building and Freezing .sp Once you have your tools installed and the environment configured, you can then \fBpython setup.py bdist\fP to get the eggs prepared. After that is done, run \fBpython setup.py bdist_esky\fP to have Esky traverse the module tree and pack all the scripts up into a redistributable. There will be an appropriately versioned \fBsalt\-VERSION.zip\fP in \fBdist/\fP if everything went smoothly. .SS Windows .sp You will need to add \fBC:\ePython27\elib\esite\-packages\ezmq\fP to your PATH variable. This helps bbfreeze find the zmq DLL so it can pack it up. .SS Using the Frozen Build .sp Unpack the zip file in your desired install location. Scripts like \fBsalt\-minion\fP and \fBsalt\-call\fP will be in the root of the zip file. The associated libraries and bootstrapping will be in the directories at the same level. (Check the \fI\%Esky\fP documentation for more information) .sp To support updating your minions in the wild, put your builds on a web server that your minions can reach. \fBsalt.modules.saltutil.update()\fP will trigger an update and (optionally) a restart of the minion service under the new version. .SS Gotchas .SS My Windows minion isn\(aqt responding .sp The process dispatch on Windows is slower than it is on *nix. You may need to add \(aq\-t 15\(aq to your salt calls to give them plenty of time to return. .SS Windows and the Visual Studio Redist .sp You will need to install the Visual C++ 2008 32\-bit redistributable on all Windows minions. Esky has an option to pack the library into the zipfile, but OpenSSL does not seem to acknowledge the new location. If you get a \fBno OPENSSL_Applink\fP error on the console when trying to start your frozen minion, you have forgotten to install the redistributable. .SS Mixed Linux environments and Yum .sp The Yum Python module doesn\(aqt appear to be available on any of the standard Python package mirrors. If you need to support RHEL/CentOS systems, you should build on that platform to support all your Linux nodes. Also remember to build your virtualenv with \fB\-\-system\-site\-packages\fP so that the \fByum\fP module is included. .SS Automatic (Python) module discovery .sp Automatic (Python) module discovery does not work with the late\-loaded scheme that Salt uses for (Salt) modules. You will need to explicitly add any misbehaving modules to the \fBfreezer_includes\fP in Salt\(aqs \fBsetup.py\fP. Always check the zipped application to make sure that the necessary modules were included. .SS Multi Master Tutorial .sp As of Salt 0.16.0, the ability to connect minions to multiple masters has been made available. The multi\-master system allows for redundancy of Salt masters and facilitates multiple points of communication out to minions. When using a multi\-master setup, all masters are running hot, and any active master can be used to send commands out to the minions. .sp In 0.16.0, the masters do not share any information, keys need to be accepted on both masters, and shared files need to be shared manually or use tools like the git fileserver backend to ensure that the \fBfile_roots\fP are kept consistent. .SS Summary of Steps .INDENT 0.0 .IP 1. 3 Create a redundant master server .IP 2. 3 Copy primary master key to redundant master .IP 3. 3 Start redundant master .IP 4. 3 Configure minions to connect to redundant master .IP 5. 3 Restart minions .IP 6. 3 Accept keys on redundant master .UNINDENT .SS Prepping a Redundant Master .sp The first task is to prepare the redundant master. 
There is only one requirement when preparing a redundant master, which is that masters share the same private key. When the first master was created, the master\(aqs identifying key was generated and placed in the master\(aqs \fBpki_dir\fP. The default location of the key is \fB/etc/salt/pki/master/master.pem\fP. Take this key and copy it to the same location on the redundant master. Assuming that no minions have yet been connected to the new redundant master, it is safe to delete any existing key in this location and replace it. .IP Note There is no logical limit to the number of redundant masters that can be used. .RE .sp Once the new key is in place, the redundant master can be safely started. .SS Configure Minions .sp Since minions need to be master\-aware, the new master needs to be added to the minion configurations. Simply update the minion configurations to list all connected masters: .sp .nf .ft C master: \- saltmaster1.example.com \- saltmaster2.example.com .ft P .fi .sp Now the minion can be safely restarted. .sp The minions will now check into the original master and also check into the new redundant master. Both masters are first\-class and have rights to the minions. .SS Sharing Files Between Masters .sp Salt does not automatically share files between multiple masters. A number of files should be shared between masters, or at least sharing them should be strongly considered. .SS Minion Keys .sp Minion keys can be accepted the normal way using \fBsalt\-key\fP on both masters. Keys accepted, deleted, or rejected on one master will NOT be automatically managed on redundant masters; this needs to be taken care of by running salt\-key on both masters or sharing the \fB/etc/salt/pki/master/{minions,minions_pre,minions_rejected}\fP directories between masters. .IP Note While sharing the \fB/etc/salt/pki/master\fP directory will work, it is strongly discouraged, since allowing access to the \fBmaster.pem\fP key outside of Salt creates a \fISERIOUS\fP security risk. .RE .SS File_Roots .sp The \fBfile_roots\fP contents should be kept consistent between masters. Otherwise state runs will not always be consistent on minions since instructions managed by one master will not agree with other masters. .sp The recommended way to sync these is to use a fileserver backend like gitfs or to keep these files on shared storage. .SS Pillar_Roots .sp Pillar roots should be given the same considerations as \fBfile_roots\fP. .SS Master Configurations .sp While reasons may exist to maintain separate master configurations, it is wise to remember that each master maintains independent control over minions. Therefore, access controls should be in sync between masters unless a valid reason otherwise exists to keep them inconsistent. .sp These access control options include but are not limited to: .INDENT 0.0 .IP \(bu 2 external_auth .IP \(bu 2 client_acl .IP \(bu 2 peer .IP \(bu 2 peer_run .UNINDENT .SS Preseed Minion with Accepted Key .sp In some situations, it is not convenient to wait for a minion to start before accepting its key on the master. For instance, you may want the minion to bootstrap itself as soon as it comes online. You may also want to let your developers provision new development machines on the fly. .sp There is a general four\-step process to do this: .INDENT 0.0 .IP 1. 3 Generate the keys on the master: .UNINDENT .sp .nf .ft C root@saltmaster# salt\-key \-\-gen\-keys=[key_name] .ft P .fi .sp Pick a name for the key, such as the minion\(aqs id. .INDENT 0.0 .IP 2.
3 Add the public key to the accepted minion folder: .UNINDENT .sp .nf .ft C root@saltmaster# cp key_name.pub /etc/salt/pki/master/minions/[minion_id] .ft P .fi .sp It is necessary that the public key file has the same name as your minion id. This is how Salt matches minions with their keys. Also note that the pki folder could be in a different location, depending on your OS or if specified in the master config file. .INDENT 0.0 .IP 3. 3 Distribute the minion keys. .UNINDENT .sp There is no single method to get the keypair to your minion. The difficulty is finding a distribution method which is secure. For Amazon EC2 only, an AWS best practice is to use IAM Roles to pass credentials. (See blog post, \fI\%http://blogs.aws.amazon.com/security/post/Tx610S2MLVZWEA/Using-IAM-roles-to-distribute-non-AWS-credentials-to-your-EC2-instances\fP ) .IP "Security Warning" .sp Since the minion key is already accepted on the master, distributing the private key poses a potential security risk. A malicious party will have access to your entire state tree and other sensitive data if they gain access to a preseeded minion key. .RE .INDENT 0.0 .IP 4. 3 Preseed the Minion with the keys .UNINDENT .sp You will want to place the minion keys before starting the salt\-minion daemon: .sp .nf .ft C /etc/salt/pki/minion/minion.pem /etc/salt/pki/minion/minion.pub .ft P .fi .sp Once in place, you should be able to start salt\-minion and run \fBsalt\-call state.highstate\fP or any other salt commands that require master authentication. .SS Salt Bootstrap .sp The Salt Bootstrap script allows a user to install the Salt Minion or Master on a variety of system distributions and versions. This shell script, known as \fBbootstrap\-salt.sh\fP, runs through a series of checks to determine the operating system type and version. It then installs the Salt binaries using the appropriate methods. The Salt Bootstrap script installs the minimum number of packages required to run Salt. This means that in the event you run the bootstrap to install via package, Git will not be installed. Installing the minimum number of packages helps ensure the script stays as lightweight as possible, assuming the user will install any other required packages after the Salt binaries are present on the system. The script source is available on GitHub: \fI\%https://github.com/saltstack/salt-bootstrap\fP .SS Supported Operating Systems .INDENT 0.0 .IP \(bu 2 Amazon Linux 2012.09 .IP \(bu 2 Arch .IP \(bu 2 CentOS 5/6 .IP \(bu 2 Debian 6.x/7.x/8 (git installations only) .IP \(bu 2 Fedora 17/18 .IP \(bu 2 FreeBSD 9.1/9.2/10 .IP \(bu 2 Gentoo .IP \(bu 2 Linaro .IP \(bu 2 Linux Mint 13/14 .IP \(bu 2 OpenSUSE 12.x .IP \(bu 2 Oracle Linux 5/5 .IP \(bu 2 Red Hat 5/6 .IP \(bu 2 Red Hat Enterprise 5/6 .IP \(bu 2 Scientific Linux 5/6 .IP \(bu 2 SmartOS .IP \(bu 2 SuSE 11 SP1/11 SP2 .IP \(bu 2 Ubuntu 10.x/11.x/12.x/13.04/13.10 .IP \(bu 2 Elementary OS 0.2 .UNINDENT .IP Note In the event you do not see your distribution or version available, please review the develop branch on Github as it may contain updates that are not present in the stable release: \fI\%https://github.com/saltstack/salt-bootstrap/tree/develop\fP .RE .SS Example Usage .sp If you\(aqre looking for the \fIone\-liner\fP to install salt, please scroll to the bottom and use the instructions for \fI\%Installing via an Insecure One-Liner\fP. .IP Note In every two\-step example, you would be well\-served to examine the downloaded file to ensure that it does what you expect.
.RE .SS Installing via an Insecure One\-Liner .sp The following examples illustrate how to install Salt via a one\-liner. .IP Note Warning! These methods do not involve a verification step and assume that the delivered file is trustworthy. .RE .SS Examples .sp Any of the examples below which use two lines can be made to run in a single\-line configuration with minor modifications. .SS Example Usage .sp The Salt Bootstrap script has a wide variety of options that can be passed as well as several ways of obtaining the bootstrap script itself. .sp For example, using \fBcurl\fP to install your distribution\(aqs stable packages: .sp .nf .ft C curl \-L https://bootstrap.saltstack.com | sudo sh .ft P .fi .sp Using \fBwget\fP to install your distribution\(aqs stable packages: .sp .nf .ft C wget \-O \- https://bootstrap.saltstack.com | sudo sh .ft P .fi .sp Installing the latest version available from git with \fBcurl\fP: .sp .nf .ft C curl \-L https://bootstrap.saltstack.com | sudo sh \-s \-\- git develop .ft P .fi .sp Install a specific version from git using \fBwget\fP: .sp .nf .ft C wget \-O \- https://bootstrap.saltstack.com | sh \-s \-\- \-P git v0.16.4 .ft P .fi .sp If you already have python installed, \fBpython 2.6\fP, then it\(aqs as easy as: .sp .nf .ft C python \-m urllib "https://bootstrap.saltstack.com" | sudo sh \-s \-\- git develop .ft P .fi .sp All python versions should support the following one liner: .sp .nf .ft C python \-c \(aqimport urllib; print urllib.urlopen("https://bootstrap.saltstack.com").read()\(aq | \e sudo sh \-s \-\- git develop .ft P .fi .sp On a FreeBSD base system you usually don\(aqt have either of the above binaries available.
You \fBdo\fP have \fBfetch\fP available though: .sp .nf .ft C fetch \-o \- https://bootstrap.saltstack.com | sudo sh .ft P .fi .sp If all you want is to install a \fBsalt\-master\fP using latest git: .sp .nf .ft C curl \-L https://bootstrap.saltstack.com | sudo sh \-s \-\- \-M \-N git develop .ft P .fi .sp If you want to install a specific release version (based on the git tags): .sp .nf .ft C curl \-L https://bootstrap.saltstack.com | sudo sh \-s \-\- git v0.16.4 .ft P .fi .sp Downloading the develop branch (from here standard command line options may be passed): .sp .nf .ft C wget https://bootstrap.saltstack.com/develop .ft P .fi .SS Command Line Options .sp Here\(aqs a summary of the command line options: .sp .nf .ft C $ sh bootstrap\-salt.sh \-h Usage : bootstrap\-salt.sh [options] Installation types: \- stable (default) \- daily (ubuntu specific) \- git Examples: $ bootstrap\-salt.sh $ bootstrap\-salt.sh stable $ bootstrap\-salt.sh daily $ bootstrap\-salt.sh git $ bootstrap\-salt.sh git develop $ bootstrap\-salt.sh git v0.17.0 $ bootstrap\-salt.sh git 8c3fadf15ec183e5ce8c63739850d543617e4357 Options: \-h Display this message \-v Display script version \-n No colours. \-D Show debug output. \-c Temporary configuration directory \-g Salt repository URL. (default: git://github.com/saltstack/salt.git) \-k Temporary directory holding the minion keys which will pre\-seed the master. \-M Also install salt\-master \-S Also install salt\-syndic \-N Do not install salt\-minion \-X Do not start daemons after installation \-C Only run the configuration function. This option automatically bypasses any installation. \-P Allow pip based installations. On some distributions the required salt packages or its dependencies are not available as a package for that distribution. Using this flag allows the script to use pip as a last resort method. NOTE: This only works for functions which actually implement pip based installations. \-F Allow copied files to overwrite existing(config, init.d, etc) \-U If set, fully upgrade the system prior to bootstrapping salt \-K If set, keep the temporary files in the temporary directories specified with \-c and \-k. \-I If set, allow insecure connections while downloading any files. For example, pass \(aq\-\-no\-check\-certificate\(aq to \(aqwget\(aq or \(aq\-\-insecure\(aq to \(aqcurl\(aq \-A Pass the salt\-master DNS name or IP. This will be stored under ${BS_SALT_ETC_DIR}/minion.d/99\-master\-address.conf \-i Pass the salt\-minion id. This will be stored under ${BS_SALT_ETC_DIR}/minion_id \-L Install the Apache Libcloud package if possible(required for salt\-cloud) \-p Extra\-package to install while installing salt dependencies. One package per \-p flag. You\(aqre responsible for providing the proper package name. .ft P .fi .SS GitFS Backend Walkthrough .sp Salt can retrieve states and pillars from local and remote Git repositories configured as GitFS remotes. .IP Note This walkthrough assumes basic knowledge of Salt. To get up to speed, check out the \fBwalkthrough\fP. .RE .sp By default, Salt state trees and pillars are served from \fB/srv/salt\fP and \fB/srv/pillar\fP, as configured by the \fBroots\fP \fBfileserver_backend\fP, \fBfile_roots\fP, and \fBpillar_roots\fP configuration settings in \fB/etc/salt/master\fP or \fB/etc/salt/minion\fP. .sp GitFS support is enabled by configuring the \fBgit\fP \fBfileserver_backend\fP, \fBgitfs_remotes\fP, and/or \fBext_pillar\fP settings. .sp Git branches in GitFS remotes are mapped to Salt environments. 
Merging a QA or staging branch up to a production branch can be all that is required to make state and pillar changes available to Salt minions. .SS Installing Python Dependencies .sp The GitFS backend requires \fI\%GitPython\fP, version 0.3.0 or newer. For RHEL\-based Linux distros, a compatible version is available in EPEL, and can be easily installed on the master using yum: .sp .nf .ft C # yum install GitPython .ft P .fi .sp Ubuntu 14.04 LTS and Debian Wheezy (7.x) also have a compatible version packaged: .sp .nf .ft C # apt\-get install python\-git .ft P .fi .sp If your master is running an older version (such as Ubuntu 12.04 LTS or Debian Squeeze), then you will need to install GitPython using either \fI\%pip\fP or easy_install (it is recommended to use pip). Version 0.3.2.RC1 is now marked as the stable release in PyPI, so it should be a simple matter of running \fBpip install GitPython\fP (or \fBeasy_install GitPython\fP) as root. .IP Warning Keep in mind that if GitPython has been previously installed on the master using pip (even if it was subsequently uninstalled), then it may still exist in the build cache (typically \fB/tmp/pip\-build\-root/GitPython\fP) if the cache is not cleared after installation. The package in the build cache will override any requirement specifiers, so if you try upgrading to version 0.3.2.RC1 by running \fBpip install \(aqGitPython==0.3.2.RC1\(aq\fP then it will ignore this and simply install the version from the cache directory. Therefore, it may be necessary to delete the GitPython directory from the build cache in order to ensure that the specified version is installed. .RE .SS Simple Configuration .sp To use the gitfs backend, only two configuration changes are required on the master: .INDENT 0.0 .IP 1. 3 Include \fBgit\fP in the \fBfileserver_backend\fP option to enable the GitFS backend: .UNINDENT .sp .nf .ft C fileserver_backend: \- git .ft P .fi .INDENT 0.0 .IP 2. 3 Specify one or more \fBgit://\fP, \fBgit+ssh://\fP, \fBhttps://\fP, or \fBfile://\fP URLs in \fBgitfs_remotes\fP to configure which repositories to cache and search for requested files: .UNINDENT .sp .nf .ft C gitfs_remotes: \- https://github.com/saltstack\-formulas/salt\-formula.git .ft P .fi .INDENT 0.0 .IP 3. 3 Restart the master so that the git repository cache on the master is updated, and new \fBsalt://\fP requests will send the latest files from the remote git repository. This step is not necessary with a standalone minion configuration. .UNINDENT .IP Note In a master/minion setup, files from a GitFS remote are cached once by the master, so minions do not need direct access to the git repository. In a standalone minion configuration, files from each GitFS remote are cached by the minion. .RE .SS Multiple Remotes .sp The \fBgitfs_remotes\fP option accepts an ordered list of git remotes to cache and search, in listed order, for requested files. .sp A simple scenario illustrates this cascading lookup behavior: .sp If the \fBgitfs_remotes\fP option specifies three remotes: .sp .nf .ft C gitfs_remotes: \- git://github.com/example/first.git \- https://github.com/example/second.git \- file:///root/third .ft P .fi .IP Note This example is purposefully contrived to illustrate the behavior of the gitfs backend. This example should not be read as a recommended way to lay out files and git repos. .sp The \fBfile://\fP prefix denotes a git repository in a local directory.
However, it will still use the given \fBfile://\fP URL as a remote, rather than copying the git repo to the salt cache. This means that any refs you want accessible must exist as \fIlocal\fP refs in the specified repo. .RE .IP Warning Salt versions prior to 2014.1.0 (Hydrogen) are not tolerant of changing the order of remotes or modifying the URI of existing remotes. In those versions, when modifying remotes it is a good idea to remove the gitfs cache directory (\fB/var/cache/salt/master/gitfs\fP) before restarting the salt\-master service. .RE .sp And each repository contains some files: .sp .nf .ft C first.git: top.sls edit/vim.sls edit/vimrc nginx/init.sls second.git: edit/dev_vimrc haproxy/init.sls third: haproxy/haproxy.conf edit/dev_vimrc .ft P .fi .sp Salt will attempt to lookup the requested file from each GitFS remote repository in the order in which they are defined in the configuration. The \fBgit://github.com/example/first.git\fP remote will be searched first. If the requested file is found, then it is served and no further searching is executed. For example: .INDENT 0.0 .IP \(bu 2 A request for \fBsalt://haproxy/init.sls\fP will be pulled from the \fBhttps://github.com/example/second.git\fP git repo. .IP \(bu 2 A request for \fBsalt://haproxy/haproxy.conf\fP will be pulled from the \fBfile:///root/third\fP repo. .UNINDENT .SS Serving from a Subdirectory .sp The \fBgitfs_root\fP parameter allows files to be served from a subdirectory within the repository. This allows for only part of a repository to be exposed to the Salt fileserver. .sp Assume the below layout: .sp .nf .ft C \&.gitignore README.txt foo/ foo/bar/ foo/bar/one.txt foo/bar/two.txt foo/bar/three.txt foo/baz/ foo/baz/top.sls foo/baz/edit/vim.sls foo/baz/edit/vimrc foo/baz/nginx/init.sls .ft P .fi .sp The below configuration would serve only the files from \fBfoo/baz\fP, ignoring the other files in the repository: .sp .nf .ft C gitfs_remotes: \- git://mydomain.com/stuff.git gitfs_root: foo/baz .ft P .fi .SS Multiple Backends .sp Sometimes it may make sense to use multiple backends; for instance, if \fBsls\fP files are stored in git but larger files are stored directly on the master. .sp The cascading lookup logic used for multiple remotes is also used with multiple backends. If the \fBfileserver_backend\fP option contains multiple backends: .sp .nf .ft C fileserver_backend: \- roots \- git .ft P .fi .sp Then the \fBroots\fP backend (the default backend of files in \fB/srv/salt\fP) will be searched first for the requested file; then, if it is not found on the master, each configured git remote will be searched. .SS Branches, Environments and Top Files .sp When using the \fBgitfs\fP backend, branches and tags will be mapped to environments using the branch/tag name as an identifier. .sp There is one exception to this rule: the \fBmaster\fP branch is implicitly mapped to the \fBbase\fP environment. .sp So, for a typical \fBbase\fP, \fBqa\fP, \fBdev\fP setup, the following branches could be used: .sp .nf .ft C master qa dev .ft P .fi .sp \fBtop.sls\fP files from different branches will be merged into one at runtime. Since this can lead to overly complex configurations, the recommended setup is to have the \fBtop.sls\fP file only in the master branch and use environment\-specific branches for state definitions. .sp To map a branch other than \fBmaster\fP as the \fBbase\fP environment, use the \fBgitfs_base\fP parameter. 
.sp .nf .ft C gitfs_base: salt\-base .ft P .fi .SS GitFS Remotes over SSH .sp To configure a \fBgitfs_remotes\fP repository over SSH transport, use the \fBgit+ssh\fP URL form: .sp .nf .ft C gitfs_remotes: \- git+ssh://git@github.com/example/salt\-states.git .ft P .fi .sp The private key used to connect to the repository must be located in \fB~/.ssh/id_rsa\fP for the user running the salt\-master. .SS Upcoming Features .sp The upcoming feature release will bring a number of new features to gitfs: .INDENT 0.0 .IP 1. 3 \fBEnvironment Blacklist/Whitelist\fP .sp Two new configuration parameters, \fBgitfs_env_whitelist\fP and \fBgitfs_env_blacklist\fP, allow greater control over which branches/tags are exposed as fileserver environments. .IP 2. 3 \fBMountpoint\fP .sp Prior to the addition of this feature, to serve a file from the URI \fBsalt://webapps/foo/files/foo.conf\fP, it was necessary to ensure that the git repository contained the parent directories (i.e. \fBwebapps/foo/files/\fP). The \fBgitfs_mountpoint\fP parameter will prepend the specified path to the files served from gitfs, allowing you to use an existing repository rather than reorganizing it to fit your Salt fileserver layout. .IP 3. 3 \fBPer\-remote Configuration Parameters\fP .sp \fBgitfs_base\fP, \fBgitfs_root\fP, and \fBgitfs_mountpoint\fP are all global parameters. That is, they affect \fIall\fP of your gitfs remotes. The upcoming feature release allows for these parameters to be overridden on a per\-remote basis. This allows for a tremendous amount of customization. See \fBhere\fP for an example of how to use per\-remote configuration. .IP 4. 3 \fBSupport for pygit2 and dulwich\fP .sp \fI\%GitPython\fP is no longer being actively developed, so support has been added for both \fI\%pygit2\fP and \fI\%dulwich\fP as Python interfaces for git. Neither is yet as full\-featured as GitPython; for instance, authentication via public key is not yet supported. Salt will default to using GitPython, but the \fBgitfs_provider\fP parameter can be used to specify one of the other providers. .UNINDENT .SS Using Git as an External Pillar Source .sp Git repositories can also be used to provide \fBPillar\fP data, using the \fBExternal Pillar\fP system. To define a git external pillar, add a section like the following to the salt master config file: .sp .nf .ft C ext_pillar: \- git: <branch> <repo> [root=<subdirectory>] .ft P .fi .sp Changed in version Helium: The optional \fBroot\fP parameter will be added. .sp The \fB<branch>\fP param is the branch containing the pillar SLS tree. The \fB<repo>\fP param is the URI for the repository. To add the \fBmaster\fP branch of the specified repo as an external pillar source: .sp .nf .ft C ext_pillar: \- git: master https://domain.com/pillar.git .ft P .fi .sp Use the \fBroot\fP parameter to use pillars from a subdirectory of a git repository: .sp .nf .ft C ext_pillar: \- git: master https://domain.com/pillar.git root=subdirectory .ft P .fi .sp More information on the git external pillar can be found in the \fBsalt.pillar.get_pillar docs\fP. .SS Why aren\(aqt my custom modules/states/etc. syncing to my Minions? .sp In versions 0.16.3 and older, when using the \fBgit fileserver backend\fP, certain versions of GitPython may generate errors when fetching, which Salt fails to catch. While not fatal to the fetch process, these interrupt the fileserver update that takes place before custom types are synced, and thus interrupt the sync itself. Try disabling the git fileserver backend in the master config, restarting the master, and attempting the sync again.
.sp This issue is worked around in Salt 0.16.4 and newer. .SS The MacOS X (Maverick) Developer Step By Step Guide To Salt Installation .sp This document provides a step\-by\-step guide to installing a Salt cluster consisting of one master, and one minion running on a local VM hosted on Mac OS X. .IP Note This guide is aimed at developers who wish to run Salt in a virtual machine. The official (Linux) walkthrough can be found \fI\%here\fP. .RE .SS The 5 Cent Salt Intro .sp Since you\(aqre here you\(aqve probably already heard about Salt, so you already know Salt lets you configure and run commands on hordes of servers easily. Here\(aqs a brief overview of a Salt cluster: .INDENT 0.0 .IP \(bu 2 Salt works by having a "master" server sending commands to one or multiple "minion" servers [1]. The master server is the "command center". It is going to be the place where you store your configuration files, aka: "which server is the db, which is the web server, and what libraries and software they should have installed". The minions receive orders from the master. Minions are the servers actually performing work for your business. .IP \(bu 2 Salt has two types of configuration files: .sp 1. the "salt communication channels" or "meta" or "config" configuration files (not official names): one for the master (usually is /etc/salt/master , \fBon the master server\fP), and one for minions (default is /etc/salt/minion or /etc/salt/minion.conf, \fBon the minion servers\fP). Those files are used to determine things like the Salt Master IP, port, Salt folder locations, etc.. If these are configured incorrectly, your minions will probably be unable to receive orders from the master, or the master will not know which software a given minion should install. .sp 2. the "business" or "service" configuration files (once again, not an official name): these are configuration files, ending with ".sls" extension, that describe which software should run on which server, along with particular configuration properties for the software that is being installed. These files should be created in the /srv/salt folder by default, but their location can be changed using ... /etc/salt/master configuration file! .UNINDENT .IP Note This tutorial contains a third important configuration file, not to be confused with the previous two: the virtual machine provisioning configuration file. This in itself is not specifically tied to Salt, but it also contains some Salt configuration. More on that in step 3. Also note that all configuration files are YAML files. So indentation matters. .RE .IP [1] 5 Salt also works with "masterless" configuration where a minion is autonomous (in which case salt can be seen as a local configuration tool), or in "multiple master" configuration. See the documentation for more on that. .SS Before Digging In, The Architecture Of The Salt Cluster .SS Salt Master .sp The "Salt master" server is going to be the Mac OS machine, directly. Commands will be run from a terminal app, so Salt will need to be installed on the Mac. This is going to be more convenient for toying around with configuration files. .SS Salt Minion .sp We\(aqll only have one "Salt minion" server. It is going to be running on a Virtual Machine running on the Mac, using VirtualBox. It will run an Ubuntu distribution. .SS Step 1 \- Configuring The Salt Master On Your Mac .sp \fI\%official documentation\fP .sp Because Salt has a lot of dependencies that are not built in Mac OS X, we will use Homebrew to install Salt. 
Homebrew is a package manager for the Mac; it\(aqs great, so use it (for this tutorial at least!). Some people spend a lot of time installing libs by hand to better understand dependencies, and then realize how useful a package manager is once they\(aqre configuring a brand new machine and have to do it all over again. It also lets you \fIuninstall\fP things easily. .IP Note Brew is a Ruby program (Ruby is installed by default with your Mac). Brew downloads, compiles and links software. The linking phase is when compiled software is deployed on your machine. It may conflict with manually installed software, especially in the /usr/local directory. It\(aqs ok: remove the manually installed version, then refresh the link by typing \fBbrew link \(aqpackageName\(aq\fP. Brew has a \fBbrew doctor\fP command that can help you troubleshoot. It\(aqs a great command, use it often. Brew requires the xcode command line tools. When you run brew the first time it asks you to install them if they\(aqre not already on your system. Brew installs software in /usr/local/bin (system bins are in /usr/bin). In order to use those bins you need your $PATH to search there first. Brew tells you if your $PATH needs to be fixed. .RE .IP Tip Use the keyboard shortcut \fBcmd + shift + period\fP in the "open" Mac OS X dialog box to display hidden files and folders, such as .profile. .RE .SS Install Homebrew .sp Install Homebrew from \fI\%http://brew.sh/\fP, or just type: .sp .nf .ft C ruby \-e "$(curl \-fsSL https://raw.github.com/Homebrew/homebrew/go/install)" .ft P .fi .sp Now type the following commands in your terminal (you may want to type \fBbrew doctor\fP after each to make sure everything\(aqs fine): .sp .nf .ft C brew install python brew install swig brew install zmq .ft P .fi .IP Note zmq is ZeroMQ. It\(aqs a fantastic library used for server\-to\-server network communication and is at the core of Salt efficiency. .RE .SS Install Salt .sp You should now have everything ready to launch this command: .sp .nf .ft C pip install salt .ft P .fi .IP Note There should be no need for \fBsudo pip install salt\fP. Brew installed Python for your user, so you should have all the access. In case you would like to check, type \fBwhich python\fP to ensure that it\(aqs /usr/local/bin/python, and \fBwhich pip\fP which should be /usr/local/bin/pip. .RE .sp Now type \fBpython\fP in a terminal, then \fBimport salt\fP. There should be no errors. Now exit the Python terminal using \fBexit()\fP. .SS Create The Master Configuration .sp If the default /etc/salt/master configuration file was not created, copy\-paste it from here: \fI\%http://docs.saltstack.com/ref/configuration/examples.html#configuration-examples-master\fP .IP Note \fB/etc/salt/master\fP is a file, not a folder. .RE .sp The Salt master needs a few customizations to be able to run on Mac OS X: .sp .nf .ft C sudo launchctl limit maxfiles 4096 8192 .ft P .fi .sp In the /etc/salt/master file, change max_open_files to 8192 (or just add the line \fBmax_open_files: 8192\fP (no quotes) if it doesn\(aqt already exist). .sp You should now be able to launch the Salt master: .sp .nf .ft C sudo salt\-master \-\-log\-level=all .ft P .fi .sp There should be no errors when running the above command. .IP Note This command is normally run as a daemon, but for toying around, we\(aqll keep it running on a terminal to monitor the activity. .RE .sp Now that the master is set, let\(aqs configure a minion on a VM.
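.sp
Optionally, before moving on you can check that the master is bound to the two TCP ports mentioned earlier (4505 and 4506). This check is not part of the original walk\-through and assumes \fBlsof\fP, which ships with OS X:
.sp
.nf
.ft C
sudo lsof \-i :4505
sudo lsof \-i :4506
.ft P
.fi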
.SS Step 2 \- Configuring The Minion VM .sp The Salt minion is going to run on a virtual machine. There are many software options that let you run virtual machines on a Mac, but for this tutorial we\(aqre going to use VirtualBox. In addition to VirtualBox, we will use Vagrant, which allows you to create the base VM configuration. .sp Vagrant lets you build ready\-to\-use VM images, starting from an OS image and customizing it using "provisioners". In our case, we\(aqll use it to: .INDENT 0.0 .IP \(bu 2 Download the base Ubuntu image .IP \(bu 2 Install salt on that Ubuntu image (Salt is going to be the "provisioner" for the VM). .IP \(bu 2 Launch the VM .IP \(bu 2 SSH into the VM to debug .IP \(bu 2 Stop the VM once you\(aqre done. .UNINDENT .SS Install VirtualBox .sp Go get it here: \fI\%https://www.virtualbox.org/wiki/Downloads\fP (click on VirtualBox for OS X hosts => x86/amd64) .SS Install Vagrant .sp Go get it here: \fI\%http://downloads.vagrantup.com/\fP and choose the latest version (1.3.5 at time of writing), then the .dmg file. Double\-click to install it. Make sure the \fBvagrant\fP command is found when run in the terminal. Type \fBvagrant\fP. It should display a list of commands. .SS Create The Minion VM Folder .sp Create a folder in which you will store your minion\(aqs VM. In this tutorial, it\(aqs going to be a minion folder in the $home directory. .sp .nf .ft C cd $home mkdir minion .ft P .fi .SS Initialize Vagrant .sp From the minion folder, type .sp .nf .ft C vagrant init .ft P .fi .sp This command creates a default Vagrantfile configuration file. This configuration file will be used to pass configuration parameters to the Salt provisioner in Step 3. .SS Import Precise64 Ubuntu Box .sp .nf .ft C vagrant box add precise64 http://files.vagrantup.com/precise64.box .ft P .fi .IP Note This box is added at the global Vagrant level. You only need to do it once, as each VM will use this same file. .RE .SS Modify the Vagrantfile .sp Modify ./minion/Vagrantfile to use the precise64 box. Change the \fBconfig.vm.box\fP line to: .sp .nf .ft C config.vm.box = "precise64" .ft P .fi .sp Uncomment the line creating a host\-only IP. This is the IP of your minion (you can change it to something else if that IP is already in use): .sp .nf .ft C config.vm.network :private_network, ip: "192.168.33.10" .ft P .fi .sp At this point you should have a VM that can run, although there won\(aqt be much in it. Let\(aqs check that. .SS Checking The VM .sp From the $home/minion folder type: .sp .nf .ft C vagrant up .ft P .fi .sp A log showing the VM booting should be present. Once it\(aqs done you\(aqll be back at the terminal: .sp .nf .ft C ping 192.168.33.10 .ft P .fi .sp The VM should respond to your ping request. .sp Now log into the VM via SSH using Vagrant again: .sp .nf .ft C vagrant ssh .ft P .fi .sp You should see the shell prompt change to something similar to \fBvagrant@precise64:~$\fP meaning you\(aqre inside the VM. From there, enter the following: .sp .nf .ft C ping 10.0.2.2 .ft P .fi .IP Note That IP is the IP of your VM host (the Mac OS X machine). The number is a VirtualBox default and is displayed in the log after the Vagrant ssh command. We\(aqll use that IP to tell the minion where the Salt master is. Once you\(aqre done, end the ssh session by typing \fBexit\fP. .RE .sp It\(aqs now time to connect the VM to the Salt master. .SS Step 3 \- Connecting Master and Minion .SS Creating The Minion Configuration File .sp Create the \fB/etc/salt/minion\fP file.
In that file, put the following lines, giving the ID for this minion and the IP of the master: .sp .nf .ft C master: 10.0.2.2 id: \(aqminion1\(aq file_client: remote .ft P .fi .sp Minions authenticate with the master using keys. Keys are generated automatically if you don\(aqt provide them, and the master can accept them later on. However, this requires accepting the minion key every time the minion is destroyed or created (which could be quite often). A better way is to create those keys in advance, feed them to the minion, and authorize them once. .SS Preseed minion keys .sp From the minion folder on your Mac run: .sp .nf .ft C sudo salt\-key \-\-gen\-keys=minion1 .ft P .fi .sp This should create two files: minion1.pem and minion1.pub. Since those files have been created using sudo, but will be used by vagrant, you need to change ownership: .sp .nf .ft C sudo chown youruser:yourgroup minion1.pem sudo chown youruser:yourgroup minion1.pub .ft P .fi .sp Then copy the .pub file into the list of accepted minions: .sp .nf .ft C sudo cp minion1.pub /etc/salt/pki/master/minions/minion1 .ft P .fi .SS Modify Vagrantfile to Use Salt Provisioner .sp Let\(aqs now modify the Vagrantfile used to provision the Salt VM. Add the following section in the Vagrantfile (note: it should be at the same indentation level as the other properties): .sp .nf .ft C # salt\-vagrant config config.vm.provision :salt do |salt| salt.run_highstate = true salt.minion_config = "/etc/salt/minion" salt.minion_key = "./minion1.pem" salt.minion_pub = "./minion1.pub" end .ft P .fi .sp Now destroy the VM and recreate it from the minion folder: .sp .nf .ft C vagrant destroy vagrant up .ft P .fi .sp If everything is fine you should see the following message: .sp .nf .ft C "Bootstrapping Salt... (this may take a while) Salt successfully configured and installed!" .ft P .fi .SS Checking Master\-Minion Communication .sp To make sure the master and minion are talking to each other, enter the following: .sp .nf .ft C sudo salt \(aq*\(aq test.ping .ft P .fi .sp You should see your minion answering the ping. It\(aqs now time to do some configuration. .SS Step 4 \- Configure Services to Install On the Minion .sp In this step we\(aqll use the Salt master to instruct our minion to install Nginx. .SS Checking the system\(aqs original state .sp First, make sure that an HTTP server is not installed on our minion. When opening a browser directed at \fBhttp://192.168.33.10/\fP you should get an error saying the site cannot be reached. .SS Initialize the top.sls file .sp System configuration is done in the /srv/salt/top.sls file (and subfiles/folders), and then applied by running the \fBstate.highstate\fP command to have the Salt master give orders so minions will update their instructions and run the associated commands. .sp First, create an empty file on your Salt master (Mac OS X machine): .sp .nf .ft C touch /srv/salt/top.sls .ft P .fi .sp When the file is empty, or if no configuration is found for our minion, an error is reported: .sp .nf .ft C sudo salt \(aqminion1\(aq state.highstate .ft P .fi .sp This should return an error stating: "No Top file or external nodes data matches found". .SS Create The Nginx Configuration .sp Now is finally the time to enter the real meat of our server\(aqs configuration. For this tutorial our minion will be treated as a web server that needs to have Nginx installed. .sp Insert the following lines into the \fB/srv/salt/top.sls\fP file (which should currently be empty).
.sp .nf .ft C base: \(aqminion1\(aq: \- bin.nginx .ft P .fi .sp Now create a \fB/srv/salt/bin/nginx.sls\fP file containing the following: .sp .nf .ft C nginx: pkg.installed: \- name: nginx service.running: \- enable: True \- reload: True .ft P .fi .SS Check Minion State .sp Finally run the state.highstate command again: .sp .nf .ft C sudo salt \(aqminion1\(aq state.highstate .ft P .fi .sp You should see a log showing that the Nginx package has been installed and the service configured. To prove it, open your browser and navigate to \fI\%http://192.168.33.10/\fP; you should see the standard Nginx welcome page. .sp Congratulations! .SS Where To Go From Here .sp A full description of configuration management within Salt (sls files among other things) is available here: \fI\%http://docs.saltstack.com/index.html#configuration-management\fP .SS Writing Salt Tests .IP Note THIS TUTORIAL IS A WORK IN PROGRESS .RE .sp Salt comes with a powerful integration and unit test suite. The test suite allows for the fully automated run of integration and/or unit tests from a single interface. The integration tests are surprisingly easy to write and can be written to be either destructive or non\-destructive. .SS Getting Set Up For Tests .sp To walk through adding an integration test, start by getting the latest development code and the test system from GitHub: .IP Note The develop branch often has failing tests and should always be considered a staging area. For a checkout that tests should be running perfectly on, please check out a specific release tag (such as v2014.1.4). .RE .sp .nf .ft C git clone git@github.com:saltstack/salt.git pip install git+https://github.com/saltstack/salt\-testing.git#egg=SaltTesting .ft P .fi .sp Now that a fresh checkout is available, run the test suite. .SS Destructive vs Non\-destructive .sp Since Salt is used to change the settings and behavior of systems, the best approach to running tests is often to make actual changes to an underlying system. This is where the concept of destructive integration tests comes into play. Tests can be written to alter the system they are running on. This capability is what fills in the gap needed to properly test aspects of system management like package installation. .sp To write a destructive test, import and use the \fIdestructiveTest\fP decorator for the test method: .sp .nf .ft C import integration from salttesting.helpers import destructiveTest class PkgTest(integration.ModuleCase): @destructiveTest def test_pkg_install(self): ret = self.run_function(\(aqpkg.install\(aq, name=\(aqfinch\(aq) self.assertSaltTrueReturn(ret) ret = self.run_function(\(aqpkg.purge\(aq, name=\(aqfinch\(aq) self.assertSaltTrueReturn(ret) .ft P .fi .SS Automated Test Runs .sp SaltStack maintains a Jenkins server which can be viewed at \fI\%http://jenkins.saltstack.com\fP. The tests executed from this Jenkins server create fresh virtual machines for each test run, then execute the destructive tests on the new clean virtual machine. This allows for the execution of tests across supported platforms. .SS Salt Cloud .SS Salt as a Cloud Controller .sp In Salt 0.14.0 advanced cloud control systems were introduced, allowing for private cloud VMs to be managed directly with Salt. This system is generally referred to as \fBSalt Virt\fP. .sp The Salt Virt system already exists and is installed within Salt itself; this means that beyond setting up Salt, no additional code needs to be deployed. .sp The main goal of Salt Virt is to facilitate a very fast and simple cloud.
A cloud that can scale and that is fully featured. Salt Virt comes with the ability to set up and manage complex virtual machine networking, powerful image and disk management, as well as virtual machine migration with and without shared storage. .sp This means that Salt Virt can be used to create a cloud from a blade center and a SAN, but can also create a cloud out of a swarm of Linux Desktops without a single shared storage system. Salt Virt can make clouds from truly commodity hardware, but can also harness the power of specialized hardware. .SS Setting up Hypervisors .sp The first step in setting up the hypervisors involves getting the correct software installed and setting up the hypervisor network interfaces. .SS Installing Hypervisor Software .sp Salt Virt is made to be hypervisor agnostic, but currently the only fully implemented hypervisor is KVM via libvirt. .sp The required software for a hypervisor is libvirt and KVM. For advanced features, install libguestfs or qemu\-nbd. .IP Note Libguestfs and qemu\-nbd allow for virtual machine images to be mounted before startup and pre\-seeded with configurations and a Salt minion. .RE .sp This SLS will set up the needed software for a hypervisor and run the routines to set up the libvirt PKI keys. .IP Note The package names and setup used here are Red Hat specific; different package names will be required for different platforms. .RE .sp .nf .ft C libvirt: pkg: \- installed file: \- managed \- name: /etc/sysconfig/libvirtd \- contents: \(aqLIBVIRTD_ARGS="\-\-listen"\(aq \- require: \- pkg: libvirt libvirt: \- keys \- require: \- pkg: libvirt service: \- running \- name: libvirtd \- require: \- pkg: libvirt \- network: br0 \- libvirt: libvirt \- watch: \- file: libvirt libvirt\-python: pkg: \- installed libguestfs: pkg: \- installed \- pkgs: \- libguestfs \- libguestfs\-tools .ft P .fi .SS Hypervisor Network Setup .sp The hypervisors will need to be running a network bridge to serve up network devices for virtual machines; this formula will set up a standard bridge on a hypervisor, connecting the bridge to eth0: .sp .nf .ft C eth0: network.managed: \- enabled: True \- type: eth \- bridge: br0 br0: network.managed: \- enabled: True \- type: bridge \- proto: dhcp \- require: \- network: eth0 .ft P .fi .SS Virtual Machine Network Setup .sp Salt Virt comes with a system to model the network interfaces used by the deployed virtual machines; by default a single interface is created for the deployed virtual machine and is bridged to \fBbr0\fP. To get going with the default networking setup, ensure that the bridge interface named \fBbr0\fP exists on the hypervisor and is bridged to an active network device. .IP Note To use more advanced networking in Salt Virt, read the \fISalt Virt Networking\fP document: .sp \fBSalt Virt Networking\fP .RE .SS Libvirt State .sp One of the challenges of deploying a libvirt based cloud is the distribution of libvirt certificates. These certificates allow for virtual machine migration. Salt comes with a system used to auto deploy these certificates. Salt manages the signing authority key and generates keys for libvirt clients on the master, signs them with the certificate authority and uses pillar to distribute them. This is managed via the \fBlibvirt\fP state. Simply execute this formula on the minion to ensure that the certificate is in place and up to date: .IP Note The above formula includes the calls needed to set up libvirt keys.
.RE .sp .nf .ft C libvirt_keys: libvirt.keys .ft P .fi .SS Getting Virtual Machine Images Ready .sp Salt Virt, requires that virtual machine images be provided as these are not generated on the fly. Generating these virtual machine images differs greatly based on the underlying platform. .sp Virtual machine images can be manually created using KVM and running through the installer, but this process is not recommended since it is very manual and prone to errors. .sp Virtual Machine generation applications are available for many platforms: .INDENT 0.0 .TP .B vm\-builder: \fI\%https://wiki.debian.org/VMBuilder\fP .UNINDENT .sp Once virtual machine images are available, the easiest way to make them available to Salt Virt is to place them in the Salt file server. Just copy an image into \fB/srv/salt\fP and it can now be used by Salt Virt. .sp For purposes of this demo, the file name \fBcentos.img\fP will be used. .SS Existing Virtual Machine Images .sp Many existing Linux distributions distribute virtual machine images which can be used with Salt Virt. Please be advised that NONE OF THESE IMAGES ARE SUPPORTED BY SALTSTACK. .SS CentOS .sp These images have been prepared for OpenNebula but should work without issue with Salt Virt, only the raw qcow image file is needed: \fI\%http://wiki.centos.org/Cloud/OpenNebula\fP .SS Fedora Linux .sp Images for Fedora Linux can be found here: \fI\%http://fedoraproject.org/en/get-fedora#clouds\fP .SS Ubuntu Linux .sp Images for Ubuntu Linux can be found here: \fI\%http://cloud-images.ubuntu.com/\fP .SS Using Salt Virt .sp With hypervisors set up and virtual machine images ready, Salt can start issuing cloud commands. .sp Start by running a Salt Virt hypervisor info command: .sp .nf .ft C salt\-run virt.hyper_info .ft P .fi .sp This will query what the running hypervisor stats are and display information for all configured hypervisors. This command will also validate that the hypervisors are properly configured. .sp Now that hypervisors are available a virtual machine can be provisioned. The \fBvirt.init\fP routine will create a new virtual machine: .sp .nf .ft C salt\-run virt.init centos1 2 512 salt://centos.img .ft P .fi .sp This command assumes that the CentOS virtual machine image is sitting in the root of the Salt fileserver. Salt Virt will now select a hypervisor to deploy the new virtual machine on and copy the virtual machine image down to the hypervisor. .sp Once the VM image has been copied down the new virtual machine will be seeded. Seeding the VMs involves setting pre\-authenticated Salt keys on the new VM and if needed, will install the Salt Minion on the new VM before it is started. .IP Note The biggest bottleneck in starting VMs is when the Salt Minion needs to be installed. Making sure that the source VM images already have Salt installed will GREATLY speed up virtual machine deployment. .RE .sp Now that the new VM has been prepared, it can be seen via the \fBvirt.query\fP command: .sp .nf .ft C salt\-run virt.query .ft P .fi .sp This command will return data about all of the hypervisors and respective virtual machines. .sp Now that the new VM is booted it should have contacted the Salt Master, a \fBtest.ping\fP will reveal if the new VM is running. .SS Migrating Virtual Machines .sp Salt Virt comes with full support for virtual machine migration, and using the libvirt state in the above formula makes migration possible. .sp A few things need to be available to support migration. 
Many operating systems turn on firewalls when originally set up; the firewall needs to be opened up to allow libvirt and KVM to communicate with each other and execute migration routines. On Red Hat based hypervisors in particular, port 16514 needs to be opened: .sp .nf .ft C iptables \-A INPUT \-m state \-\-state NEW \-m tcp \-p tcp \-\-dport 16514 \-j ACCEPT .ft P .fi .IP Note More in\-depth information regarding distribution specific firewall settings can be read in: .sp \fBOpening the Firewall up for Salt\fP .RE .sp Salt also needs an additional flag to be turned on: the \fBvirt.tunnel\fP option. This flag tells Salt to run migrations securely via the libvirt TLS tunnel and to use port 16514. Without \fBvirt.tunnel\fP libvirt tries to bind to random ports when running migrations. To turn on \fBvirt.tunnel\fP, simply add it to the master config file: .sp .nf .ft C virt.tunnel: True .ft P .fi .sp Once the master config has been updated, restart the master and send out a call to the minions to refresh their modules to pick up on the change: .sp .nf .ft C salt \e* saltutil.refresh_modules .ft P .fi .sp Now, migration routines can be run! To migrate a VM, simply run the Salt Virt migrate routine: .sp .nf .ft C salt\-run virt.migrate centos .ft P .fi .SS VNC Consoles .sp Salt Virt also sets up VNC consoles by default, allowing for remote visual consoles to be opened up. The information from a \fBvirt.query\fP routine will display the VNC console port for the specific VMs: .sp .nf .ft C centos CPU: 2 Memory: 524288 State: running Graphics: vnc \- hyper6:5900 Disk \- vda: Size: 2.0G File: /srv/salt\-images/ubuntu2/system.qcow2 File Format: qcow2 Nic \- ac:de:48:98:08:77: Source: br0 Type: bridge .ft P .fi .sp The line \fIGraphics: vnc \- hyper6:5900\fP holds the key. First, the named port, in this case 5900, will need to be open in the hypervisor\(aqs firewall. Once the port is open, the console can be easily opened via vncviewer: .sp .nf .ft C vncviewer hyper6:5900 .ft P .fi .sp By default there is no VNC security set up on these ports, which suggests keeping them firewalled and mandating that SSH tunnels be used to access these VNC interfaces. Keep in mind that activity on a VNC interface can be viewed by any other user that accesses that same VNC interface, and any other user logging in can also operate alongside the logged\-in user on the virtual machine. .SS Conclusion .sp Now with Salt Virt running, new hypervisors can be seamlessly added just by running the above states on new bare metal machines, and these machines will be instantly available to Salt Virt. .SS Halite .SS Installing and Configuring Halite .sp In this tutorial, we\(aqll walk through installing and setting up Halite. The current version of Halite is considered pre\-alpha and is supported only in Salt \fBv2014.1.0 (Hydrogen)\fP or greater. Additional information is available on GitHub: \fI\%https://github.com/saltstack/halite\fP .sp Before beginning this tutorial, ensure that the salt\-master is installed. To install the salt\-master, please review the installation documentation: \fI\%http://docs.saltstack.com/topics/installation/index.html\fP .IP Note Halite only works with Salt versions greater than 2014.1.0 (Hydrogen). .RE .SS Installing Halite Via Package .sp On CentOS, RHEL, or Fedora: .sp .nf .ft C $ yum install python\-halite .ft P .fi .IP Note By default python\-halite only installs CherryPy.
If you would like to use a different webserver please review the instructions below to install pip and your server of choice. The package does not modify the master configuration with \fB/etc/salt/master\fP. .RE .SS Installing Halite Using pip .sp To begin the installation of Halite from PyPI, you\(aqll need to install pip. The Salt package, as well as the bootstrap, do not install pip by default. .sp On CentOS, RHEL, or Fedora: .sp .nf .ft C $ yum install python\-pip .ft P .fi .sp On Debian: .sp .nf .ft C $ apt\-get install python\-pip .ft P .fi .sp Once you have pip installed, use it to install halite: .sp .nf .ft C $ pip install \-U halite .ft P .fi .sp Depending on the webserver you want to run halite through, you\(aqll need to install that piece as well. On RHEL based distros, use one of the following: .sp .nf .ft C $ pip install cherrypy .ft P .fi .sp .nf .ft C $ pip install paste .ft P .fi .sp .nf .ft C $ yum install python\-devel $ yum install gcc $ pip install gevent $ pip install pyopenssl .ft P .fi .sp On Debian based distributions: .sp .nf .ft C $ pip install CherryPy .ft P .fi .sp .nf .ft C $ pip install paste .ft P .fi .sp .nf .ft C $ apt\-get install gcc $ apt\-get install python\-dev $ apt\-get install libevent\-dev $ pip install gevent $ pip install pyopenssl .ft P .fi .SS Configuring Halite Permissions .sp Configuring Halite access permissions is easy. By default, you only need to ensure that the @runner group is configured. In the \fB/etc/salt/master\fP file, uncomment and modify the following lines: .sp .nf .ft C external_auth: pam: testuser: \- .* \- \(aq@runner\(aq .ft P .fi .IP Note You cannot use the root user for pam login; it will fail to authenticate. .RE .sp Halite uses the runner manage.present to get the status of minions, so runner permissions are required. For example: .sp .nf .ft C external_auth: pam: mytestuser: \- .* \- \(aq@runner\(aq \- \(aq@wheel\(aq .ft P .fi .sp Currently Halite allows, but does not require, any wheel modules. .SS Configuring Halite Settings .sp Once you\(aqve configured the permissions for Halite, you\(aqll need to set up the Halite settings in the /etc/salt/master file. Halite supports CherryPy, Paste and Gevent out of the box. .sp To configure cherrypy, add the following to the bottom of your /etc/salt/master file: .sp .nf .ft C halite: level: \(aqdebug\(aq server: \(aqcherrypy\(aq host: \(aq0.0.0.0\(aq port: \(aq8080\(aq cors: False tls: True certpath: \(aq/etc/pki/tls/certs/localhost.crt\(aq keypath: \(aq/etc/pki/tls/certs/localhost.key\(aq pempath: \(aq/etc/pki/tls/certs/localhost.pem\(aq .ft P .fi .sp If you wish to use paste: .sp .nf .ft C halite: level: \(aqdebug\(aq server: \(aqpaste\(aq host: \(aq0.0.0.0\(aq port: \(aq8080\(aq cors: False tls: True certpath: \(aq/etc/pki/tls/certs/localhost.crt\(aq keypath: \(aq/etc/pki/tls/certs/localhost.key\(aq pempath: \(aq/etc/pki/tls/certs/localhost.pem\(aq .ft P .fi .sp To use gevent: .sp .nf .ft C halite: level: \(aqdebug\(aq server: \(aqgevent\(aq host: \(aq0.0.0.0\(aq port: \(aq8080\(aq cors: False tls: True certpath: \(aq/etc/pki/tls/certs/localhost.crt\(aq keypath: \(aq/etc/pki/tls/certs/localhost.key\(aq pempath: \(aq/etc/pki/tls/certs/localhost.pem\(aq .ft P .fi .sp The "cherrypy" and "gevent" servers require the certpath and keypath files to run tls/ssl. The .crt file holds the public cert and the .key file holds the private key. Whereas the "paste" server requires a single .pem file that contains both the cert and key. 
This can be created simply by concatenating the .crt and .key files. .sp If you want to use a self\-signed cert, you can create one using the Salt.tls module: .IP Note The following command needs to be run on your salt master. .RE .sp .nf .ft C salt\-call tls.create_self_signed_cert tls .ft P .fi .sp Note that certs generated by the above command can be found under the \fB/etc/pki/tls/certs/\fP directory. When using self\-signed certs, browsers will need approval before accepting the cert. If the web application page has been cached with a non\-HTTPS version of the app, then the browser cache will have to be cleared before it will recognize and prompt to accept the self\-signed certificate. .SS Starting Halite .sp Once you\(aqve configured the halite section of your /etc/salt/master, you can restart the salt\-master service, and your halite instance will be available. Depending on your configuration, the instance will be available either at \fI\%http://localhost:8080/app\fP, \fI\%http://domain:8080/app\fP, or \fI\%http://123.456.789.012:8080/app\fP . .IP Note halite requires an HTML 5 compliant browser. .RE .sp All logs relating to halite are logged to the default /var/log/salt/master file. .SH TARGETING MINIONS .sp Targeting minions is specifying which minions should run a command or execute a state by matching against hostnames, or system information, or defined groups, or even combinations thereof. .sp For example the command \fBsalt web1 apache.signal restart\fP to restart the Apache httpd server specifies the machine \fBweb1\fP as the target and the command will only be run on that one minion. .sp Similarly when using States, the following \fItop file\fP specifies that only the \fBweb1\fP minion should execute the contents of \fBwebserver.sls\fP: .sp .nf .ft C base: \(aqweb1\(aq: \- webserver .ft P .fi .sp There are many ways to target individual minions or groups of minions in Salt: .SS Matching the \fBminion id\fP .sp Each minion needs a unique identifier. By default when a minion starts for the first time it chooses its FQDN as that identifier. The minion id can be overridden via the minion\(aqs \fBid\fP configuration setting. .IP Tip minion id and minion keys .sp The \fIminion id\fP is used to generate the minion\(aqs public/private keys and if it ever changes the master must then accept the new key as though the minion was a new host. .RE .SS Globbing .sp The default matching that Salt utilizes is \fI\%shell-style globbing\fP around the \fIminion id\fP. This also works for states in the \fItop file\fP. .IP Note You must wrap \fBsalt\fP calls that use globbing in single\-quotes to prevent the shell from expanding the globs before Salt is invoked. 
.RE .sp Match all minions: .sp .nf .ft C salt \(aq*\(aq test.ping .ft P .fi .sp Match all minions in the example.net domain or any of the example domains: .sp .nf .ft C salt \(aq*.example.net\(aq test.ping salt \(aq*.example.*\(aq test.ping .ft P .fi .sp Match all the \fBwebN\fP minions in the example.net domain (\fBweb1.example.net\fP, \fBweb2.example.net\fP … \fBwebN.example.net\fP): .sp .nf .ft C salt \(aqweb?.example.net\(aq test.ping .ft P .fi .sp Match the \fBweb1\fP through \fBweb5\fP minions: .sp .nf .ft C salt \(aqweb[1\-5]\(aq test.ping .ft P .fi .sp Match the \fBweb1\fP and \fBweb3\fP minions: .sp .nf .ft C salt \(aqweb[1,3]\(aq test.ping .ft P .fi .sp Match the \fBweb\-x\fP, \fBweb\-y\fP, and \fBweb\-z\fP minions: .sp .nf .ft C salt \(aqweb\-[x\-z]\(aq test.ping .ft P .fi .IP Note For additional targeting methods please review the \fBcompound matchers\fP documentation. .RE .SS Regular Expressions .sp Minions can be matched using Perl\-compatible \fI\%regular expressions\fP (which is globbing on steroids and a ton of caffeine). .sp Match both \fBweb1\-prod\fP and \fBweb1\-devel\fP minions: .sp .nf .ft C salt \-E \(aqweb1\-(prod|devel)\(aq test.ping .ft P .fi .sp When using regular expressions in a State\(aqs \fItop file\fP, you must specify the matcher as the first option. The following example executes the contents of \fBwebserver.sls\fP on the above\-mentioned minions. .sp .nf .ft C base: \(aqweb1\-(prod|devel)\(aq: \- match: pcre \- webserver .ft P .fi .SS Lists .sp At the most basic level, you can specify a flat list of minion IDs: .sp .nf .ft C salt \-L \(aqweb1,web2,web3\(aq test.ping .ft P .fi .SS Grains .sp Salt comes with an interface to derive information about the underlying system. This is called the grains interface, because it presents salt with grains of information. .sp The grains interface is made available to Salt modules and components so that the right salt minion commands are automatically available on the right systems. .sp It is important to remember that grains are bits of information loaded when the salt minion starts, so this information is static. Because grains data is unchanging, it is suited to static information such as the running kernel or the operating system. .IP Note Grains resolve to lowercase letters. For example, \fBFOO\fP and \fBfoo\fP target the same grain. .RE .sp Match all CentOS minions: .sp .nf .ft C salt \-G \(aqos:CentOS\(aq test.ping .ft P .fi .sp Match all minions with 64\-bit CPUs, and return number of CPU cores for each matching minion: .sp .nf .ft C salt \-G \(aqcpuarch:x86_64\(aq grains.item num_cpus .ft P .fi .sp Additionally, globs can be used in grain matches, and grains that are nested in a \fI\%dictionary\fP can be matched by adding a colon for each level that is traversed. For example, the following will match hosts that have a grain called \fBec2_tags\fP, which itself is a \fI\%dict\fP with a key named \fBenvironment\fP, which has a value that contains the word \fBproduction\fP: .sp .nf .ft C salt \-G \(aqec2_tags:environment:*production*\(aq .ft P .fi .SS Listing Grains .sp Available grains can be listed by using the \(aqgrains.ls\(aq module: .sp .nf .ft C salt \(aq*\(aq grains.ls .ft P .fi .sp Grains data can be listed by using the \(aqgrains.items\(aq module: .sp .nf .ft C salt \(aq*\(aq grains.items .ft P .fi .SS Grains in the Minion Config .sp Grains can also be statically assigned within the minion configuration file.
Just add the option \fBgrains\fP and pass options to it: .sp .nf .ft C grains: roles: \- webserver \- memcache deployment: datacenter4 cabinet: 13 cab_u: 14\-15 .ft P .fi .sp Then status data specific to your servers can be retrieved via Salt, or used inside of the State system for matching. It also makes targeting, in the case of the example above, as simple as matching on specific data about your deployment. .SS Grains in /etc/salt/grains .sp If you do not want to place your custom static grains in the minion config file, you can also put them in \fB/etc/salt/grains\fP on the minion. They are configured in the same way as in the above example, only without a top\-level \fBgrains:\fP key: .sp .nf .ft C roles: \- webserver \- memcache deployment: datacenter4 cabinet: 13 cab_u: 14\-15 .ft P .fi .SS Matching Grains in the Top File .sp With correctly configured grains on the Minion, the \fItop file\fP used in Pillar or during Highstate can be made very efficient. For example, consider the following configuration: .sp .nf .ft C \(aqnode_type:web\(aq: \- match: grain \- webserver \(aqnode_type:postgres\(aq: \- match: grain \- database \(aqnode_type:redis\(aq: \- match: grain \- redis \(aqnode_type:lb\(aq: \- match: grain \- lb .ft P .fi .sp For this example to work, you would need to have defined the grain \fBnode_type\fP for the minions you wish to match. This simple example is nice, but too much of the code is similar. To go one step further, Jinja templating can be used to simplify the \fItop file\fP. .sp .nf .ft C {% set node_type = salt[\(aqgrains.get\(aq](\(aqnode_type\(aq, \(aq\(aq) %} {% if node_type %} \(aqnode_type:{{ node_type }}\(aq: \- match: grain \- {{ node_type }} {% endif %} .ft P .fi .sp Using Jinja templating, only one match entry needs to be defined. .IP Note The example above uses the \fBgrains.get\fP function to account for minions which do not have the \fBnode_type\fP grain set. .RE .SS Writing Grains .sp Grains are easy to write. The grains interface is derived by executing all of the "public" functions found in the modules located in the grains package or the custom grains directory. The functions in the modules of the grains must return a Python \fI\%dict\fP, where the keys in the \fI\%dict\fP are the names of the grains and the values are the values. .sp Custom grains should be placed in a \fB_grains\fP directory located under the \fBfile_roots\fP specified by the master config file. They will be distributed to the minions when \fBstate.highstate\fP is run, or by executing the \fBsaltutil.sync_grains\fP or \fBsaltutil.sync_all\fP functions. .sp Before adding a grain to Salt, consider what the grain is and remember that grains need to be static data. If the data is something that is likely to change, consider using \fBPillar\fP instead. .IP Warning Custom grains will not be available in the top file until after the first \fIhighstate\fP. To make custom grains available on a minion\(aqs first highstate, it is recommended to use \fIthis example\fP to ensure that the custom grains are synced when the minion starts. .RE .SS Precedence .sp Core grains can be overridden by custom grains. As there are several ways of defining custom grains, there is an order of precedence which should be kept in mind when defining them. The order of evaluation is as follows: .INDENT 0.0 .IP 1. 3 Core grains. .IP 2. 3 Custom grains in \fB/etc/salt/grains\fP. .IP 3. 3 Custom grains in \fB/etc/salt/minion\fP. .IP 4. 3 Custom grain modules in \fB_grains\fP directory, synced to minions.
.UNINDENT .sp Each successive evaluation overrides the previous ones, so any grains defined in \fB/etc/salt/grains\fP that have the same name as a core grain will override that core grain. Similarly, \fB/etc/salt/minion\fP overrides both core grains and grains set in \fB/etc/salt/grains\fP, and custom grain modules will override \fIany\fP grains of the same name. .SS Examples of Grains .sp The core module in the grains package is where the main grains are loaded by the Salt minion and provides the principal example of how to write grains: .sp \fI\%https://github.com/saltstack/salt/blob/develop/salt/grains/core.py\fP .SS Syncing Grains .sp Syncing grains can be done a number of ways, they are automatically synced when \fBstate.highstate\fP is called, or (as noted above) the grains can be manually synced and reloaded by calling the \fBsaltutil.sync_grains\fP or \fBsaltutil.sync_all\fP functions. .SS Subnet/IP Address Matching .sp Minions can easily be matched based on IP address, or by subnet (using \fI\%CIDR\fP notation). .sp .nf .ft C salt \-S 192.168.40.20 test.ping salt \-S 10.0.0.0/24 test.ping .ft P .fi .IP Note Only IPv4 matching is supported at this time. .RE .SS Compound matchers .sp Compound matchers allow very granular minion targeting using any of Salt\(aqs matchers. The default matcher is a \fI\%glob\fP match, just as with CLI and \fItop file\fP matching. To match using anything other than a glob, prefix the match string with the appropriate letter from the table below, followed by an \fB@\fP sign. .TS center; |l|l|l|. _ T{ Letter T} T{ Match Type T} T{ Example T} _ T{ G T} T{ Grains glob T} T{ \fBG@os:Ubuntu\fP T} _ T{ E T} T{ PCRE Minion ID T} T{ \fBE@web\ed+\e.(dev|qa|prod)\e.loc\fP T} _ T{ P T} T{ Grains PCRE T} T{ \fBP@os:(RedHat|Fedora|CentOS)\fP T} _ T{ L T} T{ List of minions T} T{ \fBL@minion1.example.com,minion3.domain.com or bl*.domain.com\fP T} _ T{ I T} T{ Pillar glob T} T{ \fBI@pdata:foobar\fP T} _ T{ S T} T{ Subnet/IP address T} T{ \fBS@192.168.1.0/24\fP or \fBS@192.168.1.100\fP T} _ T{ R T} T{ Range cluster T} T{ \fBR@%foo.bar\fP T} _ .TE .sp Matchers can be joined using boolean \fBand\fP, \fBor\fP, and \fBnot\fP operators. .sp For example, the following string matches all Debian minions with a hostname that begins with \fBwebserv\fP, as well as any minions that have a hostname which matches the \fI\%regular expression\fP \fBweb\-dc1\-srv.*\fP: .sp .nf .ft C salt \-C \(aqwebserv* and G@os:Debian or E@web\-dc1\-srv.*\(aq test.ping .ft P .fi .sp That same example expressed in a \fItop file\fP looks like the following: .sp .nf .ft C base: \(aqwebserv* and G@os:Debian or E@web\-dc1\-srv.*\(aq: \- match: compound \- webserver .ft P .fi .sp Note that a leading \fBnot\fP is not supported in compound matches. Instead, something like the following must be done: .sp .nf .ft C salt \-C \(aq* and not G@kernel:Darwin\(aq test.ping .ft P .fi .sp Excluding a minion based on its ID is also possible: .sp .nf .ft C salt \-C \(aq* and not web\-dc1\-srv\(aq test.ping .ft P .fi .SS Precedence Matching .sp Matches can be grouped together with parentheses to explicitly declare precedence amongst groups. .sp .nf .ft C salt \-C \(aq( ms\-1 or G@id:ms\-3 ) and G@id:ms\-3\(aq test.ping .ft P .fi .IP Note Be certain to note that spaces are required between the parentheses and targets. Failing to obey this rule may result in incorrect targeting! .RE .SS Node groups .sp Nodegroups are declared using a compound target specification. The compound target documentation can be found \fBhere\fP. 
.sp The \fBnodegroups\fP master config file parameter is used to define nodegroups. Here\(aqs an example nodegroup configuration within \fB/etc/salt/master\fP: .sp .nf .ft C nodegroups: group1: \(aqL@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com\(aq group2: \(aqG@os:Debian and foo.domain.com\(aq .ft P .fi .IP Note The \fBL\fP within group1 is matching a list of minions, while the \fBG\fP in group2 is matching specific grains. See the \fBcompound matchers\fP documentation for more details. .RE .sp To match a nodegroup on the CLI, use the \fB\-N\fP command\-line option: .sp .nf .ft C salt \-N group1 test.ping .ft P .fi .sp To match a nodegroup in your \fItop file\fP, make sure to put \fB\- match: nodegroup\fP on the line directly following the nodegroup name. .sp .nf .ft C base: group1: \- match: nodegroup \- webserver .ft P .fi .IP Note When adding or modifying nodegroups to a master configuration file, the master must be restarted for those changes to be fully recognized. .sp A limited amount of functionality, such as targeting with \-N from the command\-line may be available without a restart. .RE .SS Batch Size .sp The \fB\-b\fP (or \fB\-\-batch\-size\fP) option allows commands to be executed on only a specified number of minions at a time. Both percentages and finite numbers are supported. .sp .nf .ft C salt \(aq*\(aq \-b 10 test.ping salt \-G \(aqos:RedHat\(aq \-\-batch\-size 25% apache.signal restart .ft P .fi .sp This will only run test.ping on 10 of the targeted minions at a time and then restart apache on 25% of the minions matching \fBos:RedHat\fP at a time and work through them all until the task is complete. This makes jobs like rolling web server restarts behind a load balancer or doing maintenance on BSD firewalls using carp much easier with salt. .sp The batch system maintains a window of running minions, so, if there are a total of 150 minions targeted and the batch size is 10, then the command is sent to 10 minions, when one minion returns then the command is sent to one additional minion, so that the job is constantly running on 10 minions. .SH STORING STATIC DATA IN THE PILLAR .sp Pillar is an interface for Salt designed to offer global values that can be distributed to all minions. Pillar data is managed in a similar way as the Salt State Tree. .sp Pillar was added to Salt in version 0.9.8 .IP Note Storing sensitive data .sp Unlike state tree, pillar data is only available for the targeted minion specified by the matcher type. This makes it useful for storing sensitive data specific to a particular minion. .RE .SS Declaring the Master Pillar .sp The Salt Master server maintains a pillar_roots setup that matches the structure of the file_roots used in the Salt file server. Like the Salt file server the \fBpillar_roots\fP option in the master config is based on environments mapping to directories. The pillar data is then mapped to minions based on matchers in a top file which is laid out in the same way as the state top file. Salt pillars can use the same matcher types as the standard top file. .sp The configuration for the \fBpillar_roots\fP in the master config file is identical in behavior and function as \fBfile_roots\fP: .sp .nf .ft C pillar_roots: base: \- /srv/pillar .ft P .fi .sp This example configuration declares that the base environment will be located in the \fB/srv/pillar\fP directory. It must not be in a subdirectory of the state tree. 
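.sp If more than one pillar environment is needed, additional environments can be declared under \fBpillar_roots\fP in the same way as with \fBfile_roots\fP. The following is only a sketch; the \fBdev\fP environment and its directory are hypothetical examples: .sp .nf .ft C
# Hypothetical example: a \(aqdev\(aq pillar environment alongside \(aqbase\(aq
pillar_roots:
  base:
    \- /srv/pillar
  dev:
    \- /srv/pillar/dev
.ft P .fi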
.sp The top file used matches the name of the top file used for States, and has the same structure: .sp \fB/srv/pillar/top.sls\fP .sp .nf .ft C base: \(aq*\(aq: \- packages .ft P .fi .sp In the above top file, it is declared that in the \(aqbase\(aq environment, the glob matching all minions will have the pillar data found in the \(aqpackages\(aq pillar available to it. Assuming the \(aqpillar_roots\(aq value of \(aq/srv/pillar\(aq taken from above, the \(aqpackages\(aq pillar would be located at \(aq/srv/pillar/packages.sls\(aq. .sp Another example shows how to use other standard top matching types to deliver specific salt pillar data to minions with different properties. .sp Here is an example using the \(aqgrains\(aq matcher to target pillars to minions by their \(aqos\(aq grain: .sp .nf .ft C dev: \(aqos:Debian\(aq: \- match: grain \- servers .ft P .fi .sp \fB/srv/pillar/packages.sls\fP .sp .nf .ft C {% if grains[\(aqos\(aq] == \(aqRedHat\(aq %} apache: httpd git: git {% elif grains[\(aqos\(aq] == \(aqDebian\(aq %} apache: apache2 git: git\-core {% endif %} company: Foo Industries .ft P .fi .sp The above pillar sets two key/value pairs. If a minion is running RedHat, then the \(aqapache\(aq key is set to \(aqhttpd\(aq and the \(aqgit\(aq key is set to the value of \(aqgit\(aq. If the minion is running Debian, those values are changed to \(aqapache2\(aq and \(aqgit\-core\(aq respectively. All minions that have this pillar targeted to them via a top file will have the key of \(aqcompany\(aq with a value of \(aqFoo Industries\(aq. .sp Consequently, this data can be used from within modules, renderers, State SLS files, and more via the shared pillar \fI\%dict\fP: .sp .nf .ft C apache: pkg: \- installed \- name: {{ pillar[\(aqapache\(aq] }} .ft P .fi .sp .nf .ft C git: pkg: \- installed \- name: {{ pillar[\(aqgit\(aq] }} .ft P .fi .sp Finally, the above states can utilize the values provided to them via Pillar. All pillar values targeted to a minion are available via the \(aqpillar\(aq dictionary. As seen in the above example, Jinja substitution can then be utilized to access the keys and values in the Pillar dictionary. .sp Note that you cannot just list key/value\-information in \fBtop.sls\fP. Instead, target a minion to a pillar file (for example, a \(aqcommon_pillar\(aq entry in the top file) and then list the keys and values in that pillar file (for example, \(aq/srv/pillar/common_pillar.sls\(aq). .SS Pillar namespace flattened .sp The separate pillar files all share the same namespace. Given a \fBtop.sls\fP of: .sp .nf .ft C base: \(aq*\(aq: \- packages \- services .ft P .fi .sp a \fBpackages.sls\fP file of: .sp .nf .ft C bind: bind9 .ft P .fi .sp and a \fBservices.sls\fP file of: .sp .nf .ft C bind: named .ft P .fi .sp Then a request for the \fBbind\fP pillar will only return \(aqnamed\(aq; the \(aqbind9\(aq value is not available. It is better to structure your pillar files with more hierarchy. For example your \fBpackages.sls\fP file could look like: .sp .nf .ft C packages: bind: bind9 .ft P .fi .SS Including Other Pillars .sp New in version 0.16.0. .sp Pillar SLS files may include other pillar files, similar to State files. Two syntaxes are available for this purpose.
The simple form simply includes the additional pillar as if it were part of the same file: .sp .nf .ft C include: \- users .ft P .fi .sp The full include form allows two additional options \-\- passing default values to the templating engine for the included pillar file as well as an optional key under which to nest the results of the included pillar: .sp .nf .ft C include: \- users: defaults: sudo: [\(aqbob\(aq, \(aqpaul\(aq] key: users .ft P .fi .sp With this form, the included file (users.sls) will be nested within the \(aqusers\(aq key of the compiled pillar. Additionally, the \(aqsudo\(aq value will be available as a template variable to users.sls. .SS Viewing Minion Pillar .sp Once the pillar is set up, the data can be viewed on the minion via the \fBpillar\fP module. The pillar module comes with two functions, \fBpillar.items\fP and \fBpillar.raw\fP. \fBpillar.items\fP will return a freshly reloaded pillar and \fBpillar.raw\fP will return the current pillar without a refresh: .sp .nf .ft C salt \(aq*\(aq pillar.items .ft P .fi .IP Note Prior to version 0.16.2, this function was named \fBpillar.data\fP. This function name is still supported for backwards compatibility. .RE .SS Pillar "get" Function .sp New in version 0.14.0. .sp The \fBpillar.get\fP function works much in the same way as the \fBget\fP method in a python dict, but with an enhancement: nested dict components can be extracted using a \fI:\fP delimiter. .sp If a structure like this is in pillar: .sp .nf .ft C foo: bar: baz: qux .ft P .fi .sp Extracting it from the raw pillar in an sls formula or file template is done this way: .sp .nf .ft C {{ pillar[\(aqfoo\(aq][\(aqbar\(aq][\(aqbaz\(aq] }} .ft P .fi .sp Now, with the new \fBpillar.get\fP function the data can be safely gathered and a default can be set, allowing the template to fall back if the value is not available: .sp .nf .ft C {{ salt[\(aqpillar.get\(aq](\(aqfoo:bar:baz\(aq, \(aqqux\(aq) }} .ft P .fi .sp This makes handling nested structures much easier. .IP Note \fBpillar.get()\fP vs \fBsalt[\(aqpillar.get\(aq]()\fP .sp It should be noted that within templating, the \fBpillar\fP variable is just a dictionary. This means that calling \fBpillar.get()\fP inside of a template will just use the default dictionary \fB.get()\fP function which does not include the extra \fB:\fP delimiter functionality. It must be called using the above syntax (\fBsalt[\(aqpillar.get\(aq](\(aqfoo:bar:baz\(aq, \(aqqux\(aq)\fP) to get the salt function, instead of the default dictionary behavior. .RE .SS Refreshing Pillar Data .sp When pillar data is changed on the master, the minions need to refresh the data locally. This is done with the \fBsaltutil.refresh_pillar\fP function. .sp .nf .ft C salt \(aq*\(aq saltutil.refresh_pillar .ft P .fi .sp This function triggers the minion to asynchronously refresh the pillar and will always return \fBNone\fP. .SS Targeting with Pillar .sp Pillar data can be used when targeting minions. This allows for ultimate control and flexibility when targeting minions. .sp .nf .ft C salt \-I \(aqsomekey:specialvalue\(aq test.ping .ft P .fi .sp Like with \fBGrains\fP, it is possible to use globbing as well as match nested values in Pillar, by adding colons for each level that is being traversed.
The below example would match minions with a pillar named \fBfoo\fP, which is a dict containing a key \fBbar\fP, with a value beginning with \fBbaz\fP: .sp .nf .ft C salt \-I \(aqfoo:bar:baz*\(aq test.ping .ft P .fi .SS Master Config In Pillar .sp For convenience the data stored in the master configuration file is made available in all minions\(aq pillars. This makes global configuration of services and systems very easy but may not be desired if sensitive data is stored in the master configuration. .sp To prevent the master config from being added to the pillar, set \fBpillar_opts\fP to \fBFalse\fP: .sp .nf .ft C pillar_opts: False .ft P .fi .SH REACTOR SYSTEM .sp Salt version 0.11.0 introduced the reactor system. The premise behind the reactor system is that with Salt\(aqs events and the ability to execute commands, a logic engine could be put in place to allow events to trigger actions, or more accurately, reactions. .sp This system binds sls files to event tags on the master. These sls files then define reactions. This means that the reactor system has two parts. First, the reactor option needs to be set in the master configuration file. The reactor option allows for event tags to be associated with sls reaction files. Second, these reaction files use highdata (like the state system) to define reactions to be executed. .SS Event System .sp A basic understanding of the event system is required to understand reactors. The event system is a local ZeroMQ PUB interface which fires salt events. This event bus is an open system used for sending information notifying Salt and other systems about operations. .sp The event system fires events with very specific criteria. Every event has a \fBtag\fP. Event tags allow for fast top level filtering of events. In addition to the tag, each event has a data structure. This data structure is a dict, which contains information about the event. .SS Mapping Events to Reactor SLS Files .sp Reactor SLS files and event tags are associated in the master config file. By default this is /etc/salt/master, or /etc/salt/master.d/reactor.conf. .sp In the master config section \(aqreactor:\(aq is a list of event tags to be matched, and each event tag has a list of reactor SLS files to be run. .sp .nf .ft C reactor: # Master config section "reactor" \- \(aqsalt/minion/*/start\(aq: # Match tag "salt/minion/*/start" \- /srv/reactor/start.sls # Things to do when a minion starts \- /srv/reactor/monitor.sls # Other things to do \- \(aqsalt/cloud/*/destroyed\(aq: # Globs can be used to match tags \- /srv/reactor/decommission.sls # Things to do when a server is removed .ft P .fi .sp Reactor sls files are similar to state and pillar sls files. They are by default yaml + Jinja templates and are passed familiar context variables. .sp They differ because of the addition of the \fBtag\fP and \fBdata\fP variables. .INDENT 0.0 .IP \(bu 2 The \fBtag\fP variable is just the tag in the fired event. .IP \(bu 2 The \fBdata\fP variable is the event\(aqs data dict. .UNINDENT .sp Here is a simple reactor sls: .sp .nf .ft C {% if data[\(aqid\(aq] == \(aqmysql1\(aq %} highstate_run: cmd.state.highstate: \- tgt: mysql1 {% endif %} .ft P .fi .sp This simple reactor file uses Jinja to further refine the reaction to be made. If the \fBid\fP in the event data is \fBmysql1\fP (in other words, if the name of the minion is \fBmysql1\fP) then the following reaction is defined. The same data structure and compiler used for the state system is used for the reactor system.
The only difference is that the data is matched up to the salt command API and the runner system. In this example, a command is published to the \fBmysql1\fP minion with a function of \fBstate.highstate\fP. Similarly, a runner can be called: .sp .nf .ft C {% if data[\(aqdata\(aq][\(aqoverstate\(aq] == \(aqrefresh\(aq %} overstate_run: runner.state.over {% endif %} .ft P .fi .sp This example will execute the state.overstate runner and initiate an overstate execution. .SS Fire an event .sp To fire an event from a minion, call \fBevent.fire_master\fP: .sp .nf .ft C salt\-call event.fire_master \(aq{"overstate": "refresh"}\(aq \(aqfoo\(aq .ft P .fi .sp After this is called, any reactor sls files matching event tag \fBfoo\fP will execute with \fB{{ data[\(aqdata\(aq][\(aqoverstate\(aq] }}\fP equal to \fB\(aqrefresh\(aq\fP. .sp See \fBsalt.modules.event\fP for more information. .SS Knowing what event is being fired .sp Knowing exactly which event is being fired and what data it has for use in the sls files can be challenging. The easiest way to see exactly what\(aqs going on is to use the \fBeventlisten.py\fP script. This script is not shipped in packages, but is part of the source. .sp If the master process is using the default socket, no additional options will be required. Otherwise, you will need to specify the socket location. .sp Example usage: .sp .nf .ft C wget https://raw.githubusercontent.com/saltstack/salt/develop/tests/eventlisten.py python eventlisten.py # OR python eventlisten.py \-\-sock\-dir /path/to/var/run/salt .ft P .fi .sp Example output: .sp .nf .ft C Event fired at Fri Dec 20 10:43:00 2013 ************************* Tag: salt/auth Data: {\(aq_stamp\(aq: \(aq2013\-12\-20_10:47:54.584699\(aq, \(aqact\(aq: \(aqaccept\(aq, \(aqid\(aq: \(aqfuzzer.domain.tld\(aq, \(aqpub\(aq: \(aq\-\-\-\-\-BEGIN PUBLIC KEY\-\-\-\-\-\enMIICIDANBgk+TRIMMED+EMZ8CAQE=\en\-\-\-\-\-END PUBLIC KEY\-\-\-\-\-\en\(aq, \(aqresult\(aq: True} Event fired at Fri Dec 20 10:43:01 2013 ************************* Tag: salt/minion/fuzzer.domain.tld/start Data: {\(aq_stamp\(aq: \(aq2013\-12\-20_10:43:01.638387\(aq, \(aqcmd\(aq: \(aq_minion_event\(aq, \(aqdata\(aq: \(aqMinion fuzzer.domain.tld started at Fri Dec 20 10:43:01 2013\(aq, \(aqid\(aq: \(aqfuzzer.domain.tld\(aq, \(aqpretag\(aq: None, \(aqtag\(aq: \(aqsalt/minion/fuzzer.domain.tld/start\(aq} .ft P .fi .SS Debugging the Reactor .sp The best window into the Reactor is to run the master in the foreground with debug logging enabled. The output will include when the master sees the event, what the master does in response to that event, and it will also include the rendered SLS file (or any errors generated while rendering the SLS file). .INDENT 0.0 .IP 1. 3 Stop the master. .IP 2. 3 Start the master manually: .sp .nf .ft C salt\-master \-l debug .ft P .fi .UNINDENT .SS Understanding the Structure of Reactor Formulas .sp While the reactor system uses the same data structure as the state system, this data does not translate the same way to operations. In state files, formula information is mapped to the state functions, but in the reactor system, information is mapped to a number of available subsystems on the master. These systems are the \fBLocalClient\fP and the \fBRunners\fP. The \fBstate declaration\fP field takes a reference to the function to call in each interface. So to trigger a salt\-run call, the \fBstate declaration\fP field will start with \fBrunner\fP, followed by the runner function to call.
This means that a call to what would be on the command line \fBsalt\-run manage.up\fP will be \fBrunner.manage.up\fP. An example of this in a reactor formula would look like this: .sp .nf .ft C manage_up: runner.manage.up .ft P .fi .sp If the runner takes arguments then they can be specified as well: .sp .nf .ft C overstate_dev_env: runner.state.over: \- env: dev .ft P .fi .sp Executing remote commands maps to the \fBLocalClient\fP interface which is used by the \fBsalt\fP command. This interface more specifically maps to the \fBcmd_async\fP method inside of the \fBLocalClient\fP class. This means that the arguments passed are being passed to the \fBcmd_async\fP method, not the remote method. A field starts with \fBcmd\fP to use the \fBLocalClient\fP subsystem. The result is, to execute a remote command, a reactor formula would look like this: .sp .nf .ft C clean_tmp: cmd.cmd.run: \- tgt: \(aq*\(aq \- arg: \- rm \-rf /tmp/* .ft P .fi .sp The \fBarg\fP option takes a list of arguments as they would be presented on the command line, so the above declaration is the same as running this salt command: .sp .nf .ft C salt \(aq*\(aq cmd.run \(aqrm \-rf /tmp/*\(aq .ft P .fi .sp Use the \fBexpr_form\fP argument to specify a matcher: .sp .nf .ft C clean_tmp: cmd.cmd.run: \- tgt: \(aqos:Ubuntu\(aq \- expr_form: grain \- arg: \- rm \-rf /tmp/* clean_tmp: cmd.cmd.run: \- tgt: \(aqG@roles:hbase_master\(aq \- expr_form: compound \- arg: \- rm \-rf /tmp/* .ft P .fi .sp An interesting trick to pass data from the Reactor script to \fBstate.highstate\fP or \fBstate.sls\fP is to pass it as inline Pillar data since both functions take a keyword argument named \fBpillar\fP. .sp The following example uses Salt\(aqs Reactor to listen for the event that is fired when the key for a new minion is accepted on the master using \fBsalt\-key\fP. .sp \fB/etc/salt/master.d/reactor.conf\fP: .sp .nf .ft C reactor: \- \(aqsalt/key\(aq: \- /srv/salt/haproxy/react_new_minion.sls .ft P .fi .sp The Reactor then fires a \fBstate.sls\fP command targeted to the HAProxy servers and passes the ID of the new minion from the event to the state file via inline Pillar. .sp \fB/srv/salt/haproxy/react_new_minion.sls\fP: .sp .nf .ft C {% if data[\(aqact\(aq] == \(aqaccept\(aq and data[\(aqid\(aq].startswith(\(aqweb\(aq) %} add_new_minion_to_pool: cmd.state.sls: \- tgt: \(aqhaproxy*\(aq \- arg: \- haproxy.refresh_pool \- kwarg: pillar: new_minion: {{ data[\(aqid\(aq] }} {% endif %} .ft P .fi .sp The above command is equivalent to the following command at the CLI: .sp .nf .ft C salt \(aqhaproxy*\(aq state.sls haproxy.refresh_pool \(aqpillar={new_minion: minionid}\(aq .ft P .fi .sp Finally, that data is available in the state file using the normal Pillar lookup syntax. The following example is grabbing web server names and IP addresses from \fISalt Mine\fP. If this state is invoked from the Reactor then the custom Pillar value from above will be available and the new minion will be added to the pool but with the \fBdisabled\fP flag so that HAProxy won\(aqt yet direct traffic to it. 
.sp \fB/srv/salt/haproxy/refresh_pool.sls\fP: .sp .nf .ft C {% set new_minion = salt[\(aqpillar.get\(aq](\(aqnew_minion\(aq) %} listen web *:80 balance source {% for server,ip in salt[\(aqmine.get\(aq](\(aqweb*\(aq, \(aqnetwork.interfaces\(aq, [\(aqeth0\(aq]).items() %} {% if server == new_minion %} server {{ server }} {{ ip }}:80 disabled {% else %} server {{ server }} {{ ip }}:80 check {% endif %} {% endfor %} .ft P .fi .SS A Complete Example .sp In this example, we\(aqre going to assume that we have a group of servers that will come online at random and need to have keys automatically accepted. We\(aqll also add that we don\(aqt want all servers being automatically accepted. For this example, we\(aqll assume that all hosts that have an id that starts with \(aqink\(aq will be automatically accepted and have state.highstate executed. On top of this, we\(aqre going to add that a host coming up that was replaced (meaning a new key) will also be accepted. .sp Our master configuration will be rather simple. All minions that attempt to authenticate will match the \fBtag\fP of \fBsalt/auth\fP. When it comes to the minion key being accepted, we get a more refined \fBtag\fP that includes the minion id, which we can use for matching. .sp \fB/etc/salt/master.d/reactor.conf\fP: .sp .nf .ft C reactor: \- \(aqsalt/auth\(aq: \- /srv/reactor/auth\-pending.sls \- \(aqsalt/minion/ink*/start\(aq: \- /srv/reactor/auth\-complete.sls .ft P .fi .sp In this sls file, we say that if the key was rejected we will delete the key on the master and then also tell the master to ssh in to the minion and tell it to restart the minion, since a minion process will die if the key is rejected. .sp We also say that if the key is pending and the id starts with ink we will accept the key. A minion that is waiting on a pending key will retry authentication every ten seconds by default. .sp \fB/srv/reactor/auth\-pending.sls\fP: .sp .nf .ft C {# Ink server failed to authenticate \-\- remove accepted key #} {% if not data[\(aqresult\(aq] and data[\(aqid\(aq].startswith(\(aqink\(aq) %} minion_remove: wheel.key.delete: \- match: {{ data[\(aqid\(aq] }} minion_rejoin: cmd.cmd.run: \- tgt: salt\-master.domain.tld \- arg: \- ssh \-o UserKnownHostsFile=/dev/null \-o StrictHostKeyChecking=no "{{ data[\(aqid\(aq] }}" \(aqsleep 10 && /etc/init.d/salt\-minion restart\(aq {% endif %} {# Ink server is sending new key \-\- accept this key #} {% if \(aqact\(aq in data and data[\(aqact\(aq] == \(aqpend\(aq and data[\(aqid\(aq].startswith(\(aqink\(aq) %} minion_add: wheel.key.accept: \- match: {{ data[\(aqid\(aq] }} {% endif %} .ft P .fi .sp No if statements are needed here because we already limited this action to just Ink servers in the master configuration. .sp \fB/srv/reactor/auth\-complete.sls\fP: .sp .nf .ft C {# When an Ink server connects, run state.highstate. #} highstate_run: cmd.state.highstate: \- tgt: {{ data[\(aqid\(aq] }} .ft P .fi .SS Syncing Custom Types on Minion Start .sp Salt will sync all custom types (by running a \fBsaltutil.sync_all\fP) on every highstate. However, there is a chicken\-and\-egg issue where, on the initial highstate, a minion will not yet have these custom types synced when the top file is first compiled. This can be worked around with a simple reactor which watches for \fBminion_start\fP events, which each minion fires when it first starts up and connects to the master.
.sp On the master, create \fB/srv/reactor/sync_grains.sls\fP with the following contents: .sp .nf .ft C sync_grains: cmd.saltutil.sync_grains: \- tgt: {{ data[\(aqid\(aq] }} .ft P .fi .sp And in the master config file, add the following reactor configuration: .sp .nf .ft C reactor: \- \(aqminion_start\(aq: \- /srv/reactor/sync_grains.sls .ft P .fi .sp This will cause the master to instruct each minion to sync its custom grains when it starts, making these grains available when the initial highstate is executed. .sp Other types can be synced by replacing \fBcmd.saltutil.sync_grains\fP with \fBcmd.saltutil.sync_modules\fP, \fBcmd.saltutil.sync_all\fP, or whatever else suits your particular use case. .SH THE SALT MINE .sp The Salt Mine is used to collect arbitrary data from minions and store it on the master. This data is then made available to all minions via the \fBsalt.modules.mine\fP module. .sp The data is gathered on the minion and sent back to the master where only the most recent data is maintained (if long\-term data is required use returners or the external job cache). .SS Mine Functions .sp To enable the Salt Mine the \fImine_functions\fP option needs to be applied to a minion. This option can be applied via the minion\(aqs configuration file, or the minion\(aqs Pillar. The \fImine_functions\fP option dictates what functions are being executed and allows for arguments to be passed in. If no arguments are passed, an empty list must be added: .sp .nf .ft C mine_functions: test.ping: [] network.ip_addrs: interface: eth0 cidr: \(aq10.0.0.0/8\(aq .ft P .fi .SS Mine Interval .sp The Salt Mine functions are executed when the minion starts and at a given interval by the scheduler. The default interval is every 60 minutes and can be adjusted for the minion via the \fImine_interval\fP option: .sp .nf .ft C mine_interval: 60 .ft P .fi .SS Example .sp One way to use data from Salt Mine is in a State. The values can be retrieved via Jinja and used in the SLS file. The following example is a partial HAProxy configuration file and pulls IP addresses from all minions with the "web" grain to add them to the pool of load balanced servers. .sp \fB/srv/pillar/top.sls\fP: .sp .nf .ft C base: \(aqG@roles:web\(aq: \- web .ft P .fi .sp \fB/srv/pillar/web.sls\fP: .sp .nf .ft C mine_functions: network.ip_addrs: [eth0] .ft P .fi .sp \fB/etc/salt/minion.d/mine.conf\fP: .sp .nf .ft C mine_interval: 5 .ft P .fi .sp \fB/srv/salt/haproxy.sls\fP: .sp .nf .ft C haproxy_config: file: \- managed \- name: /etc/haproxy/config \- source: salt://haproxy_config \- template: jinja .ft P .fi .sp \fB/srv/salt/haproxy_config\fP: .sp .nf .ft C <...file contents snipped...> {% for server, addrs in salt[\(aqmine.get\(aq](\(aqroles:web\(aq, \(aqnetwork.ip_addrs\(aq, expr_form=\(aqgrain\(aq).items() %} server {{ server }} {{ addrs[0] }}:80 check {% endfor %} <...file contents snipped...> .ft P .fi .SH EXTERNAL AUTHENTICATION SYSTEM .sp Salt\(aqs External Authentication System (eAuth) allows for Salt to pass through command authorization to any external authentication system, such as PAM or LDAP. .SS Access Control System .sp New in version 0.10.4. .sp Salt maintains a standard system used to open granular control to non\-administrative users to execute Salt commands. The access control system has been applied to all systems used to configure access to non\-administrative control interfaces in Salt. These interfaces include the \fBpeer\fP system, the \fBexternal auth\fP system, and the \fBclient acl\fP system.
.sp The access control system mandates a standard configuration syntax used in all of the three aforementioned systems. While this adds functionality to the configuration in 0.10.4, it does not negate the old configuration. .sp Now specific functions can be opened up to specific minions from specific users in the case of external auth and client ACLs, and for specific minions in the case of the peer system. .sp The access controls are manifested using matchers in these configurations: .sp .nf .ft C client_acl: fred: \- web\e*: \- pkg.list_pkgs \- test.* \- apache.* .ft P .fi .sp In the above example, fred is able to send commands only to minions which match the specified glob target. This can be expanded to include other functions for other minions based on standard targets. .sp .nf .ft C external_auth: pam: dave: \- test.ping \- mongo\e*: \- network.* \- log\e*: \- network.* \- pkg.* \- \(aqG@os:RedHat\(aq: \- kmod.* steve: \- .* .ft P .fi .sp The above allows for all minions to be hit by test.ping by dave, and adds a few functions that dave can execute on other minions. It also allows steve unrestricted access to salt commands. .sp The external authentication system allows for specific users to be granted access to execute specific functions on specific minions. Access is configured in the master configuration file and uses the \fIaccess control system\fP: .sp .nf .ft C external_auth: pam: thatch: \- \(aqweb*\(aq: \- test.* \- network.* steve: \- .* .ft P .fi .sp The above configuration allows the user \fBthatch\fP to execute functions in the test and network modules on the minions that match the web* target. User \fBsteve\fP is given unrestricted access to minion commands. .IP Note The PAM module does not allow authenticating as \fBroot\fP. .RE .sp To allow access to \fIwheel modules\fP or \fIrunner modules\fP the following \fB@\fP syntax must be used: .sp .nf .ft C external_auth: pam: thatch: \- \(aq@wheel\(aq \- \(aq@runner\(aq .ft P .fi .sp The external authentication system can then be used from the command\-line by any user on the same system as the master with the \fB\-a\fP option: .sp .nf .ft C $ salt \-a pam web\e* test.ping .ft P .fi .sp The system will ask the user for the credentials required by the authentication system and then publish the command. .sp To apply permissions to a group of users in an external authentication system, append a \fB%\fP to the ID: .sp .nf .ft C external_auth: pam: admins%: \- \(aq*\(aq: \- \(aqpkg.*\(aq .ft P .fi .SS Tokens .sp With external authentication alone, the authentication credentials will be required with every call to Salt. This can be alleviated with Salt tokens. .sp Tokens are short\-term authorizations and can be easily created by just adding a \fB\-T\fP option when authenticating: .sp .nf .ft C $ salt \-T \-a pam web\e* test.ping .ft P .fi .sp Now a token will be created that has an expiration of 12 hours (by default). This token is stored in a file named \fB.salt_token\fP in the active user\(aqs home directory. .sp Once the token is created, it is sent with all subsequent communications. User authentication does not need to be entered again until the token expires. .sp Token expiration time can be set in the Salt master config file. .SS LDAP .sp Salt supports both user and group authentication for LDAP. .sp LDAP configuration happens in the Salt master configuration file.
.sp Server configuration values: .sp .nf .ft C auth.ldap.server: localhost auth.ldap.port: 389 auth.ldap.tls: False auth.ldap.scope: 2 .ft P .fi .sp Salt also needs to know which Base DN to search for users and groups and the DN to bind to: .sp .nf .ft C auth.ldap.basedn: dc=saltstack,dc=com auth.ldap.binddn: cn=admin,dc=saltstack,dc=com .ft P .fi .sp To bind to a DN, a password is required: .sp .nf .ft C auth.ldap.bindpw: mypassword .ft P .fi .sp Salt uses a filter to find the DN associated with a user. Salt substitutes the \fB{{ username }}\fP value for the username when querying LDAP. .sp .nf .ft C auth.ldap.filter: uid={{ username }} .ft P .fi .sp If group support for LDAP is desired, one can specify an OU that contains group data. This is prepended to the basedn to create a search path: .sp .nf .ft C auth.ldap.groupou: Groups .ft P .fi .sp Once configured, LDAP permissions can be assigned to users and groups. .sp .nf .ft C external_auth: ldap: test_ldap_user: \- \(aq*\(aq: \- test.ping .ft P .fi .sp To configure an LDAP group, append a \fB%\fP to the ID: .sp .nf .ft C external_auth: ldap: test_ldap_group%: \- \(aq*\(aq: \- test.echo .ft P .fi .SH JOB MANAGEMENT .sp New in version 0.9.7. .sp Since Salt executes jobs on many systems, Salt needs to be able to manage the jobs running on all of those systems. .SS The Minion proc System .sp Salt Minions maintain a \fIproc\fP directory in the Salt \fBcachedir\fP. The \fIproc\fP directory maintains files named after the executed job ID. These files contain the information about the current running jobs on the minion and allow for jobs to be looked up. This is located in the \fIproc\fP directory under the cachedir; with a default configuration it is under \fI/var/cache/salt/proc\fP. .SS Functions in the saltutil Module .sp Salt 0.9.7 introduced a few new functions to the \fBsaltutil\fP module for managing jobs. These functions are: .INDENT 0.0 .IP 1. 3 \fBrunning\fP Returns the data of all running jobs that are found in the \fIproc\fP directory. .IP 2. 3 \fBfind_job\fP Returns specific data about a certain job based on job id. .IP 3. 3 \fBsignal_job\fP Allows for a given jid to be sent a signal. .IP 4. 3 \fBterm_job\fP Sends a termination signal (SIGTERM, 15) to the process controlling the specified job. .IP 5. 3 \fBkill_job\fP Sends a kill signal (SIGKILL, 9) to the process controlling the specified job. .UNINDENT .sp These functions make up the core of the back end used to manage jobs at the minion level. .SS The jobs Runner .sp A convenience runner front end and reporting system has been added as well. The jobs runner contains functions to make viewing data easier and cleaner. .sp The jobs runner contains a number of functions... .SS active .sp The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned and jobs that are still running, making it easier to see what systems have completed a job and what systems are still being waited on. .sp .nf .ft C # salt\-run jobs.active .ft P .fi .SS lookup_jid .sp When jobs are executed the return data is sent back to the master and cached. By default it is cached for 24 hours, but this can be configured via the \fBkeep_jobs\fP option in the master configuration. Using the lookup_jid runner will display the same return data that the initial job invocation with the salt command would display.
.sp .nf .ft C # salt\-run jobs.lookup_jid .ft P .fi .SS list_jobs .sp Before finding a historic job, it may be required to find the job id. list_jobs will parse the cached execution data and display all of the job data for jobs that have already, or partially returned. .sp .nf .ft C # salt\-run jobs.list_jobs .ft P .fi .SS Scheduling Jobs .sp In Salt versions greater than 0.12.0, the scheduling system allows incremental executions on minions or the master. The schedule system exposes the execution of any execution function on minions or any runner on the master. .sp Scheduling is enabled via the \fBschedule\fP option on either the master or minion config files, or via a minion\(aqs pillar data. .IP Note The scheduler executes different functions on the master and minions. When running on the master the functions reference runner functions, when running on the minion the functions specify execution functions. .RE .sp Specify \fBmaxrunning\fP to ensure that there are no more than N copies of a particular routine running. Use this for jobs that may be long\-running and could step on each other or otherwise double execute. The default for \fBmaxrunning\fP is 1. .sp States are executed on the minion, as all states are. You can pass positional arguments and provide a yaml dict of named arguments. .sp .nf .ft C schedule: job1: function: state.sls seconds: 3600 args: \- httpd kwargs: test: True .ft P .fi .sp This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) .sp .nf .ft C schedule: job1: function: state.sls seconds: 3600 args: \- httpd kwargs: test: True splay: 15 .ft P .fi .sp This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 0 and 15 seconds .sp .nf .ft C schedule: job1: function: state.sls seconds: 3600 args: \- httpd kwargs: test: True splay: start: 10 end: 15 .ft P .fi .sp This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) splaying the time between 10 and 15 seconds .sp New in version Helium. .sp Frequency of jobs can also be specified using date strings supported by the python dateutil library. .sp .nf .ft C schedule: job1: function: state.sls args: \- httpd kwargs: test: True when: 5:00pm .ft P .fi .sp This will schedule the command: state.sls httpd test=True at 5:00pm minion localtime. .sp .nf .ft C schedule: job1: function: state.sls args: \- httpd kwargs: test: True when: \- Monday 5:00pm \- Tuesday 3:00pm \- Wednesday 5:00pm \- Thursday 3:00pm \- Friday 5:00pm .ft P .fi .sp This will schedule the command: state.sls httpd test=True at 5pm on Monday, Wednesday and Friday, and 3pm on Tuesday and Thursday. .sp .nf .ft C schedule: job1: function: state.sls seconds: 3600 args: \- httpd kwargs: test: True range: start: 8:00am end: 5:00pm .ft P .fi .sp This will schedule the command: state.sls httpd test=True every 3600 seconds (every hour) between the hours of 8am and 5pm. The range parameter must be a dictionary with the date strings using the dateutil format. .sp New in version Helium. .sp The scheduler also supports ensuring that there are no more than N copies of a particular routine running. Use this for jobs that may be long\-running and could step on each other or pile up in case of infrastructure outage. .sp The default for maxrunning is 1. 
.sp .nf .ft C schedule: long_running_job: function: big_file_transfer jid_include: True .ft P .fi .SS States .sp .nf .ft C schedule: log\-loadavg: function: cmd.run seconds: 3660 args: \- \(aqlogger \-t salt < /proc/loadavg\(aq kwargs: stateful: False shell: True .ft P .fi .SS Highstates .sp To set up a highstate to run on a minion every 60 minutes set this in the minion config or pillar: .sp .nf .ft C schedule: highstate: function: state.highstate minutes: 60 .ft P .fi .sp Time intervals can be specified as seconds, minutes, hours, or days. .SS Runners .sp Runner executions can also be specified on the master within the master configuration file: .sp .nf .ft C schedule: overstate: function: state.over seconds: 35 minutes: 30 hours: 3 .ft P .fi .sp The above configuration will execute the state.over runner every 3 hours, 30 minutes and 35 seconds, or every 12,635 seconds. .SS Scheduler With Returner .sp The scheduler is also useful for tasks like gathering monitoring data about a minion, this schedule option will gather status data and send it to a MySQL returner database: .sp .nf .ft C schedule: uptime: function: status.uptime seconds: 60 returner: mysql meminfo: function: status.meminfo minutes: 5 returner: mysql .ft P .fi .sp Since specifying the returner repeatedly can be tiresome, the \fBschedule_returner\fP option is available to specify one or a list of global returners to be used by the minions when scheduling.
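.sp
For example, a minimal sketch of the same monitoring schedule using a global returner instead of per\-job \fBreturner\fP entries (assuming the same MySQL returner as above) might look like this in the minion config or pillar:
.sp
.nf
.ft C
schedule_returner: mysql

schedule:
  uptime:
    function: status.uptime
    seconds: 60
  meminfo:
    function: status.meminfo
    minutes: 5
.ft P
.fi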
.SH SALT EVENT SYSTEM .sp The Salt Event System is used to fire off events enabling third party applications or external processes to react to behavior within Salt. .sp The event system is comprised of two primary components: .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .IP \(bu 2 The event sockets, which publish events. .IP \(bu 2 The event library, which can listen to events and send events into the salt system. .UNINDENT .UNINDENT .UNINDENT .SS Event types .SS Salt Master Events .sp These events are fired on the Salt Master event bus. This list is \fBnot\fP comprehensive. .SS Authentication events .INDENT 0.0 .TP .B salt/auth Fired when a minion performs an authentication check with the master. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBid\fP \-\- The minion ID. .IP \(bu 2 \fBact\fP \-\- The current status of the minion key: \fBaccept\fP, \fBpend\fP, \fBreject\fP. .IP \(bu 2 \fBpub\fP \-\- The minion public key. .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/minion//start Fired every time a minion connects to the Salt master. .INDENT 7.0 .TP .B Variables \fBid\fP \-\- The minion ID.
.UNINDENT .UNINDENT .SS Job events .INDENT 0.0 .TP .B salt/job//new Fired as a new job is sent out to minions. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBjid\fP \-\- The job ID. .IP \(bu 2 \fBtgt\fP \-\- The target of the job: \fB*\fP, a minion ID, \fBG@os_family:RedHat\fP, etc. .IP \(bu 2 \fBtgt_type\fP \-\- The type of targeting used: \fBglob\fP, \fBgrain\fP, \fBcompound\fP, etc. .IP \(bu 2 \fBfun\fP \-\- The function to run on minions: \fBtest.ping\fP, \fBnetwork.interfaces\fP, etc. .IP \(bu 2 \fBarg\fP \-\- A list of arguments to pass to the function that will be called. .IP \(bu 2 \fBminions\fP \-\- A list of minion IDs that Salt expects will return data for this job. .IP \(bu 2 \fBuser\fP \-\- The name of the user that ran the command as defined in Salt\(aqs Client ACL or external auth. .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/job//ret/ Fired each time a minion returns data for a job. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBid\fP \-\- The minion ID. .IP \(bu 2 \fBjid\fP \-\- The job ID. .IP \(bu 2 \fBretcode\fP \-\- The return code for the job. .IP \(bu 2 \fBfun\fP \-\- The function the minion ran. E.g., \fBtest.ping\fP. .IP \(bu 2 \fBreturn\fP \-\- The data returned from the execution module. .UNINDENT .UNINDENT .UNINDENT .SS Presence events .INDENT 0.0 .TP .B salt/presence/present Fired on a set schedule. .INDENT 7.0 .TP .B Variables \fBpresent\fP \-\- A list of minions that are currently connected to the Salt master. .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/presence/change Fired when the Presence system detects new minions connect or disconnect. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBnew\fP \-\- A list of minions that have connected since the last presence event. .IP \(bu 2 \fBlost\fP \-\- A list of minions that have disconnected since the last presence event. .UNINDENT .UNINDENT .UNINDENT .SS Cloud Events .INDENT 0.0 .TP .B salt/cloud//creating Fired when salt\-cloud starts the VM creation process. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBname\fP \-\- the name of the VM being created. .IP \(bu 2 \fBevent\fP \-\- description of the event. .IP \(bu 2 \fBprovider\fP \-\- the cloud provider of the VM being created. .IP \(bu 2 \fBprofile\fP \-\- the cloud profile for the VM being created. .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/cloud//deploying Fired when the VM is available and salt\-cloud begins deploying Salt to the new VM. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBname\fP \-\- the name of the VM being created. .IP \(bu 2 \fBevent\fP \-\- description of the event. .IP \(bu 2 \fBkwargs\fP \-\- options available as the deploy script is invoked: \fBconf_file\fP, \fBdeploy_command\fP, \fBdisplay_ssh_output\fP, \fBhost\fP, \fBkeep_tmp\fP, \fBkey_filename\fP, \fBmake_minion\fP, \fBminion_conf\fP, \fBname\fP, \fBparallel\fP, \fBpreseed_minion_keys\fP, \fBscript\fP, \fBscript_args\fP, \fBscript_env\fP, \fBsock_dir\fP, \fBstart_action\fP, \fBsudo\fP, \fBtmp_dir\fP, \fBtty\fP, \fBusername\fP .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/cloud//requesting Fired when salt\-cloud sends the request to create a new VM. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBevent\fP \-\- description of the event. .IP \(bu 2 \fBlocation\fP \-\- the location of the VM being requested. 
.IP \(bu 2 \fBkwargs\fP \-\- options available as the VM is being requested: \fBAction\fP, \fBImageId\fP, \fBInstanceType\fP, \fBKeyName\fP, \fBMaxCount\fP, \fBMinCount\fP, \fBSecurityGroup.1\fP .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/cloud//querying Fired when salt\-cloud queries data for a new instance. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBevent\fP \-\- description of the event. .IP \(bu 2 \fBinstance_id\fP \-\- the ID of the new VM. .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/cloud//tagging Fired when salt\-cloud tags a new instance. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBevent\fP \-\- description of the event. .IP \(bu 2 \fBtags\fP \-\- tags being set on the new instance. .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/cloud//waiting_for_ssh Fired while the salt\-cloud deploy process is waiting for ssh to become available on the new instance. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBevent\fP \-\- description of the event. .IP \(bu 2 \fBip_address\fP \-\- IP address of the new instance. .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/cloud//deploy_script Fired once the deploy script is finished. .INDENT 7.0 .TP .B Variables \fBevent\fP \-\- description of the event. .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/cloud//created Fired once the new instance has been fully created. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBname\fP \-\- the name of the VM being created. .IP \(bu 2 \fBevent\fP \-\- description of the event. .IP \(bu 2 \fBinstance_id\fP \-\- the ID of the new instance. .IP \(bu 2 \fBprovider\fP \-\- the cloud provider of the VM being created. .IP \(bu 2 \fBprofile\fP \-\- the cloud profile for the VM being created. .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/cloud//destroying Fired when salt\-cloud requests the destruction of an instance. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBname\fP \-\- the name of the VM being created. .IP \(bu 2 \fBevent\fP \-\- description of the event. .IP \(bu 2 \fBinstance_id\fP \-\- the ID of the new instance. .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt/cloud//destroyed Fired when an instance has been destroyed. .INDENT 7.0 .TP .B Variables .INDENT 7.0 .IP \(bu 2 \fBname\fP \-\- the name of the VM being created. .IP \(bu 2 \fBevent\fP \-\- description of the event. .IP \(bu 2 \fBinstance_id\fP \-\- the ID of the new instance. .UNINDENT .UNINDENT .UNINDENT .SS Listening for Events .sp The event system is accessed via the event library and can only be accessed by the same system user that Salt is running as. To listen to events a SaltEvent object needs to be created and then the get_event function needs to be run. The SaltEvent object needs to know the location that the Salt Unix sockets are kept. In the configuration this is the \fBsock_dir\fP option. The \fBsock_dir\fP option defaults to "/var/run/salt/master" on most systems. .sp The following code will check for a single event: .sp .nf .ft C import salt.utils.event event = salt.utils.event.MasterEvent(\(aq/var/run/salt/master\(aq) data = event.get_event() .ft P .fi .sp Events will also use a "tag". Tags allow for events to be filtered. By default all events will be returned. If only authentication events are desired, then pass the tag "auth". .sp The \fBget_event\fP method has a default poll time assigned of 5 seconds. To change this time set the "wait" option. 
.sp The following example will only listen for auth events and will wait for 10 seconds instead of the default 5. .sp .nf .ft C import salt.utils.event event = salt.utils.event.MasterEvent(\(aq/var/run/salt/master\(aq) data = event.get_event(wait=10, tag=\(aqauth\(aq) .ft P .fi .sp Instead of looking for a single event, the \fBiter_events\fP method can be used to make a generator which will continually yield salt events. .sp The iter_events method also accepts a tag but not a wait time: .sp .nf .ft C import salt.utils.event event = salt.utils.event.MasterEvent(\(aq/var/run/salt/master\(aq) for data in event.iter_events(tag=\(aqauth\(aq): print(data) .ft P .fi .SS Firing Events .sp It is possible to fire events on either the minion\(aqs local bus or to fire events intended for the master. .sp To fire a local event from the minion, on the command line: .sp .nf .ft C salt\-call event.fire \(aq{"data": "message to be sent in the event"}\(aq \(aqtag\(aq .ft P .fi .sp To fire an event to be sent to the master, from the minion: .sp .nf .ft C salt\-call event.fire_master \(aq{"data": "message for the master"}\(aq \(aqtag\(aq .ft P .fi .sp If a process is listening on the minion, it may be useful for a user on the master to fire an event to it: .sp .nf .ft C salt minionname event.fire \(aq{"data": "message for the minion"}\(aq \(aqtag\(aq .ft P .fi .SS Firing Events From Code .sp Events can be very useful when writing execution modules, in order to inform various processes on the master when a certain task has taken place. In Salt versions previous to 0.17.0, the basic code looks like: .sp .nf .ft C # Import the proper library import salt.utils.event # Fire deploy action sock_dir = \(aq/var/run/salt/minion\(aq event = salt.utils.event.SaltEvent(\(aqmaster\(aq, sock_dir) event.fire_event(\(aqMessage to be sent\(aq, \(aqtag\(aq) .ft P .fi .sp In Salt version 0.17.0, the ability to send a payload with a more complex data structure than a string was added. When using this interface, a Python dictionary should be sent instead. .sp .nf .ft C # Import the proper library import salt.utils.event # Fire deploy action sock_dir = \(aq/var/run/salt/minion\(aq payload = {\(aqsample\-msg\(aq: \(aqthis is a test\(aq, \(aqexample\(aq: \(aqthis is the same test\(aq} event = salt.utils.event.SaltEvent(\(aqmaster\(aq, sock_dir) event.fire_event(payload, \(aqtag\(aq) .ft P .fi .sp It should be noted that this code can be used in 3rd party applications as well. So long as the salt\-minion process is running, the minion socket can be used: .sp .nf .ft C sock_dir = \(aq/var/run/salt/minion\(aq .ft P .fi .sp So long as the salt\-master process is running, the master socket can be used: .sp .nf .ft C sock_dir = \(aq/var/run/salt/master\(aq .ft P .fi .sp This allows 3rd party applications to harness the power of the Salt event bus programmatically, without having to make other calls to Salt. .sp A 3rd party process can listen to the event bus on the master and another 3rd party process can fire events to the process on the master, which Salt will happily pass along. .SH SALT SYNDIC .sp The Salt Syndic interface is a powerful tool which allows for the construction of Salt command topologies. A basic Salt setup has a Salt Master commanding a group of Salt Minions. The Syndic interface is a special passthrough minion, it is run on a master and connects to another master, then the master that the Syndic minion is listening to can control the minions attached to the master running the syndic. 
.sp Support for many layouts is not intended to impose any single topology, but rather to allow a more flexible method of controlling many systems. .SS Configuring the Syndic .sp Since the Syndic only needs to be attached to a higher\-level master, the configuration is very simple. On a master that is running a syndic to connect to a higher\-level master, the \fBsyndic_master\fP option needs to be set in the master config file. The \fBsyndic_master\fP option contains the hostname or IP address of the master server that can control the master that the syndic is running on. .sp The master that the syndic connects to sees the syndic as an ordinary minion, and treats it as such. The higher\-level master will need to accept the syndic\(aqs minion key like any other minion. This master will also need to set the \fBorder_masters\fP value in the configuration to \fBTrue\fP. The \fBorder_masters\fP option in the config on the higher\-level master is very important: to control a syndic, extra information needs to be sent with the publications, and the \fBorder_masters\fP option makes sure that the extra data is sent out. .sp To sum up, these configuration options are available on the master side: .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .IP \(bu 2 \fBsyndic_master\fP: MasterOfMaster ip/address .IP \(bu 2 \fBsyndic_master_port\fP: MasterOfMaster ret_port .IP \(bu 2 \fBsyndic_log_file\fP: path to the logfile (absolute or not) .IP \(bu 2 \fBsyndic_pidfile\fP: path to the pidfile (absolute or not) .UNINDENT .UNINDENT .UNINDENT .sp A minimal example of these options is sketched at the end of this section. .sp Each Syndic must provide its own \fBfile_roots\fP directory. Files will not be automatically transferred from the Master of Masters. .SS Running the Syndic .sp The Syndic is a separate daemon that needs to be started on the master that is controlled by a higher master. Starting the Syndic daemon is the same as starting the other Salt daemons. .sp .nf .ft C # salt\-syndic .ft P .fi .IP Note If you have an exceptionally large infrastructure or many layers of syndics, you may find that the CLI doesn\(aqt wait long enough for the syndics to return their events. If you think this is the case, you can set the \fBsyndic_wait\fP value in the upper master config. The default value is \fB1\fP, and should work for the majority of deployments. .RE .SS Topology .sp The \fBsalt\-syndic\fP is little more than a command and event forwarder. When a command is issued from a higher\-level master, it will be received by the configured syndics on lower\-level masters, and propagated to their minions, and other syndics that are bound to them further down in the hierarchy. When events and job return data are generated by minions, they are aggregated back, through the same syndic(s), to the master which issued the command. .sp The master sitting at the top of the hierarchy (the Master of Masters) will \fInot\fP be running the \fBsalt\-syndic\fP daemon. It will have the \fBsalt\-master\fP daemon running, and optionally, the \fBsalt\-minion\fP daemon. Each syndic connected to an upper\-level master will have both the \fBsalt\-master\fP and the \fBsalt\-syndic\fP daemon running, and optionally, the \fBsalt\-minion\fP daemon. .sp Nodes on the lowest points of the hierarchy (minions which do not propagate data to another level) will only have the \fBsalt\-minion\fP daemon running. There is no need for either \fBsalt\-master\fP or \fBsalt\-syndic\fP to be running on a standard minion.
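.sp
As a minimal sketch of the options described above (hostnames are placeholders), the Master of Masters sets \fBorder_masters\fP and the master running the syndic points \fBsyndic_master\fP at it:
.sp
.nf
.ft C
# /etc/salt/master on the Master of Masters
order_masters: True

# /etc/salt/master on the master running salt\-syndic
syndic_master: master\-of\-masters.example.com
.ft P
.fi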
.SH SALT PROXY MINION DOCUMENTATION .sp Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt\-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not. .sp \fIProxy minions are not an "out of the box" feature\fP. Because there are an infinite number of controllable devices, you will most likely have to write the interface yourself. Fortunately, this is only as difficult as the actual interface to the proxied device. Devices that have an existing Python module (PyUSB for example) would be relatively simple to interface. Code to control a device that has an HTML REST\-based interface should be easy. Code to control your typical housecat would be excellent source material for a PhD thesis. .sp Salt proxy\-minions provide the \(aqplumbing\(aq that allows device enumeration and discovery, control, status, remote execution, and state management. .SS Getting Started .sp The following diagram may be helpful in understanding the structure of a Salt installation that includes proxy\-minions: [image] .sp The key thing to remember is the left\-most section of the diagram. Salt\(aqs nature is to have a minion connect to a master, then the master may control the minion. However, for proxy minions, the target device cannot run a minion, and thus must rely on a separate minion to fire up the proxy\-minion and make the initial and persistent connection. .sp After the proxy minion is started and initiates its connection to the \(aqdumb\(aq device, it connects back to the salt\-master and ceases to be affiliated in any way with the minion that started it. .sp To create support for a proxied device one needs to create four things: .INDENT 0.0 .IP 1. 3 The \fI\%proxytype connection class\fP (located in salt/proxy). .IP 2. 3 The \fI\%grains support code\fP (located in salt/grains). .IP 3. 3 \fISalt modules\fP specific to the controlled device. .IP 4. 3 \fISalt states\fP specific to the controlled device. .UNINDENT .SS Configuration parameters on the master .sp Proxy minions require no configuration parameters in /etc/salt/master. .sp Salt\(aqs Pillar system is ideally suited for configuring proxy\-minions. Proxies can either be designated via a pillar file in pillar_roots, or through an external pillar. External pillars afford the opportunity for interfacing with a configuration management system, database, or other knowledgeable system that may already contain all the details of proxy targets.
To use static files in pillar_roots, pattern your files after the following examples, which are based on the diagram above: .sp \fB/srv/salt/pillar/top.sls\fP .sp .nf .ft C base: minioncontroller1: \- networkswitches minioncontroller2: \- reallydumbdevices minioncontroller3: \- smsgateway .ft P .fi .sp \fB/srv/salt/pillar/networkswitches.sls\fP .sp .nf .ft C proxy: dumbdevice1: proxytype: networkswitch host: 172.23.23.5 username: root passwd: letmein dumbdevice2: proxytype: networkswitch host: 172.23.23.6 username: root passwd: letmein dumbdevice3: proxytype: networkswitch host: 172.23.23.7 username: root passwd: letmein .ft P .fi .sp \fB/srv/salt/pillar/reallydumbdevices.sls\fP .sp .nf .ft C proxy: dumbdevice4: proxytype: i2c_lightshow i2c_address: 1 dumbdevice5: proxytype: i2c_lightshow i2c_address: 2 dumbdevice6: proxytype: 433mhz_wireless .ft P .fi .sp \fB/srv/salt/pillar/smsgateway.sls\fP .sp .nf .ft C proxy: minioncontroller3: dumbdevice7: proxytype: sms_serial deventry: /dev/tty04 .ft P .fi .sp Note the contents of each minioncontroller key may differ widely based on the type of device that the proxy\-minion is managing. .sp In the above example .INDENT 0.0 .IP \(bu 2 dumbdevices 1, 2, and 3 are network switches that have a management interface available at a particular IP address. .IP \(bu 2 dumbdevices 4 and 5 are very low\-level devices controlled over an i2c bus. In this case the devices are physically connected to machine \(aqminioncontroller2\(aq, and are addressable on the i2c bus at their respective i2c addresses. .IP \(bu 2 dumbdevice6 is a 433 MHz wireless transmitter, also physically connected to minioncontroller2 .IP \(bu 2 dumbdevice7 is an SMS gateway connected to machine minioncontroller3 via a serial port. .UNINDENT .sp Because of the way pillar works, each of the salt\-minions that fork off the proxy minions will only see the keys specific to the proxies it will be handling. In other words, from the above example, only minioncontroller1 will see the connection information for dumbdevices 1, 2, and 3. Minioncontroller2 will see configuration data for dumbdevices 4, 5, and 6, and minioncontroller3 will be privy to dumbdevice7. .sp Also, in general, proxy\-minions are lightweight, so the machines that run them could conceivably control a large number of devices. The example above is just to illustrate that it is possible for the proxy services to be spread across many machines if necessary, or intentionally run on machines that need to control devices because of some physical interface (e.g. i2c and serial above). Another reason to divide proxy services might be security. In more secure environments only certain machines may have a network path to certain devices. .sp Now our salt\-minions know if they are supposed to spawn a proxy\-minion process to control a particular device. That proxy\-minion process will initiate a connection back to the master to enable control. .SS Proxytypes .sp A proxytype is a Python class called \(aqProxyconn\(aq that encapsulates all the code necessary to interface with a device. Proxytypes are located inside the salt.proxy module. At a minimum a proxytype object must implement the following methods: .sp \fBproxytype(self)\fP: Returns a string with the name of the proxy type. .sp \fBproxyconn(self, **kwargs)\fP: Provides the primary way to connect and communicate with the device. Some proxyconns instantiate a particular object that opens a network connection to a device and leaves the connection open for communication. 
Others simply abstract a serial connection or even implement endpoints to communicate via REST over HTTP. .sp \fBid(self, opts)\fP: Returns a unique, unchanging id for the controlled device. This is the "name" of the device, and is used by the salt\-master for targeting and key authentication. .sp Optionally, the class may define a \fBshutdown(self, opts)\fP method if the controlled device should be informed when the minion goes away cleanly. .sp It is highly recommended that the \fBtest.ping\fP execution module also be defined for a proxytype. The code for \fBping\fP should contact the controlled device and make sure it is really available. .sp Here is an example proxytype used to interface to Juniper Networks devices that run the Junos operating system. Note the additional library requirements\-\-most of the "hard part" of talking to these devices is handled by the jnpr.junos, jnpr.junos.utils and jnpr.junos.cfg modules. .sp .nf .ft C # Import python libs import logging import os import jnpr.junos import jnpr.junos.utils import jnpr.junos.cfg HAS_JUNOS = True class Proxyconn(object): def __init__(self, details): self.conn = jnpr.junos.Device(user=details[\(aqusername\(aq], host=details[\(aqhost\(aq], password=details[\(aqpasswd\(aq]) self.conn.open() self.conn.bind(cu=jnpr.junos.cfg.Resource) def proxytype(self): return \(aqjunos\(aq def id(self, opts): return self.conn.facts[\(aqhostname\(aq] def ping(self): return self.conn.connected def shutdown(self, opts): print(\(aqProxy module {} shutting down!!\(aq.format(opts[\(aqid\(aq])) try: self.conn.close() except Exception: pass .ft P .fi .sp Grains are data about minions. Most proxied devices will have a paltry amount of data as compared to a typical Linux server. Because proxy\-minions are started by a regular minion, they inherit a sizeable number of grain settings which can be useful, especially when targeting (PYTHONPATH, for example). .sp All proxy minions set a grain called \(aqproxy\(aq. If it is present, you know the minion is controlling another device. To add more grains to your proxy minion for a particular device, create a file in salt/grains named [proxytype].py and place inside it the different functions that need to be run to collect the data you are interested in. Here\(aqs an example: .SS The __proxyenabled__ directive .sp Salt states and execution modules, by and large, cannot "automatically" work with proxied devices. Execution modules like \fBpkg\fP or \fBsqlite3\fP have no meaning on a network switch or a housecat. For a state/execution module to be available to a proxy\-minion, the \fB__proxyenabled__\fP variable must be defined in the module as an array containing the names of all the proxytypes that this module can support. The array can contain the special value \fB*\fP to indicate that the module supports all proxies. .sp If no \fB__proxyenabled__\fP variable is defined, then by default, the state/execution module is unavailable to any proxy. 
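.sp
For instance, a state or execution module written only for the junos proxytype shown above might declare the following (a sketch; the list simply names the supported proxytypes):
.sp
.nf
.ft C
# Near the top of the module: only the junos proxytype may load it
__proxyenabled__ = [\(aqjunos\(aq]

# Or, to make the module available to every proxytype:
# __proxyenabled__ = [\(aq*\(aq]
.ft P
.fi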
.sp Here is an excerpt from a module that was modified to support proxy\-minions: .sp .nf .ft C def ping(): if \(aqproxyobject\(aq in __opts__: if \(aqping\(aq in dir(__opts__[\(aqproxyobject\(aq]): return __opts__[\(aqproxyobject\(aq].ping() else: return False else: return True .ft P .fi .sp And then in salt.proxy.junos we find: .sp .nf .ft C def ping(self): return self.connected .ft P .fi .sp The Junos API layer lacks the ability to do a traditional \(aqping\(aq, so the example simply checks the connection object field that indicates if the ssh connection was successfully made to the device. .SH WINDOWS SOFTWARE REPOSITORY .sp The Salt Windows Software Repository provides a package manager and software repository similar to what is provided by yum and apt on Linux. .sp It permits the installation of software using the installers on remote Windows machines. In many senses, the operation is similar to that of the other package managers salt is aware of: .INDENT 0.0 .IP \(bu 2 the \fBpkg.installed\fP and similar states work on Windows. .IP \(bu 2 the \fBpkg.install\fP and similar module functions work on Windows. .IP \(bu 2 each Windows machine needs to have \fBpkg.refresh_db\fP executed against it to pick up the latest version of the package database. .UNINDENT .sp High\-level differences to yum and apt are: .INDENT 0.0 .IP \(bu 2 The repository metadata (sls files) is hosted through either salt or git. .IP \(bu 2 Packages can be downloaded from within the salt repository, a git repository or from http(s) or ftp urls. .IP \(bu 2 No dependencies are managed. Dependencies between packages need to be managed manually. .UNINDENT .SS Operation .sp The install state/module function of the Windows package manager works roughly as follows: .INDENT 0.0 .IP 1. 3 Execute \fBpkg.list_pkgs\fP and store the result. .IP 2. 3 Check if any action needs to be taken. (i.e. compare required package and version against \fBpkg.list_pkgs\fP results) .IP 3. 3 If so, run the installer command. .IP 4. 3 Execute \fBpkg.list_pkgs\fP and compare to the result stored from before installation. .IP 5. 3 Success/Failure/Changes will be reported based on the differences between the original and final \fBpkg.list_pkgs\fP results. .UNINDENT .sp If there are any problems in using the package manager it is likely to be due to the data in your sls files not matching the difference between the pre and post \fBpkg.list_pkgs\fP results. .SS Usage .sp By default, the Windows software repository is found at \fB/srv/salt/win/repo\fP. This can be changed in the master config file (default location is \fB/etc/salt/master\fP) by modifying the \fBwin_repo\fP variable. Each piece of software should have its own directory which contains the installers and a package definition file. This package definition file is a YAML file named \fBinit.sls\fP.
.sp The package definition file should look similar to this example for Firefox: \fB/srv/salt/win/repo/firefox/init.sls\fP .sp .nf .ft C Firefox: 17.0.1: installer: \(aqsalt://win/repo/firefox/English/Firefox Setup 17.0.1.exe\(aq full_name: Mozilla Firefox 17.0.1 (x86 en\-US) locale: en_US reboot: False install_flags: \(aq \-ms\(aq uninstaller: \(aq%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe\(aq uninstall_flags: \(aq /S\(aq 16.0.2: installer: \(aqsalt://win/repo/firefox/English/Firefox Setup 16.0.2.exe\(aq full_name: Mozilla Firefox 16.0.2 (x86 en\-US) locale: en_US reboot: False install_flags: \(aq \-ms\(aq uninstaller: \(aq%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe\(aq uninstall_flags: \(aq /S\(aq 15.0.1: installer: \(aqsalt://win/repo/firefox/English/Firefox Setup 15.0.1.exe\(aq full_name: Mozilla Firefox 15.0.1 (x86 en\-US) locale: en_US reboot: False install_flags: \(aq \-ms\(aq uninstaller: \(aq%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe\(aq uninstall_flags: \(aq /S\(aq .ft P .fi .sp More examples can be found here: \fI\%https://github.com/saltstack/salt-winrepo\fP .sp The version number and \fBfull_name\fP need to match the output from \fBpkg.list_pkgs\fP so that the status can be verified when running highstate. Note: It is still possible to successfully install packages using \fBpkg.install\fP even if they don\(aqt match which can make this hard to troubleshoot. .sp .nf .ft C salt \(aqtest\-2008\(aq pkg.list_pkgs test\-2008 \-\-\-\-\-\-\-\-\-\- 7\-Zip 9.20 (x64 edition): 9.20.00.0 Microsoft .NET Framework 4 Client Profile: 4.0.30319,4.0.30319 Microsoft .NET Framework 4 Extended: 4.0.30319,4.0.30319 Microsoft Visual C++ 2008 Redistributable \- x64 9.0.21022: 9.0.21022 Mozilla Firefox 17.0.1 (x86 en\-US): 17.0.1 Mozilla Maintenance Service: 17.0.1 NSClient++ (x64): 0.3.8.76 Notepad++: 6.4.2 Salt Minion 0.16.0: 0.16.0 .ft P .fi .sp If any of these preinstalled packages already exist in winrepo the full_name will be automatically renamed to their package name during the next update (running highstate or installing another package). .sp .nf .ft C test\-2008: \-\-\-\-\-\-\-\-\-\- 7zip: 9.20.00.0 Microsoft .NET Framework 4 Client Profile: 4.0.30319,4.0.30319 Microsoft .NET Framework 4 Extended: 4.0.30319,4.0.30319 Microsoft Visual C++ 2008 Redistributable \- x64 9.0.21022: 9.0.21022 Mozilla Maintenance Service: 17.0.1 Notepad++: 6.4.2 Salt Minion 0.16.0: 0.16.0 firefox: 17.0.1 nsclient: 0.3.9.328 .ft P .fi .sp Add \fBmsiexec: True\fP if using an MSI installer requiring the use of \fBmsiexec /i\fP to install and \fBmsiexec /x\fP to uninstall. .sp The \fBinstall_flags\fP and \fBuninstall_flags\fP are flags passed to the software installer to cause it to perform a silent install. These can often be found by adding \fB/?\fP or \fB/h\fP when running the installer from the command line. 
A great resource for finding these silent install flags can be found on the WPKG project\(aqs \fI\%wiki\fP: .sp .nf .ft C 7zip: 9.20.00.0: installer: salt://win/repo/7zip/7z920\-x64.msi full_name: 7\-Zip 9.20 (x64 edition) reboot: False install_flags: \(aq /q \(aq msiexec: True uninstaller: salt://win/repo/7zip/7z920\-x64.msi uninstall_flags: \(aq /qn\(aq .ft P .fi .SS Generate Repo Cache File .sp Once the sls file has been created, generate the repository cache file with the winrepo runner: .sp .nf .ft C salt\-run winrepo.genrepo .ft P .fi .sp Then update the repository cache file on your minions, exactly how it\(aqs done for the Linux package managers: .sp .nf .ft C salt \(aq*\(aq pkg.refresh_db .ft P .fi .SS Install Windows Software .sp Now you can query the available version of Firefox using the Salt pkg module. .sp .nf .ft C salt \(aq*\(aq pkg.available_version Firefox {\(aqFirefox\(aq: {\(aq15.0.1\(aq: \(aqMozilla Firefox 15.0.1 (x86 en\-US)\(aq, \(aq16.0.2\(aq: \(aqMozilla Firefox 16.0.2 (x86 en\-US)\(aq, \(aq17.0.1\(aq: \(aqMozilla Firefox 17.0.1 (x86 en\-US)\(aq}} .ft P .fi .sp As you can see, there are three versions of Firefox available for installation. You can refer to a software package by its \fBname\fP or by its \fBfull_name\fP surrounded by single quotes. .sp .nf .ft C salt \(aq*\(aq pkg.install \(aqFirefox\(aq .ft P .fi .sp The above line will install the latest version of Firefox. .sp .nf .ft C salt \(aq*\(aq pkg.install \(aqFirefox\(aq version=16.0.2 .ft P .fi .sp The above line will install version 16.0.2 of Firefox. .sp If a different version of the package is already installed it will be replaced with the version in winrepo (only if the package itself supports live updating). .sp You can also specify the full name: .sp .nf .ft C salt \(aq*\(aq pkg.install \(aqMozilla Firefox 17.0.1 (x86 en\-US)\(aq .ft P .fi .SS Uninstall Windows Software .sp Uninstall software using the pkg module: .sp .nf .ft C salt \(aq*\(aq pkg.remove \(aqFirefox\(aq salt \(aq*\(aq pkg.purge \(aqFirefox\(aq .ft P .fi .sp \fBpkg.purge\fP just executes \fBpkg.remove\fP on Windows. At some point in the future \fBpkg.purge\fP may direct the installer to remove all configs and settings for software packages that support that option. .SS Standalone Minion Salt Windows Repo Module .sp In order to facilitate managing a Salt Windows software repo with Salt on a Standalone Minion on Windows, a new module named winrepo has been added to Salt. winrepo matches what is available in the salt runner and allows you to manage the Windows software repo contents. Example: \fBsalt \(aq*\(aq winrepo.genrepo\fP .SS Git Hosted Repo .sp Windows software package definitions can also be hosted in one or more git repositories. The default repo is one hosted on Github.com by SaltStack, Inc., which includes package definitions for open source software. This repo points to the HTTP or ftp locations of the installer files. Anyone is welcome to send a pull request to this repo to add new package definitions. Browse the repo here: \fI\%https://github.com/saltstack/salt-winrepo\fP. .sp Configure which git repos the master can search for package definitions by modifying or extending the \fBwin_gitrepos\fP configuration option list in the master config.
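.sp
For example, the master config might list the SaltStack\-hosted repo mentioned above (a sketch; the \fB.git\fP clone URL is an assumption, so add or replace entries to point at your own repositories):
.sp
.nf
.ft C
# Master config: git repos to search for package definitions
win_gitrepos:
  \- \(aqhttps://github.com/saltstack/salt\-winrepo.git\(aq
.ft P
.fi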
.sp Check out each git repo in \fBwin_gitrepos\fP, compile your package repository cache, and then refresh each minion\(aqs package cache: .sp .nf .ft C salt\-run winrepo.update_git_repos salt\-run winrepo.genrepo salt \(aq*\(aq pkg.refresh_db .ft P .fi .SS Troubleshooting .SS Incorrect name/version .sp If the package seems to install properly but salt reports a failure, then it is likely you have a version or \fBfull_name\fP mismatch. .sp Check the exact \fBfull_name\fP and version used by the package. Use \fBpkg.list_pkgs\fP to check that the names and version exactly match what is installed. .SS Changes to sls files not being picked up .sp Ensure you have (re)generated the repository cache file and then updated the repository cache on the relevant minions: .sp .nf .ft C salt\-run winrepo.genrepo salt \(aqMINION\(aq pkg.refresh_db .ft P .fi .SS Package management under Windows 2003 .sp On Windows Server 2003, you need to install the optional Windows component "WMI Windows Installer Provider" to have a full list of installed packages. If you don\(aqt have this, salt\-minion can\(aqt report some installed software. .SH WINDOWS-SPECIFIC BEHAVIOUR .sp Salt is capable of managing Windows systems; however, due to various differences between the operating systems, there are some things you need to keep in mind. .sp This document will contain any quirks that apply across Salt or generally across multiple module functions. Any Windows\-specific behaviour for particular module functions will be documented in the module function documentation. Therefore this document should be read in conjunction with the module function documentation. .SS Group parameter for files .sp Salt was originally written for managing Unix\-based systems, and therefore the file module functions were designed around that security model. Rather than trying to shoehorn that model on to Windows, Salt ignores these parameters and makes non\-applicable module functions unavailable instead. .sp One of the commonly ignored parameters is the \fBgroup\fP parameter for managing files. Under Windows, while files do have a \(aqprimary group\(aq property, this is rarely used. It generally has no bearing on permissions unless intentionally configured and is most commonly used to provide Unix compatibility (e.g. Services For Unix, NFS services). .sp Because of this, any file module functions that typically require a group do not under Windows. Attempts to directly use file module functions that operate on the group (e.g. \fBfile.chgrp\fP) will return a pseudo\-value and cause a log message to appear. No group parameters will be acted on. .sp If you do want to access and change the \(aqprimary group\(aq property and understand the implications, use the \fBfile.get_pgid\fP or \fBfile.get_pgroup\fP functions or the \fBpgroup\fP parameter on the \fBfile.chown\fP module function. .SS Dealing with case\-insensitive but case\-preserving names .sp Windows is case\-insensitive, but it preserves the case of names, and it is this preserved form that is returned from system functions. This causes some issues with Salt because it assumes case\-sensitive names. These issues generally occur in the state functions and can cause bizarre\-looking errors. .sp To avoid such issues, always pretend Windows is case\-sensitive and use the right case for names, e.g. specify \fBuser=Administrator\fP instead of \fBuser=administrator\fP. .sp Follow \fI\%issue 11801\fP for any changes to this behaviour.
.SS Dealing with various username forms .sp Salt does not understand the various forms that Windows usernames can come in, e.g. username, mydomain\eusername, and \fI\%username@mydomain.tld\fP can all refer to the same user. In fact, Salt generally only considers the raw username value, i.e. the username without the domain or host information. .sp Using these alternative forms will likely confuse Salt and cause odd errors to happen. Use only the raw username value in the correct case to avoid problems. .sp Follow \fI\%issue 11801\fP for any changes to this behaviour. .SS Specifying the None group .sp Each Windows system has a built\-in \fINone\fP group. This is the default \(aqprimary group\(aq for files for users not in a domain environment. .sp Unfortunately, the word \fINone\fP has special meaning in Python \- it is a special value indicating \(aqnothing\(aq, similar to \fBnull\fP or \fBnil\fP in other languages. .sp To specify the None group, it must be specified in quotes, e.g. \fB./salt \(aq*\(aq file.chpgrp C:\epath\eto\efile "\(aqNone\(aq"\fP. .SS Symbolic link loops .sp Under Windows, if any symbolic link loops are detected or if there are too many levels of symlinks (defaults to 64), an error is always raised. .sp For some functions, this behaviour is different to the behaviour on Unix platforms. In general, avoid symlink loops on either platform. .SS Modifying security properties (ACLs) on files .sp There is no support in Salt for modifying ACLs, and therefore no support for changing file permissions, besides modifying the owner/user. .SH SALT CLOUD .SS Getting Started .SS Install Salt Cloud .sp Salt Cloud is now part of Salt proper. It was merged in as of \fBSalt version 2014.1.0\fP. .sp Salt Cloud depends on \fBapache\-libcloud\fP. Libcloud can be installed via pip with \fBpip install apache\-libcloud\fP. .SS Installing Salt Cloud for development .sp Installing Salt for development enables Salt Cloud development as well; just make sure \fBapache\-libcloud\fP is installed as described in the paragraph above. .sp See these instructions: \fBInstalling Salt for development\fP. .SS Using Salt Cloud .SS VM Profiles .sp Salt Cloud designates virtual machines inside the profile configuration file. The profile configuration file defaults to \fB/etc/salt/cloud.profiles\fP and is a YAML configuration file. The syntax for declaring profiles is simple: .sp .nf .ft C fedora_rackspace: provider: rackspace image: Fedora 17 size: 256 server script: bootstrap\-salt .ft P .fi .sp It should be noted that the \fBscript\fP option defaults to \fBbootstrap\-salt\fP, and does not normally need to be specified. Further examples in this document will not show the \fBscript\fP option. .sp A few key pieces of information need to be declared and can change based on the public cloud provider. A number of additional parameters can also be inserted: .sp .nf .ft C centos_rackspace: provider: rackspace image: CentOS 6.2 size: 1024 server minion: master: salt.example.com append_domain: webs.example.com grains: role: webserver .ft P .fi .sp The image must be selected from available images. Similarly, sizes must be selected from the list of sizes. To get a list of available images and sizes, use the following commands: .sp .nf .ft C salt\-cloud \-\-list\-images openstack salt\-cloud \-\-list\-sizes openstack .ft P .fi .sp Some parameters can be specified in the main Salt Cloud configuration file and then are applied to all cloud profiles.
For instance, if only a single cloud provider is being used, then the provider option can be declared in the Salt Cloud configuration file. .SS Multiple Configuration Files .sp In addition to \fB/etc/salt/cloud.profiles\fP, profiles can also be specified in any file matching \fBcloud.profiles.d/*.conf\fP in a sub\-directory relative to the profiles configuration file (with the above configuration file as an example, \fB/etc/salt/cloud.profiles.d/*.conf\fP). This allows for more extensible configuration, and plays nicely with various configuration management tools as well as version control systems. .SS Larger Example .sp .nf .ft C rhel_ec2: provider: ec2 image: ami\-e565ba8c size: Micro Instance minion: cheese: edam ubuntu_ec2: provider: ec2 image: ami\-7e2da54e size: Micro Instance minion: cheese: edam ubuntu_rackspace: provider: rackspace image: Ubuntu 12.04 LTS size: 256 server minion: cheese: edam fedora_rackspace: provider: rackspace image: Fedora 17 size: 256 server minion: cheese: edam cent_linode: provider: linode image: CentOS 6.2 64bit size: Linode 512 cent_gogrid: provider: gogrid image: 12834 size: 512MB cent_joyent: provider: joyent image: centos\-6 size: Small 1GB .ft P .fi .SS Cloud Map File .sp A number of options exist when creating virtual machines. They can be managed directly from profiles and command line execution, or a more complex map file can be created. The map file allows for a number of virtual machines to be created and associated with specific profiles. .sp Map files have a simple format: specify a profile and then a list of virtual machines to make from said profile: .sp .nf .ft C fedora_small: \- web1 \- web2 \- web3 fedora_high: \- redis1 \- redis2 \- redis3 cent_high: \- riak1 \- riak2 \- riak3 .ft P .fi .sp This map file can then be called to roll out all of these virtual machines. Map files are called from the salt\-cloud command with the \-m option: .sp .nf .ft C $ salt\-cloud \-m /path/to/mapfile .ft P .fi .sp Remember that, as with direct profile provisioning, the \-P option can be passed to create the virtual machines in parallel: .sp .nf .ft C $ salt\-cloud \-m /path/to/mapfile \-P .ft P .fi .sp A map file can also be enforced to represent the total state of a cloud deployment by using the \fB\-\-hard\fP option. When using the hard option, any VMs that exist but are not specified in the map file will be destroyed: .sp .nf .ft C $ salt\-cloud \-m /path/to/mapfile \-P \-H .ft P .fi .sp Be careful with this argument; it is very dangerous! In fact, it is so dangerous that in order to use it, you must explicitly enable it in the main configuration file. .sp .nf .ft C enable_hard_maps: True .ft P .fi .sp A map file can include grains and minion configuration options: .sp .nf .ft C fedora_small: \- web1: minion: log_level: debug grains: cheese: tasty omelet: du fromage \- web2: minion: log_level: warn grains: cheese: more tasty omelet: with peppers .ft P .fi .sp A map file may also be used with the various query options: .sp .nf .ft C $ salt\-cloud \-m /path/to/mapfile \-Q {\(aqec2\(aq: {\(aqweb1\(aq: {\(aqid\(aq: \(aqi\-e6aqfegb\(aq, \(aqimage\(aq: None, \(aqprivate_ips\(aq: [], \(aqpublic_ips\(aq: [], \(aqsize\(aq: None, \(aqstate\(aq: 0}}, \(aqweb2\(aq: {\(aqAbsent\(aq}} .ft P .fi .sp ...or with the delete option: .sp .nf .ft C $ salt\-cloud \-m /path/to/mapfile \-d The following virtual machines are set to be destroyed: web1 web2 Proceed?
[N/y] .ft P .fi .SS Setting up New Salt Masters .sp Bootstrapping a new master in the map is as simple as: .sp .nf .ft C fedora_small: \- web1: make_master: True \- web2 \- web3 .ft P .fi .sp Notice that \fBALL\fP bootstrapped minions from the map will answer to the newly created salt\-master. .sp To make any of the bootstrapped minions answer to the bootstrapping salt\-master instead of the newly created salt\-master, configure them as in this example: .sp .nf .ft C fedora_small: \- web1: make_master: True minion: master: local_master: True \- web2 \- web3 .ft P .fi .sp The above says the minion running on the newly created salt\-master responds to the local master, i.e., the master used to bootstrap these VMs. .sp Another example: .sp .nf .ft C fedora_small: \- web1: make_master: True \- web2 \- web3: minion: master: local_master: True .ft P .fi .sp The above example makes the \fBweb3\fP minion answer to the local master, not the newly created master. .SS Cloud Actions .sp Once a VM has been created, there are a number of actions that can be performed on it. The "reboot" action can be used across all providers, but all other actions are specific to the cloud provider. In order to perform an action, you may specify it from the command line, including the name(s) of the VM to perform the action on: .sp .nf .ft C $ salt\-cloud \-a reboot vm_name $ salt\-cloud \-a reboot vm1 vm2 vm3 .ft P .fi .sp Or you may specify a map which includes all VMs to perform the action on: .sp .nf .ft C $ salt\-cloud \-a reboot \-m /path/to/mapfile .ft P .fi .sp The following is a list of actions currently supported by salt\-cloud: .sp .nf .ft C all providers: \- reboot ec2: \- start \- stop joyent: \- stop .ft P .fi .SS Cloud Functions .sp Cloud functions work much the same way as cloud actions, except that they don\(aqt perform an operation on a specific instance, and so do not need a machine name to be specified. However, since they perform an operation on a specific cloud provider, that provider must be specified. .sp .nf .ft C $ salt\-cloud \-f show_image ec2 image=ami\-fd20ad94 .ft P .fi .SS Core Configuration .sp A number of core configuration options and some options that are global to the VM profiles can be set in the cloud configuration file. By default this file is located at \fB/etc/salt/cloud\fP. .SS Thread Pool Size .sp When Salt Cloud is operating in parallel mode via the \fB\-P\fP argument, you can control the thread pool size by specifying the \fBpool_size\fP parameter with a positive integer value. .sp By default, the thread pool size will be set to the number of VMs that Salt Cloud is operating on. .sp .nf .ft C pool_size: 10 .ft P .fi .SS Minion Configuration .sp The default minion configuration is set up in this file. Minions created by salt\-cloud derive their configuration from this file. Almost all parameters found in \fIConfiguring the Salt Minion\fP can be used here. .sp .nf .ft C minion: master: saltmaster.example.com .ft P .fi .sp In particular, this is the place to specify the location of the salt master and its listening port, if the port is not set to the default. .SS New Cloud Configuration Syntax .sp The data specific to interacting with public clouds is set up here. .sp \fBATTENTION\fP: Since version 0.8.7, a new cloud provider configuration syntax has been implemented. It allows for multiple configurations of the same cloud provider where only minor details change, for example, the region for an EC2 instance.
While the old format is still supported and automatically migrated every time the salt\-cloud configuration is parsed, a choice was made to warn the user or even exit with an error if both formats are mixed. .SS Migrating Configurations .sp If you wish to migrate, there are several alternatives. Since the old syntax was mainly used in the main cloud configuration file, see the following before\-and\-after migration example. .INDENT 0.0 .IP \(bu 2 Before migration in \fB/etc/salt/cloud\fP: .UNINDENT .sp .nf .ft C AWS.id: HJGRYCILJLKJYG AWS.key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq AWS.keyname: test AWS.securitygroup: quick\-start AWS.private_key: /root/test.pem .ft P .fi .INDENT 0.0 .IP \(bu 2 After migration in \fB/etc/salt/cloud\fP: .UNINDENT .sp .nf .ft C providers: my\-aws\-migrated\-config: id: HJGRYCILJLKJYG key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq keyname: test securitygroup: quick\-start private_key: /root/test.pem provider: aws .ft P .fi .sp Notice that it\(aqs no longer required to name a cloud provider\(aqs configuration after its provider; it can be an alias, though an additional configuration key is added, \fBprovider\fP. This allows multiple configurations for the same cloud provider to coexist. .sp While moving towards an improved and extensible configuration handling regarding the cloud providers, \fB\-\-providers\-config\fP, which defaults to \fB/etc/salt/cloud.providers\fP, was added to the CLI parser. It allows the cloud providers configuration to be provided in a different file, and/or in any matching file in a sub\-directory, \fBcloud.providers.d/*.conf\fP, relative to the providers configuration file (with the above configuration file as an example, \fB/etc/salt/cloud.providers.d/*.conf\fP). .sp So, using the example configuration above, after migration in \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/aws\-migrated.conf\fP: .sp .nf .ft C my\-aws\-migrated\-config: id: HJGRYCILJLKJYG key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq keyname: test securitygroup: quick\-start private_key: /root/test.pem provider: aws .ft P .fi .sp Notice that this last migrated example \fBno longer\fP includes the \fBproviders\fP starting key. .sp While migrating the cloud providers configuration, if the provider alias (in the above example, \fBmy\-aws\-migrated\-config\fP) changes from what you had before (in the above example, \fBaws\fP), you will also need to change the \fBprovider\fP configuration key in the defined profiles.
.INDENT 0.0 .IP \(bu 2 From: .UNINDENT .sp .nf .ft C rhel_aws: provider: aws image: ami\-e565ba8c size: Micro Instance .ft P .fi .INDENT 0.0 .IP \(bu 2 To: .UNINDENT .sp .nf .ft C rhel_aws: provider: my\-aws\-migrated\-config image: ami\-e565ba8c size: Micro Instance .ft P .fi .sp This new configuration syntax even allows you to have multiple cloud configurations under the same alias, for example: .sp .nf .ft C production\-config: \- id: HJGRYCILJLKJYG key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq keyname: test securitygroup: quick\-start private_key: /root/test.pem \- user: example_user apikey: 123984bjjas87034 provider: rackspace .ft P .fi .sp \fBNotice the dash and indentation on the above example.\fP .sp Having multiple entries for a configuration alias also changes how the \fBprovider\fP key must be set on any defined profile; see the example: .sp .nf .ft C rhel_aws_dev: provider: production\-config:aws image: ami\-e565ba8c size: Micro Instance rhel_aws_prod: provider: production\-config:aws image: ami\-e565ba8c size: High\-CPU Extra Large Instance database_prod: provider: production\-config:rackspace image: Ubuntu 12.04 LTS size: 256 server .ft P .fi .sp Notice that because of the multiple entries, one has to be explicit about the provider alias and name, from the above example, \fBproduction\-config:aws\fP. .sp This new syntax also changes the interaction with the \fBsalt\-cloud\fP binary: \fB\-\-list\-locations\fP, \fB\-\-list\-images\fP and \fB\-\-list\-sizes\fP now need a cloud provider as an argument. Since 0.8.7 the argument used should be the configured cloud provider alias. If the provider alias only has a single entry, use \fB<provider_alias>\fP. If it has multiple entries, \fB<provider_alias>:<provider_name>\fP should be used. .SS Pillar Configuration .sp It is possible to configure cloud providers using pillars. This is only used from within the cloud module. You can set up a variable called \fBcloud\fP that contains your profile and provider, to pass that information to the cloud servers instead of having to copy the full configuration to every minion. .sp In your pillar file, you would use something like this: .sp .nf .ft C cloud: ssh_key_name: saltstack ssh_key_file: /root/.ssh/id_rsa update_cachedir: True diff_cache_events: True change_password: True providers: my\-nova: identity_url: https://identity.api.rackspacecloud.com/v2.0/ compute_region: IAD user: myuser api_key: apikey tenant: 123456 provider: nova my\-openstack: identity_url: https://identity.api.rackspacecloud.com/v2.0/tokens user: user2 apikey: apikey2 tenant: 654321 compute_region: DFW provider: openstack compute_name: cloudServersOpenStack profiles: ubuntu\-nova: provider: my\-nova size: performance1\-8 image: bb02b1a3\-bc77\-4d17\-ab5b\-421d89850fca script_args: git develop flush_mine_on_destroy: True ubuntu\-openstack: provider: my\-openstack size: performance1\-8 image: bb02b1a3\-bc77\-4d17\-ab5b\-421d89850fca script_args: git develop flush_mine_on_destroy: True .ft P .fi .sp \fBNOTE\fP: This is only valid in the cloud module, and therefore also in the cloud state. It does not work with the salt\-cloud binary.
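.sp With a pillar like the one above assigned to a minion, that minion can drive provisioning through the cloud execution module rather than the \fBsalt\-cloud\fP binary; a hedged sketch (the minion ID and VM name are hypothetical, the profile name comes from the pillar above): .sp .nf .ft C # hypothetical minion ID \(aqcloud\-controller\(aq and VM name \(aqdbserver1\(aq salt \(aqcloud\-controller\(aq cloud.profile ubuntu\-nova dbserver1 .ft P .fi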
.SS Cloud Configurations .SS Rackspace .sp The Rackspace cloud requires two configuration options: .INDENT 0.0 .IP \(bu 2 Using the old format: .UNINDENT .sp .nf .ft C RACKSPACE.user: example_user RACKSPACE.apikey: 123984bjjas87034 .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new configuration format: .UNINDENT .sp .nf .ft C my\-rackspace\-config: user: example_user apikey: 123984bjjas87034 provider: rackspace .ft P .fi .sp \fBNOTE\fP: With the new providers configuration syntax you would have \fBprovider: my\-rackspace\-config\fP instead of \fBprovider: rackspace\fP on a profile configuration. .SS Amazon AWS .sp A number of configuration options are required for Amazon AWS: .INDENT 0.0 .IP \(bu 2 Using the old format: .UNINDENT .sp .nf .ft C AWS.id: HJGRYCILJLKJYG AWS.key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq AWS.keyname: test AWS.securitygroup: quick\-start AWS.private_key: /root/test.pem .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new configuration format: .UNINDENT .sp .nf .ft C my\-aws\-quick\-start: id: HJGRYCILJLKJYG key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq keyname: test securitygroup: quick\-start private_key: /root/test.pem provider: aws my\-aws\-default: id: HJGRYCILJLKJYG key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq keyname: test securitygroup: default private_key: /root/test.pem provider: aws .ft P .fi .sp \fBNOTE\fP: With the new providers configuration syntax you would have \fBprovider: my\-aws\-quick\-start\fP or \fBprovider: my\-aws\-default\fP instead of \fBprovider: aws\fP on a profile configuration. .SS Linode .sp Linode requires a single API key, but the default root password also needs to be set: .INDENT 0.0 .IP \(bu 2 Using the old format: .UNINDENT .sp .nf .ft C LINODE.apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf LINODE.password: F00barbaz .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new configuration format: .UNINDENT .sp .nf .ft C my\-linode\-config: apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf password: F00barbaz provider: linode .ft P .fi .sp \fBNOTE\fP: With the new providers configuration syntax you would have \fBprovider: my\-linode\-config\fP instead of \fBprovider: linode\fP on a profile configuration. .sp The password needs to be 8 characters and contain lowercase, uppercase, and numbers. .SS Joyent Cloud .sp The Joyent cloud requires three configuration parameters: the username and password that are used to log into the Joyent system, and the location of the private SSH key associated with the Joyent account. The SSH key is needed to send the provisioning commands up to the freshly created virtual machine. .INDENT 0.0 .IP \(bu 2 Using the old format: .UNINDENT .sp .nf .ft C JOYENT.user: fred JOYENT.password: saltybacon JOYENT.private_key: /root/joyent.pem .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new configuration format: .UNINDENT .sp .nf .ft C my\-joyent\-config: user: fred password: saltybacon private_key: /root/joyent.pem provider: joyent .ft P .fi .sp \fBNOTE\fP: With the new providers configuration syntax you would have \fBprovider: my\-joyent\-config\fP instead of \fBprovider: joyent\fP on a profile configuration. .SS GoGrid .sp To use Salt Cloud with GoGrid, log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab.
.sp The GOGRID.apikey and the GOGRID.sharedsecret configuration parameters need to be set in the configuration file to enable interfacing with GoGrid: .INDENT 0.0 .IP \(bu 2 Using the old format: .UNINDENT .sp .nf .ft C GOGRID.apikey: asdff7896asdh789 GOGRID.sharedsecret: saltybacon .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new configuration format: .UNINDENT .sp .nf .ft C my\-gogrid\-config: apikey: asdff7896asdh789 sharedsecret: saltybacon provider: gogrid .ft P .fi .sp \fBNOTE\fP: With the new providers configuration syntax you would have \fBprovider: my\-gogrid\-config\fP instead of \fBprovider: gogrid\fP on a profile configuration. .SS OpenStack .sp OpenStack configuration differs between providers, and at the moment several options need to be specified. This module has been officially tested against the HP and the Rackspace implementations, and some examples are provided for both. .INDENT 0.0 .IP \(bu 2 Using the old format: .UNINDENT .sp .nf .ft C # For HP OPENSTACK.identity_url: \(aqhttps://region\-a.geo\-1.identity.hpcloudsvc.com:35357/v2.0/\(aq OPENSTACK.compute_name: Compute OPENSTACK.compute_region: \(aqaz\-1.region\-a.geo\-1\(aq OPENSTACK.tenant: myuser\-tenant1 OPENSTACK.user: myuser OPENSTACK.ssh_key_name: mykey OPENSTACK.ssh_key_file: \(aq/etc/salt/hpcloud/mykey.pem\(aq OPENSTACK.password: mypass # For Rackspace OPENSTACK.identity_url: \(aqhttps://identity.api.rackspacecloud.com/v2.0/tokens\(aq OPENSTACK.compute_name: cloudServersOpenStack OPENSTACK.protocol: ipv4 OPENSTACK.compute_region: DFW OPENSTACK.protocol: ipv4 OPENSTACK.user: myuser OPENSTACK.tenant: 5555555 OPENSTACK.password: mypass .ft P .fi .sp If you have an API key for your provider, it may be specified instead of a password: .sp .nf .ft C OPENSTACK.apikey: 901d3f579h23c8v73q9 .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new configuration format: .UNINDENT .sp .nf .ft C # For HP my\-openstack\-hp\-config: identity_url: \(aqhttps://region\-a.geo\-1.identity.hpcloudsvc.com:35357/v2.0/\(aq compute_name: Compute compute_region: \(aqaz\-1.region\-a.geo\-1\(aq tenant: myuser\-tenant1 user: myuser ssh_key_name: mykey ssh_key_file: \(aq/etc/salt/hpcloud/mykey.pem\(aq password: mypass provider: openstack # For Rackspace my\-openstack\-rackspace\-config: identity_url: \(aqhttps://identity.api.rackspacecloud.com/v2.0/tokens\(aq compute_name: cloudServersOpenStack protocol: ipv4 compute_region: DFW protocol: ipv4 user: myuser tenant: 5555555 password: mypass provider: openstack .ft P .fi .sp If you have an API key for your provider, it may be specified instead of a password: .sp .nf .ft C my\-openstack\-hp\-config: apikey: 901d3f579h23c8v73q9 my\-openstack\-rackspace\-config: apikey: 901d3f579h23c8v73q9 .ft P .fi .sp \fBNOTE\fP: With the new providers configuration syntax you would have \fBprovider: my\-openstack\-hp\-config\fP or \fBprovider: my\-openstack\-rackspace\-config\fP instead of \fBprovider: openstack\fP on a profile configuration. .sp You will certainly need to configure the \fBuser\fP, \fBtenant\fP and either \fBpassword\fP or \fBapikey\fP. .sp If your OpenStack instances only have private IP addresses and a CIDR range of private addresses are not reachable from the salt\-master, you may set your preference to have Salt ignore it. 
Using the old cloud configuration syntax: .sp .nf .ft C OPENSTACK.ignore_cidr: 192.168.0.0/16 .ft P .fi .sp Using the new syntax: .sp .nf .ft C my\-openstack\-config: ignore_cidr: 192.168.0.0/16 .ft P .fi .sp For an in\-house OpenStack Essex installation, libcloud needs the \fBservice_type\fP: .sp .nf .ft C my\-openstack\-config: identity_url: \(aqhttp://control.openstack.example.org:5000/v2.0/\(aq compute_name : Compute Service service_type : compute .ft P .fi .SS Digital Ocean .sp Using Salt for Digital Ocean requires a client_key and an api_key. These can be found in the Digital Ocean web interface, in the "My Settings" section, under the API Access tab. .INDENT 0.0 .IP \(bu 2 Using the old format: .UNINDENT .sp .nf .ft C DIGITAL_OCEAN.client_key: wFGEwgregeqw3435gDger DIGITAL_OCEAN.api_key: GDE43t43REGTrkilg43934t34qT43t4dgegerGEgg .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new configuration format: .UNINDENT .sp .nf .ft C my\-digitalocean\-config: provider: digital_ocean client_key: wFGEwgregeqw3435gDger api_key: GDE43t43REGTrkilg43934t34qT43t4dgegerGEgg location: New York 1 .ft P .fi .sp \fBNOTE\fP: With the new providers configuration syntax you would have \fBprovider: my\-digitalocean\-config\fP instead of \fBprovider: digital_ocean\fP on a profile configuration. .SS Parallels .sp Using Salt with Parallels requires a user, password and URL. These can be obtained from your cloud provider. .INDENT 0.0 .IP \(bu 2 Using the old format: .UNINDENT .sp .nf .ft C PARALLELS.user: myuser PARALLELS.password: xyzzy PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/ .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new configuration format: .UNINDENT .sp .nf .ft C my\-parallels\-config: user: myuser password: xyzzy url: https://api.cloud.xmission.com:4465/paci/v1.0/ provider: parallels .ft P .fi .sp \fBNOTE\fP: With the new providers configuration syntax you would have \fBprovider: my\-parallels\-config\fP instead of \fBprovider: parallels\fP on a profile configuration. .SS Proxmox .sp Using Salt with Proxmox requires a user, password and URL. These can be obtained from your cloud provider. Both PAM and PVE users can be used. .INDENT 0.0 .IP \(bu 2 Using the new configuration format: .UNINDENT .sp .nf .ft C my\-proxmox\-config: provider: proxmox user: saltcloud@pve password: xyzzy url: your.proxmox.host .ft P .fi .SS lxc .sp The lxc driver is a new, experimental driver for installing Salt on newly provisioned (via salt\-cloud) lxc containers. It will in turn use saltify to install Salt and attach the lxc container as a new lxc minion. We plan to manage bare metal operation over SSH as soon as we can. You can also destroy those containers via this driver. .sp .nf .ft C devhost10\-lxc: target: devhost10 provider: lxc .ft P .fi .sp And in the map file: .sp .nf .ft C devhost10\-lxc: provider: devhost10\-lxc from_container: ubuntu backing: lvm sudo: True size: 3g ip: 10.0.3.9 minion: master: 10.5.0.1 master_port: 4506 lxc_conf: \- lxc.utsname: superlxc .ft P .fi .SS Saltify .sp The Saltify driver is a new, experimental driver for installing Salt on existing machines (virtual or bare metal). Because it does not use an actual cloud provider, it needs no configuration in the main cloud config file. However, it does still require a profile to be set up, and is most useful when used inside a map file. The key parameters to be set are \fBssh_host\fP, \fBssh_username\fP and either \fBssh_keyfile\fP or \fBssh_password\fP. These may all be set in either the profile or the map.
An example configuration might use the following in cloud.profiles: .sp .nf .ft C make_salty: provider: saltify .ft P .fi .sp And in the map file: .sp .nf .ft C make_salty: \- myinstance: ssh_host: 54.262.11.38 ssh_username: ubuntu ssh_keyfile: \(aq/etc/salt/mysshkey.pem\(aq sudo: True .ft P .fi .SS Extending Profiles and Cloud Providers Configuration .sp As of 0.8.7, the option to extend both the profiles and cloud providers configuration and avoid duplication was added. The extends feature works on the current profiles configuration, but, regarding the cloud providers configuration, \fBonly\fP works in the new syntax and respective configuration files, i.e. \fB/etc/salt/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/*.conf\fP. .SS Extending Profiles .sp Some example usage on how to use \fBextends\fP with profiles. Consider \fB/etc/salt/salt/cloud.profiles\fP containing: .sp .nf .ft C development\-instances: provider: my\-ec2\-config size: Micro Instance ssh_username: ec2_user securitygroup: \- default deploy: False Amazon\-Linux\-AMI\-2012.09\-64bit: image: ami\-54cf5c3d extends: development\-instances Fedora\-17: image: ami\-08d97e61 extends: development\-instances CentOS\-5: provider: my\-aws\-config image: ami\-09b61d60 extends: development\-instances .ft P .fi .sp The above configuration, once parsed would generate the following profiles data: .sp .nf .ft C [{\(aqdeploy\(aq: False, \(aqimage\(aq: \(aqami\-08d97e61\(aq, \(aqprofile\(aq: \(aqFedora\-17\(aq, \(aqprovider\(aq: \(aqmy\-ec2\-config\(aq, \(aqsecuritygroup\(aq: [\(aqdefault\(aq], \(aqsize\(aq: \(aqMicro Instance\(aq, \(aqssh_username\(aq: \(aqec2_user\(aq}, {\(aqdeploy\(aq: False, \(aqimage\(aq: \(aqami\-09b61d60\(aq, \(aqprofile\(aq: \(aqCentOS\-5\(aq, \(aqprovider\(aq: \(aqmy\-aws\-config\(aq, \(aqsecuritygroup\(aq: [\(aqdefault\(aq], \(aqsize\(aq: \(aqMicro Instance\(aq, \(aqssh_username\(aq: \(aqec2_user\(aq}, {\(aqdeploy\(aq: False, \(aqimage\(aq: \(aqami\-54cf5c3d\(aq, \(aqprofile\(aq: \(aqAmazon\-Linux\-AMI\-2012.09\-64bit\(aq, \(aqprovider\(aq: \(aqmy\-ec2\-config\(aq, \(aqsecuritygroup\(aq: [\(aqdefault\(aq], \(aqsize\(aq: \(aqMicro Instance\(aq, \(aqssh_username\(aq: \(aqec2_user\(aq}, {\(aqdeploy\(aq: False, \(aqprofile\(aq: \(aqdevelopment\-instances\(aq, \(aqprovider\(aq: \(aqmy\-ec2\-config\(aq, \(aqsecuritygroup\(aq: [\(aqdefault\(aq], \(aqsize\(aq: \(aqMicro Instance\(aq, \(aqssh_username\(aq: \(aqec2_user\(aq}] .ft P .fi .sp Pretty cool right? .SS Extending Providers .sp Some example usage on how to use \fBextends\fP within the cloud providers configuration. 
Consider \fB/etc/salt/salt/cloud.providers\fP containing: .sp .nf .ft C my\-develop\-envs: \- id: HJGRYCILJLKJYG key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq keyname: test securitygroup: quick\-start private_key: /root/test.pem location: ap\-southeast\-1 availability_zone: ap\-southeast\-1b provider: aws \- user: myuser@mycorp.com password: mypass ssh_key_name: mykey ssh_key_file: \(aq/etc/salt/ibm/mykey.pem\(aq location: Raleigh provider: ibmsce my\-productions\-envs: \- extends: my\-develop\-envs:ibmsce user: my\-production\-user@mycorp.com location: us\-east\-1 availability_zone: us\-east\-1 .ft P .fi .sp The above configuration, once parsed would generate the following providers data: .sp .nf .ft C \(aqproviders\(aq: { \(aqmy\-develop\-envs\(aq: [ {\(aqavailability_zone\(aq: \(aqap\-southeast\-1b\(aq, \(aqid\(aq: \(aqHJGRYCILJLKJYG\(aq, \(aqkey\(aq: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq, \(aqkeyname\(aq: \(aqtest\(aq, \(aqlocation\(aq: \(aqap\-southeast\-1\(aq, \(aqprivate_key\(aq: \(aq/root/test.pem\(aq, \(aqprovider\(aq: \(aqaws\(aq, \(aqsecuritygroup\(aq: \(aqquick\-start\(aq }, {\(aqlocation\(aq: \(aqRaleigh\(aq, \(aqpassword\(aq: \(aqmypass\(aq, \(aqprovider\(aq: \(aqibmsce\(aq, \(aqssh_key_file\(aq: \(aq/etc/salt/ibm/mykey.pem\(aq, \(aqssh_key_name\(aq: \(aqmykey\(aq, \(aquser\(aq: \(aqmyuser@mycorp.com\(aq } ], \(aqmy\-productions\-envs\(aq: [ {\(aqavailability_zone\(aq: \(aqus\-east\-1\(aq, \(aqlocation\(aq: \(aqus\-east\-1\(aq, \(aqpassword\(aq: \(aqmypass\(aq, \(aqprovider\(aq: \(aqibmsce\(aq, \(aqssh_key_file\(aq: \(aq/etc/salt/ibm/mykey.pem\(aq, \(aqssh_key_name\(aq: \(aqmykey\(aq, \(aquser\(aq: \(aqmy\-production\-user@mycorp.com\(aq } ] } .ft P .fi .SS Windows Configuration .SS Spinning up Windows Minions .sp It is possible to use Salt Cloud to spin up Windows instances, and then install Salt on them. This functionality is available on all cloud providers that are supported by Salt Cloud. However, it may not necessarily be available on all Windows images. .SS Requirements .sp Salt Cloud makes use of \fIsmbclient\fP and \fIwinexe\fP to set up the Windows Salt Minion installer. \fIsmbclient\fP may be part of either the \fIsamba\fP package, or its own \fIsmbclient\fP package, depending on the distribution. \fIwinexe\fP is less commonly available in distribution\-specific repositories. However, it is currently being built for various distributions in 3rd party channels: .INDENT 0.0 .IP \(bu 2 \fI\%RPMs at pbone.net\fP .UNINDENT .INDENT 0.0 .IP \(bu 2 \fI\%OpenSuse Build Service\fP .UNINDENT .sp Additionally, a copy of the Salt Minion Windows installer must be present on the system on which Salt Cloud is running. This installer may be downloaded from saltstack.com: .INDENT 0.0 .IP \(bu 2 \fI\%SaltStack Download Area\fP .UNINDENT .SS Firewall Settings .sp Because Salt Cloud makes use of \fIsmbclient\fP and \fIwinexe\fP, port 445 must be open on the target image. This port is not generally open by default on a standard Windows distribution, and care must be taken to use an image in which this port is open, or the Windows firewall is disabled. .SS Configuration .sp Configuration is set as usual, with some extra configuration settings. The location of the Windows installer on the machine that Salt Cloud is running on must be specified. This may be done in any of the regular configuration files (main, providers, profiles, maps). 
For example: .sp Setting the installer in \fB/etc/salt/cloud.providers\fP: .sp .nf .ft C my\-softlayer: provider: softlayer user: MYUSER1138 apikey: \(aqe3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9\(aq minion: master: saltmaster.example.com win_installer: /root/Salt\-Minion\-0.17.0\-AMD64\-Setup.exe win_username: Administrator win_password: letmein .ft P .fi .sp The default Windows user is \fIAdministrator\fP, and the default Windows password is blank. .SS Cloud Provider Specifics .SS Getting Started With Aliyun ECS .sp The Aliyun ECS (Elastic Compute Service) is one of the most popular public cloud providers in China. This cloud provider can be used to manage Aliyun instances using salt\-cloud. .sp \fI\%http://www.aliyun.com/\fP .SS Dependencies .sp This driver requires the Python \fBrequests\fP library to be installed. .SS Configuration .sp Using Salt for Aliyun ECS requires an Aliyun access key ID and key secret. These can be found in the Aliyun web interface, in the "User Center" section, under the "My Service" tab. .sp .nf .ft C # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. my\-aliyun\-config: # aliyun Access Key ID id: wDGEwGregedg3435gDgxd # aliyun Access Key Secret key: GDd45t43RDBTrkkkg43934t34qT43t4dgegerGEgg location: cn\-qingdao provider: aliyun .ft P .fi .SS Profiles .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP or in the \fB/etc/salt/cloud.profiles.d/\fP directory: .sp .nf .ft C aliyun_centos: provider: my\-aliyun\-config size: ecs.t1.small location: cn\-qingdao securitygroup: G1989096784427999 image: centos6u3_64_20G_aliaegis_20130816.vhd .ft P .fi .sp Sizes can be obtained using the \fB\-\-list\-sizes\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-sizes my\-aliyun\-config my\-aliyun\-config: \-\-\-\-\-\-\-\-\-\- aliyun: \-\-\-\-\-\-\-\-\-\- ecs.c1.large: \-\-\-\-\-\-\-\-\-\- CpuCoreCount: 8 InstanceTypeId: ecs.c1.large MemorySize: 16.0 \&...SNIP... .ft P .fi .sp Images can be obtained using the \fB\-\-list\-images\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-images my\-aliyun\-config my\-aliyun\-config: \-\-\-\-\-\-\-\-\-\- aliyun: \-\-\-\-\-\-\-\-\-\- centos5u8_64_20G_aliaegis_20131231.vhd: \-\-\-\-\-\-\-\-\-\- Architecture: x86_64 Description: ImageId: centos5u8_64_20G_aliaegis_20131231.vhd ImageName: CentOS 5.8 64位 ImageOwnerAlias: system ImageVersion: 1.0 OSName: CentOS 5.8 64位 Platform: CENTOS5 Size: 20 Visibility: public \&...SNIP...
.ft P .fi .sp Locations can be obtained using the \fB\-\-list\-locations\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C my\-aliyun\-config: \-\-\-\-\-\-\-\-\-\- aliyun: \-\-\-\-\-\-\-\-\-\- cn\-beijing: \-\-\-\-\-\-\-\-\-\- LocalName: 北京 RegionId: cn\-beijing cn\-hangzhou: \-\-\-\-\-\-\-\-\-\- LocalName: 杭州 RegionId: cn\-hangzhou cn\-hongkong: \-\-\-\-\-\-\-\-\-\- LocalName: 香港 RegionId: cn\-hongkong cn\-qingdao: \-\-\-\-\-\-\-\-\-\- LocalName: 青岛 RegionId: cn\-qingdao .ft P .fi .sp Security groups can be obtained using the \fB\-f list_securitygroup\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-location=cn\-qingdao \-f list_securitygroup my\-aliyun\-config my\-aliyun\-config: \-\-\-\-\-\-\-\-\-\- aliyun: \-\-\-\-\-\-\-\-\-\- G1989096784427999: \-\-\-\-\-\-\-\-\-\- Description: G1989096784427999 SecurityGroupId: G1989096784427999 .ft P .fi .IP Note Aliyun ECS REST API documentation is available from \fI\%Aliyun ECS API\fP. .RE .SS Getting Started With Azure .sp New in version 2014.1.0. .sp Azure is a cloud service by Microsoft providing virtual machines, SQL services, media services, and more. This document describes how to use Salt Cloud to create a virtual machine on Azure, with Salt installed. .sp You can find more information about Azure at \fI\%http://www.windowsazure.com/\fP. .SS Dependencies .INDENT 0.0 .IP \(bu 2 The \fI\%Azure\fP Python SDK. .IP \(bu 2 A Microsoft Azure account .IP \(bu 2 OpenSSL (to generate the certificates) .IP \(bu 2 \fI\%Salt\fP .UNINDENT .SS Configuration .sp Set up the provider config at \fB/etc/salt/cloud.providers.d/azure.conf\fP: .sp .nf .ft C # Note: This example is for /etc/salt/cloud.providers.d/azure.conf my\-azure\-config: provider: azure subscription_id: 3287abc8\-f98a\-c678\-3bde\-326766fd3617 certificate_path: /etc/salt/azure.pem # Set up the location of the salt master # minion: master: saltmaster.example.com provider: azure # Optional management_host: management.core.windows.net .ft P .fi .sp The certificate used must be generated by the user. OpenSSL can be used to create the management certificates. Two certificates are needed: a .cer file, which is uploaded to Azure, and a .pem file, which is stored locally. .sp To create the .pem file, execute the following command: .sp .nf .ft C openssl req \-x509 \-nodes \-days 365 \-newkey rsa:1024 \-keyout /etc/salt/azure.pem \-out /etc/salt/azure.pem .ft P .fi .sp To create the .cer file, execute the following command: .sp .nf .ft C openssl x509 \-inform pem \-in /etc/salt/azure.pem \-outform der \-out /etc/salt/azure.cer .ft P .fi .sp After creating these files, the .cer file will need to be uploaded to Azure via the "Upload" action of the "Settings" tab of the management portal. .sp Optionally, a \fBmanagement_host\fP may be configured, if necessary for your region. .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP: .sp .nf .ft C azure\-ubuntu: provider: my\-azure\-config image: \(aqb39f27a8b8c64d52b05eac6a62ebad85__Ubuntu\-12_04_3\-LTS\-amd64\-server\-20131003\-en\-us\-30GB\(aq size: Small location: \(aqEast US\(aq ssh_username: azureuser ssh_password: verybadpass slot: production media_link: \(aqhttp://portalvhdabcdefghijklmn.blob.core.windows.net/vhds\(aq .ft P .fi .sp These options are described in more detail below.
Once configured, the profile can be realized with a salt command: .sp .nf .ft C salt\-cloud \-p azure\-ubuntu newinstance .ft P .fi .sp This will create a salt minion instance named \fBnewinstance\fP in Azure. If the command was executed on the salt\-master, its Salt key will automatically be signed on the master. .sp Once the instance has been created with salt\-minion installed, connectivity to it can be verified with Salt: .sp .nf .ft C salt newinstance test.ping .ft P .fi .SS Profile Options .sp The following options are currently available for Azure. .SS provider .sp The name of the provider as configured in \fI/etc/salt/cloud.providers.d/azure.conf\fP. .SS image .sp The name of the image to use to create a VM. Available images can be viewed using the following command: .sp .nf .ft C salt\-cloud \-\-list\-images my\-azure\-config .ft P .fi .SS size .sp The name of the size to use to create a VM. Available sizes can be viewed using the following command: .sp .nf .ft C salt\-cloud \-\-list\-sizes my\-azure\-config .ft P .fi .SS location .sp The name of the location to create a VM in. Available locations can be viewed using the following command: .sp .nf .ft C salt\-cloud \-\-list\-locations my\-azure\-config .ft P .fi .SS ssh_username .sp The user to use to log into the newly\-created VM to install Salt. .SS ssh_password .sp The password to use to log into the newly\-created VM to install Salt. .SS slot .sp The environment to which the hosted service is deployed. Valid values are \fIstaging\fP or \fIproduction\fP. When set to \fIproduction\fP, the resulting URL of the new VM will be \fI<vm_name>.cloudapp.net\fP. When set to \fIstaging\fP, the resulting URL will contain a generated hash instead. .SS media_link .sp This is the URL of the container that will store the disk that this VM uses. Currently, this container must already exist. If a VM has previously been created in the associated account, a container should already exist. In the web interface, go into the Storage area and click one of the available storage selections. Click the Containers link, and then copy the URL from the container that will be used. It generally looks like: .sp .nf .ft C http://portalvhdabcdefghijklmn.blob.core.windows.net/vhds .ft P .fi .SS Show Instance .sp This action is a thin wrapper around \fB\-\-full\-query\fP, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. .sp .nf .ft C salt\-cloud \-a show_instance myinstance .ft P .fi .SS Getting Started With Digital Ocean .sp Digital Ocean is a public cloud provider that specializes in Linux instances. .SS Dependencies .sp This driver requires the Python \fBrequests\fP library to be installed. .SS Configuration .sp Using Salt for Digital Ocean requires a client_key, an api_key, an ssh_key_file, and an ssh_key_name. The client_key and api_key can be found in the Digital Ocean web interface, in the "My Settings" section, under the API Access tab. The ssh_key_name can be found under the "SSH Keys" section. .sp .nf .ft C # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory.
my\-digitalocean\-config: provider: digital_ocean client_key: wFGEwgregeqw3435gDger api_key: GDE43t43REGTrkilg43934t34qT43t4dgegerGEgg ssh_key_file: /path/to/ssh/key/file ssh_key_name: my\-key\-name location: New York 1 .ft P .fi .SS Profiles .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP or in the \fB/etc/salt/cloud.profiles.d/\fP directory: .sp .nf .ft C digitalocean\-ubuntu: provider: my\-digitalocean\-config image: Ubuntu 14.04 x32 size: 512MB location: New York 1 private_networking: True backups_enabled: True .ft P .fi .sp Sizes can be obtained using the \fB\-\-list\-sizes\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-sizes my\-digitalocean\-config my\-digitalocean\-config: \-\-\-\-\-\-\-\-\-\- digital_ocean: \-\-\-\-\-\-\-\-\-\- 512MB: \-\-\-\-\-\-\-\-\-\- cost_per_hour: 0.00744 cost_per_month: 5.0 cpu: 1 disk: 20 id: 66 memory: 512 name: 512MB slug: None \&...SNIP... .ft P .fi .sp Images can be obtained using the \fB\-\-list\-images\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-images my\-digitalocean\-config my\-digitalocean\-config: \-\-\-\-\-\-\-\-\-\- digital_ocean: \-\-\-\-\-\-\-\-\-\- Arch Linux 2013.05 x64: \-\-\-\-\-\-\-\-\-\- distribution: Arch Linux id: 350424 name: Arch Linux 2013.05 x64 public: True slug: None \&...SNIP... .ft P .fi .IP Note DigitalOcean\(aqs concept of \fBApplications\fP is nothing more than a pre\-configured instance (same as a normal Droplet). You will find examples such \fBDocker 0.7 Ubuntu 13.04 x64\fP and \fBWordpress on Ubuntu 12.10\fP when using the \fB\-\-list\-images\fP option. These names can be used just like the rest of the standard instances when specifying an image in the cloud profile configuration. .RE .IP Note Additional documentation is available from \fI\%Digital Ocean\fP. .RE .SS Getting Started With AWS EC2 .sp Amazon EC2 is a very widely used public cloud platform and one of the core platforms Salt Cloud has been built to support. .sp Previously, the suggested provider for AWS EC2 was the \fBaws\fP provider. This has been deprecated in favor of the \fBec2\fP provider. Configuration using the old \fBaws\fP provider will still function, but that driver is no longer in active development. .SS Dependencies .sp This driver requires the Python \fBrequests\fP library to be installed. .SS Configuration .sp The following example illustrates some of the options that can be set. These parameters are discussed in more detail below. .sp .nf .ft C # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. my\-ec2\-southeast\-public\-ips: # Set up the location of the salt master # minion: master: saltmaster.example.com # Set up grains information, which will be common for all nodes # using this provider grains: node_type: broker release: 1.0.1 # Specify whether to use public or private IP for deploy script. # # Valid options are: # private_ips \- The salt\-master is also hosted with EC2 # public_ips \- The salt\-master is hosted outside of EC2 # ssh_interface: public_ips # Set the EC2 access credentials (see below) # id: HJGRYCILJLKJYG key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq # Make sure this key is owned by root with permissions 0400. # private_key: /etc/salt/my_test_key.pem keyname: my_test_key securitygroup: default # Optionally configure default region # location: ap\-southeast\-1 availability_zone: ap\-southeast\-1b # Configure which user to use to run the deploy script. 
This setting is # dependent upon the AMI that is used to deploy. It is usually safer to # configure this individually in a profile, than globally. Typical users # are: # # Amazon Linux \-> ec2\-user # RHEL \-> ec2\-user # CentOS \-> ec2\-user # Ubuntu \-> ubuntu # ssh_username: ec2\-user # Optionally add an IAM profile iam_profile: \(aqarn:aws:iam::123456789012:instance\-profile/ExampleInstanceProfile\(aq provider: ec2 my\-ec2\-southeast\-private\-ips: # Set up the location of the salt master # minion: master: saltmaster.example.com # Specify whether to use public or private IP for deploy script. # # Valid options are: # private_ips \- The salt\-master is also hosted with EC2 # public_ips \- The salt\-master is hosted outside of EC2 # ssh_interface: private_ips # Set the EC2 access credentials (see below) # id: HJGRYCILJLKJYG key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq # Make sure this key is owned by root with permissions 0400. # private_key: /etc/salt/my_test_key.pem keyname: my_test_key securitygroup: default # Optionally configure default region # location: ap\-southeast\-1 availability_zone: ap\-southeast\-1b # Configure which user to use to run the deploy script. This setting is # dependent upon the AMI that is used to deploy. It is usually safer to # configure this individually in a profile, than globally. Typical users # are: # # Amazon Linux \-> ec2\-user # RHEL \-> ec2\-user # CentOS \-> ec2\-user # Ubuntu \-> ubuntu # ssh_username: ec2\-user # Optionally add an IAM profile iam_profile: \(aqmy other profile name\(aq provider: ec2 .ft P .fi .SS Access Credentials .sp The \fBid\fP and \fBkey\fP settings may be found in the Security Credentials area of the AWS Account page: .sp \fI\%https://portal.aws.amazon.com/gp/aws/securityCredentials\fP .sp Both are located in the Access Credentials area of the page, under the Access Keys tab. The \fBid\fP setting is labeled Access Key ID, and the \fBkey\fP setting is labeled Secret Access Key. .SS Key Pairs .sp In order to create an instance with Salt installed and configured, a key pair will need to be created. This can be done in the EC2 Management Console, in the Key Pairs area. These key pairs are unique to a specific region. Keys in the us\-east\-1 region can be configured at: .sp \fI\%https://console.aws.amazon.com/ec2/home?region=us-east-1#s=KeyPairs\fP .sp Keys in the us\-west\-1 region can be configured at .sp \fI\%https://console.aws.amazon.com/ec2/home?region=us-west-1#s=KeyPairs\fP .sp ...and so on. When creating a key pair, the browser will prompt to download a pem file. This file must be placed in a directory accessible by Salt Cloud, with permissions set to either 0400 or 0600. .SS Security Groups .sp An instance on EC2 needs to belong to a security group. Like key pairs, these are unique to a specific region. These are also configured in the EC2 Management Console. Security groups for the us\-east\-1 region can be configured at: .sp \fI\%https://console.aws.amazon.com/ec2/home?region=us-east-1#s=SecurityGroups\fP .sp ...and so on. .sp A security group defines firewall rules which an instance will adhere to. If the salt\-master is configured outside of EC2, the security group must open the SSH port (usually port 22) in order for Salt Cloud to install Salt. .SS IAM Profile .sp Amazon EC2 instances support the concept of an \fI\%instance profile\fP, which is a logical container for the IAM role. 
At the time that you launch an EC2 instance, you can associate the instance with an instance profile, which in turn corresponds to the IAM role. Any software that runs on the EC2 instance is able to access AWS using the permissions associated with the IAM role. .sp Scaffolding the profile is a 2\-step configuration process: .INDENT 0.0 .IP 1. 3 Configure an IAM Role from the \fI\%IAM Management Console\fP. .IP 2. 3 Attach this role to a new profile. It can be done with the \fI\%AWS CLI\fP: .INDENT 3.0 .INDENT 3.5 .sp .nf .ft C > aws iam create\-instance\-profile \-\-instance\-profile\-name PROFILE_NAME > aws iam add\-role\-to\-instance\-profile \-\-instance\-profile\-name PROFILE_NAME \-\-role\-name ROLE_NAME .ft P .fi .UNINDENT .UNINDENT .UNINDENT .sp Once the profile is created, you can use the \fBPROFILE_NAME\fP to configure your cloud profiles. .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP: .sp .nf .ft C base_ec2_private: provider: my\-ec2\-southeast\-private\-ips image: ami\-e565ba8c size: Micro Instance ssh_username: ec2\-user base_ec2_public: provider: my\-ec2\-southeast\-public\-ips image: ami\-e565ba8c size: Micro Instance ssh_username: ec2\-user base_ec2_db: provider: my\-ec2\-southeast\-public\-ips image: ami\-e565ba8c size: m1.xlarge ssh_username: ec2\-user volumes: \- { size: 10, device: /dev/sdf } \- { size: 10, device: /dev/sdg, type: io1, iops: 1000 } \- { size: 10, device: /dev/sdh, type: io1, iops: 1000 } # optionally add tags to profile: tag: {\(aqEnvironment\(aq: \(aqproduction\(aq, \(aqRole\(aq: \(aqdatabase\(aq} # force grains to sync after install sync_after_install: grains base_ec2_vpc: provider: my\-ec2\-southeast\-public\-ips image: ami\-a73264ce size: m1.xlarge ssh_username: ec2\-user script: /etc/salt/cloud.deploy.d/user_data.sh network_interfaces: \- DeviceIndex: 0 PrivateIpAddresses: \- Primary: True #auto assign public ip (not EIP) AssociatePublicIpAddress: True SubnetId: subnet\-813d4bbf SecurityGroupId: \- sg\-750af413 volumes: \- { size: 10, device: /dev/sdf } \- { size: 10, device: /dev/sdg, type: io1, iops: 1000 } \- { size: 10, device: /dev/sdh, type: io1, iops: 1000 } del_root_vol_on_destroy: True del_all_vol_on_destroy: True tag: {\(aqEnvironment\(aq: \(aqproduction\(aq, \(aqRole\(aq: \(aqdatabase\(aq} sync_after_install: grains .ft P .fi .sp The profile can now be realized with a salt command: .sp .nf .ft C # salt\-cloud \-p base_ec2 ami.example.com # salt\-cloud \-p base_ec2_public ami.example.com # salt\-cloud \-p base_ec2_private ami.example.com .ft P .fi .sp This will create an instance named \fBami.example.com\fP in EC2. The minion that is installed on this instance will have an \fBid\fP of \fBami.example.com\fP. If the command was executed on the salt\-master, its Salt key will automatically be signed on the master. .sp Once the instance has been created with salt\-minion installed, connectivity to it can be verified with Salt: .sp .nf .ft C # salt \(aqami.example.com\(aq test.ping .ft P .fi .SS Required Settings .sp The following settings are always required for EC2: .sp .nf .ft C # Set the EC2 login data my\-ec2\-config: id: HJGRYCILJLKJYG key: \(aqkdjgfsgm;woormgl/aserigjksjdhasdfgn\(aq keyname: test securitygroup: quick\-start private_key: /root/test.pem provider: ec2 .ft P .fi .SS Optional Settings .sp EC2 allows a location to be set for servers to be deployed in. Availability zones exist inside regions, and may be added to increase specificity. 
.sp .nf .ft C my\-ec2\-config: # Optionally configure default region location: ap\-southeast\-1 availability_zone: ap\-southeast\-1b .ft P .fi .sp EC2 instances can have a public or private IP, or both. When an instance is deployed, Salt Cloud needs to log into it via SSH to run the deploy script. By default, the public IP will be used for this. If the salt\-cloud command is run from another EC2 instance, the private IP should be used. .sp .nf .ft C my\-ec2\-config: # Specify whether to use public or private IP for deploy script # private_ips or public_ips ssh_interface: public_ips .ft P .fi .sp Many EC2 instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Some common usernames include ec2\-user (for Amazon Linux), ubuntu (for Ubuntu instances), admin (official Debian) and bitnami (for images provided by Bitnami). .sp .nf .ft C my\-ec2\-config: # Configure which user to use to run the deploy script ssh_username: ec2\-user .ft P .fi .sp Multiple usernames can be provided, in which case Salt Cloud will attempt to guess the correct username. This is mostly useful in the main configuration file: .sp .nf .ft C my\-ec2\-config: ssh_username: \- ec2\-user \- ubuntu \- admin \- bitnami .ft P .fi .sp Multiple security groups can also be specified in the same fashion: .sp .nf .ft C my\-ec2\-config: securitygroup: \- default \- extra .ft P .fi .sp Your instances may optionally make use of EC2 Spot Instances. The following example will request that spot instances be used and your maximum bid will be $0.10. Keep in mind that different spot prices may be needed based on the current value of the various EC2 instance sizes. You can check current and past spot instance pricing via the EC2 API or AWS Console. .sp .nf .ft C my\-ec2\-config: spot_config: spot_price: 0.10 .ft P .fi .sp By default, the spot instance type is set to \(aqone\-time\(aq, meaning it will be launched and, if it\(aqs ever terminated for whatever reason, it will not be recreated. If you would like your spot instances to be relaunched after a termination (by you or AWS), set the \fBtype\fP to \(aqpersistent\(aq (see the combined sketch at the end of this section). .sp NOTE: Spot instances are a great way to save a bit of money, but you do run the risk of losing your spot instances if the current price for the instance size goes above your maximum bid. .sp The following parameters may be set in the cloud configuration file to control various aspects of the spot instance launching: .INDENT 0.0 .IP \(bu 2 \fBwait_for_spot_timeout\fP: seconds to wait before giving up on spot instance launch (default=600) .IP \(bu 2 \fBwait_for_spot_interval\fP: seconds to wait in between polling requests to determine if a spot instance is available (default=30) .IP \(bu 2 \fBwait_for_spot_interval_multiplier\fP: a multiplier to add to the interval in between requests, which is useful if AWS is throttling your requests (default=1) .IP \(bu 2 \fBwait_for_spot_max_failures\fP: maximum number of failures before giving up on launching your spot instance (default=10) .UNINDENT .sp If you find that you\(aqre being throttled by AWS while polling for spot instances, you can set the following in your core cloud configuration file, which will double the polling interval after each request to AWS. .sp .nf .ft C wait_for_spot_interval: 1 wait_for_spot_interval_multiplier: 2 .ft P .fi .sp See the \fI\%AWS Spot Instances\fP documentation for more information.
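.sp Tying the spot settings described above together, a minimal sketch of a provider entry that requests relaunchable (persistent) spot instances with a $0.10 maximum bid: .sp .nf .ft C my\-ec2\-config: spot_config: # maximum bid, as described above spot_price: 0.10 # relaunch after termination instead of the default \(aqone\-time\(aq type: persistent .ft P .fi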
.sp Block device mappings enable you to specify additional EBS volumes or instance store volumes when the instance is launched. This setting is also available on each cloud profile. Note that the number of instance stores varies by instance type. If more mappings are provided than are supported by the instance type, mappings will be created in the order provided and additional mappings will be ignored. Consult the \fI\%AWS documentation\fP for a listing of the available instance stores, device names, and mount points. .sp .nf .ft C my\-ec2\-config: block_device_mappings: \- DeviceName: /dev/sdb VirtualName: ephemeral0 \- DeviceName: /dev/sdc VirtualName: ephemeral1 .ft P .fi .sp You can also use block device mappings to change the size of the root device at provisioning time. For example, assuming the root device is \(aq/dev/sda\(aq, you can set its size to 100G by using the following configuration. .sp .nf .ft C my\-ec2\-config: block_device_mappings: \- DeviceName: /dev/sda Ebs.VolumeSize: 100 .ft P .fi .sp Existing EBS volumes may also be attached (not created) to your instances, or you can create new EBS volumes based on EBS snapshots. To simply attach an existing volume, use the \fBvolume_id\fP parameter. .sp .nf .ft C device: /dev/xvdj mount_point: /mnt/my_ebs volume_id: vol\-12345abcd .ft P .fi .sp Or, to create a volume from an EBS snapshot, use the \fBsnapshot\fP parameter. .sp .nf .ft C device: /dev/xvdj mount_point: /mnt/my_ebs snapshot: snap\-abcd12345 .ft P .fi .sp Note that \fBvolume_id\fP will take precedence over the \fBsnapshot\fP parameter. .sp Tags can be set once an instance has been launched. .sp .nf .ft C my\-ec2\-config: tag: tag0: value tag1: value .ft P .fi .SS Modify EC2 Tags .sp One of the features of EC2 is the ability to tag resources. In fact, under the hood, the names given to EC2 instances by salt\-cloud are actually just stored as a tag called Name. Salt Cloud has the ability to manage these tags: .sp .nf .ft C salt\-cloud \-a get_tags mymachine salt\-cloud \-a set_tags mymachine tag1=somestuff tag2=\(aqOther stuff\(aq salt\-cloud \-a del_tags mymachine tag1,tag2,tag3 .ft P .fi .sp It is possible to manage tags on any resource in EC2 with a Resource ID, not just instances: .sp .nf .ft C salt\-cloud \-f get_tags my_ec2 resource_id=af5467ba salt\-cloud \-f set_tags my_ec2 resource_id=af5467ba tag1=somestuff salt\-cloud \-f del_tags my_ec2 resource_id=af5467ba tag1,tag2,tag3 .ft P .fi .SS Rename EC2 Instances .sp As mentioned above, EC2 instances are named via a tag. However, renaming an instance by renaming its tag will cause the salt keys to mismatch. A rename function exists which renames both the instance and the salt keys. .sp .nf .ft C salt\-cloud \-a rename mymachine newname=yourmachine .ft P .fi .SS EC2 Termination Protection .sp EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed. .sp .nf .ft C salt\-cloud \-a enable_term_protect mymachine salt\-cloud \-a disable_term_protect mymachine .ft P .fi .SS Rename on Destroy .sp When instances on EC2 are destroyed, there will be a lag between the time that the action is sent and the time that Amazon cleans up the instance. During this time, the instance still retains a Name tag, which will cause a collision if the creation of an instance with the same name is attempted before the cleanup occurs. In order to avoid such collisions, Salt Cloud can be configured to rename instances when they are destroyed.
The new name will look something like: .sp .nf .ft C myinstance\-DEL20f5b8ad4eb64ed88f2c428df80a1a0c .ft P .fi .sp In order to enable this, add a rename_on_destroy line to the main configuration file: .sp .nf .ft C my\-ec2\-config: rename_on_destroy: True .ft P .fi .SS Listing Images .sp Normally, images can be queried on a cloud provider by passing the \fB\-\-list\-images\fP argument to Salt Cloud. This still holds true for EC2: .sp .nf .ft C salt\-cloud \-\-list\-images my\-ec2\-config .ft P .fi .sp However, the full list of images on EC2 is extremely large, and querying all of the available images may cause Salt Cloud to behave as if frozen. Therefore, the default behavior of this option may be modified by adding an \fBowner\fP argument to the provider configuration: .sp .nf .ft C owner: aws\-marketplace .ft P .fi .sp The possible values for this setting are \fBamazon\fP, \fBaws\-marketplace\fP, \fBself\fP, \fB\fP or \fBall\fP. The default setting is \fBamazon\fP. Take note that \fBall\fP and \fBaws\-marketplace\fP may cause Salt Cloud to appear as if it is freezing, as it tries to handle the large amount of data. .sp It is also possible to perform this query using different settings without modifying the configuration files. To do this, call the \fBavail_images\fP function directly: .sp .nf .ft C salt\-cloud \-f avail_images my\-ec2\-config owner=aws\-marketplace .ft P .fi .SS EC2 Images .sp The following are lists of available AMI images, generally sorted by OS. These lists are hosted on third\-party websites and are not managed by Salt Stack in any way. They are provided here as a reference for those who are interested, and carry no warranty (express or implied) from anyone affiliated with Salt Stack. Most of them have never been used, much less tested, by the Salt Stack team. .INDENT 0.0 .IP \(bu 2 \fI\%Arch Linux\fP .UNINDENT .INDENT 0.0 .IP \(bu 2 \fI\%FreeBSD\fP .UNINDENT .INDENT 0.0 .IP \(bu 2 \fI\%Fedora\fP .UNINDENT .INDENT 0.0 .IP \(bu 2 \fI\%CentOS\fP .UNINDENT .INDENT 0.0 .IP \(bu 2 \fI\%Ubuntu\fP .UNINDENT .INDENT 0.0 .IP \(bu 2 \fI\%Debian\fP .UNINDENT .INDENT 0.0 .IP \(bu 2 \fI\%OmniOS\fP .UNINDENT .INDENT 0.0 .IP \(bu 2 \fI\%All Images on Amazon\fP .UNINDENT .SS show_image .sp This function describes an AMI on EC2, giving insight into the defaults that will be applied to an instance using a particular AMI. .sp .nf .ft C $ salt\-cloud \-f show_image ec2 image=ami\-fd20ad94 .ft P .fi .SS show_instance .sp This action is a thin wrapper around \fB\-\-full\-query\fP, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data just to examine a single instance. .sp .nf .ft C $ salt\-cloud \-a show_instance myinstance .ft P .fi .SS ebs_optimized .sp This argument toggles the EbsOptimized setting, which defaults to \(aqfalse\(aq. It indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal Amazon EBS I/O performance. This optimization isn\(aqt available with all instance types. Additional usage charges apply when using an EBS\-optimized instance. .sp This setting can be added to the profile or map file for an instance.
.sp If set to True, this setting will enable an instance to be EbsOptimized .sp .nf .ft C ebs_optimized: True .ft P .fi .sp This can also be set as a cloud provider setting in the EC2 cloud configuration: .sp .nf .ft C my\-ec2\-config: ebs_optimized: True .ft P .fi .SS del_root_vol_on_destroy .sp This argument overrides the default DeleteOnTermination setting in the AMI for the EBS root volumes for an instance. Many AMIs contain \(aqfalse\(aq as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance. .sp If set, this setting will apply to the root EBS volume .sp .nf .ft C del_root_vol_on_destroy: True .ft P .fi .sp This can also be set as a cloud provider setting in the EC2 cloud configuration: .sp .nf .ft C my\-ec2\-config: del_root_vol_on_destroy: True .ft P .fi .SS del_all_vols_on_destroy .sp This argument overrides the default DeleteOnTermination setting in the AMI for the not\-root EBS volumes for an instance. Many AMIs contain \(aqfalse\(aq as a default, resulting in orphaned volumes in the EC2 account, which may unknowingly be charged to the account. This setting can be added to the profile or map file for an instance. .sp If set, this setting will apply to any (non\-root) volumes that were created by salt\-cloud using the \(aqvolumes\(aq setting. .sp The volumes will not be deleted under the following conditions * If a volume is detached before terminating the instance * If a volume is created without this setting and attached to the instance .sp .nf .ft C del_all_vols_on_destroy: True .ft P .fi .sp This can also be set as a cloud provider setting in the EC2 cloud configuration: .sp .nf .ft C my\-ec2\-config: del_all_vols_on_destroy: True .ft P .fi .sp The setting for this may be changed on all volumes of an existing instance using one of the following commands: .sp .nf .ft C salt\-cloud \-a delvol_on_destroy myinstance salt\-cloud \-a keepvol_on_destroy myinstance salt\-cloud \-a show_delvol_on_destroy myinstance .ft P .fi .sp The setting for this may be changed on a volume on an existing instance using one of the following commands: .sp .nf .ft C salt\-cloud \-a delvol_on_destroy myinstance device=/dev/sda1 salt\-cloud \-a delvol_on_destroy myinstance volume_id=vol\-1a2b3c4d salt\-cloud \-a keepvol_on_destroy myinstance device=/dev/sda1 salt\-cloud \-a keepvol_on_destroy myinstance volume_id=vol\-1a2b3c4d salt\-cloud \-a show_delvol_on_destroy myinstance device=/dev/sda1 salt\-cloud \-a show_delvol_on_destroy myinstance volume_id=vol\-1a2b3c4d .ft P .fi .SS EC2 Termination Protection .sp EC2 allows the user to enable and disable termination protection on a specific instance. An instance with this protection enabled cannot be destroyed. The EC2 driver adds a show_term_protect action to the regular EC2 functionality. .sp .nf .ft C salt\-cloud \-a show_term_protect mymachine salt\-cloud \-a enable_term_protect mymachine salt\-cloud \-a disable_term_protect mymachine .ft P .fi .SS Alternate Endpoint .sp Normally, EC2 endpoints are build using the region and the service_url. The resulting endpoint would follow this pattern: .sp .nf .ft C ec2.. .ft P .fi .sp This results in an endpoint that looks like: .sp .nf .ft C ec2.us\-east\-1.amazonaws.com .ft P .fi .sp There are other projects that support an EC2 compatibility layer, which this scheme does not account for. 
This can be overridden by specifying the endpoint directly in the main cloud configuration file: .sp .nf .ft C my\-ec2\-config: endpoint: myendpoint.example.com:1138/services/Cloud .ft P .fi .SS Volume Management .sp The EC2 driver has several functions and actions for management of EBS volumes. .SS Creating Volumes .sp A volume may be created, independent of an instance. A zone must be specified. A size or a snapshot may be specified (in GiB). If neither is given, a default size of 10 GiB will be used. If a snapshot is given, the size of the snapshot will be used. .sp .nf .ft C salt\-cloud \-f create_volume ec2 zone=us\-east\-1b salt\-cloud \-f create_volume ec2 zone=us\-east\-1b size=10 salt\-cloud \-f create_volume ec2 zone=us\-east\-1b snapshot=snap12345678 salt\-cloud \-f create_volume ec2 size=10 type=standard salt\-cloud \-f create_volume ec2 size=10 type=io1 iops=1000 .ft P .fi .SS Attaching Volumes .sp Unattached volumes may be attached to an instance. The following values are required; name or instance_id, volume_id and device. .sp .nf .ft C salt\-cloud \-a attach_volume myinstance volume_id=vol\-12345 device=/dev/sdb1 .ft P .fi .SS Show a Volume .sp The details about an existing volume may be retrieved. .sp .nf .ft C salt\-cloud \-a show_volume myinstance volume_id=vol\-12345 salt\-cloud \-f show_volume ec2 volume_id=vol\-12345 .ft P .fi .SS Detaching Volumes .sp An existing volume may be detached from an instance. .sp .nf .ft C salt\-cloud \-a detach_volume myinstance volume_id=vol\-12345 .ft P .fi .SS Deleting Volumes .sp A volume that is not attached to an instance may be deleted. .sp .nf .ft C salt\-cloud \-f delete_volume ec2 volume_id=vol\-12345 .ft P .fi .SS Managing Key Pairs .sp The EC2 driver has the ability to manage key pairs. .SS Creating a Key Pair .sp A key pair is required in order to create an instance. When creating a key pair with this function, the return data will contain a copy of the private key. This private key is not stored by Amazon, will not be obtainable past this point, and should be stored immediately. .sp .nf .ft C salt\-cloud \-f create_keypair ec2 keyname=mykeypair .ft P .fi .SS Show a Key Pair .sp This function will show the details related to a key pair, not including the private key itself (which is not stored by Amazon). .sp .nf .ft C salt\-cloud \-f show_keypair ec2 keyname=mykeypair .ft P .fi .SS Delete a Key Pair .sp This function removes the key pair from Amazon. .sp .nf .ft C salt\-cloud \-f delete_keypair ec2 keyname=mykeypair .ft P .fi .SS Getting Started With GoGrid .sp GoGrid is a public cloud provider supporting Linux and Windows. .SS Dependencies .sp The GoGrid driver for Salt Cloud requires Libcloud 0.13.2 or higher to be installed. .SS Configuration .sp To use Salt Cloud with GoGrid log into the GoGrid web interface and create an API key. Do this by clicking on "My Account" and then going to the API Keys tab. .sp The \fBapikey\fP and the \fBsharedsecret\fP configuration parameters need to be set in the configuration file to enable interfacing with GoGrid: .sp .nf .ft C # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. 
my\-gogrid\-config: provider: gogrid apikey: asdff7896asdh789 sharedsecret: saltybacon .ft P .fi .SS Profiles .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP or in the \fB/etc/salt/cloud.profiles.d/\fP directory: .sp .nf .ft C gogrid_512: provider: my\-gogrid\-config size: 512MB image: CentOS 6.2 (64\-bit) w/ None .ft P .fi .sp Sizes can be obtained using the \fB\-\-list\-sizes\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-sizes my\-gogrid\-config my\-gogrid\-config: \-\-\-\-\-\-\-\-\-\- gogrid: \-\-\-\-\-\-\-\-\-\- 512MB: \-\-\-\-\-\-\-\-\-\- bandwidth: None disk: 30 driver: get_uuid: id: 512MB name: 512MB price: 0.095 ram: 512 uuid: bde1e4d7c3a643536e42a35142c7caac34b060e9 \&...SNIP... .ft P .fi .sp Images can be obtained using the \fB\-\-list\-images\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-images my\-gogrid\-config my\-gogrid\-config: \-\-\-\-\-\-\-\-\-\- gogrid: \-\-\-\-\-\-\-\-\-\- CentOS 6.4 (64\-bit) w/ None: \-\-\-\-\-\-\-\-\-\- driver: extra: \-\-\-\-\-\-\-\-\-\- get_uuid: id: 18094 name: CentOS 6.4 (64\-bit) w/ None uuid: bfd4055389919e01aa6261828a96cf54c8dcc2c4 \&...SNIP... .ft P .fi .SS Getting Started With Google Compute Engine .sp Google Compute Engine (GCE) is Google\-infrastructure as a service that lets you run your large\-scale computing workloads on virtual machines. This document covers how to use Salt Cloud to provision and manage your virtual machines hosted within Google\(aqs infrastructure. .sp You can find out more about GCE and other Google Cloud Platform services at \fI\%https://cloud.google.com\fP. .SS Dependencies .INDENT 0.0 .IP \(bu 2 Libcloud >= 0.14.0\-beta3 .IP \(bu 2 PyCrypto >= 2.1. .IP \(bu 2 A Google Cloud Platform account with Compute Engine enabled .IP \(bu 2 A registered Service Account for authorization .IP \(bu 2 Oh, and obviously you\(aqll need \fI\%salt\fP .UNINDENT .SS Google Compute Engine Setup .INDENT 0.0 .IP 1. 3 Sign up for Google Cloud Platform .sp Go to \fI\%https://cloud.google.com\fP and use your Google account to sign up for Google Cloud Platform and complete the guided instructions. .IP 2. 3 Create a Project .sp Next, go to the console at \fI\%https://cloud.google.com/console\fP and create a new Project. Make sure to select your new Project if you are not automatically directed to the Project. .sp Projects are a way of grouping together related users, services, and billing. You may opt to create multiple Projects and the remaining instructions will need to be completed for each Project if you wish to use GCE and Salt Cloud to manage your virtual machines. .IP 3. 3 Enable the Google Compute Engine service .sp In your Project, either just click \fICompute Engine\fP to the left, or go to the \fIAPIs & auth\fP section and \fIAPIs\fP link and enable the Google Compute Engine service. .IP 4. 3 Create a Service Account .sp To set up authorization, navigate to \fIAPIs & auth\fP section and then the \fICredentials\fP link and click the \fICREATE NEW CLIENT ID\fP button. Select \fIService Account\fP and click the \fICreate Client ID\fP button. This will prompt you to save a private key file. Look for a new \fIService Account\fP section in the page and record the generated email address for the matching key/fingerprint. The email address will be used in the \fBservice_account_email_address\fP of your \fB/etc/salt/cloud\fP file. .IP 5. 3 Key Format .sp You will need to convert the private key to a format compatible with libcloud. 
The original Google\-generated private key was encrypted using \fInotasecret\fP as a passphrase. Use the following command and record the location of the converted private key and record the location for use in the \fBservice_account_private_key\fP of your \fB/etc/salt/cloud\fP file: .sp .nf .ft C openssl pkcs12 \-in ORIG.pkey \-passin pass:notasecret \e \-nodes \-nocerts | openssl rsa \-out NEW.pem .ft P .fi .UNINDENT .SS Configuration .sp Set up the cloud config at \fB/etc/salt/cloud\fP: .sp .nf .ft C # Note: This example is for /etc/salt/cloud providers: gce\-config: # Set up the Project name and Service Account authorization # project: "your_project_name" service_account_email_address: "123\-a5gt@developer.gserviceaccount.com" service_account_private_key: "/path/to/your/NEW.pem" # Set up the location of the salt master # minion: master: saltmaster.example.com # Set up grains information, which will be common for all nodes # using this provider grains: node_type: broker release: 1.0.1 provider: gce .ft P .fi .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP: .sp .nf .ft C all_settings: image: centos\-6 size: n1\-standard\-1 location: europe\-west1\-b network: default tags: \(aq["one", "two", "three"]\(aq metadata: \(aq{"one": "1", "2": "two"}\(aq use_persistent_disk: True delete_boot_pd: False deploy: True make_master: False provider: gce\-config .ft P .fi .sp The profile can be realized now with a salt command: .sp .nf .ft C salt\-cloud \-p all_settings gce\-instance .ft P .fi .sp This will create an salt minion instance named \fBgce\-instance\fP in GCE. If the command was executed on the salt\-master, its Salt key will automatically be signed on the master. .sp Once the instance has been created with salt\-minion installed, connectivity to it can be verified with Salt: .sp .nf .ft C salt \(aqami.example.com\(aq test.ping .ft P .fi .SS GCE Specific Settings .sp Consult the sample profile below for more information about GCE specific settings. Some of them are mandatory and are properly labeled below but typically also include a hard\-coded default. .sp .nf .ft C all_settings: # Image is used to define what Operating System image should be used # to for the instance. Examples are Debian 7 (wheezy) and CentOS 6. # # MANDATORY # image: centos\-6 # A \(aqsize\(aq, in GCE terms, refers to the instance\(aqs \(aqmachine type\(aq. See # the on\-line documentation for a complete list of GCE machine types. # # MANDATORY # size: n1\-standard\-1 # A \(aqlocation\(aq, in GCE terms, refers to the instance\(aqs \(aqzone\(aq. GCE # has the notion of both Regions (e.g. us\-central1, europe\-west1, etc) # and Zones (e.g. us\-central1\-a, us\-central1\-b, etc). # # MANDATORY # location: europe\-west1\-b # Use this setting to define the network resource for the instance. # All GCE projects contain a network named \(aqdefault\(aq but it\(aqs possible # to use this setting to create instances belonging to a different # network resource. # network: default # GCE supports instance/network tags and this setting allows you to # set custom tags. It should be a list of strings and must be # parse\-able by the python ast.literal_eval() function to convert it # to a python list. # tags: \(aq["one", "two", "three"]\(aq # GCE supports instance metadata and this setting allows you to # set custom metadata. It should be a hash of key/value strings and # parse\-able by the python ast.literal_eval() function to convert it # to a python dictionary. 
# metadata: \(aq{"one": "1", "2": "two"}\(aq # Use this setting to ensure that when new instances are created, # they will use a persistent disk to preserve data between instance # terminations and re\-creations. # use_persistent_disk: True # In the event that you wish the boot persistent disk to be permanently # deleted when you destroy an instance, set delete_boot_pd to True. # delete_boot_pd: False .ft P .fi .sp GCE instances do not allow remote access to the root user by default. Instead, another user must be used to run the deploy script using sudo. Append something like this to \fB/etc/salt/cloud.profiles\fP: .sp .nf .ft C all_settings: ... # SSH to GCE instances as gceuser ssh_username: gceuser # Use the local private SSH key file located here ssh_keyfile: /etc/cloud/google_compute_engine .ft P .fi .sp If you have not already used this SSH key to login to instances in this GCE project you will also need to add the public key to your projects metadata at \fI\%https://cloud.google.com/console\fP. You could also add it via the metadata setting too: .sp .nf .ft C all_settings: ... metadata: \(aq{"one": "1", "2": "two", "sshKeys": "gceuser:ssh\-rsa gceuser@host"}\(aq .ft P .fi .SS Single instance details .sp This action is a thin wrapper around \fB\-\-full\-query\fP, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. .sp .nf .ft C salt\-cloud \-a show_instance myinstance .ft P .fi .SS Destroy, persistent disks, and metadata .sp As noted in the provider configuration, it\(aqs possible to force the boot persistent disk to be deleted when you destroy the instance. The way that this has been implemented is to use the instance metadata to record the cloud profile used when creating the instance. When \fBdestroy\fP is called, if the instance contains a \fBsalt\-cloud\-profile\fP key, it\(aqs value is used to reference the matching profile to determine if \fBdelete_boot_pd\fP is set to \fBTrue\fP. .sp Be aware that any GCE instances created with salt cloud will contain this custom \fBsalt\-cloud\-profile\fP metadata entry. .SS List various resources .sp It\(aqs also possible to list several GCE resources similar to what can be done with other providers. The following commands can be used to list GCE zones (locations), machine types (sizes), and images. .sp .nf .ft C salt\-cloud \-\-list\-locations gce salt\-cloud \-\-list\-sizes gce salt\-cloud \-\-list\-images gce .ft P .fi .SS Persistent Disk .sp The Compute Engine provider provides functions via salt\-cloud to manage your Persistent Disks. You can create and destroy disks as well as attach and detach them from running instances. .SS Create .sp When creating a disk, you can create an empty disk and specify its size (in GB), or specify either an \(aqimage\(aq or \(aqsnapshot\(aq. .sp .nf .ft C salt\-cloud \-f create_disk gce disk_name=pd location=us\-central1\-b size=200 .ft P .fi .SS Delete .sp Deleting a disk only requires the name of the disk to delete .sp .nf .ft C salt\-cloud \-f delete_disk gce disk_name=old\-backup .ft P .fi .SS Attach .sp Attaching a disk to an existing instance is really an \(aqaction\(aq and requires both an instance name and disk name. It\(aqs possible to use this ation to create bootable persistent disks if necessary. 
Compute Engine also supports attaching a persistent disk in READ_ONLY mode to multiple instances at the same time (but then cannot be attached in READ_WRITE to any instance). .sp .nf .ft C salt\-cloud \-a attach_disk myinstance disk_name=pd mode=READ_WRITE boot=yes .ft P .fi .SS Detach .sp Detaching a disk is also an action against an instance and only requires the name of the disk. Note that this does \fInot\fP safely sync and umount the disk from the instance. To ensure no data loss, you must first make sure the disk is unmounted from the instance. .sp .nf .ft C salt\-cloud \-a detach_disk myinstance disk_name=pd .ft P .fi .SS Show disk .sp It\(aqs also possible to look up the details for an existing disk with either a function or an action. .sp .nf .ft C salt\-cloud \-a show_disk myinstance disk_name=pd salt\-cloud \-f show_disk gce disk_name=pd .ft P .fi .SS Create snapshot .sp You can take a snapshot of an existing disk\(aqs content. The snapshot can then in turn be used to create other persistent disks. Note that to prevent data corruption, it is strongly suggested that you unmount the disk prior to taking a snapshot. You must name the snapshot and provide the name of the disk. .sp .nf .ft C salt\-cloud \-f create_snapshot gce name=backup\-20140226 disk_name=pd .ft P .fi .SS Delete snapshot .sp You can delete a snapshot when it\(aqs no longer needed by specifying the name of the snapshot. .sp .nf .ft C salt\-cloud \-f delete_snapshot gce name=backup\-20140226 .ft P .fi .SS Show snapshot .sp Use this function to look up information about the snapshot. .sp .nf .ft C salt\-cloud \-f show_snapshot gce name=backup\-20140226 .ft P .fi .SS Networking .sp Compute Engine supports multiple private networks per project. Instances within a private network can easily communicate with each other by an internal DNS service that resolves instance names. Instances within a private network can also communicate with either directly without needing special routing or firewall rules even if they span different regions/zones. .sp Networks also support custom firewall rules. By default, traffic between instances on the same private network is open to all ports and protocols. Inbound SSH traffic (port 22) is also allowed but all other inbound traffic is blocked. .SS Create network .sp New networks require a name and CIDR range. New instances can be created and added to this network by setting the network name during create. It is not possible to add/remove existing instances to a network. .sp .nf .ft C salt\-cloud \-f create_network gce name=mynet cidr=10.10.10.0/24 .ft P .fi .SS Destroy network .sp Destroy a network by specifying the name. Make sure that there are no instances associated with the network prior to deleting it or you\(aqll have a bad day. .sp .nf .ft C salt\-cloud \-f delete_network gce name=mynet .ft P .fi .SS Show network .sp Specify the network name to view information about the network. .sp .nf .ft C salt\-cloud \-f show_network gce name=mynet .ft P .fi .SS Create firewall .sp You\(aqll need to create custom firewall rules if you want to allow other traffic than what is described above. For instance, if you run a web service on your instances, you\(aqll need to explicitly allow HTTP and/or SSL traffic. The firewall rule must have a name and it will use the \(aqdefault\(aq network unless otherwise specified with a \(aqnetwork\(aq attribute. 
Firewalls also support instance tags for source/destination: .sp .nf .ft C salt\-cloud \-f create_fwrule gce name=web allow=tcp:80,tcp:443,icmp .ft P .fi .SS Delete firewall .sp Deleting a firewall rule will prevent any previously allowed traffic for the named firewall rule. .sp .nf .ft C salt\-cloud \-f delete_fwrule gce name=web .ft P .fi .SS Show firewall .sp Use this function to review an existing firewall rule\(aqs information. .sp .nf .ft C salt\-cloud \-f show_fwrule gce name=web .ft P .fi .SS Load Balancer .sp Compute Engine possesses a load\-balancer feature for splitting traffic across multiple instances. Please reference the \fI\%documentation\fP for a more complete description. .sp The load\-balancer functionality is slightly different from that described in Google\(aqs documentation. The concepts of \fITargetPool\fP and \fIForwardingRule\fP are consolidated in salt\-cloud/libcloud. HTTP Health Checks are optional. .SS HTTP Health Check .sp HTTP Health Checks can be used as a means to toggle load\-balancing across instance members, or to detect if an HTTP site is functioning. A common use\-case is to set up a health check URL; if you want to toggle traffic on/off to an instance, you can temporarily have it return a non\-200 response. A non\-200 response to the load\-balancer\(aqs health check will keep the LB from sending any new traffic to the "down" instance. Once the instance\(aqs health check URL begins returning 200 responses, the LB will again start to send traffic to it. Review Compute Engine\(aqs documentation for allowable parameters. You can use the following salt\-cloud functions to manage your HTTP health checks. .sp .nf .ft C salt\-cloud \-f create_hc gce name=myhc path=/ port=80 salt\-cloud \-f delete_hc gce name=myhc salt\-cloud \-f show_hc gce name=myhc .ft P .fi .SS Load\-balancer .sp Creating a new load\-balancer requires a name, region, port range, and list of members. There are other optional parameters for the protocol and a list of health checks. Deleting or showing details about the LB only requires the name. .sp .nf .ft C salt\-cloud \-f create_lb gce name=lb region=... ports=80 members=w1,w2,w3 salt\-cloud \-f delete_lb gce name=lb salt\-cloud \-f show_lb gce name=lb .ft P .fi .SS Attach and Detach LB .sp It is possible to attach or detach an instance from an existing load\-balancer. Both the instance and load\-balancer must exist before using these functions. .sp .nf .ft C salt\-cloud \-f attach_lb gce name=lb member=w4 salt\-cloud \-f detach_lb gce name=lb member=oops .ft P .fi .SS Getting Started With HP Cloud .sp HP Cloud is a major public cloud platform and uses the libcloud \fIopenstack\fP driver. The current version of OpenStack that HP Cloud uses is Havana. When an instance is booted, it must have a floating IP added to it in order to connect to it; the example further below adds context to this statement.
.SS Set up a cloud provider configuration file .sp To use the \fIopenstack\fP driver for HP Cloud, set up the cloud provider configuration file as in the example shown below: .sp \fB/etc/salt/cloud.providers.d/hpcloud.conf\fP: .sp .nf .ft C hpcloud\-config: # Set the location of the salt\-master # minion: master: saltmaster.example.com # Configure HP Cloud using the OpenStack plugin # identity_url: https://region\-b.geo\-1.identity.hpcloudsvc.com:35357/v2.0/tokens compute_name: Compute protocol: ipv4 # Set the compute region: # compute_region: region\-b.geo\-1 # Configure HP Cloud authentication credentials # user: myname tenant: myname\-project1 password: xxxxxxxxx # keys to allow connection to the instance launched # ssh_key_name: yourkey ssh_key_file: /path/to/key/yourkey.priv provider: openstack .ft P .fi .sp The subsequent example that follows is using the openstack driver. .SS Compute Region .sp Originally, HP Cloud, in its OpenStack Essex version (1.0), had 3 availability zones in one region, US West (region\-a.geo\-1), which each behaved each as a region. .sp This has since changed, and the current OpenStack Havana version of HP Cloud (1.1) now has simplified this and now has two regions to choose from: .sp .nf .ft C region\-a.geo\-1 \-> US West region\-b.geo\-1 \-> US East .ft P .fi .SS Authentication .sp The \fBuser\fP is the same user as is used to log into the HP Cloud management UI. The \fBtenant\fP can be found in the upper left under "Project/Region/Scope". It is often named the same as \fBuser\fP albeit with a \fB\-project1\fP appended. The \fBpassword\fP is of course what you created your account with. The management UI also has other information such as being able to select US East or US West. .SS Set up a cloud profile config file .sp The profile shown below is a know working profile for an Ubuntu instance. The profile configuration file is stored in the following location: .sp \fB/etc/salt/cloud.profiles.d/hp_ae1_ubuntu.conf\fP: .sp .nf .ft C hp_ae1_ubuntu: provider: hp_ae1 image: 9302692b\-b787\-4b52\-a3a6\-daebb79cb498 ignore_cidr: 10.0.0.1/24 networks: \- floating: Ext\-Net size: standard.small ssh_key_file: /root/keys/test.key ssh_key_name: test ssh_username: ubuntu .ft P .fi .sp Some important things about the example above: .INDENT 0.0 .IP \(bu 2 The \fBimage\fP parameter can use either the image name or image ID which you can obtain by running in the example below (this case US East): .UNINDENT .sp .nf .ft C # salt\-cloud \-\-list\-images hp_ae1 .ft P .fi .INDENT 0.0 .IP \(bu 2 The parameter \fBignore_cidr\fP specifies a range of addresses to ignore when trying to connect to the instance. In this case, it\(aqs the range of IP addresses used for an private IP of the instance. .IP \(bu 2 The parameter \fBnetworks\fP is very important to include. In previous versions of Salt Cloud, this is what made it possible for salt\-cloud to be able to attach a floating IP to the instance in order to connect to the instance and set up the minion. The current version of salt\-cloud doesn\(aqt require it, though having it is of no harm either. Newer versions of salt\-cloud will use this, and without it, will attempt to find a list of floating IP addresses to use regardless. 
.IP \(bu 2 The \fBssh_key_file\fP and \fBssh_key_name\fP are the keys that will make it possible to connect to the instance to set up the minion. .IP \(bu 2 The \fBssh_username\fP parameter, in this case \fBubuntu\fP because the image used is Ubuntu, makes it possible not only to log in but also to install the minion. .UNINDENT .SS Launch an instance .sp To instantiate a machine based on this profile (example): .sp .nf .ft C # salt\-cloud \-p hp_ae1_ubuntu ubuntu_instance_1 .ft P .fi .sp After several minutes, this will create an instance named ubuntu_instance_1 running in HP Cloud in the US East region, set up the minion, and then return information about the instance once completed. .SS Manage the instance .sp Once the instance has been created with salt\-minion installed, connectivity to it can be verified with Salt: .sp .nf .ft C # salt ubuntu_instance_1 test.ping .ft P .fi .SS SSH to the instance .sp Additionally, the instance can be accessed via SSH using the floating IP assigned to it: .sp .nf .ft C # ssh ubuntu@ .ft P .fi .SS Using a private IP .sp Alternatively, in the cloud profile, using the private IP to log into the instance to set up the minion is another option, particularly if salt\-cloud is running within the cloud on an instance that is on the same network as all the other instances (minions). .sp The example below is a modified version of the previous example. Note the use of \fBssh_interface\fP: .sp .nf .ft C hp_ae1_ubuntu: provider: hp_ae1 image: 9302692b\-b787\-4b52\-a3a6\-daebb79cb498 size: standard.small ssh_key_file: /root/keys/test.key ssh_key_name: test ssh_username: ubuntu ssh_interface: private_ips .ft P .fi .sp With this setup, salt\-cloud will use the private IP address to ssh into the instance and set up the salt\-minion. .SS Getting Started With Joyent .sp Joyent is a public cloud provider supporting SmartOS, Linux, FreeBSD and Windows. .SS Dependencies .sp This driver requires the Python \fBrequests\fP library to be installed. .SS Configuration .sp The Joyent cloud requires three configuration parameters: the user name and password that are used to log into the Joyent system, and the location of the private ssh key associated with the Joyent account. The ssh key is needed to send the provisioning commands up to the freshly created virtual machine. .sp .nf .ft C # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. my\-joyent\-config: provider: joyent user: fred password: saltybacon private_key: /root/joyent.pem .ft P .fi .SS Profiles .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP or in the \fB/etc/salt/cloud.profiles.d/\fP directory: .sp .nf .ft C joyent_512: provider: my\-joyent\-config size: Extra Small 512 MB image: Arch Linux 2013.06 .ft P .fi .sp Sizes can be obtained using the \fB\-\-list\-sizes\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-sizes my\-joyent\-config my\-joyent\-config: \-\-\-\-\-\-\-\-\-\- joyent: \-\-\-\-\-\-\-\-\-\- Extra Small 512 MB: \-\-\-\-\-\-\-\-\-\- default: false disk: 15360 id: Extra Small 512 MB memory: 512 name: Extra Small 512 MB swap: 1024 vcpus: 1 \&...SNIP...
.ft P .fi .sp Images can be obtained using the \fB\-\-list\-images\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-images my\-joyent\-config my\-joyent\-config: \-\-\-\-\-\-\-\-\-\- joyent: \-\-\-\-\-\-\-\-\-\- base: \-\-\-\-\-\-\-\-\-\- description: A 32\-bit SmartOS image with just essential packages installed. Ideal for users who are comfortable with setting up their own environment and tools. disabled: False files: \-\-\-\-\-\-\-\-\-\- \- compression: bzip2 \- sha1: 40cdc6457c237cf6306103c74b5f45f5bf2d9bbe \- size: 82492182 name: base os: smartos owner: 352971aa\-31ba\-496c\-9ade\-a379feaecd52 public: True \&...SNIP... .ft P .fi .SS Getting Started With LXC .sp The LXC module is designed to install Salt in an LXC container on a controlled and possibly remote minion. .sp In other words, Salt will connect to a minion, then from that minion: .INDENT 0.0 .IP \(bu 2 Provision and configure a container for networking access .IP \(bu 2 Use \fIsaltify\fP to deploy salt and re\-attach to master .UNINDENT .SS Limitations .INDENT 0.0 .IP \(bu 2 You can only act on one minion and one provider at a time. .IP \(bu 2 Listing images must be targeted to a particular LXC provider (nothing will be output with \fBall\fP) .UNINDENT .SS Operation .sp Salt\(aqs LXC support does not use lxc.init. This enables it to tie minions to a master in a more generic fashion (if any masters are defined) and allows other custom association code. .sp Order of operation: .INDENT 0.0 .IP \(bu 2 Create the LXC container using \fBthe LXC execution module\fP on the desired minion (clone or template) .IP \(bu 2 Change LXC config options (if any need to be changed) .IP \(bu 2 Start container .IP \(bu 2 Change base passwords if any .IP \(bu 2 Change base DNS configuration if necessary .IP \(bu 2 Wait for LXC container to be up and ready for ssh .IP \(bu 2 Test the SSH connection and bail out on error .IP \(bu 2 Via SSH (with the help of saltify), upload the deploy script and seeds, then re\-attach the minion. .UNINDENT .SS Provider configuration .sp Here is a simple provider configuration: .sp .nf .ft C # Note: This example goes in /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. devhost10\-lxc: target: devhost10 provider: lxc .ft P .fi .SS Profile configuration .sp Here are the options to configure your containers: .sp .nf .ft C \(ga\(gatarget\(ga\(ga Host minion id to install the lxc Container into \(ga\(gaprofile\(ga\(ga Name of the profile containing the LXC configuration Container creation/clone options: Create a container by cloning: \(ga\(gafrom_container\(ga\(ga Name of an original container using clone \(ga\(gasnapshot\(ga\(ga Do we use snapshots on cloned filesystems Create a container from scratch using an LXC template: image template to use backing Backing store type (None, lvm, btrfs) lvname LVM logical volume name, if any fstype Type of filesystem size Size of the container (for btrfs or lvm) vgname LVM Volume Group name, if any users Names of the users to be pre\-created inside the container ssh_username Username of the SSH systems administrator inside the container sudo Do we use sudo ssh_gateway if the minion is not in your \(aqtopmaster\(aq network, use that gateway to connect to the lxc container.
This may be the public ip of the hosting minion ssh_gateway_key When using gateway, the ssh key of the gateway user (passed to saltify) ssh_gateway_port When using gateway, the ssh port of the gateway (passed to saltify) ssh_gateway_user When using gateway, user to login as via SSH (passed to saltify) password password for root and sysadmin (see "users" parameter above) mac mac address to assign to the container\(aqs network interface ip IP address to assign to the container\(aqs network interface netmask netmask for the network interface\(aqs IP bridge bridge under which the container\(aqs network interface will be enslaved dnsservers List of DNS servers to use\-\-this is optional. If present, DNS servers will be restricted to that list if used lxc_conf_unset Configuration variables to unset in this container\(aqs LXC configuration lxc_conf LXC configuration variables to add in this container\(aqs LXC configuration minion minion configuration (see :doc:\(gaMinion Configuration in Salt Cloud \(ga) .ft P .fi .sp .nf .ft C # Note: This example would go in /etc/salt/cloud.profile or any file in the # /etc/salt/cloud.profile.d/ directory. devhost10\-lxc: provider: devhost10\-lxc from_container: ubuntu backing: lvm sudo: True size: 3g ip: 10.0.3.9 minion: master: 10.5.0.1 master_port: 4506 lxc_conf: \- lxc.utsname: superlxc .ft P .fi .SS Driver Support .INDENT 0.0 .IP \(bu 2 Container creation .IP \(bu 2 Image listing (LXC templates) .IP \(bu 2 Running container informations (IP addresses, etc.) .UNINDENT .SS Getting Started With Linode .sp Linode is a public cloud provider with a focus on Linux instances. .SS Dependencies .sp The Linode driver for Salt Cloud requires Libcloud 0.13.2 or higher to be installed. .SS Configuration .sp Linode requires a single API key, but the default root password for new instances also needs to be set: .sp .nf .ft C # Note: This example is for /etc/salt/cloud.providers or any file in the # /etc/salt/cloud.providers.d/ directory. my\-linode\-config: apikey: asldkgfakl;sdfjsjaslfjaklsdjf;askldjfaaklsjdfhasldsadfghdkf password: F00barbaz provider: linode .ft P .fi .sp The password needs to be 8 characters and contain lowercase, uppercase and numbers. .SS Profiles .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP or in the \fB/etc/salt/cloud.profiles.d/\fP directory: .sp .nf .ft C linode_1024: provider: my\-linode\-config size: Linode 1024 image: Arch Linux 2013.06 .ft P .fi .sp Sizes can be obtained using the \fB\-\-list\-sizes\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-sizes my\-linode\-config my\-linode\-config: \-\-\-\-\-\-\-\-\-\- linode: \-\-\-\-\-\-\-\-\-\- Linode 1024: \-\-\-\-\-\-\-\-\-\- bandwidth: 2000 disk: 49152 driver: get_uuid: id: 1 name: Linode 1024 price: 20.0 ram: 1024 uuid: 03e18728ce4629e2ac07c9cbb48afffb8cb499c4 \&...SNIP... .ft P .fi .sp Images can be obtained using the \fB\-\-list\-images\fP option for the \fBsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-\-list\-images my\-linode\-config my\-linode\-config: \-\-\-\-\-\-\-\-\-\- linode: \-\-\-\-\-\-\-\-\-\- Arch Linux 2013.06: \-\-\-\-\-\-\-\-\-\- driver: extra: \-\-\-\-\-\-\-\-\-\- 64bit: 1 pvops: 1 get_uuid: id: 112 name: Arch Linux 2013.06 uuid: 8457f92eaffc92b7666b6734a96ad7abe1a8a6dd \&...SNIP... .ft P .fi .SS Getting Started With OpenStack .sp OpenStack is one the most popular cloud projects. It\(aqs an open source project to build public and/or private clouds. 
You can use Salt Cloud to launch OpenStack instances. .INDENT 0.0 .IP \(bu 2 Using the new format, set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/openstack.conf\fP: .UNINDENT .sp .nf .ft C my\-openstack\-config: # Set the location of the salt\-master # minion: master: saltmaster.example.com # Configure the OpenStack driver # identity_url: http://identity.youopenstack.com/v2.0/tokens compute_name: nova protocol: ipv4 compute_region: RegionOne # Configure Openstack authentication credentials # user: myname password: 123456 # tenant is the project name tenant: myproject provider: openstack # skip SSL certificate validation (default false) insecure: false .ft P .fi .SS Using nova client to get information from OpenStack .sp One of the best ways to get information about OpenStack is using the novaclient python package (available in pypi as python\-novaclient). The client configuration is a set of environment variables that you can get from the Dashboard. Log in and then go to Project \-> Access & security \-> API Access and download the "OpenStack RC file". Then: .sp .nf .ft C source /path/to/your/rcfile nova credentials nova endpoints .ft P .fi .sp In the \fBnova endpoints\fP output you can see the information about \fBcompute_region\fP and \fBcompute_name\fP. .SS Compute Region .sp It depends on the OpenStack cluster that you are using. Please, have a look at the previous sections. .SS Authentication .sp The \fBuser\fP and \fBpassword\fP is the same user as is used to log into the OpenStack Dashboard. .SS Profiles .sp Here is an example of a profile: .sp .nf .ft C openstack_512: provider: my\-openstack\-config size: m1.tiny image: cirros\-0.3.1\-x86_64\-uec ssh_key_file: /tmp/test.pem ssh_key_name: test ssh_interface: private_ips .ft P .fi .sp The following list explains some of the important properties. .INDENT 0.0 .TP .B size can be one of the options listed in the output of \fBnova flavor\-list\fP. .TP .B image can be one of the options listed in the output of \fBnova image\-list\fP. .TP .B ssh_key_file The SSH private key that the salt\-cloud uses to SSH into the VM after its first booted in order to execute a command or script. This private key\(aqs \fIpublic key\fP must be the openstack public key inserted into the authorized_key\(aqs file of the VM\(aqs root user account. .TP .B ssh_key_name The name of the openstack SSH public key that is inserted into the authorized_keys file of the VM\(aqs root user account. Prior to using this public key, you must use openstack commands or the horizon web UI to load that key into the tenant\(aqs account. Note that this openstack tenant must be the one you defined in the cloud provider. .TP .B ssh_interface This option allows you to create a VM without a public IP. If this option is omitted and the VM does not have a public IP, then the salt\-cloud waits for a certain period of time and then destroys the VM. .UNINDENT .sp For more information concerning cloud profiles, see \fBhere\fP. .SS change_password .sp If no ssh_key_file is provided, and the server already exists, change_password will use the api to change the root password of the server so that it can be bootstrapped. .sp .nf .ft C change_password: True .ft P .fi .SS Getting Started With Parallels .sp Parallels Cloud Server is a product by Parallels that delivers a cloud hosting solution. The PARALLELS module for Salt Cloud enables you to manage instances hosted by a provider using PCS. 
Further information can be found at: .sp \fI\%http://www.parallels.com/products/pcs/\fP .INDENT 0.0 .IP \(bu 2 Using the old format, set up the cloud configuration at \fB/etc/salt/cloud\fP: .UNINDENT .sp .nf .ft C # Set up the location of the salt master # minion: master: saltmaster.example.com # Set the PARALLELS access credentials (see below) # PARALLELS.user: myuser PARALLELS.password: badpass # Set the access URL for your PARALLELS provider # PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/ .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new format, set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/parallels.conf\fP: .UNINDENT .sp .nf .ft C my\-parallels\-config: # Set up the location of the salt master # minion: master: saltmaster.example.com # Set the PARALLELS access credentials (see below) # user: myuser password: badpass # Set the access URL for your PARALLELS provider # url: https://api.cloud.xmission.com:4465/paci/v1.0/ provider: parallels .ft P .fi .SS Access Credentials .sp The \fBuser\fP, \fBpassword\fP and \fBurl\fP will be provided to you by your cloud provider. These are all required in order for the PARALLELS driver to work. .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP or \fB/etc/salt/cloud.profiles.d/parallels.conf\fP: .INDENT 0.0 .IP \(bu 2 Using the old cloud configuration format: .UNINDENT .sp .nf .ft C parallels\-ubuntu: provider: parallels image: ubuntu\-12.04\-x86_64 .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new cloud configuration format and the cloud configuration example from above: .UNINDENT .sp .nf .ft C parallels\-ubuntu: provider: my\-parallels\-config image: ubuntu\-12.04\-x86_64 .ft P .fi .sp The profile can be realized now with a salt command: .sp .nf .ft C # salt\-cloud \-p parallels\-ubuntu myubuntu .ft P .fi .sp This will create an instance named \fBmyubuntu\fP on the cloud provider. The minion that is installed on this instance will have an \fBid\fP of \fBmyubuntu\fP. If the command was executed on the salt\-master, its Salt key will automatically be signed on the master. .sp Once the instance has been created with salt\-minion installed, connectivity to it can be verified with Salt: .sp .nf .ft C # salt myubuntu test.ping .ft P .fi .SS Required Settings .sp The following settings are always required for PARALLELS: .INDENT 0.0 .IP \(bu 2 Using the old cloud configuration format: .UNINDENT .sp .nf .ft C PARALLELS.user: myuser PARALLELS.password: badpass PARALLELS.url: https://api.cloud.xmission.com:4465/paci/v1.0/ .ft P .fi .INDENT 0.0 .IP \(bu 2 Using the new cloud configuration format: .UNINDENT .sp .nf .ft C my\-parallels\-config: user: myuser password: badpass url: https://api.cloud.xmission.com:4465/paci/v1.0/ provider: parallels .ft P .fi .SS Optional Settings .sp Unlike other cloud providers in Salt Cloud, Parallels does not utilize a \fBsize\fP setting. This is because Parallels allows the end\-user to specify a more detailed configuration for their instances, than is allowed by many other cloud providers. The following options are available to be used in a profile, with their default settings listed. .sp .nf .ft C # Description of the instance. Defaults to the instance name. 
desc: # How many CPU cores, and how fast they are (in MHz) cpu_number: 1 cpu_power: 1000 # How many megabytes of RAM ram: 256 # Bandwidth available, in kbps bandwidth: 100 # How many public IPs will be assigned to this instance ip_num: 1 # Size of the instance disk (in GiB) disk_size: 10 # Username and password ssh_username: root password: # The name of the image, from \(ga\(gasalt\-cloud \-\-list\-images parallels\(ga\(ga image: ubuntu\-12.04\-x86_64 .ft P .fi .SS Getting Started With Proxmox .sp Proxmox Virtual Environment is a complete server virtualization management solution, based on KVM virtualization and OpenVZ containers. Further information can be found at: .sp \fI\%http://www.proxmox.org/\fP .sp Please note: This module allows you to create both OpenVZ and KVM but installing Salt on it will only be done when the VM is an OpenVZ container rather than a KVM virtual machine. .INDENT 0.0 .IP \(bu 2 Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/proxmox.conf\fP: .UNINDENT .sp .nf .ft C my\-proxmox\-config: # Set up the location of the salt master # minion: master: saltmaster.example.com # Set the PROXMOX access credentials (see below) # user: myuser@pve password: badpass # Set the access URL for your PROXMOX provider # url: your.proxmox.host provider: proxmox .ft P .fi .SS Access Credentials .sp The \fBuser\fP, \fBpassword\fP and \fBurl\fP will be provided to you by your cloud provider. These are all required in order for the PROXMOX driver to work. .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP or \fB/etc/salt/cloud.profiles.d/proxmox.conf\fP: .INDENT 0.0 .IP \(bu 2 Configure a profile to be used: .UNINDENT .sp .nf .ft C proxmox\-ubuntu: provider: proxmox image: local:vztmpl/ubuntu\-12.04\-standard_12.04\-1_amd64.tar.gz technology: openvz host: myvmhost ip_address: 192.168.100.155 password: topsecret .ft P .fi .sp The profile can be realized now with a salt command: .sp .nf .ft C # salt\-cloud \-p proxmox\-ubuntu myubuntu .ft P .fi .sp This will create an instance named \fBmyubuntu\fP on the cloud provider. The minion that is installed on this instance will have a \fBhostname\fP of \fBmyubuntu\fP. If the command was executed on the salt\-master, its Salt key will automatically be signed on the master. .sp Once the instance has been created with salt\-minion installed, connectivity to it can be verified with Salt: .sp .nf .ft C # salt myubuntu test.ping .ft P .fi .SS Required Settings .sp The following settings are always required for PROXMOX: .INDENT 0.0 .IP \(bu 2 Using the new cloud configuration format: .UNINDENT .sp .nf .ft C my\-proxmox\-config: provider: proxmox user: saltcloud@pve password: xyzzy url: your.proxmox.host .ft P .fi .SS Optional Settings .sp Unlike other cloud providers in Salt Cloud, Proxmox does not utilize a \fBsize\fP setting. This is because Proxmox allows the end\-user to specify a more detailed configuration for their instances, than is allowed by many other cloud providers. The following options are available to be used in a profile, with their default settings listed. .sp .nf .ft C # Description of the instance. desc: # How many CPU cores, and how fast they are (in MHz) cpus: 1 cpuunits: 1000 # How many megabytes of RAM memory: 256 # How much swap space in MB swap: 256 # Whether to auto boot the vm after the host reboots onboot: 1 # Size of the instance disk (in GiB) disk: 10 # Host to create this vm on host: myvmhost # Nameservers. 
Defaults to host nameserver: 8.8.8.8 8.8.4.4 # Username and password ssh_username: root password: # The name of the image, from \(ga\(gasalt\-cloud \-\-list\-images proxmox\(ga\(ga image: local:vztmpl/ubuntu\-12.04\-standard_12.04\-1_amd64.tar.gz .ft P .fi .SS Getting Started With Rackspace .sp Rackspace is a major public cloud platform which may be configured using either the \fIrackspace\fP or the \fIopenstack\fP driver, depending on your needs. .sp Please note that the \fIrackspace\fP driver is only intended for 1st gen instances, aka, "the old cloud" at Rackspace. It is required for 1st gen instances, but will \fInot\fP work with OpenStack\-based instances. Unless you explicitly have a reason to use it, it is highly recommended that you use the \fIopenstack\fP driver instead. .INDENT 0.0 .TP .B To use the \fIopenstack\fP driver (recommended), set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/rackspace.conf\fP: .UNINDENT .sp .nf .ft C my\-rackspace\-config: # Set the location of the salt\-master # minion: master: saltmaster.example.com # Configure Rackspace using the OpenStack plugin # identity_url: \(aqhttps://identity.api.rackspacecloud.com/v2.0/tokens\(aq compute_name: cloudServersOpenStack protocol: ipv4 # Set the compute region: # compute_region: DFW # Configure Rackspace authentication credentials # user: myname tenant: 123456 apikey: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx provider: openstack .ft P .fi .INDENT 0.0 .TP .B To use the \fIrackspace\fP driver, set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/rackspace.conf\fP: .UNINDENT .sp .nf .ft C my\-rackspace\-config: provider: rackspace # The Rackspace login user user: fred # The Rackspace user\(aqs apikey apikey: 901d3f579h23c8v73q9 .ft P .fi .sp The settings that follow are for using Rackspace with the \fIopenstack\fP driver, and will not work with the \fIrackspace\fP driver. .SS Compute Region .sp Rackspace currently has six compute regions which may be used: .sp .nf .ft C DFW \-> Dallas/Forth Worth ORD \-> Chicago SYD \-> Sydney LON \-> London IAD \-> Northern Virginia HKG \-> Hong Kong .ft P .fi .sp Note: Currently the LON region is only available with a UK account, and UK accounts cannot access other regions .SS Authentication .sp The \fBuser\fP is the same user as is used to log into the Rackspace Control Panel. The \fBtenant\fP and \fBapikey\fP can be found in the API Keys area of the Control Panel. The \fBapikey\fP will be labeled as API Key (and may need to be generated), and \fBtenant\fP will be labeled as Cloud Account Number. .sp An initial profile can be configured in \fB/etc/salt/cloud.profiles\fP or \fB/etc/salt/cloud.profiles.d/rackspace.conf\fP: .sp .nf .ft C openstack_512: provider: my\-rackspace\-config size: 512 MB Standard image: Ubuntu 12.04 LTS (Precise Pangolin) .ft P .fi .sp To instantiate a machine based on this profile: .sp .nf .ft C # salt\-cloud \-p openstack_512 myinstance .ft P .fi .sp This will create a virtual machine at Rackspace with the name \fBmyinstance\fP. This operation may take several minutes to complete, depending on the current load at the Rackspace data center. 
.sp Once the instance has been created with salt\-minion installed, connectivity to it can be verified with Salt: .sp .nf .ft C # salt myinstance test.ping .ft P .fi .SS RackConnect Environments .sp Rackspace offers a hybrid hosting configuration option called RackConnect that allows you to use a physical firewall appliance with your cloud servers. When this service is in use, the public_ip assigned by nova will be replaced by a NAT IP on the firewall. For salt\-cloud to work properly, it must use the newly assigned "access IP" instead of the Nova\-assigned public IP. You can enable that capability by adding this to your profiles: .sp .nf .ft C openstack_512: provider: my\-openstack\-config size: 512 MB Standard image: Ubuntu 12.04 LTS (Precise Pangolin) rackconnect: True .ft P .fi .SS Managed Cloud Environments .sp Rackspace offers a managed service level of hosting. As part of the managed service level you have the ability to choose from base or lamp installations on cloud server images. The post\-build process for both the base and the lamp installations uses Chef to install things such as the cloud monitoring agent and the cloud backup agent. It also takes care of installing the lamp stack if selected. In order to prevent the post\-installation process from stomping over the bootstrapping, you can add the following to your profiles: .sp .nf .ft C openstack_512: provider: my\-rackspace\-config size: 512 MB Standard image: Ubuntu 12.04 LTS (Precise Pangolin) managedcloud: True .ft P .fi .SS First and Next Generation Images .sp Rackspace provides two sets of virtual machine images, \fIfirst\fP and \fInext\fP generation. As of \fB0.8.9\fP salt\-cloud will default to using the \fInext\fP generation images. To force the use of first generation images, add the following to the profile configuration: .sp .nf .ft C FreeBSD\-9.0\-512: provider: my\-rackspace\-config size: 512 MB Standard image: FreeBSD 9.0 force_first_gen: True .ft P .fi .SS Getting Started With SoftLayer .sp SoftLayer is a public cloud provider and a baremetal hardware hosting provider. .SS Dependencies .sp The SoftLayer driver for Salt Cloud requires the softlayer package, which is available on PyPI: .sp \fI\%https://pypi.python.org/pypi/SoftLayer\fP .sp This package can be installed using \fIpip\fP or \fIeasy_install\fP: .sp .nf .ft C # pip install softlayer # easy_install softlayer .ft P .fi .SS Configuration .sp Set up the cloud config at \fB/etc/salt/cloud.providers\fP: .sp .nf .ft C # Note: These examples are for /etc/salt/cloud.providers my\-softlayer: # Set up the location of the salt master minion: master: saltmaster.example.com # Set the SoftLayer access credentials (see below) user: MYUSER1138 apikey: \(aqe3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9\(aq provider: softlayer my\-softlayer\-hw: # Set up the location of the salt master minion: master: saltmaster.example.com # Set the SoftLayer access credentials (see below) user: MYUSER1138 apikey: \(aqe3b68aa711e6deadc62d5b76355674beef7cc3116062ddbacafe5f7e465bfdc9\(aq provider: softlayer_hw .ft P .fi .SS Access Credentials .sp The \fIuser\fP setting is the same user as is used to log into the SoftLayer Administration area. The \fIapikey\fP setting is found inside the Admin area after logging in: .INDENT 0.0 .IP \(bu 2 Hover over the \fIAdministrative\fP menu item. .IP \(bu 2 Click the \fIAPI Access\fP link. .IP \(bu 2 The \fIapikey\fP is located next to the \fIuser\fP setting.
.UNINDENT .SS Profiles .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP: .sp .nf .ft C base_softlayer_ubuntu: provider: my\-softlayer image: UBUNTU_LATEST cpu_number: 1 ram: 1024 disk_size: 100 local_disk: True hourly_billing: True domain: example.com location: sjc01 # Optional max_net_speed: 1000 private_vlan: 396 private_network: True private_ssh: True # May be used instead of image global_identifier: 320d8be5\-46c0\-dead\-cafe\-13e3c51 .ft P .fi .sp Most of the above items are required; optional items are specified below. .SS image .sp Images to build an instance can be found using the \fI\-\-list\-images\fP option: .sp .nf .ft C # salt\-cloud \-\-list\-images my\-softlayer .ft P .fi .sp The setting used will be labeled as \fItemplate\fP. .SS cpu_number .sp This is the number of CPU cores that will be used for this instance. This number may be dependent upon the image that is used. For instance: .sp .nf .ft C Red Hat Enterprise Linux 6 \- Minimal Install (64 bit) (1 \- 4 Core): \-\-\-\-\-\-\-\-\-\- name: Red Hat Enterprise Linux 6 \- Minimal Install (64 bit) (1 \- 4 Core) template: REDHAT_6_64 Red Hat Enterprise Linux 6 \- Minimal Install (64 bit) (5 \- 100 Core): \-\-\-\-\-\-\-\-\-\- name: Red Hat Enterprise Linux 6 \- Minimal Install (64 bit) (5 \- 100 Core) template: REDHAT_6_64 .ft P .fi .sp Note that the template (meaning, the \fIimage\fP option) for both of these is the same, but the names suggest how many CPU cores are supported. .SS ram .sp This is the amount of memory, in megabytes, that will be allocated to this instance. .SS disk_size .sp The amount of disk space that will be allocated to this image, in megabytes. .SS local_disk .sp When true, the disks for the computing instance will be provisioned on the host on which it runs; otherwise, SAN disks will be provisioned. .SS hourly_billing .sp When true, the computing instance will be billed on hourly usage; otherwise, it will be billed on a monthly basis. .SS domain .sp The domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The \fIdomain\fP setting will be used in conjunction with the instance name to form the FQDN. .SS location .sp Locations available to build an instance can be found using the \fI\-\-list\-locations\fP option: .sp .nf .ft C # salt\-cloud \-\-list\-locations my\-softlayer .ft P .fi .SS max_net_speed .sp Specifies the connection speed for the instance\(aqs network components. This setting is optional. By default, this is set to 10. .SS public_vlan .sp If it is necessary for an instance to be created within a specific frontend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration. .sp This ID can be queried using the \fIlist_vlans\fP function, as described below. This setting is optional. .SS private_vlan .sp If it is necessary for an instance to be created within a specific backend VLAN, the ID for that VLAN can be specified in either the provider or profile configuration. .sp This ID can be queried using the \fIlist_vlans\fP function, as described below. This setting is optional. .SS private_network .sp If a server is to only be used internally, meaning it does not have a public VLAN associated with it, this value would be set to True. This setting is optional. The default is False. .SS private_ssh .sp Whether to run the deploy script on the server using the public IP address or the private IP address. If set to True, Salt Cloud will attempt to SSH into the new server using the private IP address.
The default is False. This setting is optional. .SS global_identifier .sp When creating an instance using a custom template, this option is set to the corresponding value obtained using the \fIlist_custom_images\fP function. This option will not be used if an \fIimage\fP is set, and if an \fIimage\fP is not set, it is required. .sp The profile can be realized now with a salt command: .sp .nf .ft C # salt\-cloud \-p base_softlayer_ubuntu myserver .ft P .fi .sp Using the above configuration, this will create \fImyserver.example.com\fP. .sp Once the instance has been created with salt\-minion installed, connectivity to it can be verified with Salt: .sp .nf .ft C # salt \(aqmyserver.example.com\(aq test.ping .ft P .fi .SS Cloud Profiles .sp Set up an initial profile at \fB/etc/salt/cloud.profiles\fP: .sp .nf .ft C base_softlayer_hw_centos: provider: my\-softlayer\-hw # CentOS 6.0 \- Minimal Install (64 bit) image: 13963 # 2 x 2.0 GHz Core Bare Metal Instance \- 2 GB Ram size: 1921 # 250GB SATA II hdd: 19 # San Jose 01 location: 168642 domain: example.com # Optional vlan: 396 port_speed: 273 bandwidth: 248 .ft P .fi .sp Most of the above items are required; optional items are specified below. .SS image .sp Images to build an instance can be found using the \fI\-\-list\-images\fP option: .sp .nf .ft C # salt\-cloud \-\-list\-images my\-softlayer\-hw .ft P .fi .sp A list of \fIid\fPs and names will be provided. The \fIname\fP will describe the operating system and architecture. The \fIid\fP will be the setting to be used in the profile. .SS size .sp Sizes to build an instance can be found using the \fI\-\-list\-sizes\fP option: .sp .nf .ft C # salt\-cloud \-\-list\-sizes my\-softlayer\-hw .ft P .fi .sp A list of \fIid\fPs and names will be provided. The \fIname\fP will describe the speed and quantity of CPU cores, and the amount of memory that the hardware will contain. The \fIid\fP will be the setting to be used in the profile. .SS hdd .sp There are currently two sizes of hard disk drive (HDD) that are available for hardware instances on SoftLayer: .sp .nf .ft C 19: 250GB SATA II 1267: 500GB SATA II .ft P .fi .sp The \fIhdd\fP setting in the profile will be either 19 or 1267. Other sizes may be added in the future. .SS location .sp Locations to build an instance can be found using the \fI\-\-list\-locations\fP option: .sp .nf .ft C # salt\-cloud \-\-list\-locations my\-softlayer\-hw .ft P .fi .sp A list of IDs and names will be provided. The \fIlocation\fP will describe the location in human terms. The \fIid\fP will be the setting to be used in the profile. .SS domain .sp The domain name that will be used in the FQDN (Fully Qualified Domain Name) for this instance. The \fIdomain\fP setting will be used in conjunction with the instance name to form the FQDN. .SS vlan .sp If it is necessary for an instance to be created within a specific VLAN, the ID for that VLAN can be specified in either the provider or profile configuration. .sp This ID can be queried using the \fIlist_vlans\fP function, as described below. .SS port_speed .sp Specifies the speed for the instance\(aqs network port. This setting refers to an ID within the SoftLayer API, which sets the port speed. This setting is optional. The default is 273, or 100 Mbps Public & Private Networks.
The following settings are available: .INDENT 0.0 .IP \(bu 2 273: 100 Mbps Public & Private Networks .IP \(bu 2 274: 1 Gbps Public & Private Networks .IP \(bu 2 21509: 10 Mbps Dual Public & Private Networks (up to 20 Mbps) .IP \(bu 2 21513: 100 Mbps Dual Public & Private Networks (up to 200 Mbps) .IP \(bu 2 2314: 1 Gbps Dual Public & Private Networks (up to 2 Gbps) .IP \(bu 2 272: 10 Mbps Public & Private Networks .UNINDENT .SS bandwidth .sp Specifies the network bandwidth available for the instance. This setting refers to an ID within the SoftLayer API, which sets the bandwidth. This setting is optional. The default is 248, or, 5000 GB Bandwidth. The following settings are available: .INDENT 0.0 .IP \(bu 2 248: 5000 GB Bandwidth .IP \(bu 2 129: 6000 GB Bandwidth .IP \(bu 2 130: 8000 GB Bandwidth .IP \(bu 2 131: 10000 GB Bandwidth .IP \(bu 2 36: Unlimited Bandwidth (10 Mbps Uplink) .IP \(bu 2 125: Unlimited Bandwidth (100 Mbps Uplink) .UNINDENT .SS Actions .sp The following actions are currently supported by the SoftLayer Salt Cloud driver. .SS show_instance .sp This action is a thin wrapper around \fI\-\-full\-query\fP, which displays details on a single instance only. In an environment with several machines, this will save a user from having to sort through all instance data, just to examine a single instance. .sp .nf .ft C $ salt\-cloud \-a show_instance myinstance .ft P .fi .SS Functions .sp The following functions are currently supported by the SoftLayer Salt Cloud driver. .SS list_vlans .sp This function lists all VLANs associated with the account, and all known data from the SoftLayer API concerning those VLANs. .sp .nf .ft C $ salt\-cloud \-f list_vlans my\-softlayer $ salt\-cloud \-f list_vlans my\-softlayer\-hw .ft P .fi .sp The \fIid\fP returned in this list is necessary for the \fIvlan\fP option when creating an instance. .SS list_custom_images .sp This function lists any custom templates associated with the account, that can be used to create a new instance. .sp .nf .ft C $ salt\-cloud \-f list_custom_images my\-softlayer .ft P .fi .sp The \fIglobalIdentifier\fP returned in this list is necessary for the \fIglobal_identifier\fP option when creating an image using a custom template. .SS Optional Products for SoftLayer HW .sp The softlayer_hw provider supports the ability to add optional products, which are supported by SoftLayer\(aqs API. These products each have an ID associated with them, that can be passed into Salt Cloud with the \fIoptional_products\fP option: .sp .nf .ft C softlayer_hw_test: provider: my\-softlayer\-hw # CentOS 6.0 \- Minimal Install (64 bit) image: 13963 # 2 x 2.0 GHz Core Bare Metal Instance \- 2 GB Ram size: 1921 # 250GB SATA II hdd: 19 # San Jose 01 location: 168642 domain: example.com optional_products: # MySQL for Linux \- id: 28 # Business Continuance Insurance \- id: 104 .ft P .fi .sp These values can be manually obtained by looking at the source of an order page on the SoftLayer web interface. 
For convenience, many of these values are listed here: .SS Public Secondary IP Addresses .INDENT 0.0 .IP \(bu 2 22: 4 Public IP Addresses .IP \(bu 2 23: 8 Public IP Addresses .UNINDENT .SS Primary IPv6 Addresses .INDENT 0.0 .IP \(bu 2 17129: 1 IPv6 Address .UNINDENT .SS Public Static IPv6 Addresses .INDENT 0.0 .IP \(bu 2 1481: /64 Block Static Public IPv6 Addresses .UNINDENT .SS OS\-Specific Addon .INDENT 0.0 .IP \(bu 2 17139: XenServer Advanced for XenServer 6.x .IP \(bu 2 17141: XenServer Enterprise for XenServer 6.x .IP \(bu 2 2334: XenServer Advanced for XenServer 5.6 .IP \(bu 2 2335: XenServer Enterprise for XenServer 5.6 .IP \(bu 2 13915: Microsoft WebMatrix .IP \(bu 2 21276: VMware vCenter 5.1 Standard .UNINDENT .SS Control Panel Software .INDENT 0.0 .IP \(bu 2 121: cPanel/WHM with Fantastico and RVskin .IP \(bu 2 20778: Parallels Plesk Panel 11 (Linux) 100 Domain w/ Power Pack .IP \(bu 2 20786: Parallels Plesk Panel 11 (Windows) 100 Domain w/ Power Pack .IP \(bu 2 20787: Parallels Plesk Panel 11 (Linux) Unlimited Domain w/ Power Pack .IP \(bu 2 20792: Parallels Plesk Panel 11 (Windows) Unlimited Domain w/ Power Pack .IP \(bu 2 2340: Parallels Plesk Panel 10 (Linux) 100 Domain w/ Power Pack .IP \(bu 2 2339: Parallels Plesk Panel 10 (Linux) Unlimited Domain w/ Power Pack .IP \(bu 2 13704: Parallels Plesk Panel 10 (Windows) Unlimited Domain w/ Power Pack .UNINDENT .SS Database Software .INDENT 0.0 .IP \(bu 2 29: MySQL 5.0 for Windows .IP \(bu 2 28: MySQL for Linux .IP \(bu 2 21501: Riak 1.x .IP \(bu 2 20893: MongoDB .IP \(bu 2 30: Microsoft SQL Server 2005 Express .IP \(bu 2 92: Microsoft SQL Server 2005 Workgroup .IP \(bu 2 90: Microsoft SQL Server 2005 Standard .IP \(bu 2 94: Microsoft SQL Server 2005 Enterprise .IP \(bu 2 1330: Microsoft SQL Server 2008 Express .IP \(bu 2 1340: Microsoft SQL Server 2008 Web .IP \(bu 2 1337: Microsoft SQL Server 2008 Workgroup .IP \(bu 2 1334: Microsoft SQL Server 2008 Standard .IP \(bu 2 1331: Microsoft SQL Server 2008 Enterprise .IP \(bu 2 2179: Microsoft SQL Server 2008 Express R2 .IP \(bu 2 2173: Microsoft SQL Server 2008 Web R2 .IP \(bu 2 2183: Microsoft SQL Server 2008 Workgroup R2 .IP \(bu 2 2180: Microsoft SQL Server 2008 Standard R2 .IP \(bu 2 2176: Microsoft SQL Server 2008 Enterprise R2 .UNINDENT .SS Anti\-Virus & Spyware Protection .INDENT 0.0 .IP \(bu 2 594: McAfee VirusScan Anti\-Virus \- Windows .IP \(bu 2 414: McAfee Total Protection \- Windows .UNINDENT .SS Insurance .INDENT 0.0 .IP \(bu 2 104: Business Continuance Insurance .UNINDENT .SS Monitoring .INDENT 0.0 .IP \(bu 2 55: Host Ping .IP \(bu 2 56: Host Ping and TCP Service Monitoring .UNINDENT .SS Notification .INDENT 0.0 .IP \(bu 2 57: Email and Ticket .UNINDENT .SS Advanced Monitoring .INDENT 0.0 .IP \(bu 2 2302: Monitoring Package \- Basic .IP \(bu 2 2303: Monitoring Package \- Advanced .IP \(bu 2 2304: Monitoring Package \- Premium Application .UNINDENT .SS Response .INDENT 0.0 .IP \(bu 2 58: Automated Notification .IP \(bu 2 59: Automated Reboot from Monitoring .IP \(bu 2 60: 24x7x365 NOC Monitoring, Notification, and Response .UNINDENT .SS Intrusion Detection & Protection .INDENT 0.0 .IP \(bu 2 413: McAfee Host Intrusion Protection w/Reporting .UNINDENT .SS Hardware & Software Firewalls .INDENT 0.0 .IP \(bu 2 411: APF Software Firewall for Linux .IP \(bu 2 894: Microsoft Windows Firewall .IP \(bu 2 410: 10Mbps Hardware Firewall .IP \(bu 2 409: 100Mbps Hardware Firewall .IP \(bu 2 408: 1000Mbps Hardware Firewall .UNINDENT .SS Getting Started with VEXXHOST .sp 
\fI\%VEXXHOST\fP is a cloud computing provider which provides \fI\%Canadian cloud computing\fP services, is based in Montreal, and uses the libcloud OpenStack driver. VEXXHOST currently runs the Havana release of OpenStack. When provisioning new instances, they automatically get a public IP and private IP address. Therefore, you do not need to assign a floating IP to access your instance once it\(aqs booted. .SS Cloud Provider Configuration .sp To use the \fIopenstack\fP driver for the VEXXHOST public cloud, you will need to set up the cloud provider configuration file as in the example below. .sp \fB/etc/salt/cloud.providers.d/vexxhost.conf\fP: .sp .nf .ft C vexxhost: # Set the location of the salt\-master # minion: master: saltmaster.example.com # Configure VEXXHOST using the OpenStack plugin # identity_url: http://auth.api.thenebulacloud.com:5000/v2.0/tokens compute_name: nova # Set the compute region: # compute_region: na\-yul\-nhs1 # Configure VEXXHOST authentication credentials # user: your\-tenant\-id password: your\-api\-key tenant: your\-tenant\-name # keys to allow connection to the instance launched # ssh_key_name: yourkey ssh_key_file: /path/to/key/yourkey.priv provider: openstack .ft P .fi .SS Authentication .sp All of the authentication fields that you need can be found by logging into your VEXXHOST customer center. Once you\(aqve logged in, you will need to click on "CloudConsole" and then click on "API Credentials". .SS Cloud Profile Configuration .sp In order to get the correct image UUID and the instance type to use in the cloud profile, you can run the following commands, respectively: .sp .nf .ft C # salt\-cloud \-\-list\-images=vexxhost\-config # salt\-cloud \-\-list\-sizes=vexxhost\-config .ft P .fi .sp Once you have that, you can go ahead and create a new cloud profile. This profile will build an Ubuntu 12.04 LTS \fInb.2G\fP instance. .sp \fB/etc/salt/cloud.profiles.d/vh_ubuntu1204_2G.conf\fP: .sp .nf .ft C vh_ubuntu1204_2G: provider: vexxhost image: 4051139f\-750d\-4d72\-8ef0\-074f2ccc7e5a size: nb.2G .ft P .fi .SS Provision an instance .sp To create an instance based on the sample profile that we created above, you can run the following \fIsalt\-cloud\fP command: .sp .nf .ft C # salt\-cloud \-p vh_ubuntu1204_2G vh_instance1 .ft P .fi .sp Typically, instances are provisioned in under 30 seconds on the VEXXHOST public cloud. After the instance provisions, it will be set up as a minion and then return all the instance information once it\(aqs complete. .sp Once the instance has been set up, you can test connectivity to it by running the following command: .sp .nf .ft C # salt vh_instance1 test.ping .ft P .fi .sp You can now continue to provision new instances and they will all automatically be set up as minions of the master you\(aqve defined in the configuration file. .SS Miscellaneous Options .SS Miscellaneous Salt Cloud Options .sp This page describes various miscellaneous options available in Salt Cloud. .SS Deploy Script Arguments .sp Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt\-bootstrap has been extended quite a bit, and this may be necessary.
script_args can be specified in either the profile or the map file, to pass arguments to the deploy script: .sp .nf .ft C ec2\-amazon: provider: ec2 image: ami\-1624987f size: Micro Instance ssh_username: ec2\-user script: bootstrap\-salt script_args: \-c /tmp/ .ft P .fi .sp This has also been tested to work with pipes, if needed: .sp .nf .ft C script_args: | head .ft P .fi .SS Sync After Install .sp Salt allows users to create custom modules, grains and states which can be synchronised to minions to extend Salt with further functionality. .sp This option will inform Salt Cloud to synchronise your custom modules, grains, states, or all of these to the minion just after it has been created. For this to happen, the following line needs to be added to the main cloud configuration file: .sp .nf .ft C sync_after_install: all .ft P .fi .sp The available options for this setting are: .sp .nf .ft C modules grains states all .ft P .fi .SS Setting up New Salt Masters .sp It has become increasingly common for users to set up multi\-hierarchical infrastructures using Salt Cloud. This sometimes involves setting up an instance to be a master in addition to a minion. With that in mind, you can now lay down master configuration on a machine by specifying master options in the profile or map file. .sp .nf .ft C make_master: True .ft P .fi .sp This will cause Salt Cloud to generate master keys for the instance, and tell salt\-bootstrap to install the salt\-master package, in addition to the salt\-minion package. .sp The default master configuration is usually appropriate for most users, and will not be changed unless specific master configuration has been added to the profile or map: .sp .nf .ft C master: user: root interface: 0.0.0.0 .ft P .fi .SS Delete SSH Keys .sp When Salt Cloud deploys an instance, the SSH pub key for the instance is added to the known_hosts file for the user that ran the salt\-cloud command. When an instance is deployed, a cloud provider generally assigns it a recycled IP address. When Salt Cloud attempts to deploy an instance using a recycled IP address that has previously been accessed from the same machine, the old key in the known_hosts file will cause a conflict. .sp In order to mitigate this issue, Salt Cloud can be configured to remove old keys from the known_hosts file when destroying the node. In order to do this, the following line needs to be added to the main cloud configuration file: .sp .nf .ft C delete_sshkeys: True .ft P .fi .SS Keeping /tmp/ Files .sp When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt\-bootstrap to put in place. After the script has run, they are deleted. To keep these files around (mostly for debugging purposes), the \-\-keep\-tmp option can be added: .sp .nf .ft C salt\-cloud \-p myprofile mymachine \-\-keep\-tmp .ft P .fi .sp For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable). .SS Hide Output From Minion Install .sp By default, Salt Cloud will stream the output from the minion deploy script directly to STDOUT. Although this can be very useful, in certain cases you may wish to switch this off. The following config option is there to enable or disable this output: .sp .nf .ft C display_ssh_output: False .ft P .fi .SS Connection Timeout .sp There are several stages when deploying Salt where Salt Cloud needs to wait for something to happen.
The VM getting its IP address, the VM\(aqs SSH port becoming available, etc. .sp If you find that the Salt Cloud defaults are not enough and your deployment fails because Salt Cloud did not wait long enough, there are some settings you can tweak. .IP "Note" .sp All values should be provided in seconds. .RE .sp You can tweak these settings globally, per cloud provider, or even per profile definition. .SS wait_for_ip_timeout .sp The amount of time Salt Cloud should wait for a VM to start and get an IP back from the cloud provider. Default: 5 minutes. .SS wait_for_ip_interval .sp The amount of time Salt Cloud should sleep while querying for the VM\(aqs IP. Default: 5 seconds. .SS ssh_connect_timeout .sp The amount of time Salt Cloud should wait for a successful SSH connection to the VM. Default: 5 minutes. .SS wait_for_passwd_timeout .sp The amount of time to wait until an SSH connection can be established via password or SSH key. Default: 15 seconds. .SS wait_for_passwd_maxtries .sp The number of attempts to connect to the VM before giving up. Default: 15 attempts. .SS wait_for_fun_timeout .sp Some cloud drivers (namely, SoftLayer and SoftLayer\-HW) check for an available IP or a successful SSH connection using a function. This is the amount of time Salt Cloud should retry such functions before failing. Default: 5 minutes. .SS wait_for_spot_timeout .sp The amount of time Salt Cloud should wait for an EC2 Spot instance to become available. This setting is only available for the EC2 cloud driver. .SS Salt Cloud Cache .sp Salt Cloud can maintain a cache of node data for supported providers. The following options manage this functionality. .SS update_cachedir .sp On supported cloud providers, whether or not to maintain a cache of nodes returned from a \-\-full\-query. The data will be stored in \fBjson\fP format under \fB/cloud/active///.json\fP. This setting can be True or False. .SS diff_cache_events .sp When the cloud cachedir is being managed, if differences are encountered between the data that is returned live from the cloud provider and the data in the cache, fire events which describe the changes. This setting can be True or False. .sp Some of these events will contain data which describe a node. Because some of the fields returned may contain sensitive data, the \fBcache_event_strip_fields\fP configuration option exists to strip those fields from the event return. .sp .nf .ft C cache_event_strip_fields: \- password \- priv_key .ft P .fi .sp The following are events that can be fired based on this data. .SS salt/cloud/minionid/cache_node_new .sp A new node was found on the cloud provider which was not listed in the cloud cachedir. A dict describing the new node will be contained in the event. .SS salt/cloud/minionid/cache_node_missing .sp A node that was previously listed in the cloud cachedir is no longer available on the cloud provider. .SS salt/cloud/minionid/cache_node_diff .sp One or more pieces of data in the cloud cachedir have changed on the cloud provider. A dict containing both the old and the new data will be contained in the event. .SS SSH Known Hosts .sp Normally when bootstrapping a VM, salt\-cloud will ignore the SSH host key. This is because it does not know what the host key is before starting (because it doesn\(aqt exist yet). If strict host key checking is turned on without the key in the \fBknown_hosts\fP file, then the host will never be available, and cannot be bootstrapped.
.sp If a provider is able to determine the host key before trying to bootstrap it, that provider\(aqs driver can add it to the \fBknown_hosts\fP file, and then turn on strict host key checking. This can be set up in the main cloud configuration file (normally \fB/etc/salt/cloud\fP) or in the provider\-specific configuration file: .sp .nf .ft C known_hosts_file: /path/to/.ssh/known_hosts .ft P .fi .sp If this is not set, it will default to \fB/dev/null\fP, and strict host key checking will be turned off. .sp It is highly recommended that this option is \fInot\fP set, unless the user has verified that the provider supports this functionality, and that the image being used is capable of providing the necessary information. At this time, only the EC2 driver supports this functionality. .SS Troubleshooting Steps .SS Troubleshooting Salt Cloud .sp This page describes various steps for troubleshooting problems that may arise while using Salt Cloud. .SS Virtual Machines Are Created, But Do Not Respond .sp Are TCP ports 4505 and 4506 open on the master? This is easy to overlook on new masters. Information on how to open firewall ports on various platforms can be found \fBhere\fP. .SS Generic Troubleshooting Steps .sp This section describes a set of instructions that are useful to a large number of situations, and are likely to solve most issues that arise. .IP "Version Compatibility" .sp One of the most common issues that Salt Cloud users run into is import errors. These are often caused by version compatibility issues with Salt. .sp Salt 0.16.x works with Salt Cloud 0.8.9 or greater. .sp Salt 0.17.x requires Salt Cloud 0.8.11. .sp Releases after 0.17.x (0.18 or greater) should not encounter issues as Salt Cloud has been merged into Salt itself. .RE .SS Debug Mode .sp Frequently, running Salt Cloud in debug mode will reveal information about a deployment which would otherwise not be obvious: .sp .nf .ft C salt\-cloud \-p myprofile myinstance \-l debug .ft P .fi .sp Keep in mind that a number of messages will appear that look at first like errors, but are in fact intended to give developers factual information to assist in debugging. A number of messages that appear will be for cloud providers that you do not have configured; in these cases, the message usually is intended to confirm that they are not configured. .SS Salt Bootstrap .sp By default, Salt Cloud uses the Salt Bootstrap script to provision instances: .sp This script is packaged with Salt Cloud, but may be updated without updating the Salt package: .sp .nf .ft C salt\-cloud \-u .ft P .fi .SS The Bootstrap Log .sp If the default deploy script was used, there should be a file in the \fB/tmp/\fP directory called \fBbootstrap\-salt.log\fP. This file contains the full output from the deployment, including any errors that may have occurred. .SS Keeping Temp Files .sp Salt Cloud uploads minion\-specific files to instances once they are available via SSH, and then executes a deploy script to put them into the correct place and install Salt. The \fB\-\-keep\-tmp\fP option will instruct Salt Cloud not to remove those files when finished with them, so that the user may inspect them for problems: .sp .nf .ft C salt\-cloud \-p myprofile myinstance \-\-keep\-tmp .ft P .fi .sp By default, Salt Cloud will create a directory on the target instance called \fB/tmp/.saltcloud/\fP. This directory should be owned by the user that is to execute the deploy script, and should have permissions of \fB0700\fP. 
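.sp
A quick way to inspect the ownership and permissions of this directory on the target instance is with a couple of standard shell commands (a generic sketch, not specific to any provider):
.sp
.nf
.ft C
ls \-ld /tmp/.saltcloud/
ls \-l /tmp/.saltcloud/
.ft P
.fi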
.sp Most cloud providers are configured to use \fBroot\fP as the default initial user for deployment, and as such, this directory and all files in it should be owned by the \fBroot\fP user. .sp The \fB/tmp/.saltcloud/\fP directory should contain the following files: .INDENT 0.0 .IP \(bu 2 A \fBdeploy.sh\fP script. This script should have permissions of \fB0755\fP. .IP \(bu 2 A \fB.pem\fP and \fB.pub\fP key named after the minion. The \fB.pem\fP file should have permissions of \fB0600\fP. Ensure that the \fB.pem\fP and \fB.pub\fP files have been properly copied to the \fB/etc/salt/pki/minion/\fP directory. .IP \(bu 2 A file called \fBminion\fP. This file should have been copied to the \fB/etc/salt/\fP directory. .IP \(bu 2 Optionally, a file called \fBgrains\fP. This file, if present, should have been copied to the \fB/etc/salt/\fP directory. .UNINDENT .SS Unprivileged Primary Users .sp Some providers, most notably EC2, are configured with a different primary user. Some common examples are \fBec2\-user\fP, \fBubuntu\fP, \fBfedora\fP and \fBbitnami\fP. In these cases, the \fB/tmp/.saltcloud/\fP directory and all files in it should be owned by this user. .sp Some providers, such as EC2, are configured to not require these users to provide a password when using the \fBsudo\fP command. Because it is more secure to require \fBsudo\fP users to provide a password, other providers are configured that way. .sp If this instance is required to provide a password for \fBsudo\fP, it needs to be configured in Salt Cloud. A password for sudo to use may be added to either the provider configuration or the profile configuration: .sp .nf .ft C sudo_password: mypassword .ft P .fi .SS \fB/tmp/\fP is Mounted as \fBnoexec\fP .sp It is more secure to mount the \fB/tmp/\fP directory with a \fBnoexec\fP option. This is uncommon on most cloud providers, but very common in private environments. To see if the \fB/tmp/\fP directory is mounted this way, run the following command: .sp .nf .ft C mount | grep tmp .ft P .fi .sp If the output of this command includes a line that looks like this, then the \fB/tmp/\fP directory is mounted as \fBnoexec\fP: .sp .nf .ft C tmpfs on /tmp type tmpfs (rw,noexec) .ft P .fi .sp If this is the case, then the \fBdeploy_command\fP will need to be changed in order to run the deploy script through the \fBsh\fP command, rather than trying to execute it directly. This may be specified in either the provider or the profile config: .sp .nf .ft C deploy_command: sh /tmp/.saltcloud/deploy.sh .ft P .fi .sp Please note that by default, Salt Cloud will place its files in a directory called \fB/tmp/.saltcloud/\fP. This may also be changed in the provider or profile configuration: .sp .nf .ft C tmp_dir: /tmp/.saltcloud/ .ft P .fi .sp If this directory is changed, then the \fBdeploy_command\fP needs to be changed to reflect the \fBtmp_dir\fP configuration. .SS Executing the Deploy Script Manually .sp If all of the files needed for deployment were successfully uploaded to the correct locations, and contain the correct permissions and ownerships, the deploy script may be executed manually in order to check for other issues: .sp .nf .ft C cd /tmp/.saltcloud/ \&./deploy.sh .ft P .fi .SS Extending Salt Cloud .SS Writing Cloud Provider Modules .sp Salt Cloud runs on a module system similar to the main Salt project. The modules inside saltcloud exist in the \fBsalt/cloud/clouds\fP directory of the salt source. .sp There are two basic types of cloud modules.
If a cloud provider is supported by libcloud, then using it is the fastest route to getting a module written. The Apache Libcloud project is located at: .sp \fI\%http://libcloud.apache.org/\fP .sp Not every cloud provider is supported by libcloud. Additionally, not every feature in a supported cloud provider is necessarily supported by libcloud. In either of these cases, a module can be created which does not rely on libcloud. .SS All Modules .sp The following functions are required by all modules, whether or not they are based on libcloud. .SS The __virtual__() Function .sp This function determines whether or not to make this cloud module available upon execution. Most often, it uses \fBget_configured_provider()\fP to determine if the necessary configuration has been set up. It may also check for necessary imports, to decide whether to load the module. In most cases, it will return a \fBTrue\fP or \fBFalse\fP value. If the name of the driver used does not match the filename, then that name should be returned instead of \fBTrue\fP. An example of this may be seen in the Azure module: .sp \fI\%https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/msazure.py\fP .SS The get_configured_provider() Function .sp This function uses \fBconfig.is_provider_configured()\fP to determine whether all required information for this driver has been configured. The last value in the list of required settings should be followed by a comma. .SS Libcloud Based Modules .sp Writing a cloud module based on libcloud has two major advantages. First of all, much of the work has already been done by the libcloud project. Second, most of the functions necessary to Salt have already been added to the Salt Cloud project. .SS The create() Function .sp The most important function that does need to be manually written is the \fBcreate()\fP function. This is what is used to request a virtual machine to be created by the cloud provider, wait for it to become available, and then (optionally) log in and install Salt on it. .sp A good example to follow for writing a cloud provider module based on libcloud is the module provided for Linode: .sp \fI\%https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/linode.py\fP .sp The basic flow of a \fBcreate()\fP function is as follows: .INDENT 0.0 .IP \(bu 2 Send a request to the cloud provider to create a virtual machine. .IP \(bu 2 Wait for the virtual machine to become available. .IP \(bu 2 Generate kwargs to be used to deploy Salt. .IP \(bu 2 Log into the virtual machine and deploy Salt. .IP \(bu 2 Return a data structure that describes the newly\-created virtual machine. .UNINDENT .sp At various points throughout this function, events may be fired on the Salt event bus. Four of these events, which are described below, are required. Other events may be added by the user, where appropriate. .sp When the \fBcreate()\fP function is called, it is passed a data structure called \fBvm_\fP. This dict contains a composite of information describing the virtual machine to be created. A dict called \fB__opts__\fP is also provided by Salt, which contains the options used to run Salt Cloud, as well as a set of configuration and environment variables. .sp The first thing the \fBcreate()\fP function must do is fire an event stating that it has started the create process. This event is tagged \fBsalt/cloud//creating\fP. The payload contains the names of the VM, profile and provider.
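.sp
As a rough illustration of this first step only, the opening of a \fBcreate()\fP function in many existing drivers looks similar to the sketch below. The \fBsalt.utils.cloud.fire_event()\fP helper does exist in Salt, but its exact signature has varied between releases, so treat this as an outline and consult an existing driver (such as the Linode module linked above) for the precise call:
.sp
.nf
.ft C
import salt.utils.cloud


def create(vm_):
    \(aq\(aq\(aq
    Create a single VM from the vm_ data dict (sketch only)
    \(aq\(aq\(aq
    # Fire the \(aqcreating\(aq event before doing any other work. Keep the
    # payload limited to the VM name, profile and provider; the exact
    # fire_event() arguments vary between Salt releases.
    salt.utils.cloud.fire_event(
        \(aqevent\(aq,
        \(aqstarting create\(aq,
        \(aqsalt/cloud/{0}/creating\(aq.format(vm_[\(aqname\(aq]),
        {
            \(aqname\(aq: vm_[\(aqname\(aq],
            \(aqprofile\(aq: vm_[\(aqprofile\(aq],
            \(aqprovider\(aq: vm_[\(aqprovider\(aq],
        },
    )

    # ... request the VM, wait for it, deploy Salt, fire the remaining
    # required events, and return the data describing the new VM ...
.ft P
.fi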
.sp A set of kwargs is then usually created, to describe the parameters required by the cloud provider to request the virtual machine. .sp An event is then fired to state that a virtual machine is about to be requested. It is tagged as \fBsalt/cloud//requesting\fP. The payload contains most or all of the parameters that will be sent to the cloud provider. Any private information (such as passwords) should not be sent in the event. .sp After a request is made, a set of deploy kwargs will be generated. These will be used to install Salt on the target machine. Windows options are supported at this point, and should be generated, even if the cloud provider does not currently support Windows. This will save time in the future if the provider does eventually decide to support Windows. .sp An event is then fired to state that the deploy process is about to begin. This event is tagged \fBsalt/cloud//deploying\fP. The payload for the event will contain a set of deploy kwargs, useful for debugging purposes. Any private data, including passwords and keys (including public keys), should be stripped from the deploy kwargs before the event is fired. .sp If any Windows options have been passed in, the \fBsalt.utils.cloud.deploy_windows()\fP function will be called. Otherwise, it will be assumed that the target is a Linux or Unix machine, and \fBsalt.utils.cloud.deploy_script()\fP will be called. .sp Both of these functions will wait for the target machine to become available, then for the necessary port to accept connections, and then for a successful login that can be used to install Salt. Minion configuration and keys will then be uploaded to a temporary directory on the target by the appropriate function. On a Windows target, the Windows Minion Installer will be run in silent mode. On a Linux/Unix target, a deploy script (bootstrap\-salt.sh, by default) will be run, which will auto\-detect the operating system, and install Salt using its native package manager. These do not need to be handled by the developer in the cloud module. .sp After the appropriate deploy function completes, a final event is fired which describes the virtual machine that has just been created. This event is tagged \fBsalt/cloud//created\fP. The payload contains the names of the VM, profile and provider. .sp Finally, a dict (queried from the provider) which describes the new virtual machine is returned to the user. Because this data is not fired on the event bus it can, and should, return any passwords that were returned by the cloud provider. In some cases (for example, Rackspace), this is the only time that the password can be queried by the user; post\-creation queries may not contain password information (depending upon the provider). .SS The libcloudfuncs Functions .sp A number of other functions are required for all cloud providers. However, with libcloud\-based modules, these are all provided for free by the libcloudfuncs library. The following two lines set up the imports: .sp .nf .ft C
from salt.cloud.libcloudfuncs import *  # pylint: disable=W0614,W0401
from salt.utils import namespaced_function
.ft P .fi .sp And then a series of declarations will make the necessary functions available within the cloud module.
.sp .nf .ft C
get_size = namespaced_function(get_size, globals())
get_image = namespaced_function(get_image, globals())
avail_locations = namespaced_function(avail_locations, globals())
avail_images = namespaced_function(avail_images, globals())
avail_sizes = namespaced_function(avail_sizes, globals())
script = namespaced_function(script, globals())
destroy = namespaced_function(destroy, globals())
list_nodes = namespaced_function(list_nodes, globals())
list_nodes_full = namespaced_function(list_nodes_full, globals())
list_nodes_select = namespaced_function(list_nodes_select, globals())
show_instance = namespaced_function(show_instance, globals())
.ft P .fi .sp If necessary, these functions may be replaced by removing the appropriate declaration line, and then adding the function as normal. .sp These functions are required for all cloud modules, and are described in detail in the next section. .SS Non\-Libcloud Based Modules .sp In some cases, using libcloud is not an option. This may be because libcloud has not yet included the necessary driver itself, or it may be that the driver that is included with libcloud does not contain all of the necessary features required by the developer. When this is the case, some or all of the functions in \fBlibcloudfuncs\fP may be replaced. If they are all replaced, the libcloud imports should be absent from the Salt Cloud module. .sp A good example of a non\-libcloud provider is the Digital Ocean module: .sp \fI\%https://github.com/saltstack/salt/tree/develop/salt/cloud/clouds/digital_ocean.py\fP .SS The \fBcreate()\fP Function .sp The \fBcreate()\fP function must be created as described in the libcloud\-based module documentation. .SS The get_size() Function .sp This function is only necessary for libcloud\-based modules, and does not need to exist otherwise. .SS The get_image() Function .sp This function is only necessary for libcloud\-based modules, and does not need to exist otherwise. .SS The avail_locations() Function .sp This function returns a list of locations available, if the cloud provider uses multiple data centers. It is not necessary if the cloud provider only uses one data center. It is normally called using the \fB\-\-list\-locations\fP option. .sp .nf .ft C salt\-cloud \-\-list\-locations my\-cloud\-provider .ft P .fi .SS The avail_images() Function .sp This function returns a list of images available for this cloud provider. There are not currently any known cloud providers that do not provide this functionality, though they may refer to images by a different name (for example, "templates"). It is normally called using the \fB\-\-list\-images\fP option. .sp .nf .ft C salt\-cloud \-\-list\-images my\-cloud\-provider .ft P .fi .SS The avail_sizes() Function .sp This function returns a list of sizes available for this cloud provider. Generally, this refers to a combination of RAM, CPU and/or disk space. This functionality may not be present on some cloud providers. For example, the Parallels module breaks down RAM, CPU and disk space into separate options, whereas in other providers, these options are baked into the image. It is normally called using the \fB\-\-list\-sizes\fP option. .sp .nf .ft C salt\-cloud \-\-list\-sizes my\-cloud\-provider .ft P .fi .SS The script() Function .sp This function builds the deploy script to be used on the remote machine. It is likely to be moved into the \fBsalt.utils.cloud\fP library in the near future, as it is very generic and can usually be copied wholesale from another module.
An excellent example is in the Azure driver. .SS The destroy() Function .sp This function irreversibly destroys a virtual machine on the cloud provider. Before doing so, it should fire an event on the Salt event bus. The tag for this event is \fBsalt/cloud//destroying\fP. Once the virtual machine has been destroyed, another event is fired. The tag for that event is \fBsalt/cloud//destroyed\fP. .sp This function is normally called with the \fB\-d\fP option: .sp .nf .ft C salt\-cloud \-d myinstance .ft P .fi .SS The list_nodes() Function .sp This function returns a list of nodes available on this cloud provider, using the following fields: .INDENT 0.0 .IP \(bu 2 id (str) .IP \(bu 2 image (str) .IP \(bu 2 size (str) .IP \(bu 2 state (str) .IP \(bu 2 private_ips (list) .IP \(bu 2 public_ips (list) .UNINDENT .sp No other fields should be returned in this function, and all of these fields should be returned, even if empty. The private_ips and public_ips fields should always be of a list type, even if empty, and the other fields should always be of a str type. This function is normally called with the \fB\-Q\fP option: .sp .nf .ft C salt\-cloud \-Q .ft P .fi .SS The list_nodes_full() Function .sp All information available about all nodes should be returned in this function. The fields in the list_nodes() function should also be returned, even if they would not normally be provided by the cloud provider. This is because some functions, both within Salt and in 3rd\-party tools, will break if an expected field is not present. This function is normally called with the \fB\-F\fP option: .sp .nf .ft C salt\-cloud \-F .ft P .fi .SS The list_nodes_select() Function .sp This function returns only the fields specified in the \fBquery.selection\fP option in \fB/etc/salt/cloud\fP. Because this function is so generic, all of the heavy lifting has been moved into the \fBsalt.utils.cloud\fP library. .sp A function to call \fBlist_nodes_select()\fP still needs to be present. In general, the following code can be used as\-is: .sp .nf .ft C
def list_nodes_select(call=None):
    \(aq\(aq\(aq
    Return a list of the VMs that are on the provider, with select fields
    \(aq\(aq\(aq
    return salt.utils.cloud.list_nodes_select(
        list_nodes_full(\(aqfunction\(aq), __opts__[\(aqquery.selection\(aq], call,
    )
.ft P .fi .sp However, depending on the cloud provider, additional variables may be required. For instance, some modules use a \fBconn\fP object, or may need to pass other options into \fBlist_nodes_full()\fP. In this case, be sure to update the function appropriately: .sp .nf .ft C
def list_nodes_select(conn=None, call=None):
    \(aq\(aq\(aq
    Return a list of the VMs that are on the provider, with select fields
    \(aq\(aq\(aq
    if not conn:
        conn = get_conn()  # pylint: disable=E0602

    return salt.utils.cloud.list_nodes_select(
        list_nodes_full(conn, \(aqfunction\(aq), __opts__[\(aqquery.selection\(aq], call,
    )
.ft P .fi .sp This function is normally called with the \fB\-S\fP option: .sp .nf .ft C salt\-cloud \-S .ft P .fi .SS The show_instance() Function .sp This function is used to display all of the information about a single node that is available from the cloud provider. The simplest way to provide this is usually to call \fBlist_nodes_full()\fP, and return just the data for the requested node. It is normally called as an action: .sp .nf .ft C salt\-cloud \-a show_instance myinstance .ft P .fi .SS Actions and Functions .sp Extra functionality may be added to a cloud provider in the form of an \fB\-\-action\fP or a \fB\-\-function\fP.
Actions are performed against a cloud instance/virtual machine, and functions are performed against a cloud provider. .SS Actions .sp Actions are calls that are performed against a specific instance or virtual machine. The \fBshow_instance\fP action should be available in all cloud modules. Actions are normally called with the \fB\-a\fP option: .sp .nf .ft C salt\-cloud \-a show_instance myinstance .ft P .fi .sp Actions must accept a \fBname\fP as a first argument, may optionally support any number of kwargs as appropriate, and must accept an argument of \fBcall\fP, with a default of \fBNone\fP. .sp Before performing any other work, an action should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic action looks like: .sp .nf .ft C
def show_instance(name, call=None):
    \(aq\(aq\(aq
    Show the details from EC2 concerning an AMI
    \(aq\(aq\(aq
    if call != \(aqaction\(aq:
        raise SaltCloudSystemExit(
            \(aqThe show_instance action must be called with \-a or \-\-action.\(aq
        )

    return _get_node(name)
.ft P .fi .sp Please note that generic kwargs, if used, are passed through to actions as \fBkwargs\fP and not \fB**kwargs\fP. An example of this is seen in the Functions section. .SS Functions .sp Functions are calls that are performed against a specific cloud provider. An optional function that is often useful is \fBshow_image\fP, which describes an image in detail. Functions are normally called with the \fB\-f\fP option: .sp .nf .ft C salt\-cloud \-f show_image my\-cloud\-provider image=\(aqUbuntu 13.10 64\-bit\(aq .ft P .fi .sp A function may accept any number of kwargs as appropriate, and must accept an argument of \fBcall\fP with a default of \fBNone\fP. .sp Before performing any other work, a function should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic function looks like: .sp .nf .ft C
def show_image(kwargs, call=None):
    \(aq\(aq\(aq
    Show the details from EC2 concerning an AMI
    \(aq\(aq\(aq
    if call != \(aqfunction\(aq:
        raise SaltCloudSystemExit(
            \(aqThe show_image action must be called with \-f or \-\-function.\(aq
        )

    params = {\(aqImageId.1\(aq: kwargs[\(aqimage\(aq],
              \(aqAction\(aq: \(aqDescribeImages\(aq}
    result = query(params)
    log.info(result)

    return result
.ft P .fi .sp Take note that generic kwargs are passed through to functions as \fBkwargs\fP and not \fB**kwargs\fP. .SS OS Support for Cloud VMs .sp Salt Cloud works primarily by executing a script on the virtual machines as soon as they become available. The script that is executed is referenced in the cloud profile as the \fBscript\fP. In older versions, this was the \fBos\fP argument. This was changed in 0.8.2. .sp A number of legacy scripts exist in the deploy directory in the saltcloud source tree. The preferred method is currently to use the salt\-bootstrap script. A stable version is included with each release tarball starting with 0.8.4. The most updated version can be found at: .sp \fI\%https://github.com/saltstack/salt-bootstrap\fP .sp If you do not specify a script argument, this script will be used as the default. .sp If the Salt Bootstrap script does not meet your needs, you may write your own. The script should be written in bash and is a Jinja template. Deploy scripts need to execute a number of functions to do a complete salt setup. These functions include: .INDENT 0.0 .IP 1. 3 Install the salt minion.
If this can be done via system packages this method is HIGHLY preferred. .IP 2. 3 Add the salt minion keys before the minion is started for the first time. The minion keys are available as strings that can be copied into place in the Jinja template under the dict named "vm". .IP 3. 3 Start the salt\-minion daemon and enable it at startup time. .IP 4. 3 Set up the minion configuration file from the "minion" data available in the Jinja template. .UNINDENT .sp A good, well commented, example of this process is the Fedora deployment script: .sp \fI\%https://github.com/saltstack/salt-cloud/blob/master/saltcloud/deploy/Fedora.sh\fP .sp A number of legacy deploy scripts are included with the release tarball. None of them are as functional or complete as Salt Bootstrap, and are still included for academic purposes. .SS Other Generic Deploy Scripts .sp If you want to be assured of always using the latest Salt Bootstrap script, there are a few generic templates available in the deploy directory of your saltcloud source tree: .sp .nf .ft C curl\-bootstrap curl\-bootstrap\-git python\-bootstrap wget\-bootstrap wget\-bootstrap\-git .ft P .fi .sp These are example scripts which were designed to be customized, adapted, and refit to meet your needs. One important use of them is to pass options to the salt\-bootstrap script, such as updating to specific git tags. .SS Post\-Deploy Commands .sp Once a minion has been deployed, it has the option to run a salt command. Normally, this would be the state.highstate command, which would finish provisioning the VM. Another common option is state.sls, or for just testing, test.ping. This is configured in the main cloud config file: .sp .nf .ft C start_action: state.highstate .ft P .fi .sp This is currently considered to be experimental functionality, and may not work well with all providers. If you experience problems with Salt Cloud hanging after Salt is deployed, consider using Startup States instead: .sp \fI\%http://docs.saltstack.com/ref/states/startup.html\fP .SS Skipping the Deploy Script .sp For whatever reason, you may want to skip the deploy script altogether. This results in a VM being spun up much faster, with absolutely no configuration. This can be set from the command line: .sp .nf .ft C salt\-cloud \-\-no\-deploy \-p micro_aws my_instance .ft P .fi .sp Or it can be set from the main cloud config file: .sp .nf .ft C deploy: False .ft P .fi .sp Or it can be set from the provider\(aqs configuration: .sp .nf .ft C RACKSPACE.user: example_user RACKSPACE.apikey: 123984bjjas87034 RACKSPACE.deploy: False .ft P .fi .sp Or even on the VM\(aqs profile settings: .sp .nf .ft C ubuntu_aws: provider: aws image: ami\-7e2da54e size: Micro Instance deploy: False .ft P .fi .sp The default for deploy is True. .sp In the profile, you may also set the script option to \fBNone\fP: .sp .nf .ft C script: None .ft P .fi .sp This is the slowest option, since it still uploads the None deploy script and executes it. .SS Updating Salt Bootstrap .sp Salt Bootstrap can be updated automatically with salt\-cloud: .sp .nf .ft C salt\-cloud \-u salt\-cloud \-\-update\-bootstrap .ft P .fi .sp Bear in mind that this updates to the latest (unstable) version, so use with caution. .SS Keeping /tmp/ Files .sp When Salt Cloud deploys an instance, it uploads temporary files to /tmp/ for salt\-bootstrap to put in place. After the script has run, they are deleted. 
To keep these files around (mostly for debugging purposes), the \-\-keep\-tmp option can be added: .sp .nf .ft C salt\-cloud \-p myprofile mymachine \-\-keep\-tmp .ft P .fi .sp For those wondering why /tmp/ was used instead of /root/, this had to be done for images which require the use of sudo, and therefore do not allow remote root logins, even for file transfers (which makes /root/ unavailable). .SS Deploy Script Arguments .sp Custom deploy scripts are unlikely to need custom arguments to be passed to them, but salt\-bootstrap has been extended quite a bit, and this may be necessary. script_args can be specified in either the profile or the map file, to pass arguments to the deploy script: .sp .nf .ft C aws\-amazon: provider: aws image: ami\-1624987f size: Micro Instance ssh_username: ec2\-user script: bootstrap\-salt script_args: \-c /tmp/ .ft P .fi .sp This has also been tested to work with pipes, if needed: .sp .nf .ft C script_args: | head .ft P .fi .SS Using Salt Cloud from Salt .SS Using the Salt Modules for Cloud .sp In addition to the \fBsalt\-cloud\fP command, Salt Cloud can be called from Salt, in a variety of different ways. Most users will be interested in either the execution module or the state module, but it is also possible to call Salt Cloud as a runner. .sp Because the actual work will be performed on a remote minion, the normal Salt Cloud configuration must exist on any target minion that needs to execute a Salt Cloud command. Because Salt Cloud now supports breaking out configuration into individual files, the configuration is easily managed using Salt\(aqs own \fBfile.managed\fP state function. For example, the following directories allow this configuration to be managed easily: .sp .nf .ft C /etc/salt/cloud.providers.d/ /etc/salt/cloud.profiles.d/ .ft P .fi .SS Minion Keys .sp Keep in mind that when creating minions, Salt Cloud will create public and private minion keys, upload them to the minion, and place the public key on the machine that created the minion. It will \fInot\fP attempt to place any public minion keys on the master, unless the minion which was used to create the instance is also the Salt Master. This is because granting arbitrary minions access to modify keys on the master is a serious security risk, and must be avoided. .SS Execution Module .sp The \fBcloud\fP module is available to use from the command line. At the moment, almost every standard Salt Cloud feature is available to use. The following commands are available: .SS list_images .sp This command is designed to show images that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). Listing images requires a provider to be configured, and specified: .sp .nf .ft C salt myminion cloud.list_images my\-cloud\-provider .ft P .fi .SS list_sizes .sp This command is designed to show sizes that are available to be used to create an instance using Salt Cloud. In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider\-specific documentation for details. Listing sizes requires a provider to be configured, and specified: .sp .nf .ft C salt myminion cloud.list_sizes my\-cloud\-provider .ft P .fi .SS list_locations .sp This command is designed to show locations that are available to be used to create an instance using Salt Cloud. 
In general they are used in the creation of profiles, but may also be used to create an instance directly (see below). This command is not available for all cloud providers; see the provider\-specific documentation for details. Listing locations requires a provider to be configured, and specified: .sp .nf .ft C salt myminion cloud.list_locations my\-cloud\-provider .ft P .fi .SS query .sp This command is used to query all configured cloud providers, and display all instances associated with those accounts. By default, it will run a standard query, returning the following fields: .INDENT 0.0 .TP .B \fBid\fP The name or ID of the instance, as used by the cloud provider. .TP .B \fBimage\fP The disk image that was used to create this instance. .TP .B \fBprivate_ips\fP Any private IP addresses currently assigned to this instance. .TP .B \fBpublic_ips\fP Any public IP addresses currently assigned to this instance. .TP .B \fBsize\fP The size of the instance; can refer to RAM, CPU(s), disk space, etc., depending on the cloud provider. .TP .B \fBstate\fP The running state of the instance; for example, \fBrunning\fP, \fBstopped\fP, \fBpending\fP, etc. This state is dependent upon the provider. .UNINDENT .sp This command may also be used to perform a full query or a select query, as described below. The following usages are available: .sp .nf .ft C salt myminion cloud.query salt myminion cloud.query list_nodes salt myminion cloud.query list_nodes_full .ft P .fi .SS full_query .sp This command behaves like the \fBquery\fP command, but lists all information concerning each instance as provided by the cloud provider, in addition to the fields returned by the \fBquery\fP command. .sp .nf .ft C salt myminion cloud.full_query .ft P .fi .SS select_query .sp This command behaves like the \fBquery\fP command, but only returns select fields as defined in the \fB/etc/salt/cloud\fP configuration file. A sample configuration for this section of the file might look like: .sp .nf .ft C query.selection: \- id \- key_name .ft P .fi .sp This configuration would only return the \fBid\fP and \fBkey_name\fP fields, for those cloud providers that support those two fields. This would be called using the following command: .sp .nf .ft C salt myminion cloud.select_query .ft P .fi .SS profile .sp This command is used to create an instance using a profile that is configured on the target minion. Please note that the profile must be configured before this command can be used with it. .sp .nf .ft C salt myminion cloud.profile ec2\-centos64\-x64 my\-new\-instance .ft P .fi .sp Please note that the execution module does \fInot\fP run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation. .SS create .sp This command is similar to the \fBprofile\fP command, in that it is used to create a new instance. However, it does not require a profile to be pre\-configured. Instead, all of the options that are normally configured in a profile are passed directly to Salt Cloud to create the instance: .sp .nf .ft C salt myminion cloud.create my\-ec2\-config my\-new\-instance \e image=ami\-1624987f size=\(aqMicro Instance\(aq ssh_username=ec2\-user \e securitygroup=default delvol_on_destroy=True .ft P .fi .sp Please note that the execution module does \fInot\fP run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation. .SS destroy .sp This command is used to destroy an instance or instances.
This command will search all configured providers and remove any instance(s) which match the name(s) passed in here. This command is \fInon\-reversible\fP and should be used with caution. .sp .nf .ft C salt myminion cloud.destroy myinstance salt myminion cloud.destroy myinstance1,myinstance2 .ft P .fi .SS action .sp This command implements both the \fBaction\fP and the \fBfunction\fP commands used in the standard \fBsalt\-cloud\fP command. If one of the standard \fBaction\fP commands is used, an instance name must be provided. If one of the standard \fBfunction\fP commands is used, a provider configuration must be named. .sp .nf .ft C salt myminion cloud.action start instance=myinstance salt myminion cloud.action show_image provider=my\-ec2\-config \e image=ami\-1624987f .ft P .fi .sp The actions available are largely dependent upon the module for the specific cloud provider. The following actions are available for all cloud providers: .INDENT 0.0 .TP .B \fBlist_nodes\fP This is a direct call to the \fBquery\fP function as described above, but is only performed against a single cloud provider. A provider configuration must be included. .TP .B \fBlist_nodes_full\fP This is a direct call to the \fBfull_query\fP function as described above, but is only performed against a single cloud provider. A provider configuration must be included. .TP .B \fBlist_nodes_select\fP This is a direct call to the \fBselect_query\fP function as described above, but is only performed against a single cloud provider. A provider configuration must be included. .TP .B \fBshow_instance\fP This is a thin wrapper around \fBlist_nodes\fP, which returns the full information about a single instance. An instance name must be provided. .UNINDENT .SS State Module .sp A subset of the execution module is available through the \fBcloud\fP state module. Not all functions are currently included, because there is not yet sufficient code for them to perform statefully. For example, a command to create an instance may be issued with a series of options, but those options cannot currently be statefully managed. Additional states to manage these options will be released at a later time. .SS cloud.present .sp This state will ensure that an instance is present inside a particular cloud provider. Any option that is normally specified in the \fBcloud.create\fP execution module and function may be declared here, but only the actual presence of the instance will be managed statefully. .sp .nf .ft C my\-instance\-name: cloud.present: \- provider: my\-ec2\-config \- image: ami\-1624987f \- size: \(aqMicro Instance\(aq \- ssh_username: ec2\-user \- securitygroup: default \- delvol_on_destroy: True .ft P .fi .SS cloud.profile .sp This state will ensure that an instance is present inside a particular cloud provider. This function calls the \fBcloud.profile\fP execution module and function, but as with \fBcloud.present\fP, only the actual presence of the instance will be managed statefully. .sp .nf .ft C my\-instance\-name: cloud.profile: \- profile: ec2\-centos64\-x64 .ft P .fi .SS cloud.absent .sp This state will ensure that an instance (identified by name) does not exist in any of the cloud providers configured on the target minion. Please note that this state is \fInon\-reversible\fP and may be considered especially destructive when issued as a cloud state.
.sp .nf .ft C my\-instance\-name: cloud.absent .ft P .fi .SS Runner Module .sp The \fBcloud\fP runner module is executed on the master, and performs actions using the configuration and Salt modules on the master itself. This means that any public minion keys will also be properly accepted by the master. .sp Using the functions in the runner module is no different from using those in the execution module, outside of the behavior described in the above paragraph. The following functions are available inside the runner: .INDENT 0.0 .IP \(bu 2 list_images .IP \(bu 2 list_sizes .IP \(bu 2 list_locations .IP \(bu 2 query .IP \(bu 2 full_query .IP \(bu 2 select_query .IP \(bu 2 profile .IP \(bu 2 destroy .IP \(bu 2 action .UNINDENT .sp Outside of the standard usage of \fBsalt\-run\fP itself, commands are executed as usual: .sp .nf .ft C salt\-run cloud.profile ec2\-centos64\-x86_64 my\-instance\-name .ft P .fi .SS CloudClient .sp The execution, state and runner modules ultimately all use the CloudClient library that ships with Salt. To use the CloudClient library locally (either on the master or a minion), create a client object and issue a command against it: .sp .nf .ft C import salt.cloud import pprint client = salt.cloud.CloudClient(\(aq/etc/salt/cloud\(aq) nodes = client.query() pprint.pprint(nodes) .ft P .fi .SS Feature Comparison .SS Feature Matrix .sp A number of features are available in most cloud providers, but not all are available everywhere. This may be because the feature isn\(aqt supported by the cloud provider itself, or it may only be that the feature has not yet been added to Salt Cloud. In a handful of cases, it is because the feature does not make sense for a particular cloud provider (Saltify, for instance). .sp This matrix shows which features are available in which cloud providers, as far as Salt Cloud is concerned. This is not a comprehensive list of all features available in all cloud providers, and should not be used to make business decisions concerning choosing a cloud provider. In most cases, adding support for a feature to Salt Cloud requires only a little effort. .SS Legacy Drivers .sp Both AWS and Rackspace are listed as "Legacy". This is because those drivers have been replaced by other drivers, which are generally the preferred method for working with those providers. .sp The EC2 driver should be used instead of the AWS driver, when possible. The OpenStack driver should be used instead of the Rackspace driver, unless the user is dealing with instances in "the old cloud" in Rackspace. .SS Note for Developers .sp When adding new features to a particular cloud provider, please make sure to add the feature to this table. Additionally, if you notice a feature that is not properly listed here, pull requests to fix it are appreciated. .SS Standard Features .sp These are features that are available for almost every provider. .TS center; |l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|.
_ T{ T} T{ AWS (Legacy) T} T{ CloudStack T} T{ Digital Ocean T} T{ EC2 T} T{ GoGrid T} T{ JoyEnt T} T{ Linode T} T{ OpenStack T} T{ Parallels T} T{ Rackspace (Legacy) T} T{ Saltify T} T{ Softlayer T} T{ Softlayer Hardware T} T{ Aliyun T} _ T{ Query T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ T} T{ Yes T} T{ Yes T} T{ Yes T} _ T{ Full Query T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ T} T{ Yes T} T{ Yes T} T{ Yes T} _ T{ Selective Query T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ T} T{ Yes T} T{ Yes T} T{ Yes T} _ T{ List Sizes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ T} T{ Yes T} T{ Yes T} T{ Yes T} _ T{ List Images T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ T} T{ Yes T} T{ Yes T} T{ Yes T} _ T{ List Locations T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ T} T{ Yes T} T{ Yes T} T{ Yes T} _ T{ create T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} _ T{ destroy T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ Yes T} T{ T} T{ Yes T} T{ Yes T} T{ Yes T} _ .TE .SS Actions .sp These are features that are performed on a specific instance, and require an instance name to be passed in. For example: .sp .nf .ft C # salt\-cloud \-a attach_volume ami.example.com .ft P .fi .TS center; |l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|. _ T{ Actions T} T{ AWS (Legacy) T} T{ CloudStack T} T{ Digital Ocean T} T{ EC2 T} T{ GoGrid T} T{ JoyEnt T} T{ Linode T} T{ OpenStack T} T{ Parallels T} T{ Rackspace (Legacy) T} T{ Saltify T} T{ Softlayer T} T{ Softlayer Hardware T} T{ Aliyun T} _ T{ attach_volume T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ create_attach_volumes T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ del_tags T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ delvol_on_destroy T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ detach_volume T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ disable_term_protect T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ enable_term_protect T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_tags T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ keepvol_on_destroy T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ list_keypairs T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ rename T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ set_tags T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ show_delvol_on_destroy T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ show_instance T} T{ T} T{ T} T{ Yes T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} 
T{ T} T{ Yes T} T{ Yes T} T{ Yes T} _ T{ show_term_protect T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ start T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ T{ stop T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ T{ take_action T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ .TE .SS Functions .sp These are features that are performed against a specific cloud provider, and require the name of the provider to be passed in. For example: .sp .nf .ft C # salt\-cloud \-f list_images my_digitalocean .ft P .fi .TS center; |l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|. _ T{ Functions T} T{ AWS (Legacy) T} T{ CloudStack T} T{ Digital Ocean T} T{ EC2 T} T{ GoGrid T} T{ JoyEnt T} T{ Linode T} T{ OpenStack T} T{ Parallels T} T{ Rackspace (Legacy) T} T{ Saltify T} T{ Softlayer T} T{ Softlayer Hardware T} T{ Aliyun T} _ T{ block_device_mappings T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ create_keypair T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ create_volume T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ delete_key T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ delete_keypair T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ delete_volume T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_image T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ T{ get_ip T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_key T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_keyid T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_keypair T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_networkid T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_node T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_password T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_size T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ T{ get_spot_config T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ get_subnetid T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ iam_profile T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ T{ import_key T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ key_list T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ keyname T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ list_availability_zones T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ T{ list_custom_images T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} 
T{ T} _ T{ list_keys T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ list_vlans T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ Yes T} T{ T} _ T{ rackconnect T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ reboot T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ T{ reformat_node T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ securitygroup T} T{ Yes T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ securitygroupid T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ T{ show_image T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ T{ show_key T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ show_keypair T} T{ T} T{ T} T{ Yes T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} _ T{ show_volume T} T{ T} T{ T} T{ T} T{ Yes T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ T} T{ Yes T} _ .TE .SH NETAPI MODULES .SS Writing netapi modules .sp \fBnetapi\fP modules, put simply, bind a port and start a service. They are purposefully open\-ended and can be used to present a variety of external interfaces to Salt, and even present multiple interfaces at once. .IP "See also" .sp \fIThe full list of netapi modules\fP .RE .SS Configuration .sp All \fBnetapi\fP configuration is done in the \fISalt master config\fP and takes a form similar to the following: .sp .nf .ft C rest_cherrypy: port: 8000 debug: True ssl_crt: /etc/pki/tls/certs/localhost.crt ssl_key: /etc/pki/tls/certs/localhost.key .ft P .fi .SS The \fB__virtual__\fP function .sp Like all module types in Salt, \fBnetapi\fP modules go through Salt\(aqs loader interface to determine if they should be loaded into memory and then executed. .sp The \fB__virtual__\fP function in the module makes this determination and should return \fBFalse\fP or a string that will serve as the name of the module. If the module raises an \fBImportError\fP or any other errors, it will not be loaded. .SS The \fBstart\fP function .sp The \fBstart()\fP function will be called for each \fBnetapi\fP module that is loaded. This function should contain the server loop that actually starts the service. The server loop is started in its own process. .SS Inline documentation .sp As with the rest of Salt, it is a best\-practice to include liberal inline documentation in the form of a module docstring and docstrings on any classes, methods, and functions in your \fBnetapi\fP module. .SS Loader “magic” methods .sp The loader makes the \fB__opts__\fP data structure available to any function in a \fBnetapi\fP module. .SS Introduction to netapi modules .sp netapi modules provide API\-centric access to Salt. They are usually externally\-facing services such as REST, WebSockets, XMPP, XMLRPC, etc. .sp In general netapi modules bind to a port and start a service. They are purposefully open\-ended. A single module can be configured to run on its own, or multiple modules can be run simultaneously. .sp netapi modules are enabled by adding configuration to your Salt Master config file and then starting the \fBsalt\-api\fP daemon. Check the docs for each module to see external requirements and configuration settings.
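.sp
To make the \fB__virtual__\fP and \fBstart()\fP interface described above concrete, the following is a minimal, illustrative sketch of a netapi module. The module name, the configuration key, and the \fBwsgiref\fP\-based server loop are assumptions chosen only for brevity; real modules such as \fBrest_cherrypy\fP are considerably more involved, and the \fB__opts__\fP dictionary is injected by the Salt loader rather than defined in the module itself.
.sp
.nf
.ft C
# rest_echo.py \- a hypothetical, minimal netapi module sketch
from wsgiref.simple_server import make_server

__virtualname__ = "rest_echo"


def __virtual__():
    # Only load if the master config has a section for this module;
    # returning False keeps the module from being loaded.
    if __virtualname__ in __opts__:
        return __virtualname__
    return False


def start():
    # The server loop; Salt starts this function in its own process.
    port = __opts__[__virtualname__].get("port", 8001)

    def app(environ, start_response):
        start_response("200 OK", [("Content\-Type", "text/plain")])
        return ["salt netapi echo"]

    make_server("0.0.0.0", port, app).serve_forever()
.ft P
.fi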
.sp Communication with Salt and Salt satellite projects is done using Salt\(aqs own \fIPython API\fP. A list of available client interfaces is below. .IP "salt\-api" .sp Prior to Salt\(aqs Helium release, netapi modules lived in the separate sister project \fBsalt\-api\fP. That project has been merged into the main Salt project. .RE .IP "See also" .sp \fIThe full list of netapi modules\fP .RE .SS Client interfaces .sp Salt\(aqs client interfaces expose executing functions by crafting a dictionary of values that are mapped to function arguments. This allows calling functions simply by creating a data structure. (And this is exactly how much of Salt\(aqs own internals work!) .INDENT 0.0 .TP .B class salt.netapi.NetapiClient(opts) Provide a uniform method of accessing the various client interfaces in Salt in the form of low\-data data structures. For example: .sp .nf .ft C >>> client = NetapiClient(__opts__) >>> lowstate = {\(aqclient\(aq: \(aqlocal\(aq, \(aqtgt\(aq: \(aq*\(aq, \(aqfun\(aq: \(aqtest.ping\(aq, \(aqarg\(aq: \(aq\(aq} >>> client.run(lowstate) .ft P .fi .INDENT 7.0 .TP .B local(*args, **kwargs) Run \fIexecution modules\fP synchronously .sp Wraps \fBsalt.client.LocalClient.cmd()\fP. .INDENT 7.0 .TP .B Returns Returns the result from the execution module .UNINDENT .UNINDENT .INDENT 7.0 .TP .B local_async(*args, **kwargs) Run \fIexecution modules\fP asynchronously .sp Wraps \fBsalt.client.LocalClient.run_job()\fP. .INDENT 7.0 .TP .B Returns job ID .UNINDENT .UNINDENT .INDENT 7.0 .TP .B local_batch(*args, **kwargs) Run \fIexecution modules\fP against batches of minions .sp New in version 0.8.4. .sp Wraps \fBsalt.client.LocalClient.cmd_batch()\fP .INDENT 7.0 .TP .B Returns Returns the result from the execution module for each batch of returns .UNINDENT .UNINDENT .INDENT 7.0 .TP .B runner(fun, **kwargs) Run \fIrunner modules\fP .sp Wraps \fBsalt.runner.RunnerClient.low()\fP. .INDENT 7.0 .TP .B Returns Returns the result from the runner module .UNINDENT .UNINDENT .INDENT 7.0 .TP .B wheel(fun, **kwargs) Run \fIwheel modules\fP .sp Wraps \fBsalt.wheel.WheelClient.master_call()\fP. .INDENT 7.0 .TP .B Returns Returns the result from the wheel module .UNINDENT .UNINDENT .UNINDENT .SH SALT VIRT .sp The Salt Virt cloud controller capability was initially added to Salt in version 0.14.0 as an alpha technology. .sp The initial Salt Virt system supports core cloud operations: .INDENT 0.0 .IP \(bu 2 Virtual machine deployment .IP \(bu 2 Inspection of deployed VMs .IP \(bu 2 Virtual machine migration .IP \(bu 2 Network profiling .IP \(bu 2 Automatic VM integration with all aspects of Salt .IP \(bu 2 Image Pre\-seeding .UNINDENT .sp Many features are currently under development to enhance the capabilities of the Salt Virt system. .IP Note It is noteworthy that Salt was originally developed with the intent of using the Salt communication system as the backbone to a cloud controller. This means that the Salt Virt system is not an afterthought, but simply a system that took a back seat to other development. The original attempt to develop the cloud control aspects of Salt was a project called butter. This project never took off, but it was functional and proved the early viability of Salt as a cloud controller. .RE .SS Salt Virt Tutorial .sp A tutorial about how to get Salt Virt up and running has been added to the tutorial section: .sp \fBCloud Controller Tutorial\fP .SS The Salt Virt Runner .sp The point of interaction with the cloud controller is the \fBvirt\fP runner.
The \fBvirt\fP runner comes with routines to execute specific virtual machine tasks. .sp Reference documentation for the virt runner is available with the runner module documentation: .sp \fBVirt Runner Reference\fP .SS Based on Live State Data .sp The Salt Virt system is based on using Salt to query live data about hypervisors and then using the data gathered to make decisions about cloud operations. This means that no external resources are required to run Salt Virt, and that the information gathered about the cloud is live and accurate. .SS Deploy from Network or Disk .SS Virtual Machine Disk Profiles .sp Salt Virt allows for the disks created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the \fBconfig.option\fP function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion\(aqs pillar. .sp This configuration option is called \fBvirt.disk\fP. The default \fBvirt.disk\fP data structure looks like this: .sp .nf .ft C virt.disk: default: \- system: size: 8192 format: qcow2 model: virtio .ft P .fi .IP Note The format and model do not need to be defined; Salt will default to the optimal format used by the underlying hypervisor. In the case of kvm, these are \fBqcow2\fP and \fBvirtio\fP. .RE .sp This configuration sets up a disk profile called default. The default profile creates a single system disk on the virtual machine. .SS Define More Profiles .sp Many environments will require more complex disk profiles and may require more than one profile; this can be easily accomplished: .sp .nf .ft C virt.disk: default: \- system: size: 8192 database: \- system: size: 8192 \- data: size: 30720 web: \- system: size: 1024 \- logs: size: 5120 .ft P .fi .sp This configuration allows one of three profiles to be selected, so that virtual machines can be created with storage that matches the needs of the deployed VM. .SS Virtual Machine Network Profiles .sp Salt Virt allows for the network devices created for deployed virtual machines to be finely configured. The configuration is a simple data structure which is read from the \fBconfig.option\fP function, meaning that the configuration can be stored in the minion config file, the master config file, or the minion\(aqs pillar. .sp This configuration option is called \fBvirt.nic\fP. The \fBvirt.nic\fP option is empty by default, but falls back to a data structure which looks like this: .sp .nf .ft C virt.nic: default: eth0: bridge: br0 model: virtio .ft P .fi .IP Note The model does not need to be defined; Salt will default to the optimal model used by the underlying hypervisor. In the case of kvm, this model is \fBvirtio\fP. .RE .sp This configuration sets up a network profile called default. The default profile creates a single Ethernet device on the virtual machine that is bridged to the hypervisor\(aqs \fBbr0\fP interface. This default setup does not require setting up the \fBvirt.nic\fP configuration, and is the reason why a default install only requires setting up the \fBbr0\fP bridge device on the hypervisor.
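.sp
Because both the disk and network profiles are resolved through the \fBconfig.option\fP function, a quick way to confirm which profile data a hypervisor will actually use is to query that same function. The following is a small sketch using Salt\(aqs \fBLocalClient\fP Python API from the master; the minion ID is only an example.
.sp
.nf
.ft C
# Query the effective virt.nic and virt.disk profiles on a hypervisor
# minion (run on the master; "hypervisor1" is a hypothetical minion ID).
import pprint

import salt.client

local = salt.client.LocalClient()
for opt in ("virt.nic", "virt.disk"):
    pprint.pprint(local.cmd("hypervisor1", "config.option", [opt]))
.ft P
.fi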
.SS Define More Profiles .sp Many environments will require more complex network profiles and may require more than one profile; this can be easily accomplished: .sp .nf .ft C virt.nic: dual: eth0: bridge: service_br eth1: bridge: storage_br single: eth0: bridge: service_br triple: eth0: bridge: service_br eth1: bridge: storage_br eth2: bridge: dmz_br all: eth0: bridge: service_br eth1: bridge: storage_br eth2: bridge: dmz_br eth3: bridge: database_br dmz: eth0: bridge: service_br eth1: bridge: dmz_br database: eth0: bridge: service_br eth1: bridge: database_br .ft P .fi .sp This configuration allows one of six profiles to be selected, so that virtual machines can be created which attach to different networks depending on the needs of the deployed VM. .SH UNDERSTANDING YAML .sp The default renderer for SLS files is the YAML renderer. YAML is a markup language with many powerful features. However, Salt uses a small subset of YAML that maps over very commonly used data structures, like lists and dictionaries. It is the job of the YAML renderer to take the YAML data structure and compile it into a Python data structure for use by Salt. .sp Though YAML syntax may seem daunting and terse at first, there are only three very simple rules to remember when writing YAML for SLS files. .SS Rule One: Indentation .sp YAML uses a fixed indentation scheme to represent relationships between data layers. Salt requires that the indentation for each level consists of exactly two spaces. Do not use tabs. .SS Rule Two: Colons .sp Python dictionaries are, of course, simply key\-value pairs. Users from other languages may recognize this data type as hashes or associative arrays. .sp Dictionary keys are represented in YAML as strings terminated by a trailing colon. Values are represented either by a string following the colon, separated by a space: .sp .nf .ft C my_key: my_value .ft P .fi .sp In Python, the above maps to: .sp .nf .ft C {\(aqmy_key\(aq: \(aqmy_value\(aq} .ft P .fi .sp Alternatively, a value can be associated with a key through indentation. .sp .nf .ft C my_key: my_value .ft P .fi .IP Note The above syntax is valid YAML but is uncommon in SLS files because most often, the value for a key is not singular but instead is a \fIlist\fP of values. .RE .sp In Python, the above maps to: .sp .nf .ft C {\(aqmy_key\(aq: \(aqmy_value\(aq} .ft P .fi .sp Dictionaries can be nested: .sp .nf .ft C first_level_dict_key: second_level_dict_key: value_in_second_level_dict .ft P .fi .sp And in Python: .sp .nf .ft C { \(aqfirst_level_dict_key\(aq: { \(aqsecond_level_dict_key\(aq: \(aqvalue_in_second_level_dict\(aq } } .ft P .fi .SS Rule Three: Dashes .sp To represent lists of items, a single dash followed by a space is used. Multiple items are a part of the same list as a function of their having the same level of indentation. .sp .nf .ft C \- list_value_one \- list_value_two \- list_value_three .ft P .fi .sp Lists can be the value of a key\-value pair. This is quite common in Salt: .sp .nf .ft C my_dictionary: \- list_value_one \- list_value_two \- list_value_three .ft P .fi .sp In Python, the above maps to: .sp .nf .ft C {\(aqmy_dictionary\(aq: [\(aqlist_value_one\(aq, \(aqlist_value_two\(aq, \(aqlist_value_three\(aq]} .ft P .fi .SS Learning More .sp One easy way to learn more about how YAML gets rendered into Python data structures is to use an online YAML parser to see the Python output.
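.sp
The same mapping can also be inspected locally with the PyYAML library that Salt itself uses for rendering; the following sketch simply parses an example document and prints the resulting Python data structure (the sample text is an arbitrary example, not a real SLS file).
.sp
.nf
.ft C
# Parse a small YAML document and show the Python data structure it produces.
import pprint

import yaml

sample = """
my_dictionary:
  \- list_value_one
  \- list_value_two
first_level_dict_key:
  second_level_dict_key: value_in_second_level_dict
"""

pprint.pprint(yaml.safe_load(sample))
# {\(aqfirst_level_dict_key\(aq: {\(aqsecond_level_dict_key\(aq: \(aqvalue_in_second_level_dict\(aq},
#  \(aqmy_dictionary\(aq: [\(aqlist_value_one\(aq, \(aqlist_value_two\(aq]}
.ft P
.fi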
.sp One excellent choice for experimenting with YAML parsing is: \fI\%http://yaml-online-parser.appspot.com/\fP .SH MASTER TOPS SYSTEM .sp In 0.10.4 the \fIexternal_nodes\fP system was upgraded to allow for modular subsystems to be used to generate the top file data for a highstate run on the master. .sp The old \fIexternal_nodes\fP option still works, but will be removed in the future in favor of the new \fImaster_tops\fP option which uses the modular system instead. The master tops system contains a number of subsystems that are loaded via the Salt loader interfaces like modules, states, returners, runners, etc. .sp Using the new \fImaster_tops\fP option is simple: .sp .nf .ft C master_tops: ext_nodes: cobbler\-external\-nodes .ft P .fi .sp for \fBCobbler\fP or: .sp .nf .ft C master_tops: reclass: inventory_base_uri: /etc/reclass classes_uri: roles .ft P .fi .sp for \fBReclass\fP. .SH SALT SSH .IP Note SALT\-SSH IS ALPHA SOFTWARE AND MAY NOT BE READY FOR PRODUCTION USE .RE .IP Note On many systems, \fBsalt\-ssh\fP will be in its own package, usually named \fBsalt\-ssh\fP. .RE .sp In version 0.17.0 of Salt a new transport system was introduced, the ability to use SSH for Salt communication. This addition allows for Salt routines to be executed on remote systems entirely through ssh, bypassing the need for a Salt Minion to be running on the remote systems and the need for a Salt Master. .IP Note The Salt SSH system does not supersede the standard Salt communication systems; it simply offers an SSH\-based alternative that does not require ZeroMQ and a remote agent. Be aware that since all communication with Salt SSH is executed via SSH it is substantially slower than standard Salt with ZeroMQ. .RE .sp Salt SSH is very easy to use: simply set up a basic \fIroster\fP file of the systems to connect to and run \fBsalt\-ssh\fP commands in a similar way to standard \fBsalt\fP commands. .IP Note Salt SSH is eventually intended to support the same set of commands and functionality as the standard \fBsalt\fP command. .sp At the moment fileserver operations must be wrapped to ensure that the relevant files are delivered with the \fBsalt\-ssh\fP commands. The state module is an exception, which compiles the state run on the master, and in the process finds all the references to \fBsalt://\fP paths and copies those files down in the same tarball as the state run. However, needed fileserver wrappers are still under development. .RE .SS Salt SSH Roster .sp The roster system in Salt allows for remote minions to be easily defined. .IP Note See the \fBRoster documentation\fP for more details. .RE .sp Simply create the roster file; the default location is \fI/etc/salt/roster\fP: .sp .nf .ft C web1: 192.168.42.1 .ft P .fi .sp This is a very basic roster file where a Salt ID is being assigned to an IP address. A more elaborate roster can be created: .sp .nf .ft C web1: host: 192.168.42.1 # The IP addr or DNS hostname user: fred # Remote executions will be executed as user fred passwd: foobarbaz # The password to use for login, if omitted, keys are used sudo: True # Whether to sudo to root, not enabled by default web2: host: 192.168.42.2 .ft P .fi .SS Calling Salt SSH .sp The \fBsalt\-ssh\fP command can be easily executed in the same way as a salt command: .sp .nf .ft C salt\-ssh \(aq*\(aq test.ping .ft P .fi .sp Commands with \fBsalt\-ssh\fP follow the same syntax as the \fBsalt\fP command. .sp The standard salt functions are available!
The output is the same as \fBsalt\fP and many of the same flags are available. Please see \fI\%http://docs.saltstack.com/ref/cli/salt-ssh.html\fP for all of the available options. .SS Raw Shell Calls .sp By default \fBsalt\-ssh\fP runs Salt execution modules on the remote system, but \fBsalt\-ssh\fP can also execute raw shell commands: .sp .nf .ft C salt\-ssh \(aq*\(aq \-r \(aqifconfig\(aq .ft P .fi .SS States Via Salt SSH .sp The Salt State system can also be used with \fBsalt\-ssh\fP. The state system abstracts the same interface to the user in \fBsalt\-ssh\fP as it does when using standard \fBsalt\fP. The intent is that Salt Formulas defined for standard \fBsalt\fP will work seamlessly with \fBsalt\-ssh\fP and vice\-versa. .sp The standard Salt States walkthroughs function by simply replacing \fBsalt\fP commands with \fBsalt\-ssh\fP. .SS Targeting with Salt SSH .sp Because the targeting approach differs in salt\-ssh, only glob and regex targets are supported as of this writing; the remaining target systems still need to be implemented. .SS Configuring Salt SSH .sp Salt SSH takes its configuration from a master configuration file. Normally, this file is in \fB/etc/salt/master\fP. If one wishes to use a customized configuration file, the \fB\-c\fP option to Salt SSH facilitates passing in a directory to look inside for a configuration file named \fBmaster\fP. .SS Running Salt SSH as non\-root user .sp By default, Salt reads all the configuration from /etc/salt/. If you are running Salt SSH as a regular user you have to modify some paths or you will get "Permission denied" messages. You have to modify two parameters: \fBpki_dir\fP and \fBcachedir\fP. These should point to full paths writable by the user. .sp It is recommended not to modify /etc/salt for this purpose. Create a private copy of /etc/salt for the user and run the command with \fB\-c /new/config/path\fP. .SS Define CLI Options with Saltfile .sp If you are commonly passing in CLI options to \fBsalt\-ssh\fP, you can create a \fBSaltfile\fP to automatically use these options. This is common if you\(aqre managing several different salt projects on the same server. .sp So if you \fBcd\fP into a directory with a Saltfile with the following contents: .sp .nf .ft C salt\-ssh: config_dir: path/to/config/dir max_procs: 30 .ft P .fi .sp Instead of having to call \fBsalt\-ssh \-\-config\-dir=path/to/config/dir \-\-max\-procs=30 \e* test.ping\fP, you can simply call \fBsalt\-ssh \e* test.ping\fP. .SH SALT ROSTERS .sp Salt rosters are pluggable systems added in Salt 0.17.0 to facilitate the \fBsalt\-ssh\fP system. The roster system was created because \fBsalt\-ssh\fP needs a means to identify which systems need to be targeted for execution. .IP Note The Roster System is not needed or used in standard Salt because the master does not need to be initially aware of target systems, since the Salt Minion checks itself into the master. .RE .sp Since the roster system is pluggable, it can be easily augmented to attach to any existing systems to gather information about what servers are presently available and should be attached to by \fBsalt\-ssh\fP. By default the roster file is located at /etc/salt/roster. .SS How Rosters Work .sp The roster system compiles a data structure internally referred to as \fItargets\fP. The \fItargets\fP is a list of target systems and attributes about how to connect to said systems. The only requirement for a roster module in Salt is to return the \fItargets\fP data structure.
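.sp
As an illustration of the requirement described above, a minimal roster module might look like the following sketch. The static host data is invented for the example, and the exact \fBtargets()\fP signature should be checked against the Roster documentation for the Salt release in use.
.sp
.nf
.ft C
# static_example.py \- a hypothetical roster module returning hard\-coded targets
import fnmatch

# Invented example data; a real roster module would typically gather this
# from an external inventory, CMDB, or similar source.
STATIC_HOSTS = {
    "web1": {"host": "192.168.42.1", "user": "fred", "sudo": True},
    "web2": {"host": "192.168.42.2"},
}


def targets(tgt, tgt_type="glob", **kwargs):
    # Return the subset of known systems whose Salt ID matches the glob.
    return dict((sid, data) for sid, data in STATIC_HOSTS.items()
                if fnmatch.fnmatch(sid, tgt))
.ft P
.fi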
.SS Targets Data .sp The information which can be stored in a roster \fItarget\fP is the following: .sp .nf .ft C <Salt ID>: # The id to reference the target system with host: # The IP address or DNS name of the remote host user: # The user to log in as passwd: # The password to log in with # Optional parameters port: # The target system\(aqs ssh port number sudo: # Boolean to run command via sudo priv: # File path to ssh private key, defaults to salt\-ssh.rsa timeout: # Number of seconds to wait for response .ft P .fi .SH REFERENCE .SS Full list of builtin auth modules .TS center; |l|l|. _ T{ \fBauto\fP T} T{ An "Always Approved" eauth interface to test against, not intended for production use T} _ T{ \fBkeystone\fP T} T{ Provide authentication using OpenStack Keystone T} _ T{ \fBldap\fP T} T{ Provide authentication using simple LDAP binds T} _ T{ \fBpam\fP T} T{ Authenticate against PAM T} _ T{ \fBpki\fP T} T{ Authenticate via a PKI certificate. T} _ T{ \fBstormpath_mod\fP T} T{ Salt Stormpath Authentication T} _ .TE .SS salt.auth.auto .sp An "Always Approved" eauth interface to test against, not intended for production use .INDENT 0.0 .TP .B salt.auth.auto.auth(username, password) Authenticate! .UNINDENT .SS salt.auth.keystone .sp Provide authentication using OpenStack Keystone .INDENT 0.0 .TP .B depends .INDENT 7.0 .IP \(bu 2 keystoneclient Python module .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.auth.keystone.auth(username, password) Try and authenticate .UNINDENT .INDENT 0.0 .TP .B salt.auth.keystone.get_auth_url() Try and get the URL from the config, else return localhost .UNINDENT .SS salt.auth.ldap .sp Provide authentication using simple LDAP binds .INDENT 0.0 .TP .B depends .INDENT 7.0 .IP \(bu 2 ldap Python module .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.auth.ldap.auth(username, password) Simple LDAP auth .UNINDENT .INDENT 0.0 .TP .B salt.auth.ldap.groups(username, **kwargs) Authenticate against an LDAP group .sp Uses groupou and basedn specified in group to filter group search .UNINDENT .SS salt.auth.pam .sp Authenticate against PAM .sp Provides an authenticate function that will allow the caller to authenticate a user against the Pluggable Authentication Modules (PAM) on the system. .sp Implemented using ctypes, so no compilation is necessary. .IP Note PAM authentication will not work for the \fBroot\fP user. .sp The Python interface to PAM does not support authenticating as \fBroot\fP. .RE .INDENT 0.0 .TP .B class salt.auth.pam.PamConv Wrapper class for pam_conv structure .INDENT 7.0 .TP .B appdata_ptr Structure/Union member .UNINDENT .INDENT 7.0 .TP .B conv Structure/Union member .UNINDENT .UNINDENT .INDENT 0.0 .TP .B class salt.auth.pam.PamHandle Wrapper class for pam_handle_t .INDENT 7.0 .TP .B handle Structure/Union member .UNINDENT .UNINDENT .INDENT 0.0 .TP .B class salt.auth.pam.PamMessage Wrapper class for pam_message structure .INDENT 7.0 .TP .B msg Structure/Union member .UNINDENT .INDENT 7.0 .TP .B msg_style Structure/Union member .UNINDENT .UNINDENT .INDENT 0.0 .TP .B class salt.auth.pam.PamResponse Wrapper class for pam_response structure .INDENT 7.0 .TP .B resp Structure/Union member .UNINDENT .INDENT 7.0 .TP .B resp_retcode Structure/Union member .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.auth.pam.auth(username, password, **kwargs) Authenticate via pam .UNINDENT .INDENT 0.0 .TP .B salt.auth.pam.authenticate(username, password, service=\(aqlogin\(aq) Returns True if the given username and password authenticate for the given service.
Returns False otherwise .sp \fBusername\fP: the username to authenticate .sp \fBpassword\fP: the password in plain text .INDENT 7.0 .TP .B \fBservice\fP: the PAM service to authenticate against. Defaults to \(aqlogin\(aq .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.auth.pam.groups(username, *args, **kwargs) Retrieve groups for a given user for this auth provider .sp Uses system groups .UNINDENT .SS salt.auth.pki .sp Authenticate via a PKI certificate. .IP Note This module is Experimental and should be used with caution .RE .sp Provides an authenticate function that will allow the caller to authenticate a user via their public cert against a pre\-defined Certificate Authority. .sp TODO: Add a \(aqca_dir\(aq option to configure a directory of CA files, a la Apache. .INDENT 0.0 .TP .B depends .INDENT 7.0 .IP \(bu 2 pyOpenSSL module .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.auth.pki.auth(pem, **kwargs) Returns True if the given user cert was issued by the CA. Returns False otherwise. .sp \fBpem\fP: a pem\-encoded user public key (certificate) .sp Configure the CA cert in the master config file: .sp .nf .ft C external_auth: pki: ca_file: /etc/pki/tls/ca_certs/trusted\-ca.crt .ft P .fi .UNINDENT .SS salt.auth.stormpath_mod .sp Salt Stormpath Authentication .sp Module to provide authentication using Stormpath as the backend. .INDENT 0.0 .TP .B depends .INDENT 7.0 .IP \(bu 2 stormpath\-sdk Python module .UNINDENT .TP .B configuration This module requires the development branch of the stormpath\-sdk which can be found here: \fI\%https://github.com/stormpath/stormpath-sdk-python\fP .sp The following config items are required in the master config: .sp .nf .ft C stormpath.api_key_file: stormpath.app_url: .ft P .fi .sp Ensure that your apiKey.properties is readable by the user the Salt Master is running as, but not readable by other system users. .UNINDENT .INDENT 0.0 .TP .B salt.auth.stormpath_mod.auth(username, password) Try and authenticate .UNINDENT .SS Command Line Reference .sp Salt can be controlled by a command line client by the root user on the Salt master. The Salt command line client uses the Salt client API to communicate with the Salt master server. The Salt client is straightforward and simple to use. .sp Using the Salt client, commands can be easily sent to the minions. .sp Each of these commands accepts an explicit \fI\-\-config\fP option to point to either the master or minion configuration file. If this option is not provided and the default configuration file does not exist then Salt falls back to use the environment variables \fBSALT_MASTER_CONFIG\fP and \fBSALT_MINION_CONFIG\fP. .IP "See also" .sp \fBConfiguration\fP .RE .SS Using the Salt Command .sp The Salt command needs a few components to send information to the Salt minions. The target minions need to be defined, as well as the function to call and any arguments the function requires. .SS Defining the Target Minions .sp The first argument passed to salt defines the target minions; the target minions are accessed via their hostname.
The default target type is a bash glob: .sp .nf .ft C salt \(aq*foo.com\(aq sys.doc .ft P .fi .sp Salt can also define the target minions with regular expressions: .sp .nf .ft C salt \-E \(aq.*\(aq cmd.run \(aqls \-l | grep foo\(aq .ft P .fi .sp Or to explicitly list hosts, salt can take a list: .sp .nf .ft C salt \-L foo.bar.baz,quo.qux cmd.run \(aqps aux | grep foo\(aq .ft P .fi .SS More Powerful Targets .sp The simple target specifications (glob, regex, and list) will cover many use cases, and for some will cover all use cases, but more powerful options exist. .SS Targeting with Grains .sp The Grains interface was built into Salt to allow minions to be targeted by system properties. So minions running on a particular operating system, or with a specific kernel, can be called to execute a function. .sp Calling via a grain is done by passing the \-G option to salt, specifying a grain and a glob expression to match the value of the grain. The syntax for the target is the grain key followed by a glob expression: "os:Arch*". .sp .nf .ft C salt \-G \(aqos:Fedora\(aq test.ping .ft P .fi .sp Will return True from all of the minions running Fedora. .sp To discover what grains are available and what the values are, execute the grains.items salt function: .sp .nf .ft C salt \(aq*\(aq grains.items .ft P .fi .sp More info on using targeting with grains can be found \fIhere\fP. .SS Targeting with Executions .sp As of 0.8.8 targeting with executions is still under heavy development and this documentation is written to reference the behavior of execution matching in the future. .sp Execution matching allows for a primary function to be executed, and then based on the return of the primary function the main function is executed. .sp Execution matching allows for matching minions based on any arbitrary running data on the minions. .SS Compound Targeting .sp New in version 0.9.5. .sp Multiple target interfaces can be used in conjunction to determine the command targets. These targets can then be combined using \fBand\fP or \fBor\fP statements. This is best explained with an example: .sp .nf .ft C salt \-C \(aqG@os:Debian and webser* or E@db.*\(aq test.ping .ft P .fi .sp In this example any minion whose id starts with \fBwebser\fP and is running Debian, or any minion whose id starts with db, will be matched. .sp The type of matcher defaults to glob, but can be specified with the corresponding letter followed by the \fB@\fP symbol. In the above example a grain is used with \fBG@\fP as well as a regular expression with \fBE@\fP. The \fBwebser*\fP target does not need to be prefaced with a target type specifier because it is a glob. .sp More info on using compound targeting can be found \fIhere\fP. .SS Node Group Targeting .sp New in version 0.9.5. .sp For certain cases, it can be convenient to have a predefined group of minions on which to execute commands. This can be accomplished using what are called \fInodegroups\fP. Nodegroups allow for predefined compound targets to be declared in the master configuration file, as a sort of shorthand for having to type out complicated compound expressions. .sp .nf .ft C nodegroups: \ group1: \(aqL@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com\(aq \ group2: \(aqG@os:Debian and foo.domain.com\(aq .ft P .fi .sp More info on using nodegroups can be found \fIhere\fP. .SS Calling the Function .sp The function to call on the specified target is placed after the target specification. .sp New in version 0.9.8.
.sp Functions may also accept arguments, space\-delimited: .sp .nf .ft C salt \(aq*\(aq cmd.exec_code python \(aqimport sys; print sys.version\(aq .ft P .fi .sp Optional keyword arguments are also supported: .sp .nf .ft C salt \(aq*\(aq pip.install salt timeout=5 upgrade=True .ft P .fi .sp They are always in the form of \fBkwarg=argument\fP. .sp Arguments are formatted as YAML: .sp .nf .ft C salt \(aq*\(aq cmd.run \(aqecho "Hello: $FIRST_NAME"\(aq env=\(aq{FIRST_NAME: "Joe"}\(aq .ft P .fi .sp Note: dictionaries must have curly braces around them (like the \fBenv\fP keyword argument above). This was changed in 0.15.1: in the above example, the first argument used to be parsed as the dictionary \fB{\(aqecho "Hello\(aq: \(aq$FIRST_NAME"\(aq}\fP. This was generally not the expected behavior. .sp If you want to test what parameters are actually passed to a module, use the \fBtest.arg_repr\fP command: .sp .nf .ft C salt \(aq*\(aq test.arg_repr \(aqecho "Hello: $FIRST_NAME"\(aq env=\(aq{FIRST_NAME: "Joe"}\(aq .ft P .fi .SS Finding available minion functions .sp The Salt functions are self\-documenting; all of the function documentation can be retrieved from the minions via the \fBsys.doc()\fP function: .sp .nf .ft C salt \(aq*\(aq sys.doc .ft P .fi .SS Compound Command Execution .sp If a series of commands needs to be sent to a single target specification then the commands can be sent in a single publish. This can make gathering groups of information faster, and lowers the stress on the network for repeated commands. .sp Compound command execution works by sending a list of functions and arguments instead of sending a single function and argument. The functions are executed on the minion in the order they are defined on the command line, and then the data from all of the commands is returned in a dictionary. This means that the set of commands is called in a predictable way, and the returned data can be easily interpreted. .sp Executing compound commands is done by passing a comma\-delimited list of functions, followed by a comma\-delimited list of arguments: .sp .nf .ft C salt \(aq*\(aq cmd.run,test.ping,test.echo \(aqcat /proc/cpuinfo\(aq,,foo .ft P .fi .sp The trick to look out for here is that if a function is being passed no arguments, then there needs to be a placeholder for the absent arguments. This is why in the above example, there are two commas right next to each other. \fBtest.ping\fP takes no arguments, so we need to add another comma; otherwise, Salt would attempt to pass "foo" to \fBtest.ping\fP. .sp If you need to pass arguments that include commas, then make sure you add spaces around the commas that separate arguments. For example: .sp .nf .ft C salt \(aq*\(aq cmd.run,test.ping,test.echo \(aqecho "1,2,3"\(aq , , foo .ft P .fi .sp You may change the arguments separator using the \fB\-\-args\-separator\fP option: .sp .nf .ft C salt \-\-args\-separator=:: \(aq*\(aq some.fun,test.echo params with , comma :: foo .ft P .fi .SS salt\-call .SS \fBsalt\-call\fP .SS Synopsis .sp .nf .ft C salt\-call [options] .ft P .fi .SS Description .sp The salt\-call command is used to run module functions locally on a minion instead of executing them from the master. .SS Options .INDENT 0.0 .TP .B \-\-version Print the version of Salt that is running.
.UNINDENT .INDENT 0.0 .TP .B \-\-versions\-report Show the program\(aqs dependencies and version number, and then exit .UNINDENT .INDENT 0.0 .TP .B \-h, \-\-help Show the help message and exit .UNINDENT .INDENT 0.0 .TP .B \-c CONFIG_DIR, \-\-config\-dir=CONFIG_DIR The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is \fB/etc/salt\fP. .UNINDENT .INDENT 0.0 .TP .B \-g, \-\-grains Return the information generated by the Salt grains .UNINDENT .INDENT 0.0 .TP .B \-m MODULE_DIRS, \-\-module\-dirs=MODULE_DIRS Specify additional directories to pull modules from; multiple directories can be delimited by commas .UNINDENT .INDENT 0.0 .TP .B \-d, \-\-doc, \-\-documentation Return the documentation for the specified module or for all modules if none are specified .UNINDENT .INDENT 0.0 .TP .B \-\-master=MASTER Specify the master to use. The minion must be authenticated with the master. If this option is omitted, the master options from the minion config will be used. If multiple masters are set up, the first listed master that responds will be used. .UNINDENT .INDENT 0.0 .TP .B \-\-return RETURNER Set salt\-call to pass the return data to one or many returner interfaces. To use many returner interfaces, specify a comma\-delimited list of returners. .UNINDENT .INDENT 0.0 .TP .B \-\-local Run salt\-call locally, as if there were no master running. .UNINDENT .SS Logging Options .sp Logging options which override any settings defined in the configuration files. .INDENT 0.0 .TP .B \-l LOG_LEVEL, \-\-log\-level=LOG_LEVEL Console logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBinfo\fP. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file=LOG_FILE Log file path. Default: /var/log/salt/minion. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file\-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBinfo\fP. .UNINDENT .SS Output Options .INDENT 0.0 .TP .B \-\-out Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters: .INDENT 7.0 .INDENT 3.5 \fBgrains\fP, \fBhighstate\fP, \fBjson\fP, \fBkey\fP, \fBoverstatestage\fP, \fBpprint\fP, \fBraw\fP, \fBtxt\fP, \fByaml\fP .UNINDENT .UNINDENT .sp Some outputters are formatted only for data returned from specific functions; for instance, the \fBgrains\fP outputter will not work for non\-grains data. .sp If an outputter is used that does not support the data passed into it, then Salt will fall back on the \fBpprint\fP outputter and display the return data using the Python \fBpprint\fP standard library module. .IP Note If using \fB\-\-out=json\fP, you will probably want \fB\-\-static\fP as well. Without the static option, you will get a JSON string for each minion. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use \fB\-\-static\fP as well. .RE .UNINDENT .INDENT 0.0 .TP .B \-\-out\-indent OUTPUT_INDENT, \-\-output\-indent OUTPUT_INDENT Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation. .UNINDENT .INDENT 0.0 .TP .B \-\-out\-file=OUTPUT_FILE, \-\-output\-file=OUTPUT_FILE Write the output to the specified file.
.UNINDENT .INDENT 0.0 .TP .B \-\-no\-color Disable all colored output .UNINDENT .INDENT 0.0 .TP .B \-\-force\-color Force colored output .IP Note When using colored output the color codes are as follows: .sp \fBgreen\fP denotes success, \fBred\fP denotes failure, \fBblue\fP denotes changes and success, and \fByellow\fP denotes an expected future change in configuration. .RE .UNINDENT .SS See also .sp \fIsalt(1)\fP \fIsalt\-master(1)\fP \fIsalt\-minion(1)\fP .SS salt .SS \fBsalt\fP .SS Synopsis .INDENT 0.0 .INDENT 3.5 salt \(aq*\(aq [ options ] sys.doc .sp salt \-E \(aq.*\(aq [ options ] sys.doc cmd .sp salt \-G \(aqos:Arch.*\(aq [ options ] test.ping .sp salt \-C \(aqG@os:Arch.* and webserv* or G@kernel:FreeBSD\(aq [ options ] test.ping .UNINDENT .UNINDENT .SS Description .sp Salt allows for commands to be executed across a swath of remote systems in parallel. This means that remote systems can be both controlled and queried with ease. .SS Options .INDENT 0.0 .TP .B \-\-version Print the version of Salt that is running. .UNINDENT .INDENT 0.0 .TP .B \-\-versions\-report Show the program\(aqs dependencies and version number, and then exit .UNINDENT .INDENT 0.0 .TP .B \-h, \-\-help Show the help message and exit .UNINDENT .INDENT 0.0 .TP .B \-c CONFIG_DIR, \-\-config\-dir=CONFIG_DIR The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is \fB/etc/salt\fP. .UNINDENT .INDENT 0.0 .TP .B \-t TIMEOUT, \-\-timeout=TIMEOUT The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5 .UNINDENT .INDENT 0.0 .TP .B \-s, \-\-static By default as of version 0.9.8 the salt command returns data to the console as it is received from minions, but previous releases would return data only after all data was received. To only return the data with a hard timeout and after all minions have returned, use the static option. .UNINDENT .INDENT 0.0 .TP .B \-\-async Instead of waiting for the job to run on minions, only print the job id of the started execution and complete. .UNINDENT .INDENT 0.0 .TP .B \-\-state\-output=STATE_OUTPUT New in version 0.17. .sp Override the configured state_output value for minion output. Default: full .UNINDENT .INDENT 0.0 .TP .B \-\-subset=SUBSET Execute the routine on a random subset of the targeted minions. The minions will be verified to have the named function before executing. .UNINDENT .INDENT 0.0 .TP .B \-v VERBOSE, \-\-verbose Turn on verbosity for the salt call; this will cause the salt command to print out extra data like the job id. .UNINDENT .INDENT 0.0 .TP .B \-\-show\-timeout Instead of only showing the return data from the online minions, this option also prints the names of the minions which could not be reached. .UNINDENT .INDENT 0.0 .TP .B \-b BATCH, \-\-batch\-size=BATCH Instead of executing on all targeted minions at once, execute on a progressive set of minions. This option takes an argument in the form of an explicit number of minions to execute at once, or a percentage of minions to execute on. .UNINDENT .INDENT 0.0 .TP .B \-a EAUTH, \-\-auth=EAUTH Pass in an external authentication medium to validate against. The credentials will be prompted for. Can be used with the \-T option. .UNINDENT .INDENT 0.0 .TP .B \-T, \-\-make\-token Used in conjunction with the \-a option.
This creates a token that allows for the authenticated user to send commands without needing to re\-authenticate. .UNINDENT .INDENT 0.0 .TP .B \-\-return=RETURNER Choose an alternative returner to call on the minion; if an alternative returner is used then the return will not come back to the command line, but will be sent to the specified return system. .UNINDENT .INDENT 0.0 .TP .B \-d, \-\-doc, \-\-documentation Return the documentation for the module functions available on the minions .UNINDENT .INDENT 0.0 .TP .B \-\-args\-separator=ARGS_SEPARATOR Set the special argument used as a delimiter between command arguments of compound commands. This is useful when one wants to pass commas as arguments to some of the commands in a compound command. .UNINDENT .SS Logging Options .sp Logging options which override any settings defined in the configuration files. .INDENT 0.0 .TP .B \-l LOG_LEVEL, \-\-log\-level=LOG_LEVEL Console logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file=LOG_FILE Log file path. Default: /var/log/salt/master. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file\-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .SS Target Selection .INDENT 0.0 .TP .B \-E, \-\-pcre The target expression will be interpreted as a PCRE regular expression rather than a shell glob. .UNINDENT .INDENT 0.0 .TP .B \-L, \-\-list The target expression will be interpreted as a comma\-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux .UNINDENT .INDENT 0.0 .TP .B \-G, \-\-grain The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of \(aq<grain value>:<glob expression>\(aq; example: \(aqos:Arch*\(aq .sp This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the \-\-grain\-pcre option. .UNINDENT .INDENT 0.0 .TP .B \-\-grain\-pcre The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of \(aq<grain value>:<regular expression>\(aq; example: \(aqos:Arch.*\(aq .UNINDENT .INDENT 0.0 .TP .B \-N, \-\-nodegroup Use a predefined compound target defined in the Salt master configuration file. .UNINDENT .INDENT 0.0 .TP .B \-R, \-\-range Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster. .sp Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file. .UNINDENT .INDENT 0.0 .TP .B \-C, \-\-compound Utilize many target definitions to make the call very granular. This option takes a group of targets separated by \fBand\fP or \fBor\fP. The default matcher is a glob as usual. If something other than a glob is used, preface it with the letter denoting the type; example: \(aqwebserv* and G@os:Debian or E@db*\(aq. Make sure that the compound target is encapsulated in quotes. .UNINDENT .INDENT 0.0 .TP .B \-I, \-\-pillar Instead of using shell globs to evaluate the target, use a pillar value to identify targets.
The syntax for the target is the pillar key followed by a glob expression: "role:production*" .UNINDENT .INDENT 0.0 .TP .B \-S, \-\-ipcidr Match based on Subnet (CIDR notation) or IPv4 address. .UNINDENT .SS Output Options .INDENT 0.0 .TP .B \-\-out Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters: .INDENT 7.0 .INDENT 3.5 \fBgrains\fP, \fBhighstate\fP, \fBjson\fP, \fBkey\fP, \fBoverstatestage\fP, \fBpprint\fP, \fBraw\fP, \fBtxt\fP, \fByaml\fP .UNINDENT .UNINDENT .sp Some outputters are formatted only for data returned from specific functions; for instance, the \fBgrains\fP outputter will not work for non\-grains data. .sp If an outputter is used that does not support the data passed into it, then Salt will fall back on the \fBpprint\fP outputter and display the return data using the Python \fBpprint\fP standard library module. .IP Note If using \fB\-\-out=json\fP, you will probably want \fB\-\-static\fP as well. Without the static option, you will get a JSON string for each minion. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use \fB\-\-static\fP as well. .RE .UNINDENT .INDENT 0.0 .TP .B \-\-out\-indent OUTPUT_INDENT, \-\-output\-indent OUTPUT_INDENT Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation. .UNINDENT .INDENT 0.0 .TP .B \-\-out\-file=OUTPUT_FILE, \-\-output\-file=OUTPUT_FILE Write the output to the specified file. .UNINDENT .INDENT 0.0 .TP .B \-\-no\-color Disable all colored output .UNINDENT .INDENT 0.0 .TP .B \-\-force\-color Force colored output .IP Note When using colored output the color codes are as follows: .sp \fBgreen\fP denotes success, \fBred\fP denotes failure, \fBblue\fP denotes changes and success and \fByellow\fP denotes a expected future change in configuration. .RE .UNINDENT .SS See also .sp \fIsalt(7)\fP \fIsalt\-master(1)\fP \fIsalt\-minion(1)\fP .SS salt\-cloud .SS \fBsalt\-cloud\fP .sp Provision virtual machines in the cloud with Salt .SS Synopsis .sp .nf .ft C salt\-cloud \-m /etc/salt/cloud.map salt\-cloud \-p PROFILE NAME salt\-cloud \-p PROFILE NAME1 NAME2 NAME3 NAME4 NAME5 NAME6 .ft P .fi .SS Description .sp Salt Cloud is the system used to provision virtual machines on various public clouds via a cleanly controlled profile and mapping system. .SS Options .INDENT 0.0 .TP .B \-h, \-\-help Print a usage message briefly summarizing these command\-line options. .UNINDENT .INDENT 0.0 .TP .B \-p PROFILE, \-\-profile=PROFILE Select a single profile to build the named cloud VMs from. The profile must be defined in the specified profiles file. .UNINDENT .INDENT 0.0 .TP .B \-m MAP, \-\-map=MAP Specify a map file to use. If used without any other options, this option will ensure that all of the mapped VMs are created. If the named VM already exists then it will be skipped. .UNINDENT .INDENT 0.0 .TP .B \-H, \-\-hard When specifying a map file, the default behavior is to ensure that all of the VMs specified in the map file are created. If the \-\-hard option is set, then any VMs that exist on configured cloud providers that are not specified in the map file will be destroyed. Be advised that this can be a destructive operation and should be used with care. .UNINDENT .INDENT 0.0 .TP .B \-d, \-\-destroy Pass in the name(s) of VMs to destroy, salt\-cloud will search the configured cloud providers for the specified names and destroy the VMs. 
Be advised that this is a destructive operation and should be used with care. Can be used in conjunction with the \-m option to specify a map of VMs to be deleted. .UNINDENT .INDENT 0.0 .TP .B \-P, \-\-parallel Normally when building many cloud VMs they are executed serially. The \-P option will run each cloud vm build in a separate process allowing for large groups of VMs to be build at once. .sp Be advised that some cloud provider\(aqs systems don\(aqt seem to be well suited for this influx of vm creation. When creating large groups of VMs watch the cloud provider carefully. .UNINDENT .INDENT 0.0 .TP .B \-Q, \-\-query Execute a query and print out information about all cloud VMs. Can be used in conjunction with \-m to display only information about the specified map. .UNINDENT .INDENT 0.0 .TP .B \-F, \-\-full\-query Execute a query and print out all available information about all cloud VMs. Can be used in conjunction with \-m to display only information about the specified map. .UNINDENT .INDENT 0.0 .TP .B \-S, \-\-select\-query Execute a query and print out selected information about all cloud VMs. Can be used in conjunction with \-m to display only information about the specified map. .UNINDENT .INDENT 0.0 .TP .B \-\-list\-images Display a list of images available in configured cloud providers. Pass the cloud provider that available images are desired on, aka "linode", or pass "all" to list images for all configured cloud providers. .UNINDENT .INDENT 0.0 .TP .B \-\-list\-sizes Display a list of sizes available in configured cloud providers. Pass the cloud provider that available sizes are desired on, aka "aws", or pass "all" to list sizes for all configured cloud providers .UNINDENT .INDENT 0.0 .TP .B \-C CLOUD_CONFIG, \-\-cloud\-config=CLOUD_CONFIG Specify an alternative location for the salt cloud configuration file. Default location is /etc/salt/cloud. .UNINDENT .INDENT 0.0 .TP .B \-M MASTER_CONFIG, \-\-master\-config=MASTER_CONFIG Specify an alternative location for the salt master configuration file. The salt master configuration file is used to determine how to handle the minion RSA keys. Default location is /etc/salt/master. .UNINDENT .INDENT 0.0 .TP .B \-V VM_CONFIG, \-\-profiles=VM_CONFIG, \-\-vm_config=VM_CONFIG Specify an alternative location for the salt cloud profiles file. Default location is /etc/salt/cloud.profiles. .UNINDENT .INDENT 0.0 .TP .B \-\-raw\-out Print the output from the salt command in raw python form, this is suitable for re\-reading the output into an executing python script with eval. .UNINDENT .INDENT 0.0 .TP .B \-\-text\-out Print the output from the salt command in the same form the shell would. .UNINDENT .INDENT 0.0 .TP .B \-\-yaml\-out Print the output from the salt command in yaml. .UNINDENT .INDENT 0.0 .TP .B \-\-json\-out Print the output from the salt command in json. .UNINDENT .INDENT 0.0 .TP .B \-\-no\-color Disable all colored output. 
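.sp For example, to run a query with colored output disabled (one possible combination of the \-Q and \-\-no\-color options documented above, handy when the output is captured by a script): .sp .nf .ft C salt\-cloud \-Q \-\-no\-color .ft P .fi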
.UNINDENT .SS Examples .sp To create 4 VMs named web1, web2, db1 and db2 from specified profiles: .sp .nf .ft C salt\-cloud \-p fedora_rackspace web1 web2 db1 db2 .ft P .fi .sp To read in a map file and create all VMs specified therein: .sp .nf .ft C salt\-cloud \-m /path/to/cloud.map .ft P .fi .sp To read in a map file and create all VMs specified therein in parallel: .sp .nf .ft C salt\-cloud \-m /path/to/cloud.map \-P .ft P .fi .sp To delete any VMs specified in the map file: .sp .nf .ft C salt\-cloud \-m /path/to/cloud.map \-d .ft P .fi .sp To delete any VMs NOT specified in the map file: .sp .nf .ft C salt\-cloud \-m /path/to/cloud.map \-H .ft P .fi .sp To display the status of all VMs specified in the map file: .sp .nf .ft C salt\-cloud \-m /path/to/cloud.map \-Q .ft P .fi .SS See also .sp \fIsalt\-cloud(7)\fP \fIsalt(7)\fP \fIsalt\-master(1)\fP \fIsalt\-minion(1)\fP .SS salt\-cp .SS \fBsalt\-cp\fP .sp Copy a file to a set of systems .SS Synopsis .sp .nf .ft C salt\-cp \(aq*\(aq [ options ] SOURCE DEST salt\-cp \-E \(aq.*\(aq [ options ] SOURCE DEST salt\-cp \-G \(aqos:Arch.*\(aq [ options ] SOURCE DEST .ft P .fi .SS Description .sp Salt copy copies a local file out to all of the Salt minions matched by the given target. .SS Options .INDENT 0.0 .TP .B \-\-version Print the version of Salt that is running. .UNINDENT .INDENT 0.0 .TP .B \-\-versions\-report Show program\(aqs dependencies and version number, and then exit .UNINDENT .INDENT 0.0 .TP .B \-h, \-\-help Show the help message and exit .UNINDENT .INDENT 0.0 .TP .B \-c CONFIG_DIR, \-\-config\-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is \fB/etc/salt\fP. .UNINDENT .INDENT 0.0 .TP .B \-t TIMEOUT, \-\-timeout=TIMEOUT The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5 .UNINDENT .SS Logging Options .sp Logging options which override any settings defined on the configuration files. .INDENT 0.0 .TP .B \-l LOG_LEVEL, \-\-log\-level=LOG_LEVEL Console logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file=LOG_FILE Log file path. Default: /var/log/salt/master. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file\-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .SS Target Selection .INDENT 0.0 .TP .B \-E, \-\-pcre The target expression will be interpreted as a PCRE regular expression rather than a shell glob. .UNINDENT .INDENT 0.0 .TP .B \-L, \-\-list The target expression will be interpreted as a comma\-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux .UNINDENT .INDENT 0.0 .TP .B \-G, \-\-grain The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of \(aq:\(aq; example: \(aqos:Arch*\(aq .sp This was changed in version 0.9.8 to accept glob expressions instead of regular expression. To use regular expression matching with grains, use the \-\-grain\-pcre option. .UNINDENT .INDENT 0.0 .TP .B \-\-grain\-pcre The target expression matches values returned by the Salt grains system on the minions. 
The target expression is in the format of \(aq:< regular expression>\(aq; example: \(aqos:Arch.*\(aq .UNINDENT .INDENT 0.0 .TP .B \-N, \-\-nodegroup Use a predefined compound target defined in the Salt master configuration file. .UNINDENT .INDENT 0.0 .TP .B \-R, \-\-range Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster. .sp Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file. .UNINDENT .SS See also .sp \fIsalt(1)\fP \fIsalt\-master(1)\fP \fIsalt\-minion(1)\fP .SS salt\-key .SS \fBsalt\-key\fP .SS Synopsis .sp salt\-key [ options ] .SS Description .sp Salt\-key executes simple management of Salt server public keys used for authentication. .SS Options .INDENT 0.0 .TP .B \-\-version Print the version of Salt that is running. .UNINDENT .INDENT 0.0 .TP .B \-\-versions\-report Show program\(aqs dependencies and version number, and then exit .UNINDENT .INDENT 0.0 .TP .B \-h, \-\-help Show the help message and exit .UNINDENT .INDENT 0.0 .TP .B \-c CONFIG_DIR, \-\-config\-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is \fB/etc/salt\fP. .UNINDENT .INDENT 0.0 .TP .B \-q, \-\-quiet Suppress output .UNINDENT .INDENT 0.0 .TP .B \-y, \-\-yes Answer \(aqYes\(aq to all questions presented, defaults to False .UNINDENT .SS Logging Options .sp Logging options which override any settings defined on the configuration files. .INDENT 0.0 .TP .B \-\-log\-file=LOG_FILE Log file path. Default: /var/log/salt/minion. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file\-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .SS Output Options .INDENT 0.0 .TP .B \-\-out Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters: .INDENT 7.0 .INDENT 3.5 \fBgrains\fP, \fBhighstate\fP, \fBjson\fP, \fBkey\fP, \fBoverstatestage\fP, \fBpprint\fP, \fBraw\fP, \fBtxt\fP, \fByaml\fP .UNINDENT .UNINDENT .sp Some outputters are formatted only for data returned from specific functions; for instance, the \fBgrains\fP outputter will not work for non\-grains data. .sp If an outputter is used that does not support the data passed into it, then Salt will fall back on the \fBpprint\fP outputter and display the return data using the Python \fBpprint\fP standard library module. .IP Note If using \fB\-\-out=json\fP, you will probably want \fB\-\-static\fP as well. Without the static option, you will get a JSON string for each minion. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use \fB\-\-static\fP as well. .RE .UNINDENT .INDENT 0.0 .TP .B \-\-out\-indent OUTPUT_INDENT, \-\-output\-indent OUTPUT_INDENT Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation. .UNINDENT .INDENT 0.0 .TP .B \-\-out\-file=OUTPUT_FILE, \-\-output\-file=OUTPUT_FILE Write the output to the specified file. 
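.sp For example, to write the full key list as JSON to a file (the path is illustrative; the \-\-list action is described under Actions below): .sp .nf .ft C salt\-key \-\-list all \-\-out=json \-\-out\-file=/tmp/salt\-keys.json .ft P .fi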
.UNINDENT .INDENT 0.0 .TP .B \-\-no\-color Disable all colored output .UNINDENT .INDENT 0.0 .TP .B \-\-force\-color Force colored output .IP Note When using colored output the color codes are as follows: .sp \fBgreen\fP denotes success, \fBred\fP denotes failure, \fBblue\fP denotes changes and success and \fByellow\fP denotes a expected future change in configuration. .RE .UNINDENT .SS Actions .INDENT 0.0 .TP .B \-l ARG, \-\-list=ARG List the public keys. The args \fBpre\fP, \fBun\fP, and \fBunaccepted\fP will list unaccepted/unsigned keys. \fBacc\fP or \fBaccepted\fP will list accepted/signed keys. \fBrej\fP or \fBrejected\fP will list rejected keys. Finally, \fBall\fP will list all keys. .UNINDENT .INDENT 0.0 .TP .B \-L, \-\-list\-all List all public keys. (Deprecated: use \fB\-\-list all\fP) .UNINDENT .INDENT 0.0 .TP .B \-a ACCEPT, \-\-accept=ACCEPT Accept the specified public key (use \-\-include\-all to match rejected keys in addition to pending keys). Globs are supported. .UNINDENT .INDENT 0.0 .TP .B \-A, \-\-accept\-all Accepts all pending keys. .UNINDENT .INDENT 0.0 .TP .B \-r REJECT, \-\-reject=REJECT Reject the specified public key (use \-\-include\-all to match accepted keys in addition to pending keys). Globs are supported. .UNINDENT .INDENT 0.0 .TP .B \-R, \-\-reject\-all Rejects all pending keys. .UNINDENT .INDENT 0.0 .TP .B \-\-include\-all Include non\-pending keys when accepting/rejecting. .UNINDENT .INDENT 0.0 .TP .B \-p PRINT, \-\-print=PRINT Print the specified public key. .UNINDENT .INDENT 0.0 .TP .B \-P, \-\-print\-all Print all public keys .UNINDENT .INDENT 0.0 .TP .B \-d DELETE, \-\-delete=DELETE Delete the specified key. Globs are supported. .UNINDENT .INDENT 0.0 .TP .B \-D, \-\-delete\-all Delete all keys. .UNINDENT .INDENT 0.0 .TP .B \-f FINGER, \-\-finger=FINGER Print the specified key\(aqs fingerprint. .UNINDENT .INDENT 0.0 .TP .B \-F, \-\-finger\-all Print all keys\(aq fingerprints. .UNINDENT .SS Key Generation Options .INDENT 0.0 .TP .B \-\-gen\-keys=GEN_KEYS Set a name to generate a keypair for use with salt .UNINDENT .INDENT 0.0 .TP .B \-\-gen\-keys\-dir=GEN_KEYS_DIR Set the directory to save the generated keypair. Only works with \(aqgen_keys_dir\(aq option; default is the current directory. .UNINDENT .INDENT 0.0 .TP .B \-\-keysize=KEYSIZE Set the keysize for the generated key, only works with the \(aq\-\-gen\-keys\(aq option, the key size must be 2048 or higher, otherwise it will be rounded up to 2048. The default is 2048. .UNINDENT .SS See also .sp \fIsalt(7)\fP \fIsalt\-master(1)\fP \fIsalt\-minion(1)\fP .SS salt\-master .SS \fBsalt\-master\fP .sp The Salt master daemon, used to control the Salt minions .SS Synopsis .sp salt\-master [ options ] .SS Description .sp The master daemon controls the Salt minions .SS Options .INDENT 0.0 .TP .B \-\-version Print the version of Salt that is running. .UNINDENT .INDENT 0.0 .TP .B \-\-versions\-report Show program\(aqs dependencies and version number, and then exit .UNINDENT .INDENT 0.0 .TP .B \-h, \-\-help Show the help message and exit .UNINDENT .INDENT 0.0 .TP .B \-c CONFIG_DIR, \-\-config\-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is \fB/etc/salt\fP. 
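.sp For example, to start the master as a daemon (the \-d option described below) with an alternative configuration directory (the path is illustrative): .sp .nf .ft C salt\-master \-c /srv/salt/etc \-d .ft P .fi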
.UNINDENT .INDENT 0.0 .TP .B \-u USER, \-\-user=USER Specify user to run salt\-master .UNINDENT .INDENT 0.0 .TP .B \-d, \-\-daemon Run salt\-master as a daemon .UNINDENT .INDENT 0.0 .TP .B \-\-pid\-file PIDFILE Specify the location of the pidfile. Default: /var/run/salt\-master.pid .UNINDENT .SS Logging Options .sp Logging options which override any settings defined on the configuration files. .INDENT 0.0 .TP .B \-l LOG_LEVEL, \-\-log\-level=LOG_LEVEL Console logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file=LOG_FILE Log file path. Default: /var/log/salt/master. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file\-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .SS See also .sp \fIsalt(1)\fP \fIsalt(7)\fP \fIsalt\-minion(1)\fP .SS salt\-minion .SS \fBsalt\-minion\fP .sp The Salt minion daemon, receives commands from a remote Salt master. .SS Synopsis .sp salt\-minion [ options ] .SS Description .sp The Salt minion receives commands from the central Salt master and replies with the results of said commands. .SS Options .INDENT 0.0 .TP .B \-\-version Print the version of Salt that is running. .UNINDENT .INDENT 0.0 .TP .B \-\-versions\-report Show program\(aqs dependencies and version number, and then exit .UNINDENT .INDENT 0.0 .TP .B \-h, \-\-help Show the help message and exit .UNINDENT .INDENT 0.0 .TP .B \-c CONFIG_DIR, \-\-config\-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is \fB/etc/salt\fP. .UNINDENT .INDENT 0.0 .TP .B \-u USER, \-\-user=USER Specify user to run salt\-minion .UNINDENT .INDENT 0.0 .TP .B \-d, \-\-daemon Run salt\-minion as a daemon .UNINDENT .INDENT 0.0 .TP .B \-\-pid\-file PIDFILE Specify the location of the pidfile. Default: /var/run/salt\-minion.pid .UNINDENT .SS Logging Options .sp Logging options which override any settings defined on the configuration files. .INDENT 0.0 .TP .B \-l LOG_LEVEL, \-\-log\-level=LOG_LEVEL Console logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file=LOG_FILE Log file path. Default: /var/log/salt/minion. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file\-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .SS See also .sp \fIsalt(1)\fP \fIsalt(7)\fP \fIsalt\-master(1)\fP .SS salt\-run .SS \fBsalt\-run\fP .sp Execute a Salt runner .SS Synopsis .sp .nf .ft C salt\-run RUNNER .ft P .fi .SS Description .sp salt\-run is the frontend command for executing \fBSalt Runners\fP. Salt runners are simple modules used to execute convenience functions on the master .SS Options .INDENT 0.0 .TP .B \-\-version Print the version of Salt that is running. .UNINDENT .INDENT 0.0 .TP .B \-\-versions\-report Show program\(aqs dependencies and version number, and then exit .UNINDENT .INDENT 0.0 .TP .B \-h, \-\-help Show the help message and exit .UNINDENT .INDENT 0.0 .TP .B \-c CONFIG_DIR, \-\-config\-dir=CONFIG_dir The location of the Salt configuration directory. 
This directory contains the configuration files for Salt master and minions. The default location on most systems is \fB/etc/salt\fP. .UNINDENT .INDENT 0.0 .TP .B \-t TIMEOUT, \-\-timeout=TIMEOUT The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 1 .UNINDENT .INDENT 0.0 .TP .B \-d, \-\-doc, \-\-documentation Display documentation for runners, pass a module or a runner to see documentation on only that module/runner. .UNINDENT .SS Logging Options .sp Logging options which override any settings defined on the configuration files. .INDENT 0.0 .TP .B \-l LOG_LEVEL, \-\-log\-level=LOG_LEVEL Console logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file=LOG_FILE Log file path. Default: /var/log/salt/master. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file\-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .SS See also .sp \fIsalt(1)\fP \fIsalt\-master(1)\fP \fIsalt\-minion(1)\fP .SS salt\-ssh .SS \fBsalt\-ssh\fP .SS Synopsis .INDENT 0.0 .INDENT 3.5 salt\-ssh \(aq*\(aq [ options ] sys.doc .sp salt\-ssh \-E \(aq.*\(aq [ options ] sys.doc cmd .UNINDENT .UNINDENT .SS Description .sp Salt SSH allows for salt routines to be executed using only SSH for transport .SS Options .INDENT 0.0 .TP .B \-r, \-\-raw, \-\-raw\-shell Execute a raw shell command. .UNINDENT .INDENT 0.0 .TP .B \-\-priv Specify the SSH private key file to be used for authentication. .UNINDENT .INDENT 0.0 .TP .B \-\-roster Define which roster system to use, this defines if a database backend, scanner, or custom roster system is used. Default is the flat file roster. .UNINDENT .INDENT 0.0 .TP .B \-\-roster\-file Define an alternative location for the default roster file location. The default roster file is called \fBroster\fP and is found in the same directory as the master config file. .sp New in version 2014.1.0: (Hydrogen) .UNINDENT .INDENT 0.0 .TP .B \-\-refresh, \-\-refresh\-cache Force a refresh of the master side data cache of the target\(aqs data. This is needed if a target\(aqs grains have been changed and the auto refresh timeframe has not been reached. .UNINDENT .INDENT 0.0 .TP .B \-\-max\-procs Set the number of concurrent minions to communicate with. This value defines how many processes are opened up at a time to manage connections, the more running process the faster communication should be, default is 25. .UNINDENT .INDENT 0.0 .TP .B \-i, \-\-ignore\-host\-keys Ignore the ssh host keys which by default are honored and connections would ask for approval. .UNINDENT .INDENT 0.0 .TP .B \-\-passwd Set the default password to attempt to use when authenticating. .UNINDENT .INDENT 0.0 .TP .B \-\-key\-deploy Set this flag to attempt to deploy the authorized ssh key with all minions. This combined with \-\-passwd can make initial deployment of keys very fast and easy. .UNINDENT .INDENT 0.0 .TP .B \-\-version Print the version of Salt that is running. 
.UNINDENT .INDENT 0.0 .TP .B \-\-versions\-report Show program\(aqs dependencies and version number, and then exit .UNINDENT .INDENT 0.0 .TP .B \-h, \-\-help Show the help message and exit .UNINDENT .INDENT 0.0 .TP .B \-c CONFIG_DIR, \-\-config\-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is \fB/etc/salt\fP. .UNINDENT .SS Target Selection .INDENT 0.0 .TP .B \-E, \-\-pcre The target expression will be interpreted as a PCRE regular expression rather than a shell glob. .UNINDENT .INDENT 0.0 .TP .B \-L, \-\-list The target expression will be interpreted as a comma\-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux .UNINDENT .INDENT 0.0 .TP .B \-G, \-\-grain The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of \(aq:\(aq; example: \(aqos:Arch*\(aq .sp This was changed in version 0.9.8 to accept glob expressions instead of regular expression. To use regular expression matching with grains, use the \-\-grain\-pcre option. .UNINDENT .INDENT 0.0 .TP .B \-\-grain\-pcre The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of \(aq:< regular expression>\(aq; example: \(aqos:Arch.*\(aq .UNINDENT .INDENT 0.0 .TP .B \-N, \-\-nodegroup Use a predefined compound target defined in the Salt master configuration file. .UNINDENT .INDENT 0.0 .TP .B \-R, \-\-range Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster. .sp Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file. .UNINDENT .SS Logging Options .sp Logging options which override any settings defined on the configuration files. .INDENT 0.0 .TP .B \-l LOG_LEVEL, \-\-log\-level=LOG_LEVEL Console logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file=LOG_FILE Log file path. Default: /var/log/salt/ssh. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file\-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .SS Output Options .INDENT 0.0 .TP .B \-\-out Pass in an alternative outputter to display the return of data. This outputter can be any of the available outputters: .INDENT 7.0 .INDENT 3.5 \fBgrains\fP, \fBhighstate\fP, \fBjson\fP, \fBkey\fP, \fBoverstatestage\fP, \fBpprint\fP, \fBraw\fP, \fBtxt\fP, \fByaml\fP .UNINDENT .UNINDENT .sp Some outputters are formatted only for data returned from specific functions; for instance, the \fBgrains\fP outputter will not work for non\-grains data. .sp If an outputter is used that does not support the data passed into it, then Salt will fall back on the \fBpprint\fP outputter and display the return data using the Python \fBpprint\fP standard library module. .IP Note If using \fB\-\-out=json\fP, you will probably want \fB\-\-static\fP as well. Without the static option, you will get a JSON string for each minion. This is due to using an iterative outputter. So if you want to feed it to a JSON parser, use \fB\-\-static\fP as well. 
.RE .UNINDENT .INDENT 0.0 .TP .B \-\-out\-indent OUTPUT_INDENT, \-\-output\-indent OUTPUT_INDENT Print the output indented by the provided value in spaces. Negative values disable indentation. Only applicable in outputters that support indentation. .UNINDENT .INDENT 0.0 .TP .B \-\-out\-file=OUTPUT_FILE, \-\-output\-file=OUTPUT_FILE Write the output to the specified file. .UNINDENT .INDENT 0.0 .TP .B \-\-no\-color Disable all colored output .UNINDENT .INDENT 0.0 .TP .B \-\-force\-color Force colored output .IP Note When using colored output the color codes are as follows: .sp \fBgreen\fP denotes success, \fBred\fP denotes failure, \fBblue\fP denotes changes and success and \fByellow\fP denotes a expected future change in configuration. .RE .UNINDENT .SS See also .sp \fIsalt(7)\fP \fIsalt\-master(1)\fP \fIsalt\-minion(1)\fP .SS salt\-syndic .SS \fBsalt\-syndic\fP .sp The Salt syndic daemon, a special minion that passes through commands from a higher master .SS Synopsis .sp salt\-syndic [ options ] .SS Description .sp The Salt syndic daemon, a special minion that passes through commands from a higher master. .SS Options .INDENT 0.0 .TP .B \-\-version Print the version of Salt that is running. .UNINDENT .INDENT 0.0 .TP .B \-\-versions\-report Show program\(aqs dependencies and version number, and then exit .UNINDENT .INDENT 0.0 .TP .B \-h, \-\-help Show the help message and exit .UNINDENT .INDENT 0.0 .TP .B \-c CONFIG_DIR, \-\-config\-dir=CONFIG_dir The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is \fB/etc/salt\fP. .UNINDENT .INDENT 0.0 .TP .B \-u USER, \-\-user=USER Specify user to run salt\-syndic .UNINDENT .INDENT 0.0 .TP .B \-d, \-\-daemon Run salt\-syndic as a daemon .UNINDENT .INDENT 0.0 .TP .B \-\-pid\-file PIDFILE Specify the location of the pidfile. Default: /var/run/salt\-syndic.pid .UNINDENT .SS Logging Options .sp Logging options which override any settings defined on the configuration files. .INDENT 0.0 .TP .B \-l LOG_LEVEL, \-\-log\-level=LOG_LEVEL Console logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file=LOG_FILE Log file path. Default: /var/log/salt/master. .UNINDENT .INDENT 0.0 .TP .B \-\-log\-file\-level=LOG_LEVEL_LOGFILE Logfile logging log level. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBquiet\fP. Default: \fBwarning\fP. .UNINDENT .SS See also .sp \fIsalt(1)\fP \fIsalt\-master(1)\fP \fIsalt\-minion(1)\fP .SS salt\-api .SS \fBsalt\-api\fP .sp Start interfaces used to remotely connect to the salt master .SS Synopsis .sp .nf .ft C salt\-api .ft P .fi .SS Description .sp The Salt API system manages network api connectors for the Salt Master .SS Options .INDENT 0.0 .TP .B \-h, \-\-help Print a usage message briefly summarizing these command\-line options. .UNINDENT .INDENT 0.0 .TP .B \-C CONFIG, \-\-config=CONFIG Specify an alternative location for the salt master configuration file. .UNINDENT .SS See also .sp \fIsalt\-api(7)\fP \fIsalt(7)\fP \fIsalt\-master(1)\fP .SS Client ACL system .sp The salt client ACL system is a means to allow system users other than root to have access to execute select salt commands on minions from the master. .sp The client ACL system is configured in the master configuration file via the \fBclient_acl\fP configuration option. 
Under the \fBclient_acl\fP configuration option the users allowed to send commands are specified, followed by a list of regular expressions specifying the minion functions that will be made available to each user. This configuration is much like the \fBpeer\fP configuration: .sp .nf .ft C # Allow thatch to execute anything and allow fred to use ping and pkg client_acl: thatch: \- .* fred: \- test.* \- pkg.* .ft P .fi .SS Permission Issues .sp Directories required for \fBclient_acl\fP must be modified to be readable by the users specified: .sp .nf .ft C chmod 755 /var/cache/salt /var/cache/salt/jobs /var/run/salt .ft P .fi .IP Note In addition to the changes above, you will also need to modify the permissions of /var/log/salt and the existing log file. If you do not wish to do this, then you must disable logging or Salt will generate errors as it cannot write to the logs as the system users. .RE .sp If you are upgrading from earlier versions of salt, you must also remove any existing user keys and re\-start the Salt master: .sp .nf .ft C rm /var/cache/salt/.*key service salt\-master restart .ft P .fi .SS Python client API .sp Salt provides several entry points for interfacing with Python applications. These entry points are often referred to as \fB*Client()\fP APIs. Each client accesses different parts of Salt, either from the master or from a minion. Each client is detailed below. .IP "See also" .sp There are many ways to access Salt programmatically. .sp Salt can be used from CLI scripts as well as via a REST interface. .sp See Salt\(aqs \fIoutputter system\fP to retrieve structured data from Salt as JSON, as shell\-friendly text, or in many other formats. .sp See the \fBstate.event\fP runner to utilize Salt\(aqs event bus from shell scripts. .sp See the \fI\%salt-api\fP project to access Salt externally via a REST interface. It uses Salt\(aqs Python interface documented below and is also useful as a reference implementation. .RE .SS Salt\(aqs \fBopts\fP dictionary .sp Some clients require access to Salt\(aqs \fBopts\fP dictionary. (The dictionary representation of the \fImaster\fP or \fIminion\fP config files.) .sp A common pattern for fetching the \fBopts\fP dictionary is to defer to environment variables if they exist or otherwise fetch the config from the default location. .INDENT 0.0 .TP .B salt.config.client_config(path, env_var=\(aqSALT_CLIENT_CONFIG\(aq, defaults=None) Load Master configuration data .sp Usage: .sp .nf .ft C import salt.config master_opts = salt.config.client_config(\(aq/etc/salt/master\(aq) .ft P .fi .sp Returns a dictionary of the Salt Master configuration file with necessary options needed to communicate with a locally\-running Salt Master daemon. This function searches for client specific configurations and adds them to the data from the master configuration. .sp This is useful for master\-side operations like \fBLocalClient\fP. .UNINDENT .INDENT 0.0 .TP .B salt.config.minion_config(path, env_var=\(aqSALT_MINION_CONFIG\(aq, defaults=None, check_dns=None, minion_id=False) Reads in the minion configuration file and sets up special options .sp This is useful for Minion\-side operations, such as the \fBCaller\fP class, and manually running the loader interface. .sp .nf .ft C import salt.config minion_opts = salt.config.minion_config(\(aq/etc/salt/minion\(aq) .ft P .fi .UNINDENT .SS Salt\(aqs Loader Interface .sp Modules in the Salt ecosystem are loaded into memory using a custom loader system.
This allows modules to have conditional requirements (OS, OS version, installed libraries, etc.) and allows Salt to inject special variables (\fB__salt__\fP, \fB__opts__\fP, etc.). .sp Most modules can be manually loaded. This is often useful in third\-party Python apps or when writing tests. However, some modules require and expect a full, running Salt system underneath, notably modules that facilitate master\-to\-minion communication such as the \fBmine\fP, \fBpublish\fP, and \fBpeer\fP execution modules. The error \fBKeyError: \(aqmaster_uri\(aq\fP is a likely indicator for this situation. In those instances, use the \fBCaller\fP class to execute those modules instead. .sp Each module type has a corresponding loader function. .INDENT 0.0 .TP .B salt.loader.minion_mods(opts, context=None, whitelist=None) Load execution modules .sp Returns a dictionary of execution modules appropriate for the current system by evaluating the __virtual__() function in each module. .sp .nf .ft C import salt.config import salt.loader __opts__ = salt.config.minion_config(\(aq/etc/salt/minion\(aq) __salt__ = salt.loader.minion_mods(__opts__) __salt__[\(aqtest.ping\(aq]() .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.loader.raw_mod(opts, name, functions) Returns a single module loaded raw and bypassing the __virtual__ function .sp .nf .ft C import salt.config import salt.loader __opts__ = salt.config.minion_config(\(aq/etc/salt/minion\(aq) testmod = salt.loader.raw_mod(__opts__, \(aqtest\(aq, None) testmod[\(aqtest.ping\(aq]() .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.loader.states(opts, functions, whitelist=None) Returns the state modules .sp .nf .ft C import salt.config import salt.loader __opts__ = salt.config.minion_config(\(aq/etc/salt/minion\(aq) statemods = salt.loader.states(__opts__, None) .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.loader.grains(opts, force_refresh=False) Return the functions for the dynamic grains and the values for the static grains. .sp .nf .ft C import salt.config import salt.loader __opts__ = salt.config.minion_config(\(aq/etc/salt/minion\(aq) __grains__ = salt.loader.grains(__opts__) print __grains__[\(aqid\(aq] .ft P .fi .UNINDENT .SS Salt\(aqs Client Interfaces .SS LocalClient .INDENT 0.0 .TP .B class salt.client.LocalClient(c_path=\(aq/etc/salt/master\(aq, mopts=None) The interface used by the \fBsalt\fP CLI tool on the Salt Master .sp \fBLocalClient\fP is used to send a command to Salt minions to execute \fIexecution modules\fP and return the results to the Salt Master. .sp Importing and using \fBLocalClient\fP must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as. (Unless \fBexternal_auth\fP is configured and authentication credentials are included in the execution). .sp .nf .ft C import salt.client local = salt.client.LocalClient() local.cmd(\(aq*\(aq, \(aqtest.fib\(aq, [10]) .ft P .fi .INDENT 7.0 .TP .B cmd(tgt, fun, arg=(), timeout=None, expr_form=\(aqglob\(aq, ret=\(aq\(aq, kwarg=None, **kwargs) Synchronously execute a command on targeted minions .sp The cmd method will execute and wait for the timeout period for all minions to reply, then it will return all minion data at once.
.sp .nf .ft C >>> import salt.client >>> local = salt.client.LocalClient() >>> local.cmd(\(aq*\(aq, \(aqcmd.run\(aq, [\(aqwhoami\(aq]) {\(aqjerry\(aq: \(aqroot\(aq} .ft P .fi .sp With extra keyword arguments for the command function to be run: .sp .nf .ft C local.cmd(\(aq*\(aq, \(aqtest.arg\(aq, [\(aqarg1\(aq, \(aqarg2\(aq], kwarg={\(aqfoo\(aq: \(aqbar\(aq}) .ft P .fi .sp Compound commands can be used for multiple executions in a single publish. Function names and function arguments are provided in separate lists but the index values must correlate and an empty list must be used if no arguments are required. .sp .nf .ft C >>> local.cmd(\(aq*\(aq, [ \(aqgrains.items\(aq, \(aqsys.doc\(aq, \(aqcmd.run\(aq, ], [ [], [], [\(aquptime\(aq], ]) .ft P .fi .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBtgt\fP (\fIstring or list\fP) \-\- Which minions to target for the execution. Default is shell glob. Modified by the \fBexpr_form\fP option. .IP \(bu 2 \fBfun\fP (\fIstring or list of strings\fP) \-\- .sp The module and function to call on the specified minions of the form \fBmodule.function\fP. For example \fBtest.ping\fP or \fBgrains.items\fP. .INDENT 2.0 .TP .B Compound commands Multiple functions may be called in a single publish by passing a list of commands. This can dramatically lower overhead and speed up the application communicating with Salt. .sp This requires that the \fBarg\fP param is a list of lists. The \fBfun\fP list and the \fBarg\fP list must correlate by index meaning a function that does not take arguments must still have a corresponding empty list at the expected index. .UNINDENT .IP \(bu 2 \fBarg\fP (\fIlist or list\-of\-lists\fP) \-\- A list of arguments to pass to the remote function. If the function takes no arguments \fBarg\fP may be omitted except when executing a compound command. .IP \(bu 2 \fBtimeout\fP \-\- Seconds to wait after the last minion returns but before all minions return. .IP \(bu 2 \fBexpr_form\fP \-\- .sp The type of \fBtgt\fP. Allowed values: .INDENT 2.0 .IP \(bu 2 \fBglob\fP \- Bash glob completion \- Default .IP \(bu 2 \fBpcre\fP \- Perl style regular expression .IP \(bu 2 \fBlist\fP \- Python list of hosts .IP \(bu 2 \fBgrain\fP \- Match based on a grain comparison .IP \(bu 2 \fBgrain_pcre\fP \- Grain comparison with a regex .IP \(bu 2 \fBpillar\fP \- Pillar data comparison .IP \(bu 2 \fBnodegroup\fP \- Match on nodegroup .IP \(bu 2 \fBrange\fP \- Use a Range server for matching .IP \(bu 2 \fBcompound\fP \- Pass a compound match string .UNINDENT .IP \(bu 2 \fBret\fP \-\- The returner to use. The value passed can be single returner, or a comma delimited list of returners to call in order on the minions .IP \(bu 2 \fBkwarg\fP \-\- A dictionary with keyword arguments for the function. .IP \(bu 2 \fBkwargs\fP \-\- .sp Optional keyword arguments. Authentication credentials may be passed when using \fBexternal_auth\fP. .sp For example: \fBlocal.cmd(\(aq*\(aq, \(aqtest.ping\(aq, username=\(aqsaltdev\(aq, password=\(aqsaltdev\(aq, eauth=\(aqpam\(aq)\fP. Or: \fBlocal.cmd(\(aq*\(aq, \(aqtest.ping\(aq, token=\(aq5871821ea51754fdcea8153c1c745433\(aq)\fP .UNINDENT .TP .B Returns A dictionary with the result of the execution, keyed by minion ID. A compound command will return a sub\-dictionary keyed by function name. 
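.sp For the compound command shown above, the return therefore takes a shape along these lines (minion name and values are illustrative): .sp .nf .ft C {\(aqjerry\(aq: {\(aqgrains.items\(aq: {...}, \(aqsys.doc\(aq: {...}, \(aqcmd.run\(aq: \(aq...\(aq}} .ft P .fi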
.UNINDENT .UNINDENT .INDENT 7.0 .TP .B run_job(tgt, fun, arg=(), expr_form=\(aqglob\(aq, ret=\(aq\(aq, timeout=None, kwarg=None, **kwargs) Asynchronously send a command to connected minions .sp Prep the job directory and publish a command to any targeted minions. .INDENT 7.0 .TP .B Returns A dictionary of (validated) \fBpub_data\fP or an empty dictionary on failure. The \fBpub_data\fP contains the job ID and a list of all minions that are expected to return data. .UNINDENT .sp .nf .ft C >>> local.run_job(\(aq*\(aq, \(aqtest.sleep\(aq, [300]) {\(aqjid\(aq: \(aq20131219215650131543\(aq, \(aqminions\(aq: [\(aqjerry\(aq]} .ft P .fi .UNINDENT .INDENT 7.0 .TP .B cmd_async(tgt, fun, arg=(), expr_form=\(aqglob\(aq, ret=\(aq\(aq, kwarg=None, **kwargs) Asynchronously send a command to connected minions .sp The function signature is the same as \fBcmd()\fP with the following exceptions. .INDENT 7.0 .TP .B Returns A job ID or 0 on failure. .UNINDENT .sp .nf .ft C >>> local.cmd_async(\(aq*\(aq, \(aqtest.sleep\(aq, [300]) \(aq20131219215921857715\(aq .ft P .fi .UNINDENT .INDENT 7.0 .TP .B cmd_subset(tgt, fun, arg=(), expr_form=\(aqglob\(aq, ret=\(aq\(aq, kwarg=None, sub=3, cli=False, **kwargs) Execute a command on a random subset of the targeted systems .sp The function signature is the same as \fBcmd()\fP with the following exceptions. .INDENT 7.0 .TP .B Parameters \fBsub\fP \-\- The number of systems to execute on .UNINDENT .sp .nf .ft C >>> local.cmd_subset(\(aq*\(aq, \(aqtest.ping\(aq, sub=1) {\(aqjerry\(aq: True} .ft P .fi .UNINDENT .INDENT 7.0 .TP .B cmd_batch(tgt, fun, arg=(), expr_form=\(aqglob\(aq, ret=\(aq\(aq, kwarg=None, batch=\(aq10%\(aq, **kwargs) Iteratively execute a command on subsets of minions at a time .sp The function signature is the same as \fBcmd()\fP with the following exceptions. .INDENT 7.0 .TP .B Parameters \fBbatch\fP \-\- The batch identifier of systems to execute on .TP .B Returns A generator of minion returns .UNINDENT .sp .nf .ft C >>> returns = local.cmd_batch(\(aq*\(aq, \(aqstate.highstate\(aq, batch=\(aq10%\(aq) >>> for ret in returns: \&... print ret {\(aqjerry\(aq: {...}} {\(aqdave\(aq: {...}} {\(aqstewart\(aq: {...}} .ft P .fi .UNINDENT .INDENT 7.0 .TP .B cmd_iter(tgt, fun, arg=(), timeout=None, expr_form=\(aqglob\(aq, ret=\(aq\(aq, kwarg=None, **kwargs) Yields the individual minion returns as they come in .sp The function signature is the same as \fBcmd()\fP with the following exceptions. .INDENT 7.0 .TP .B Returns A generator .UNINDENT .sp .nf .ft C >>> ret = local.cmd_iter(\(aq*\(aq, \(aqtest.ping\(aq) >>> for i in ret: \&... print i {\(aqjerry\(aq: {\(aqret\(aq: True}} {\(aqdave\(aq: {\(aqret\(aq: True}} {\(aqstewart\(aq: {\(aqret\(aq: True}} .ft P .fi .UNINDENT .INDENT 7.0 .TP .B cmd_iter_no_block(tgt, fun, arg=(), timeout=None, expr_form=\(aqglob\(aq, ret=\(aq\(aq, kwarg=None, **kwargs) Blocks while waiting for individual minions to return. .sp The function signature is the same as \fBcmd()\fP with the following exceptions. .INDENT 7.0 .TP .B Returns None until the next minion returns. This allows for actions to be injected in between minion returns. .UNINDENT .sp .nf .ft C >>> ret = local.cmd_iter_no_block(\(aq*\(aq, \(aqtest.ping\(aq) >>> for i in ret: \&...
print i None {\(aqjerry\(aq: {\(aqret\(aq: True}} {\(aqdave\(aq: {\(aqret\(aq: True}} None {\(aqstewart\(aq: {\(aqret\(aq: True}} .ft P .fi .UNINDENT .INDENT 7.0 .TP .B get_cli_returns(jid, minions, timeout=None, tgt=\(aq*\(aq, tgt_type=\(aqglob\(aq, verbose=False, show_jid=False, **kwargs) Starts a watcher looking at the return data for a specified JID .INDENT 7.0 .TP .B Returns all of the information for the JID .UNINDENT .UNINDENT .INDENT 7.0 .TP .B get_event_iter_returns(jid, minions, timeout=None) Gather the return data from the event system, break hard when timeout is reached. .UNINDENT .UNINDENT .SS Salt Caller .INDENT 0.0 .TP .B class salt.client.Caller(c_path=\(aq/etc/salt/minion\(aq, mopts=None) \fBCaller\fP is the same interface used by the \fBsalt\-call\fP command\-line tool on the Salt Minion. .sp Importing and using \fBCaller\fP must be done on the same machine as a Salt Minion and it must be done using the same user that the Salt Minion is running as. .sp Usage: .sp .nf .ft C import salt.client caller = salt.client.Caller() caller.function(\(aqtest.ping\(aq) # Or call objects directly caller.sminion.functions[\(aqcmd.run\(aq](\(aqls \-l\(aq) .ft P .fi .sp Note, a running master or minion daemon is not required to use this class. Running \fBsalt\-call \-\-local\fP simply sets \fBfile_client\fP to \fB\(aqlocal\(aq\fP. The same can be achived at the Python level by including that setting in a minion config file. .sp Instantiate a new Caller() instance using a file system path to the minion config file: .sp .nf .ft C caller = salt.client.Caller(\(aq/path/to/custom/minion_config\(aq) caller.sminion.functions[\(aqgrains.items\(aq]() .ft P .fi .sp Instantiate a new Caller() instance using a dictionary of the minion config: .sp New in version Helium: Pass the minion config as a dictionary. .sp .nf .ft C import salt.client import salt.config opts = salt.config.minion_config(\(aq/etc/salt/minion\(aq) opts[\(aqfile_client\(aq] = \(aqlocal\(aq caller = salt.client.Caller(mopts=opts) caller.sminion.functions[\(aqgrains.items\(aq]() .ft P .fi .INDENT 7.0 .TP .B function(fun, *args, **kwargs) Call a single salt function .UNINDENT .UNINDENT .SS RunnerClient .INDENT 0.0 .TP .B class salt.runner.RunnerClient(opts) The interface used by the \fBsalt\-run\fP CLI tool on the Salt Master .sp It executes \fIrunner modules\fP which run on the Salt Master. .sp Importing and using \fBRunnerClient\fP must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as. .sp Salt\(aqs \fBexternal_auth\fP can be used to authenticate calls. The eauth user must be authorized to execute runner modules: (\fB@runner\fP). Only the \fBmaster_call()\fP below supports eauth. 
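.sp A minimal construction sketch, mirroring the \fBLocalClient\fP and \fBCaller\fP examples above and assuming the default master configuration path: .sp .nf .ft C import salt.config import salt.runner opts = salt.config.master_config(\(aq/etc/salt/master\(aq) runner = salt.runner.RunnerClient(opts) runner.cmd(\(aqjobs.list_jobs\(aq, []) .ft P .fi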
.INDENT 7.0 .TP .B async(fun, low, user=\(aqUNKNOWN\(aq) Execute the runner in a multiprocess and return the event tag to use to watch for the return .UNINDENT .INDENT 7.0 .TP .B cmd(fun, arg, pub_data=None, kwarg=None) Execute a runner function .sp .nf .ft C >>> opts = salt.config.master_config(\(aq/etc/salt/master\(aq) >>> runner = salt.runner.RunnerClient(opts) >>> runner.cmd(\(aqjobs.list_jobs\(aq, []) { \(aq20131219215650131543\(aq: { \(aqArguments\(aq: [300], \(aqFunction\(aq: \(aqtest.sleep\(aq, \(aqStartTime\(aq: \(aq2013, Dec 19 21:56:50.131543\(aq, \(aqTarget\(aq: \(aq*\(aq, \(aqTarget\-type\(aq: \(aqglob\(aq, \(aqUser\(aq: \(aqsaltdev\(aq }, \(aq20131219215921857715\(aq: { \(aqArguments\(aq: [300], \(aqFunction\(aq: \(aqtest.sleep\(aq, \(aqStartTime\(aq: \(aq2013, Dec 19 21:59:21.857715\(aq, \(aqTarget\(aq: \(aq*\(aq, \(aqTarget\-type\(aq: \(aqglob\(aq, \(aqUser\(aq: \(aqsaltdev\(aq }, } .ft P .fi .UNINDENT .INDENT 7.0 .TP .B get_docs(arg=None) Return a dictionary of functions and the inline documentation for each .UNINDENT .INDENT 7.0 .TP .B low(fun, low) Pass in the runner function name and the low data structure .sp .nf .ft C runner.low({\(aqfun\(aq: \(aqjobs.lookup_jid\(aq, \(aqjid\(aq: \(aq20131219215921857715\(aq}) .ft P .fi .UNINDENT .INDENT 7.0 .TP .B master_call(**kwargs) Execute a runner function through the master network interface (eauth). .sp This function requires that \fBexternal_auth\fP is configured and the user is authorized to execute runner functions: (\fB@runner\fP). .sp .nf .ft C runner.master_call( fun=\(aqjobs.list_jobs\(aq, username=\(aqsaltdev\(aq, password=\(aqsaltdev\(aq, eauth=\(aqpam\(aq ) .ft P .fi .UNINDENT .UNINDENT .SS WheelClient .INDENT 0.0 .TP .B class salt.wheel.WheelClient(opts=None) An interface to Salt\(aqs wheel modules .sp \fIWheel modules\fP interact with various parts of the Salt Master. .sp Importing and using \fBWheelClient\fP must be done on the same machine as the Salt Master and it must be done using the same user that the Salt Master is running as. Unless \fBexternal_auth\fP is configured and the user is authorized to execute wheel functions: (\fB@wheel\fP). .INDENT 7.0 .TP .B call_func(fun, **kwargs) Execute a wheel function .sp .nf .ft C >>> opts = salt.config.master_config(\(aq/etc/salt/master\(aq) >>> wheel = salt.wheel.Wheel(opts) >>> wheel.call_func(\(aqkey.list_all\(aq) {\(aqlocal\(aq: [\(aqmaster.pem\(aq, \(aqmaster.pub\(aq], \(aqminions\(aq: [\(aqjerry\(aq], \(aqminions_pre\(aq: [], \(aqminions_rejected\(aq: []} .ft P .fi .UNINDENT .INDENT 7.0 .TP .B get_docs() Return a dictionary of functions and the inline documentation for each .UNINDENT .INDENT 7.0 .TP .B master_call(**kwargs) Execute a wheel function through the master network interface (eauth). .sp This function requires that \fBexternal_auth\fP is configured and the user is authorized to execute wheel functions: (\fB@wheel\fP). 
.sp .nf .ft C >>> wheel.master_call(**{ \(aqfun\(aq: \(aqkey.finger\(aq, \(aqmatch\(aq: \(aqjerry\(aq, \(aqeauth\(aq: \(aqauto\(aq, \(aqusername\(aq: \(aqsaltdev\(aq, \(aqpassword\(aq: \(aqsaltdev\(aq, }) {\(aqdata\(aq: { \(aq_stamp\(aq: \(aq2013\-12\-19_22:47:44.427338\(aq, \(aqfun\(aq: \(aqwheel.key.finger\(aq, \(aqjid\(aq: \(aq20131219224744416681\(aq, \(aqreturn\(aq: {\(aqminions\(aq: {\(aqjerry\(aq: \(aq5d:f6:79:43:5e:d4:42:3f:57:b8:45:a8:7e:a4:6e:ca\(aq}}, \(aqsuccess\(aq: True, \(aqtag\(aq: \(aqsalt/wheel/20131219224744416681\(aq, \(aquser\(aq: \(aqsaltdev\(aq }, \(aqtag\(aq: \(aqsalt/wheel/20131219224744416681\(aq} .ft P .fi .UNINDENT .UNINDENT .SS CloudClient .INDENT 0.0 .TP .B class salt.cloud.CloudClient(path=None, opts=None, config_dir=None, pillars=None) The client class to wrap cloud interactions .INDENT 7.0 .TP .B action(fun=None, cloudmap=None, names=None, provider=None, instance=None, kwargs=None) Execute a single action via the cloud plugin backend .sp Examples: .sp .nf .ft C client.action(fun=\(aqshow_instance\(aq, names=[\(aqmyinstance\(aq]) client.action(fun=\(aqshow_image\(aq, provider=\(aqmy\-ec2\-config\(aq, kwargs={\(aqimage\(aq: \(aqami\-10314d79\(aq} ) .ft P .fi .UNINDENT .INDENT 7.0 .TP .B create(provider, names, **kwargs) Create the named VMs, without using a profile .sp Example: .INDENT 7.0 .TP .B client.create(names=[\(aqmyinstance\(aq], provider=\(aqmy\-ec2\-config\(aq, .INDENT 7.0 .TP .B kwargs={\(aqimage\(aq: \(aqami\-1624987f\(aq, \(aqsize\(aq: \(aqMicro Instance\(aq, \(aqssh_username\(aq: \(aqec2\-user\(aq, \(aqsecuritygroup\(aq: \(aqdefault\(aq, \(aqdelvol_on_destroy\(aq: True}) .UNINDENT .UNINDENT .UNINDENT .INDENT 7.0 .TP .B destroy(names) Destroy the named VMs .UNINDENT .INDENT 7.0 .TP .B extra_action(names, provider, action, **kwargs) Perform actions with block storage devices .sp Example: .sp .nf .ft C client.extra_action(names=[\(aqmyblock\(aq], action=\(aqvolume_create\(aq, provider=\(aqmy\-nova\(aq, kwargs={\(aqvoltype\(aq: \(aqSSD\(aq, \(aqsize\(aq: 1000} ) client.extra_action(names=[\(aqsalt\-net\(aq], action=\(aqnetwork_create\(aq, provider=\(aqmy\-nova\(aq, kwargs={\(aqcidr\(aq: \(aq192.168.100.0/24\(aq} ) .ft P .fi .UNINDENT .INDENT 7.0 .TP .B full_query(query_type=\(aqlist_nodes_full\(aq) Query all instance information .UNINDENT .INDENT 7.0 .TP .B list_images(provider=None) List all available images in configured cloud systems .UNINDENT .INDENT 7.0 .TP .B list_locations(provider=None) List all available locations in configured cloud systems .UNINDENT .INDENT 7.0 .TP .B list_sizes(provider=None) List all available sizes in configured cloud systems .UNINDENT .INDENT 7.0 .TP .B low(fun, low) Pass the cloud function and low data structure to run .UNINDENT .INDENT 7.0 .TP .B map_run(path, **kwargs) Pass in a location for a map to execute .UNINDENT .INDENT 7.0 .TP .B min_query(query_type=\(aqlist_nodes_min\(aq) Query select instance information .UNINDENT .INDENT 7.0 .TP .B profile(profile, names, vm_overrides=None, **kwargs) Pass in a profile to create, names is a list of vm names to allocate .sp vm_overrides is a special dict that will be per node options overrides .UNINDENT .INDENT 7.0 .TP .B query(query_type=\(aqlist_nodes\(aq) Query basic instance information .UNINDENT .INDENT 7.0 .TP .B select_query(query_type=\(aqlist_nodes_select\(aq) Query select instance information .UNINDENT .UNINDENT .SS Full list of Salt Cloud modules .TS center; |l|l|. 
_ T{ \fBaliyun\fP T} T{ AliYun ECS Cloud Module T} _ T{ \fBbotocore_aws\fP T} T{ The AWS Cloud Module T} _ T{ \fBcloudstack\fP T} T{ CloudStack Cloud Module T} _ T{ \fBdigital_ocean\fP T} T{ Digital Ocean Cloud Module T} _ T{ \fBec2\fP T} T{ The EC2 Cloud Module T} _ T{ \fBgce\fP T} T{ Copyright 2013 Google Inc. All Rights Reserved. T} _ T{ \fBgogrid\fP T} T{ GoGrid Cloud Module T} _ T{ \fBjoyent\fP T} T{ Joyent Cloud Module T} _ T{ \fBlibcloud_aws\fP T} T{ The AWS Cloud Module T} _ T{ \fBlinode\fP T} T{ Linode Cloud Module T} _ T{ \fBlxc\fP T} T{ Install Salt on an LXC Container T} _ T{ \fBmsazure\fP T} T{ Azure Cloud Module T} _ T{ \fBnova\fP T} T{ OpenStack Nova Cloud Module T} _ T{ \fBopennebula\fP T} T{ OpenNebula Cloud Module T} _ T{ \fBopenstack\fP T} T{ OpenStack Cloud Module T} _ T{ \fBparallels\fP T} T{ Parallels Cloud Module T} _ T{ \fBproxmox\fP T} T{ Proxmox Cloud Module T} _ T{ \fBrackspace\fP T} T{ Rackspace Cloud Module T} _ T{ \fBsaltify\fP T} T{ Saltify Module T} _ T{ \fBsoftlayer\fP T} T{ SoftLayer Cloud Module T} _ T{ \fBsoftlayer_hw\fP T} T{ SoftLayer HW Cloud Module T} _ T{ \fBvsphere\fP T} T{ vSphere Cloud Module T} _ .TE .SS salt.cloud.clouds.aliyun .SS AliYun ECS Cloud Module .sp New in version Helium. .sp The Aliyun cloud module is used to control access to the aliyun ECS. \fI\%http://www.aliyun.com/\fP .sp Use of this module requires the \fBid\fP and \fBkey\fP parameter to be set. Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/aliyun.conf\fP: .sp .nf .ft C my\-aliyun\-config: # aliyun Access Key ID id: wFGEwgregeqw3435gDger # aliyun Access Key Secret key: GDE43t43REGTrkilg43934t34qT43t4dgegerGEgg location: cn\-qingdao provider: aliyun .ft P .fi .INDENT 0.0 .TP .B depends requests .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.avail_images(kwargs=None, call=None) Return a list of the images that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.avail_locations(call=None) Return a dict of all available VM locations on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.avail_sizes(call=None) Return a list of the image sizes that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.create_node(kwargs) Convenience function to make the rest api call for node creation. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.destroy(name, call=None) Destroy a node. .sp CLI Example: .sp .nf .ft C salt\-cloud \-a destroy myinstance salt\-cloud \-d myinstance .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.get_image(vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.get_location(vm_=None) Return the aliyun region to use, in this order: \- CLI parameter \- VM parameter \- Cloud profile setting .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.get_securitygroup(vm_) Return the security group .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.get_size(vm_) Return the VM\(aqs size. Used by create_node(). 
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.list_availability_zones(call=None) List all availability zones in the current region .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.list_monitor_data(kwargs=None, call=None) Get monitor data of the instance. If instance name is missing, will show all the instance monitor data on the region. .sp CLI Examples: .sp .nf .ft C salt\-cloud \-f list_monitor_data aliyun salt\-cloud \-f list_monitor_data aliyun name=AY14051311071990225bd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.list_nodes(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.list_nodes_full(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.list_nodes_min(call=None) Return a list of the VMs that are on the provider. Only a list of VM names, and their state, is returned. This is the minimum amount of information needed to check for existing VMs. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.list_securitygroup(call=None) Return a list of security group .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.query(params=None) Make a web call to aliyun ECS REST API .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.reboot(name, call=None) Reboot a node .sp CLI Examples: .sp .nf .ft C salt\-cloud \-a reboot myinstance .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.show_disk(name, call=None) Show the disk details of the instance .sp CLI Examples: .sp .nf .ft C salt\-cloud \-a show_disk aliyun myinstance .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.show_image(kwargs, call=None) Show the details from aliyun image .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.show_instance(name, call=None) Show the details from aliyun instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.start(name, call=None) Start a node .sp CLI Examples: .sp .nf .ft C salt\-cloud \-a start myinstance .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.aliyun.stop(name, force=False, call=None) Stop a node .sp CLI Examples: .sp .nf .ft C salt\-cloud \-a stop myinstance salt\-cloud \-a stop myinstance force=True .ft P .fi .UNINDENT .SS salt.cloud.clouds.botocore_aws .SS The AWS Cloud Module .sp The AWS cloud module is used to interact with the Amazon Web Services system. .sp This module has been replaced by the EC2 cloud module, and is no longer supported. The documentation shown here is for reference only; it is highly recommended to change all usages of this driver over to the EC2 driver. 
.INDENT 0.0 .TP .B If this driver is still needed, set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/aws.conf\fP: .UNINDENT .sp .nf .ft C my\-aws\-botocore\-config: # The AWS API authentication id id: GKTADJGHEIQSXMKKRBJ08H # The AWS API authentication key key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs # The ssh keyname to use keyname: default # The amazon security group securitygroup: ssh_open # The location of the private key which corresponds to the keyname private_key: /root/default.pem provider: aws .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.botocore_aws.disable_term_protect(name, call=None) Disable termination protection on a node .sp CLI Example: .sp .nf .ft C salt\-cloud \-a disable_term_protect mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.botocore_aws.enable_term_protect(name, call=None) Enable termination protection on a node .sp CLI Example: .sp .nf .ft C salt\-cloud \-a enable_term_protect mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.botocore_aws.get_configured_provider() Return the first configured instance. .UNINDENT .SS salt.cloud.clouds.cloudstack .SS CloudStack Cloud Module .sp The CloudStack cloud module is used to control access to a CloudStack based Public Cloud. .sp Use of this module requires the \fBapikey\fP, \fBsecretkey\fP, \fBhost\fP and \fBpath\fP parameters. .sp .nf .ft C my\-cloudstack\-cloud\-config: apikey: secretkey: host: localhost path: /client/api provider: cloudstack .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.avail_images(conn=None, call=None) Return a dict of all available VM images on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.avail_locations(conn=None, call=None) Return a dict of all available VM locations on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.avail_sizes(conn=None, call=None) Return a dict of all available VM images on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.destroy(name, conn=None, call=None) Delete a single VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_conn() Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_image(conn, vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_ip(data) Return the IP address of the VM If the VM has public IP as defined by libcloud module then use it Otherwise try to extract the private IP and use that one. 
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_key() Returns the ssk private key for VM access .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_keypair(vm_) Return the keypair to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_location(conn, vm_) Return the node location to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_networkid(vm_) Return the networkid to use, only valid for Advanced Zone .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_password(vm_) Return the password to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.get_size(conn, vm_) Return the VM\(aqs size object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.list_nodes(conn=None, call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.list_nodes_full(conn=None, call=None) Return a list of the VMs that are on the provider, with all fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.list_nodes_select(conn=None, call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.cloudstack.show_instance(name, call=None) Show the details from the provider concerning an instance .UNINDENT .SS salt.cloud.clouds.digital_ocean .SS Digital Ocean Cloud Module .sp The Digital Ocean cloud module is used to control access to the Digital Ocean VPS system. .sp Use of this module only requires the \fBapi_key\fP parameter to be set. Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/digital_ocean.conf\fP: .sp .nf .ft C my\-digital\-ocean\-config: # Digital Ocean account keys client_key: wFGEwgregeqw3435gDger api_key: GDE43t43REGTrkilg43934t34qT43t4dgegerGEgg provider: digital_ocean .ft P .fi .INDENT 0.0 .TP .B depends requests .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.avail_images(call=None) Return a list of the images that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.avail_locations(call=None) Return a dict of all available VM locations on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.avail_sizes(call=None) Return a list of the image sizes that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.create_node(args) Create a node .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.destroy(name, call=None) Destroy a node. Will check termination protection and warn if enabled. .sp CLI Example: .sp .nf .ft C salt\-cloud \-\-destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.get_image(vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.get_keyid(keyname) Return the ID of the keyname .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.get_location(vm_) Return the VM\(aqs location .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.get_size(vm_) Return the VM\(aqs size. Used by create_node(). 
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.list_keypairs(call=None) Return a dict of all available SSH key pairs on the account with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.list_nodes(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.list_nodes_full(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.query(method=\(aqdroplets\(aq, droplet_id=None, command=None, args=None) Make a web call to Digital Ocean .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.show_instance(name, call=None) Show the details from Digital Ocean concerning a droplet .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.digital_ocean.show_keypair(kwargs=None, call=None) Show the details of an SSH keypair .UNINDENT .SS salt.cloud.clouds.ec2 .SS The EC2 Cloud Module .sp The EC2 cloud module is used to interact with Amazon Elastic Compute Cloud (EC2). This driver is highly experimental! Use at your own risk! .INDENT 0.0 .TP .B To use the EC2 cloud module, set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/ec2.conf\fP: .UNINDENT .sp .nf .ft C my\-ec2\-config: # The EC2 API authentication id id: GKTADJGHEIQSXMKKRBJ08H # The EC2 API authentication key key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs # The ssh keyname to use keyname: default # The amazon security group securitygroup: ssh_open # The location of the private key which corresponds to the keyname private_key: /root/default.pem # By default, service_url is set to amazonaws.com. If you are using this # driver for something other than Amazon EC2, change it here: service_url: amazonaws.com # The endpoint that is ultimately used is usually formed using the region # and the service_url. If you would like to override that entirely, you # can explicitly define the endpoint: endpoint: myendpoint.example.com:1138/services/Cloud # SSH Gateways can be used with this provider. Gateways can be used # when a salt\-master is not on the same private network as the instance # that is being deployed. # Defaults to None # Required ssh_gateway: gateway.example.com # Defaults to port 22 # Optional ssh_gateway_port: 22 # Defaults to root # Optional ssh_gateway_username: root # One authentication method is required. If both # are specified, Private key wins. # Private key defaults to None ssh_gateway_private_key: /path/to/key.pem # Password defaults to None ssh_gateway_password: ExamplePasswordHere provider: ec2 .ft P .fi .INDENT 0.0 .TP .B depends requests .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.attach_volume(name=None, kwargs=None, instance_id=None, call=None) Attach a volume to an instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.avail_images(kwargs=None, call=None) Return a dict of all available VM images on the cloud provider. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.avail_locations(call=None) List all available locations .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.avail_sizes(call=None) Return a dict of all available VM sizes on the cloud provider with relevant data.
Latest version can be found at: .sp \fI\%http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html\fP .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.block_device_mappings(vm_) Return the block device mapping: .sp .nf .ft C [{\(aqDeviceName\(aq: \(aq/dev/sdb\(aq, \(aqVirtualName\(aq: \(aqephemeral0\(aq}, {\(aqDeviceName\(aq: \(aq/dev/sdc\(aq, \(aqVirtualName\(aq: \(aqephemeral1\(aq}] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.copy_snapshot(kwargs=None, call=None) Copy a snapshot .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.create(vm_=None, call=None) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.create_attach_volumes(name, kwargs, call=None) Create and attach volumes to created node .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.create_keypair(kwargs=None, call=None) Create an SSH keypair .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.create_snapshot(kwargs=None, call=None, wait_to_finish=False) Create a snapshot .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.create_volume(kwargs=None, call=None, wait_to_finish=False) Create a volume .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.del_tags(name=None, kwargs=None, call=None, instance_id=None, resource_id=None) Delete tags for a resource. Normally a VM name or instance_id is passed in, but a resource_id may be passed instead. If both are passed in, the instance_id will be used. .sp CLI Examples: .sp .nf .ft C salt\-cloud \-a del_tags mymachine tags=tag1,tag2,tag3 salt\-cloud \-a del_tags resource_id=vol\-3267ab32 tags=tag1,tag2,tag3 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.delete_keypair(kwargs=None, call=None) Delete an SSH keypair .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.delete_snapshot(kwargs=None, call=None) Delete a snapshot .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.delete_volume(name=None, kwargs=None, instance_id=None, call=None) Delete a volume .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.delvol_on_destroy(name, kwargs=None, call=None) Delete all/specified EBS volumes upon instance termination .sp CLI Example: .sp .nf .ft C salt\-cloud \-a delvol_on_destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.describe_snapshots(kwargs=None, call=None) Describe a snapshot (or snapshots) .INDENT 7.0 .TP .B snapshot_id One or more snapshot IDs. Multiple IDs must be separated by ",". .TP .B owner Return the snapshots owned by the specified owner. Valid values include: self, amazon, . Multiple values must be separated by ",". .TP .B restorable_by One or more AWS accounts IDs that can create volumes from the snapshot. Multiple aws account IDs must be separated by ",". .UNINDENT .sp TODO: Add all of the filters. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.describe_volumes(kwargs=None, call=None) Describe a volume (or volumes) .INDENT 7.0 .TP .B volume_id One or more volume IDs. Multiple IDs must be separated by ",". .UNINDENT .sp TODO: Add all of the filters. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.destroy(name, call=None) Destroy a node. Will check termination protection and warn if enabled. 
.sp CLI Example: .sp .nf .ft C salt\-cloud \-\-destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.detach_volume(name=None, kwargs=None, instance_id=None, call=None) Detach a volume from an instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.disable_term_protect(name, call=None) Disable termination protection on a node .sp CLI Example: .sp .nf .ft C salt\-cloud \-a disable_term_protect mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.enable_term_protect(name, call=None) Enable termination protection on a node .sp CLI Example: .sp .nf .ft C salt\-cloud \-a enable_term_protect mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_availability_zone(vm_) Return the availability zone to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_console_output(name=None, instance_id=None, call=None, kwargs=None) Show the console output from the instance. .sp By default, returns decoded data, not the Base64\-encoded data that is actually returned from the EC2 API. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_location(vm_=None) Return the EC2 region to use, in this order: \- CLI parameter \- VM parameter \- Cloud profile setting .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_placementgroup(vm_) Returns the PlacementGroup to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_spot_config(vm_) Returns the spot instance configuration for the provided vm .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_ssh_gateway_config(vm_) Return the ssh_gateway configuration. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_subnetid(vm_) Returns the SubnetId to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_tags(name=None, instance_id=None, call=None, location=None, kwargs=None, resource_id=None) Retrieve tags for a resource. Normally a VM name or instance_id is passed in, but a resource_id may be passed instead. If both are passed in, the instance_id will be used. .sp CLI Examples: .sp .nf .ft C salt\-cloud \-a get_tags mymachine salt\-cloud \-a get_tags resource_id=vol\-3267ab32 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.get_tenancy(vm_) Returns the Tenancy to use. .sp Can be "dedicated" or "default". Cannot be present for spot instances. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.iam_profile(vm_) Return the IAM profile. .sp The IAM instance profile to associate with the instances. This is either the Amazon Resource Name (ARN) of the instance profile or the name of the role. 
.sp Type: String .sp Default: None .sp Required: No .sp Example: arn:aws:iam::111111111111:instance\-profile/s3access .sp Example: s3access .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.keepvol_on_destroy(name, kwargs=None, call=None) Do not delete all/specified EBS volumes upon instance termination .sp CLI Example: .sp .nf .ft C salt\-cloud \-a keepvol_on_destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.keyname(vm_) Return the keyname .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.list_availability_zones() List all availability zones in the current region .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.list_nodes(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.list_nodes_full(location=None, call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.list_nodes_min(location=None, call=None) Return a list of the VMs that are on the provider. Only a list of VM names, and their state, is returned. This is the minimum amount of information needed to check for existing VMs. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.optimize_providers(providers) Return an optimized list of providers. .sp We want to reduce the duplication of querying the same region. .sp If a provider is using the same credentials for the same region the same data will be returned for each provider, thus causing un\-wanted duplicate data and API calls to EC2. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.query(params=None, setname=None, requesturl=None, location=None, return_url=False, return_root=False) .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.query_instance(vm_=None, call=None) Query an instance upon creation from the EC2 API .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.reboot(name, call=None) Reboot a node. .sp CLI Example: .sp .nf .ft C salt\-cloud \-a reboot mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.rename(name, kwargs, call=None) Properly rename a node. Pass in the new name as "new name". .sp CLI Example: .sp .nf .ft C salt\-cloud \-a rename mymachine newname=yourmachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.request_instance(vm_=None, call=None) Put together all of the information necessary to request an instance on EC2, and then fire off the request the instance. .sp Returns data about the instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.securitygroup(vm_) Return the security group .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.securitygroupid(vm_) Returns the SecurityGroupId .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.set_tags(name=None, tags=None, call=None, location=None, instance_id=None, resource_id=None, kwargs=None) Set tags for a resource. Normally a VM name or instance_id is passed in, but a resource_id may be passed instead. If both are passed in, the instance_id will be used. 
.sp CLI Examples: .sp .nf .ft C salt\-cloud \-a set_tags mymachine tag1=somestuff tag2=\(aqOther stuff\(aq salt\-cloud \-a set_tags resource_id=vol\-3267ab32 tag=somestuff .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.show_delvol_on_destroy(name, kwargs=None, call=None) Show whether all/specified EBS volumes will be deleted upon instance termination .sp CLI Example: .sp .nf .ft C salt\-cloud \-a show_delvol_on_destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.show_image(kwargs, call=None) Show the details from EC2 concerning an AMI .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.show_instance(name, call=None) Show the details from EC2 concerning an instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.show_keypair(kwargs=None, call=None) Show the details of an SSH keypair .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.show_term_protect(name=None, instance_id=None, call=None, quiet=False) Show the termination protection status of an instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.show_volume(kwargs=None, call=None) Wrapper around describe_volumes. Here just to keep functionality. Might be deprecated later. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.ssh_interface(vm_) Return the ssh_interface type to connect to. Either \(aqpublic_ips\(aq (default) or \(aqprivate_ips\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.start(name, call=None) Start a node .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.stop(name, call=None) Stop a node .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.ec2.wait_for_instance(vm_=None, data=None, ip_address=None, display_ssh_output=True, call=None) Wait for an instance upon creation from the EC2 API to become available .UNINDENT .SS salt.cloud.clouds.gce .sp Copyright 2013 Google Inc. All Rights Reserved. .sp Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at .INDENT 0.0 .INDENT 3.5 \fI\%http://www.apache.org/licenses/LICENSE-2.0\fP .UNINDENT .UNINDENT .sp Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. .SS Google Compute Engine Module .sp The Google Compute Engine module. This module interfaces with Google Compute Engine. To authenticate to GCE, you will need to create a Service Account. .INDENT 0.0 .TP .B Setting up Service Account Authentication: .INDENT 7.0 .IP \(bu 2 Go to the Cloud Console at: \fI\%https://cloud.google.com/console\fP. .IP \(bu 2 Create or navigate to your desired Project. .IP \(bu 2 Make sure Google Compute Engine service is enabled under the Services section. .IP \(bu 2 Go to "APIs and auth" and then the "Registered apps" section. .IP \(bu 2 Click the "REGISTER APP" button and give it a meaningful name. .IP \(bu 2 Select "Web Application" and click "Register". .IP \(bu 2 Select Certificate, then "Generate Certificate" .IP \(bu 2 Copy the Email Address for inclusion in your /etc/salt/cloud file in the \(aqservice_account_email_address\(aq setting. .IP \(bu 2 Download the Private Key .IP \(bu 2 The key that you download is a PKCS12 key. It needs to be converted to the PEM format.
.IP \(bu 2 Convert the key using OpenSSL (the default password is \(aqnotasecret\(aq): C{openssl pkcs12 \-in PRIVKEY.p12 \-passin pass:notasecret \-nodes \-nocerts | openssl rsa \-out ~/PRIVKEY.pem} .IP \(bu 2 Add the full path name of the converted private key to your /etc/salt/cloud file as the \(aqservice_account_private_key\(aq setting. .IP \(bu 2 Consider using a more secure location for your private key. .UNINDENT .TP .B Supported commands: # Create a few instances from profile_name in /etc/salt/cloud.profiles \- salt\-cloud \-p profile_name inst1 inst2 inst3 # Delete an instance \- salt\-cloud \-d inst1 # Look up data on an instance \- salt\-cloud \-a show_instance inst2 # List available locations (aka \(aqzones\(aq) for provider \(aqgce\(aq \- salt\-cloud \-\-list\-locations gce # List available instance sizes (aka \(aqmachine types\(aq) for provider \(aqgce\(aq \- salt\-cloud \-\-list\-sizes gce # List available images for provider \(aqgce\(aq \- salt\-cloud \-\-list\-images gce # Create a persistent disk \- salt\-cloud \-f create_disk gce disk_name=pd location=us\-central1\-b ima... # Permanently delete a persistent disk \- salt\-cloud \-f delete_disk gce disk_name=pd # Attach an existing disk to an existing instance \- salt\-cloud \-a attach_disk myinstance disk_name=mydisk mode=READ_ONLY # Detach a disk from an instance \- salt\-cloud \-a detach_disk myinstance disk_name=mydisk # Show information about the named disk \- salt\-cloud \-a show_disk myinstance disk_name=pd \- salt\-cloud \-f show_disk gce disk_name=pd # Create a snapshot of a persistent disk \- salt\-cloud \-f create_snapshot gce name=snap\-1 disk_name=pd # Permanently delete a disk snapshot \- salt\-cloud \-f delete_snapshot gce name=snap\-1 # Show information about the named snapshot \- salt\-cloud \-f show_snapshot gce name=snap\-1 # Create a network \- salt\-cloud \-f create_network gce name=mynet cidr=10.10.10.0/24 # Delete a network \- salt\-cloud \-f delete_network gce name=mynet # Show info for a network \- salt\-cloud \-f show_network gce name=mynet # Create a firewall rule \- salt\-cloud \-f create_fwrule gce name=fw1 network=mynet allow=tcp:80 # Delete a firewall rule \- salt\-cloud \-f delete_fwrule gce name=fw1 # Show info for a firewall rule \- salt\-cloud \-f show_fwrule gce name=fw1 # Create a load\-balancer HTTP health check \- salt\-cloud \-f create_hc gce name=hc path=/ port=80 # Delete a load\-balancer HTTP health check \- salt\-cloud \-f delete_hc gce name=hc # Show info about an HTTP health check \- salt\-cloud \-f show_hc gce name=hc # Create a load\-balancer configuration \- salt\-cloud \-f create_lb gce name=lb region=us\-central1 ports=80 ...
# Delete a load\-balancer configuration \- salt\-cloud \-f delete_lb gce name=lb # Show details about load\-balancer \- salt\-cloud \-f show_lb gce name=lb # Add member to load\-balancer \- salt\-cloud \-f attach_lb gce name=lb member=www1 # Remove member from load\-balancer \- salt\-cloud \-f detach_lb gce name=lb member=www1 .UNINDENT .sp .nf .ft C my\-gce\-config: # The Google Cloud Platform Project ID project: google.com:erjohnso # The Service Account client ID service_account_email_address: 1234567890@developer.gserviceaccount.com # The location of the private key (PEM format) service_account_private_key: /home/erjohnso/PRIVKEY.pem provider: gce .ft P .fi .INDENT 0.0 .TP .B maintainer Eric Johnson <\fI\%erjohnso@google.com\fP> .TP .B maturity new .TP .B depends libcloud >= 0.14.1 .TP .B depends pycrypto >= 2.1 .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.attach_disk(name=None, kwargs=None, call=None) Attach an existing disk to an existing instance. .sp CLI Example: .sp .nf .ft C salt\-cloud \-a attach_disk myinstance disk_name=mydisk mode=READ_WRITE .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.attach_lb(kwargs=None, call=None) Add an existing node/member to an existing load\-balancer configuration. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f attach_lb gce name=lb member=myinstance .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.avail_images(conn=None) Return a dict of all available VM images on the cloud provider with relevant data .sp Note that for GCE, there are custom images within the project, but the generic images are in other projects. This returns a dict of images in the project plus images in \(aqdebian\-cloud\(aq and \(aqcentos\-cloud\(aq. (If there is overlap in names, the one in the current project is used.) .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.avail_locations(conn=None, call=None) Return a dict of all available VM locations on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.avail_sizes(conn=None) Return a dict of available instance sizes (a.k.a. machine types) and convert them to something more serializable. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.create(vm_=None, call=None) Create a single GCE instance from a data dict. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.create_disk(kwargs=None, call=None) Create a new persistent disk. Must specify \fIdisk_name\fP and \fIlocation\fP. Can also specify an \fIimage\fP or \fIsnapshot\fP but if neither of those are specified, a \fIsize\fP (in GB) is required. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f create_disk gce disk_name=pd size=300 location=us\-central1\-b .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.create_fwrule(kwargs=None, call=None) Create a GCE firewall rule. The \(aqdefault\(aq network is used if not specified. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f create_fwrule gce name=allow\-http allow=tcp:80 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.create_hc(kwargs=None, call=None) Create an HTTP health check configuration. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f create_hc gce name=hc path=/healthy port=80 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.create_lb(kwargs=None, call=None) Create a load\-balancer configuration. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f create_lb gce name=lb region=us\-central1 ports=80 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.create_network(kwargs=None, call=None) Create a GCE network.
.sp CLI Example: .sp .nf .ft C salt\-cloud \-f create_network gce name=mynet cidr=10.10.10.0/24 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.create_snapshot(kwargs=None, call=None) Create a new disk snapshot. Must specify \fIname\fP and \fIdisk_name\fP. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f create_snapshot gce name=snap1 disk_name=pd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.delete_disk(kwargs=None, call=None) Permanently delete a persistent disk. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f delete_disk gce disk_name=pd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.delete_fwrule(kwargs=None, call=None) Permanently delete a firewall rule. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f delete_fwrule gce name=allow\-http .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.delete_hc(kwargs=None, call=None) Permanently delete a health check. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f delete_hc gce name=hc .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.delete_lb(kwargs=None, call=None) Permanently delete a load\-balancer. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f delete_lb gce name=lb .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.delete_network(kwargs=None, call=None) Permanently delete a network. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f delete_network gce name=mynet .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.delete_snapshot(kwargs=None, call=None) Permanently delete a disk snapshot. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f delete_snapshot gce name=disk\-snap\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.destroy(vm_name, call=None) Call \(aqdestroy\(aq on the instance. Can be called with "\-a destroy" or \-d .sp CLI Example: .sp .nf .ft C salt\-cloud \-a destroy myinstance1 myinstance2 ... salt\-cloud \-d myinstance1 myinstance2 ... .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.detach_disk(name=None, kwargs=None, call=None) Detach a disk from an instance. .sp CLI Example: .sp .nf .ft C salt\-cloud \-a detach_disk myinstance disk_name=mydisk .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.detach_lb(kwargs=None, call=None) Remove an existing node/member from an existing load\-balancer configuration. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f detach_lb gce name=lb member=myinstance .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.get_conn() Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.get_lb_conn(gce_driver=None) Return a load\-balancer conn object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.list_nodes(conn=None, call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.list_nodes_full(conn=None, call=None) Return a list of the VMs that are on the provider, with all fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.list_nodes_select(conn=None, call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.reboot(vm_name, call=None) Call GCE \(aqreset\(aq on the instance. 
.sp CLI Example: .sp .nf .ft C salt\-cloud \-a reboot myinstance .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.show_disk(name=None, kwargs=None, call=None) Show the details of an existing disk. .sp CLI Example: .sp .nf .ft C salt\-cloud \-a show_disk myinstance disk_name=mydisk salt\-cloud \-f show_disk gce disk_name=mydisk .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.show_fwrule(kwargs=None, call=None) Show the details of an existing firewall rule. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f show_fwrule gce name=allow\-http .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.show_hc(kwargs=None, call=None) Show the details of an existing health check. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f show_hc gce name=hc .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.show_instance(vm_name, call=None) Show the details of the existing instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.show_lb(kwargs=None, call=None) Show the details of an existing load\-balancer. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f show_lb gce name=lb .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.show_network(kwargs=None, call=None) Show the details of an existing network. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f show_network gce name=mynet .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gce.show_snapshot(kwargs=None, call=None) Show the details of an existing snapshot. .sp CLI Example: .sp .nf .ft C salt\-cloud \-f show_snapshot gce name=mysnapshot .ft P .fi .UNINDENT .SS salt.cloud.clouds.gogrid .SS GoGrid Cloud Module .sp The GoGrid cloud module. This module interfaces with the gogrid public cloud service. To use Salt Cloud with GoGrid log into the GoGrid web interface and create an api key. Do this by clicking on "My Account" and then going to the API Keys tab. .sp Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/gogrid.conf\fP: .sp .nf .ft C my\-gogrid\-config: # The generated api key to use apikey: asdff7896asdh789 # The apikey\(aqs shared secret sharedsecret: saltybacon provider: gogrid .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.avail_images(conn=None, call=None) Return a dict of all available VM images on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.avail_sizes(conn=None, call=None) Return a dict of all available VM images on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.destroy(name, conn=None, call=None) Delete a single VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.get_configured_provider() Return the first configured instance. 
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.get_conn() Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.get_image(conn, vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.get_size(conn, vm_) Return the VM\(aqs size object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.list_nodes(conn=None, call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.list_nodes_full(conn=None, call=None) Return a list of the VMs that are on the provider, with all fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.list_nodes_select(conn=None, call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.gogrid.show_instance(name, call=None) Show the details from the provider concerning an instance .UNINDENT .SS salt.cloud.clouds.joyent .SS Joyent Cloud Module .sp The Joyent Cloud module is used to interact with the Joyent cloud system. .sp Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/joyent.conf\fP: .sp .nf .ft C my\-joyent\-config: # The Joyent login user user: fred # The Joyent user\(aqs password password: saltybacon # The location of the ssh private key that can log into the new VM private_key: /root/joyent.pem provider: joyent .ft P .fi .sp When creating your profiles for the Joyent cloud, add the location attribute to the profile; it will automatically be picked up when performing tasks associated with that VM. An example profile might look like: .sp .nf .ft C joyent_512: provider: my\-joyent\-config size: Extra Small 512 MB image: centos\-6 location: us\-east\-1 .ft P .fi .INDENT 0.0 .TP .B depends requests .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.avail_images(call=None) get list of available images .sp CLI Example: .sp .nf .ft C salt\-cloud \-\-list\-images .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.avail_locations(call=None) List all available locations .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.avail_sizes(call=None) get list of available packages .sp CLI Example: .sp .nf .ft C salt\-cloud \-\-list\-sizes .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.create(vm_) Create a single VM from a data dict .sp CLI Example: .sp .nf .ft C salt\-cloud \-p profile_name vm_name .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.create_node(**kwargs) convenience function to make the rest api call for node creation. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.delete_key(kwargs=None, call=None) Delete the named SSH key from the account .sp CLI Example: .sp .nf .ft C salt\-cloud \-f delete_key joyent keyname=mykey .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.destroy(name, call=None) destroy a machine by name .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBname\fP \-\- name given to the machine .IP \(bu 2 \fBcall\fP \-\- call value in this case is \(aqaction\(aq .UNINDENT .TP .B Returns array of booleans: true if successfully stopped and true if successfully removed .UNINDENT .sp CLI Example: .sp .nf .ft C salt\-cloud \-d vm_name .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.get_configured_provider() Return the first configured instance.
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.get_image(vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.get_location(vm_=None) Return the joyent datacenter to use, in this order: \- CLI parameter \- VM parameter \- Cloud profile setting .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.get_location_path(location=\(aqus\-east\-1\(aq) create url from location variable :param location: joyent datacenter location :return: url .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.get_node(name) gets the node from the full node list by name :param name: name of the vm :return: node object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.get_size(vm_) Return the VM\(aqs size object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.has_method(obj, method_name) Find if the provided object has a specific method .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.import_key(kwargs=None, call=None) List the keys available .sp CLI Example: .sp .nf .ft C salt\-cloud \-f import_key joyent keyname=mykey keyfile=/tmp/mykey.pub .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.joyent_node_state(id_) Convert joyent returned state to state common to other datacenter return values for consistency .INDENT 7.0 .TP .B Parameters \fBid\fP \-\- joyent state value .TP .B Returns libcloudfuncs state value .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.key_list(key=\(aqname\(aq, items=None) convert list to dictionary using the key as the identifier :param key: identifier \- must exist in the arrays elements own dictionary :param items: array to iterate over :return: dictionary .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.list_keys(kwargs=None, call=None) List the keys available .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.list_nodes(full=False, call=None) list of nodes, keeping only a brief listing .sp CLI Example: .sp .nf .ft C salt\-cloud \-Q .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.list_nodes_full(call=None) list of nodes, maintaining all content provided from joyent listings .sp CLI Example: .sp .nf .ft C salt\-cloud \-F .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.query(action=None, command=None, args=None, method=\(aqGET\(aq, location=None, data=None) Make a web call to Joyent .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.reboot(name, call=None) reboot a machine by name :param name: name given to the machine :param call: call value in this case is \(aqaction\(aq :return: true if successful .sp CLI Example: .sp .nf .ft C salt\-cloud \-a reboot vm_name .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.reformat_node(item=None, full=False) Reformat the returned data from joyent, determine public/private IPs and strip out fields if necessary to provide either full or brief content. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBitem\fP \-\- node dictionary .IP \(bu 2 \fBfull\fP \-\- full or brief output .UNINDENT .TP .B Returns dict .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.show_key(kwargs=None, call=None) List the keys available .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.ssh_interface(vm_) Return the ssh_interface type to connect to. Either \(aqpublic_ips\(aq (default) or \(aqprivate_ips\(aq. 
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.start(name, call=None) start a machine by name :param name: name given to the machine :param call: call value in this case is \(aqaction\(aq :return: true if successful .sp CLI Example: .sp .nf .ft C salt\-cloud \-a start vm_name .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.stop(name, call=None) stop a machine by name :param name: name given to the machine :param call: call value in this case is \(aqaction\(aq :return: true if successful .sp CLI Example: .sp .nf .ft C salt\-cloud \-a stop vm_name .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.joyent.take_action(name=None, call=None, command=None, data=None, method=\(aqGET\(aq, location=\(aqus\-east\-1\(aq) take action call used by start,stop, reboot :param name: name given to the machine :param call: call value in this case is \(aqaction\(aq :command: api path :data: any data to be passed to the api, must be in json format :method: GET,POST,or DELETE :location: datacenter to execute the command on :return: true if successful .UNINDENT .SS salt.cloud.clouds.libcloud_aws .SS The AWS Cloud Module .sp The AWS cloud module is used to interact with the Amazon Web Services system. .sp This module has been replaced by the EC2 cloud module, and is no longer supported. The documentation shown here is for reference only; it is highly recommended to change all usages of this driver over to the EC2 driver. .INDENT 0.0 .TP .B If this driver is still needed, set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/aws.conf\fP: .UNINDENT .sp .nf .ft C my\-aws\-config: # The AWS API authentication id id: GKTADJGHEIQSXMKKRBJ08H # The AWS API authentication key key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs # The ssh keyname to use keyname: default # The amazon security group securitygroup: ssh_open # The location of the private key which corresponds to the keyname private_key: /root/default.pem provider: aws .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.block_device_mappings(vm_) Return the block device mapping: .sp .nf .ft C [{\(aqDeviceName\(aq: \(aq/dev/sdb\(aq, \(aqVirtualName\(aq: \(aqephemeral0\(aq}, {\(aqDeviceName\(aq: \(aq/dev/sdc\(aq, \(aqVirtualName\(aq: \(aqephemeral1\(aq}] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.create_attach_volumes(volumes, location, data) Create and attach volumes to created node .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.del_tags(name, kwargs, call=None) Delete tags for a node .sp CLI Example: .sp .nf .ft C salt\-cloud \-a del_tags mymachine tag1,tag2,tag3 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.destroy(name) Wrap core libcloudfuncs destroy method, adding check for termination protection .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.get_availability_zone(conn, vm_) Return the availability zone to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.get_configured_provider() Return the first configured instance. 
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.get_conn(**kwargs) Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.get_location(vm_=None) Return the AWS region to use, in this order: \- CLI parameter \- Cloud profile setting \- Global salt\-cloud config .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.get_tags(name, call=None) Retrieve tags for a node .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.iam_profile(vm_) Return the IAM role .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.keyname(vm_) Return the keyname .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.rename(name, kwargs, call=None) Properly rename a node. Pass in the new name as "new name". .sp CLI Example: .sp .nf .ft C salt\-cloud \-a rename mymachine newname=yourmachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.securitygroup(vm_) Return the security group .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.set_tags(name, tags, call=None) Set tags for a node .sp CLI Example: .sp .nf .ft C salt\-cloud \-a set_tags mymachine tag1=somestuff tag2=\(aqOther stuff\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.ssh_interface(vm_) Return the ssh_interface type to connect to. Either \(aqpublic_ips\(aq (default) or \(aqprivate_ips\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.ssh_username(vm_) Return the ssh_username. Defaults to \(aqec2\-user\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.start(name, call=None) Start a node .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.libcloud_aws.stop(name, call=None) Stop a node .UNINDENT .SS salt.cloud.clouds.linode .SS Linode Cloud Module .sp The Linode cloud module is used to control access to the Linode VPS system .sp Use of this module only requires the \fBapikey\fP parameter. .sp Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/linode.conf\fP: .sp .nf .ft C my\-linode\-config: # Linode account api key apikey: JVkbSJDGHSDKUKSDJfhsdklfjgsjdkflhjlsdfffhgdgjkenrtuinv provider: linode .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.linode.avail_images(conn=None, call=None) Return a dict of all available VM images on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.avail_locations(conn=None, call=None) Return a dict of all available VM locations on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.avail_sizes(conn=None, call=None) Return a dict of all available VM images on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.destroy(name, conn=None, call=None) Delete a single VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_configured_provider() Return the first configured instance. 
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_conn() Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_disk_size(vm_, size, swap) Return the size of the root disk in MB .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_image(conn, vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_location(conn, vm_) Return the node location to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_node(conn, name) Return a libcloud node for the named VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_password(vm_) Return the password to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_private_ip(vm_) Return True if a private IP address is requested .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_size(conn, vm_) Return the VM\(aqs size object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.get_swap(vm_) Return the amount of swap space to use in MB .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.list_nodes(conn=None, call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.list_nodes_full(conn=None, call=None) Return a list of the VMs that are on the provider, with all fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.list_nodes_select(conn=None, call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.linode.show_instance(name, call=None) Show the details from the provider concerning an instance .UNINDENT .SS salt.cloud.clouds.lxc .SS Install Salt on an LXC Container .sp New in version Helium. .sp Please read \fIcore config documentation\fP. .INDENT 0.0 .TP .B salt.cloud.clouds.lxc.avail_images() .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.lxc.create(vm_, call=None) Create an lxc Container. This function is idempotent and will try to either provision or finish the provision of an lxc container. .sp NOTE: Most of the initialisation code has been moved and merged with the lxc runner and lxc.init functions .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.lxc.destroy(vm_, call=None) Destroy an lxc container .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.lxc.get_configured_provider(vm_=None) Return the contextual provider, or None if no configured one can be found. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.lxc.get_provider(name) .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.lxc.list_nodes(conn=None, call=None) .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.lxc.list_nodes_full(conn=None, call=None) .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.lxc.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.lxc.show_instance(name, call=None) Show the details from the provider concerning an instance .UNINDENT .SS salt.cloud.clouds.msazure .SS Azure Cloud Module .sp The Azure cloud module is used to control access to Microsoft Azure. .sp Use of this module requires the \fBsubscription_id\fP and \fBcertificate_path\fP parameters to be set.
Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/azure.conf\fP: .sp .nf .ft C my\-azure\-config: provider: azure subscription_id: 3287abc8\-f98a\-c678\-3bde\-326766fd3617 certificate_path: /etc/salt/azure.pem management_host: management.core.windows.net .ft P .fi .sp Information on creating the pem file to use and uploading the associated cer file can be found at: .sp \fI\%http://www.windowsazure.com/en-us/develop/python/how-to-guides/service-management/\fP .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.avail_images(conn=None, call=None) List available images for Azure .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.avail_locations(conn=None, call=None) List available locations for Azure .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.avail_sizes(call=None) Because sizes are built into images with Azure, there will be no sizes to return here .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.destroy(name, conn=None, call=None) Destroy a VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.get_conn() Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.list_disks(conn=None, call=None) List the disks associated with this Azure account .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.list_hosted_services(conn=None, call=None) List VMs on this Azure account, with full information .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.list_nodes(conn=None, call=None) List VMs on this Azure account .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.list_nodes_full(conn=None, call=None) List VMs on this Azure account, with full information .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.list_nodes_select(conn=None, call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.list_storage_services(conn=None, call=None) List the storage services associated with this Azure account, with full information .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.msazure.show_instance(name, call=None) Show the details from the provider concerning an instance .UNINDENT .SS salt.cloud.clouds.nova .SS OpenStack Nova Cloud Module .sp PLEASE NOTE: This module is currently in early development, and considered to be experimental and unstable. It is not recommended for production use. Unless you are actively developing code in this module, you should use the OpenStack module instead. .sp OpenStack is an open source project that is in use by a number of cloud providers, each of which has its own way of using it. .sp The OpenStack Nova module for Salt Cloud was bootstrapped from the OpenStack module for Salt Cloud, which uses a libcloud\-based connection. The Nova module is designed to use the nova and glance modules already built into Salt. .sp These modules use the Python novaclient and glanceclient libraries, respectively. In order to use this module, the proper Salt configuration must also be in place. This can be specified in the master config, the minion config, a set of grains or a set of pillars.
.sp .nf .ft C my_openstack_profile: keystone.user: admin keystone.password: verybadpass keystone.tenant: admin keystone.auth_url: \(aqhttp://127.0.0.1:5000/v2.0/\(aq .ft P .fi .sp Note that there is currently a dependency upon netaddr. This can be installed on Debian\-based systems by means of the python\-netaddr package. .sp This module currently requires the latest develop branch of Salt to be installed. .sp This module has been tested to work with HP Cloud and Rackspace. See the documentation for specific options for either of these providers. These examples could be set up in the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/openstack.conf\fP: .sp .nf .ft C my\-openstack\-config: # The ID of the minion that will execute the salt nova functions auth_minion: myminion # The name of the configuration profile to use on said minion config_profile: my_openstack_profile ssh_key_name: mykey provider: nova userdata_file: /tmp/userdata.txt .ft P .fi .sp For local installations that only use private IP address ranges, the following option may be useful. Using the old syntax: .sp Note: For API use, you will need an auth plugin. The base novaclient does not support API keys, but some providers such as Rackspace have extended Keystone to accept them. .sp .nf .ft C my\-openstack\-config: # Ignore IP addresses on this network for bootstrap ignore_cidr: 192.168.50.0/24 my\-nova: identity_url: \(aqhttps://identity.api.rackspacecloud.com/v2.0/\(aq compute_region: IAD user: myusername password: mypassword tenant: provider: nova my\-api: identity_url: \(aqhttps://identity.api.rackspacecloud.com/v2.0/\(aq compute_region: IAD user: myusername api_key: os_auth_plugin: rackspace tenant: provider: nova .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.nova.avail_images() Return a dict of all available VM images on the cloud provider. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.avail_locations(conn=None, call=None) Return a list of locations .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.avail_sizes() Return a dict of all available VM sizes on the cloud provider. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.destroy(name, conn=None, call=None) Delete a single VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.get_conn() Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.get_image(conn, vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.get_size(conn, vm_) Return the VM\(aqs size object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.ignore_cidr(vm_, ip) Return True if we are to ignore the specified IP. Compatible with IPv4. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.list_nodes(call=None, **kwargs) Return a list of the VMs that are in this location .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.list_nodes_full(call=None, **kwargs) Return a list of the VMs that are in this location .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.managedcloud(vm_) Determine if we should wait for the managed cloud automation before running. Either \(aqFalse\(aq (default) or \(aqTrue\(aq.
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.network_create(name, **kwargs) Create private networks .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.network_list(call=None, **kwargs) List private networks .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.preferred_ip(vm_, ips) Return the preferred Internet protocol. Either \(aqipv4\(aq (default) or \(aqipv6\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.rackconnect(vm_) Determine if we should wait for rackconnect automation before running. Either \(aqFalse\(aq (default) or \(aqTrue\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.reboot(name, conn=None) Reboot a single VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.request_instance(vm_=None, call=None) Put together all of the information necessary to request an instance through Novaclient and then fire off the request for the instance. .sp Returns data about the instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.show_instance(name, call=None) Show the details from the provider concerning an instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.ssh_interface(vm_) Return the ssh_interface type to connect to. Either \(aqpublic_ips\(aq (default) or \(aqprivate_ips\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.virtual_interface_create(name, net_name, **kwargs) Create a virtual interface on a server .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.virtual_interface_list(name, **kwargs) List the virtual interfaces on a server .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.volume_attach(name, server_name, device=\(aq/dev/xvdb\(aq, **kwargs) Attach block volume .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.volume_create(name, size=100, snapshot=None, voltype=None, **kwargs) Create block storage device .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.volume_delete(name, **kwargs) Delete block storage device .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.volume_detach(name, **kwargs) Detach block volume .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.nova.volume_list(**kwargs) List block devices .UNINDENT .SS salt.cloud.clouds.opennebula .SS OpenNebula Cloud Module .sp The OpenNebula cloud module is used to control access to an OpenNebula cloud. .sp Use of this module requires the \fBxml_rpc\fP, \fBuser\fP and \fBpassword\fP parameters to be set. Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/opennebula.conf\fP: .sp .nf .ft C my\-opennebula\-config: xml_rpc: http://localhost:2633/RPC2 user: oneadmin password: JHGhgsayu32jsa provider: opennebula .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.avail_images(call=None) Return a list of the templates that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.avail_locations(call=None) Return a dict of all available VM locations on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.avail_sizes(call=None) Because sizes are built into templates with OpenNebula, there will be no sizes to return here .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.destroy(name, call=None) Destroy a node. Will check termination protection and warn if enabled.
.sp CLI Example: .sp .nf .ft C salt\-cloud \-\-destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.get_image(vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.get_location(vm_) Return the VM\(aqs location .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.list_nodes(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.list_nodes_full(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.opennebula.show_instance(name, call=None) Show the details from OpenNebula concerning a VM .UNINDENT .SS salt.cloud.clouds.openstack .SS OpenStack Cloud Module .sp OpenStack is an open source project that is in use by a number of cloud providers, each of which has its own way of using it. .sp OpenStack provides a number of ways to authenticate. This module uses password\-based authentication, using auth v2.0. It is likely to start supporting other methods of authentication provided by OpenStack in the future. .sp Note that there is currently a dependency upon netaddr. This can be installed on Debian\-based systems by means of the python\-netaddr package (a PyPI\-based alternative is sketched below). .sp This module has been tested to work with HP Cloud and Rackspace. See the documentation for specific options for either of these providers.
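.sp On systems without a distribution package for netaddr, the dependency noted above can also be installed from PyPI. A minimal sketch, assuming \fBpip\fP is available on the machine running salt\-cloud:
.sp .nf .ft C pip install netaddr .ft P .fi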
Some examples, using the old cloud configuration syntax, are provided below: .sp Set up in the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/openstack.conf\fP: .sp .nf .ft C my\-openstack\-config: # The OpenStack identity service url identity_url: https://region\-b.geo\-1.identity.hpcloudsvc.com:35357/v2.0/ # The OpenStack compute region compute_region: region\-b.geo\-1 # The OpenStack compute service name compute_name: Compute # The OpenStack tenant name (not tenant ID) tenant: myuser\-tenant1 # The OpenStack user name user: myuser # The OpenStack keypair name ssh_key_name: mykey # Skip SSL certificate validation insecure: false # The ssh key file ssh_key_file: /path/to/keyfile/test.pem # The OpenStack network UUIDs networks: \- fixed: \- 4402cd51\-37ee\-435e\-a966\-8245956dc0e6 \- floating: \- Ext\-Net files: /path/to/dest.txt: /local/path/to/src.txt # Skips the service catalog API endpoint, and uses the following base_url: http://192.168.1.101:3000/v2/12345 provider: openstack userdata_file: /tmp/userdata.txt .ft P .fi .sp For an in\-house OpenStack Essex installation, libcloud needs the service_type: .sp .nf .ft C my\-openstack\-config: identity_url: \(aqhttp://control.openstack.example.org:5000/v2.0/\(aq compute_name : Compute Service service_type : compute .ft P .fi .sp Either a password or an API key must also be specified: .sp .nf .ft C my\-openstack\-password\-or\-api\-config: # The OpenStack password password: letmein # The OpenStack API key apikey: 901d3f579h23c8v73q9 .ft P .fi .sp Optionally, if you don\(aqt want to save a plain\-text password in your configuration file, you can use keyring: .sp .nf .ft C my\-openstack\-keyring\-config: # The OpenStack password is stored in keyring # don\(aqt forget to set the password by running something like: # salt\-cloud \-\-set\-password=myuser my\-openstack\-keyring\-config password: USE_KEYRING .ft P .fi .sp For local installations that only use private IP address ranges, the following option may be useful. Using the old syntax: .sp .nf .ft C my\-openstack\-config: # Ignore IP addresses on this network for bootstrap ignore_cidr: 192.168.50.0/24 .ft P .fi .sp It is possible to upload a small set of files (no more than 5, and nothing too large) to the remote server. Generally this should not be needed, as salt itself can upload to the server after it is spun up, with nowhere near the same restrictions. .sp .nf .ft C my\-openstack\-config: files: /path/to/dest.txt: /local/path/to/src.txt .ft P .fi .sp Alternatively, one could use the private IP to connect by specifying: .sp .nf .ft C my\-openstack\-config: ssh_interface: private_ips .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.avail_images(conn=None, call=None) Return a dict of all available VM images on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.avail_locations(conn=None, call=None) Return a dict of all available VM locations on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.avail_sizes(conn=None, call=None) Return a dict of all available VM sizes on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.destroy(name, conn=None, call=None) Delete a single VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.get_configured_provider() Return the first configured instance.
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.get_conn() Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.get_image(conn, vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.get_node(conn, name) Return a libcloud node for the named VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.get_size(conn, vm_) Return the VM\(aqs size object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.ignore_cidr(vm_, ip) Return True if we are to ignore the specified IP. Compatible with IPv4. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.list_nodes(conn=None, call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.list_nodes_full(conn=None, call=None) Return a list of the VMs that are on the provider, with all fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.list_nodes_select(conn=None, call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.managedcloud(vm_) Determine if we should wait for the managed cloud automation before running. Either \(aqFalse\(aq (default) or \(aqTrue\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.preferred_ip(vm_, ips) Return the preferred Internet protocol. Either \(aqipv4\(aq (default) or \(aqipv6\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.rackconnect(vm_) Determine if we should wait for rackconnect automation before running. Either \(aqFalse\(aq (default) or \(aqTrue\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.reboot(name, conn=None) Reboot a single VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.request_instance(vm_=None, call=None) Put together all of the information necessary to request an instance on OpenStack and then fire off the request for the instance. .sp Returns data about the instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.show_instance(name, call=None) Show the details from the provider concerning an instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.openstack.ssh_interface(vm_) Return the ssh_interface type to connect to. Either \(aqpublic_ips\(aq (default) or \(aqprivate_ips\(aq. .UNINDENT .SS salt.cloud.clouds.parallels .SS Parallels Cloud Module .sp The Parallels cloud module is used to control access to cloud providers using the Parallels VPS system. .INDENT 0.0 .TP .B Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/parallels.conf\fP: .UNINDENT .sp .nf .ft C my\-parallels\-config: # Parallels account information user: myuser password: mypassword url: https://api.cloud.xmission.com:4465/paci/v1.0/ provider: parallels .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.avail_images(call=None) Return a list of the images that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.create_node(vm_) Build and submit the XML to create a node .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.destroy(name, call=None) Destroy a node.
.sp CLI Example: .sp .nf .ft C salt\-cloud \-\-destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.get_image(vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.list_nodes(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.list_nodes_full(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.query(action=None, command=None, args=None, method=\(aqGET\(aq, data=None) Make a web call to a Parallels provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.show_image(kwargs, call=None) Show the details from Parallels concerning an image .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.show_instance(name, call=None) Show the details from Parallels concerning an instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.start(name, call=None) Start a node. .sp CLI Example: .sp .nf .ft C salt\-cloud \-a start mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.stop(name, call=None) Stop a node. .sp CLI Example: .sp .nf .ft C salt\-cloud \-a stop mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.parallels.wait_until(name, state, timeout=300) Wait until a specific state has been reached on a node .UNINDENT .SS salt.cloud.clouds.proxmox .SS Proxmox Cloud Module .sp New in version Helium. .sp The Proxmox cloud module is used to control access to cloud providers using the Proxmox system (KVM / OpenVZ). .INDENT 0.0 .TP .B Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/proxmox.conf\fP: .UNINDENT .sp .nf .ft C my\-proxmox\-config: # Proxmox account information user: myuser@pam or myuser@pve password: mypassword url: hypervisor.domain.tld provider: proxmox .ft P .fi .INDENT 0.0 .TP .B maintainer Frank Klaassen <\fI\%frank@cloudright.nl\fP> .TP .B maturity new .TP .B depends requests >= 2.2.1 .TP .B depends IPy >= 0.81 .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.avail_images(call=None, location=\(aqlocal\(aq, img_type=\(aqvztpl\(aq) Return a list of the images that are on the provider .sp CLI Example: .sp .nf .ft C salt\-cloud \-\-list\-images my\-proxmox\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.avail_locations(call=None) Return a list of the hypervisors (nodes) which this Proxmox PVE machine manages .sp CLI Example: .sp .nf .ft C salt\-cloud \-\-list\-locations my\-proxmox\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.create(vm_) Create a single VM from a data dict .sp CLI Example: .sp .nf .ft C salt\-cloud \-p proxmox\-ubuntu vmhostname .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.create_node(vm_) Build and submit the requestdata to create a new node .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.destroy(name, call=None) Destroy a node. 
.sp CLI Example: .sp .nf .ft C salt\-cloud \-\-destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.get_resources_nodes(call=None, resFilter=None) Retrieve all hypervisors (nodes) available on this environment .sp CLI Example: .sp .nf .ft C salt\-cloud \-f get_resources_nodes my\-proxmox\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.get_resources_vms(call=None, resFilter=None, includeConfig=True) Retrieve all VMs available on this environment .sp CLI Example: .sp .nf .ft C salt\-cloud \-f get_resources_vms my\-proxmox\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.get_vm_status(vmid=None, name=None, host=None, nodeType=None) Get the status for a VM, either via the ID or the hostname .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.get_vmconfig(vmid, node=None, node_type=\(aqopenvz\(aq) Get VM configuration .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.list_nodes(call=None) Return a list of the VMs that are managed by the provider .sp CLI Example: .sp .nf .ft C salt\-cloud \-Q my\-proxmox\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.list_nodes_full(call=None) Return a list of the VMs that are on the provider .sp CLI Example: .sp .nf .ft C salt\-cloud \-F my\-proxmox\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .sp CLI Example: .sp .nf .ft C salt\-cloud \-S my\-proxmox\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.query(conn_type, option, post_data=None) Execute the HTTP request to the API .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.set_vm_status(status, name=None, vmid=None) Convenience function for setting VM status .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.show_instance(name, call=None, instance_type=None) Show the details from Proxmox concerning an instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.shutdown(name=None, vmid=None, call=None) Shutdown a node via ACPI. .sp CLI Example: .sp .nf .ft C salt\-cloud \-a shutdown mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.start(name, vmid=None, call=None) Start a node. .sp CLI Example: .sp .nf .ft C salt\-cloud \-a start mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.stop(name, vmid=None, call=None) Stop a node ("pulling the plug"). .sp CLI Example: .sp .nf .ft C salt\-cloud \-a stop mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.wait_for_created(upid, timeout=300) Wait until the VM has been created successfully .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.proxmox.wait_for_state(vmid, node, nodeType, state, timeout=300) Wait until a specific state has been reached on a node .UNINDENT .SS salt.cloud.clouds.rackspace .SS Rackspace Cloud Module .sp The Rackspace cloud module. This module uses the preferred means to set up a libcloud\-based cloud module and should be used as the general template for setting up additional libcloud\-based modules. .sp Please note that the \fIrackspace\fP driver is only intended for 1st gen instances, aka, "the old cloud" at Rackspace.
It is required for 1st gen instances, but will \fInot\fP work with OpenStack\-based instances. Unless you explicitly have a reason to use it, it is highly recommended that you use the \fIopenstack\fP driver instead. .sp The rackspace cloud module interfaces with the Rackspace public cloud service and requires that two configuration parameters be set for use, \fBuser\fP and \fBapikey\fP. .sp Set up the cloud configuration at \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/rackspace.conf\fP: .sp .nf .ft C my\-rackspace\-config: provider: rackspace # The Rackspace login user user: fred # The Rackspace user\(aqs apikey apikey: 901d3f579h23c8v73q9 .ft P .fi .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.avail_images(conn=None, call=None) Return a dict of all available VM images on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.avail_locations(conn=None, call=None) Return a dict of all available VM locations on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.avail_sizes(conn=None, call=None) Return a dict of all available VM sizes on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.destroy(name, conn=None, call=None) Delete a single VM .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.get_conn() Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.get_image(conn, vm_) Return the image object to use .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.get_size(conn, vm_) Return the VM\(aqs size object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.list_nodes(conn=None, call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.list_nodes_full(conn=None, call=None) Return a list of the VMs that are on the provider, with all fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.list_nodes_select(conn=None, call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.preferred_ip(vm_, ips) Return the preferred Internet protocol. Either \(aqipv4\(aq (default) or \(aqipv6\(aq. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.show_instance(name, call=None) Show the details from the provider concerning an instance .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.rackspace.ssh_interface(vm_) Return the ssh_interface type to connect to. Either \(aqpublic_ips\(aq (default) or \(aqprivate_ips\(aq. .UNINDENT .SS salt.cloud.clouds.saltify .SS Saltify Module .sp The Saltify module is designed to install Salt on a remote machine, virtual or bare metal, using SSH. This module is useful for provisioning machines which are already installed, but not Salted. .sp Use of this module requires no configuration in the main cloud configuration file. However, profiles must still be configured, as described in the \fIcore config documentation\fP.
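.sp As a rough illustration, a Saltify provider entry and profile might look like the sketch below. The values are placeholders, and the specific profile options shown (\fBssh_host\fP, \fBssh_username\fP and \fBpassword\fP) are assumptions for illustration only; consult the \fIcore config documentation\fP for the options supported by your Salt version.
.sp .nf .ft C # Hypothetical provider entry, e.g. in /etc/salt/cloud.providers.d/saltify.conf my\-saltify\-config: provider: saltify # Hypothetical profile, e.g. in /etc/salt/cloud.profiles.d/saltify.conf make\-salty: provider: my\-saltify\-config ssh_host: 192.168.0.20 ssh_username: root password: verybadpass .ft P .fi
.sp An existing machine could then be Salted with something like: .sp .nf .ft C salt\-cloud \-p make\-salty mymachine .ft P .fi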
.INDENT 0.0 .TP .B salt.cloud.clouds.saltify.create(vm_) Provision a single machine .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.saltify.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.saltify.list_nodes() Because this module is not specific to any cloud providers, there will be no nodes to list. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.saltify.list_nodes_full() Because this module is not specific to any cloud providers, there will be no nodes to list. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.saltify.list_nodes_select() Because this module is not specific to any cloud providers, there will be no nodes to list. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.saltify.script(vm_) Return the script deployment object .UNINDENT .SS salt.cloud.clouds.softlayer .SS SoftLayer Cloud Module .sp The SoftLayer cloud module is used to control access to the SoftLayer VPS system. .sp Use of this module only requires the \fBapikey\fP parameter. Set up the cloud configuration at: .sp \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/softlayer.conf\fP: .sp .nf .ft C my\-softlayer\-config: # SoftLayer account api key user: MYLOGIN apikey: JVkbSJDGHSDKUKSDJfhsdklfjgsjdkflhjlsdfffhgdgjkenrtuinv provider: softlayer .ft P .fi .sp The SoftLayer Python Library needs to be installed in order to use the SoftLayer salt.cloud modules. See: \fI\%https://pypi.python.org/pypi/SoftLayer\fP .INDENT 0.0 .TP .B depends softlayer .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.avail_images(call=None) Return a dict of all available VM images on the cloud provider. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.avail_locations(call=None) List all available locations .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.avail_sizes(call=None) Return a dict of all available VM sizes on the cloud provider with relevant data. This data is provided in three dicts. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.destroy(name, call=None) Destroy a node. .sp CLI Example: .sp .nf .ft C salt\-cloud \-\-destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.get_conn(service=\(aqSoftLayer_Virtual_Guest\(aq) Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.get_location(vm_=None) Return the location to use, in this order: \- CLI parameter \- VM parameter \- Cloud profile setting .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.list_custom_images(call=None) Return a dict of all custom VM images on the cloud provider.
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.list_nodes(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.list_nodes_full(mask=\(aqmask[id]\(aq, call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.list_vlans(call=None) List all VLANs associated with the account .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer.show_instance(name, call=None) Show the details from SoftLayer concerning a guest .UNINDENT .SS salt.cloud.clouds.softlayer_hw .SS SoftLayer HW Cloud Module .sp The SoftLayer HW cloud module is used to control access to the SoftLayer hardware cloud system. .sp Use of this module only requires the \fBapikey\fP parameter. Set up the cloud configuration at: .sp \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/softlayer.conf\fP: .sp .nf .ft C my\-softlayer\-config: # SoftLayer account api key user: MYLOGIN apikey: JVkbSJDGHSDKUKSDJfhsdklfjgsjdkflhjlsdfffhgdgjkenrtuinv provider: softlayer_hw .ft P .fi .sp The SoftLayer Python Library needs to be installed in order to use the SoftLayer salt.cloud modules. See: \fI\%https://pypi.python.org/pypi/SoftLayer\fP .INDENT 0.0 .TP .B depends softlayer .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.avail_images(call=None) Return a dict of all available VM images on the cloud provider. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.avail_locations(call=None) List all available locations .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.avail_sizes(call=None) Return a dict of all available VM sizes on the cloud provider with relevant data. This data is provided in three dicts. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.destroy(name, call=None) Destroy a node. .sp CLI Example: .sp .nf .ft C salt\-cloud \-\-destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.get_configured_provider() Return the first configured instance.
.UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.get_conn(service=\(aqSoftLayer_Hardware\(aq) Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.get_location(vm_=None) Return the location to use, in this order: \- CLI parameter \- VM parameter \- Cloud profile setting .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.list_nodes(call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.list_nodes_full(mask=\(aqmask[id, hostname, primaryIpAddress, primaryBackendIpAddress, processorPhysicalCoreAmount, memoryCount]\(aq, call=None) Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.list_nodes_select(call=None) Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.list_vlans(call=None) List all VLANs associated with the account .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.softlayer_hw.show_instance(name, call=None) Show the details from SoftLayer concerning a guest .UNINDENT .SS salt.cloud.clouds.vsphere .SS vSphere Cloud Module .sp New in version Helium. .sp The vSphere cloud module is used to control access to VMware vSphere. .INDENT 0.0 .TP .B depends .INDENT 7.0 .IP \(bu 2 PySphere Python module .UNINDENT .UNINDENT .sp Use of this module only requires a URL, username and password. Set up the cloud configuration at: .sp \fB/etc/salt/cloud.providers\fP or \fB/etc/salt/cloud.providers.d/vsphere.conf\fP: .sp .nf .ft C my\-vsphere\-config: provider: vsphere user: myuser password: verybadpass url: \(aqhttps://10.1.1.1:443\(aq .ft P .fi .INDENT 0.0 .TP .B folder: Name of the folder that will contain the new VM. If not set, the VM will be added to the folder the original VM belongs to. .TP .B resourcepool: MOR of the resourcepool to be used for the new VM. If not set, it uses the same resourcepool as the original VM. .TP .B datastore: MOR of the datastore where the virtual machine should be located. If not specified, the current datastore is used. .TP .B host: MOR of the host where the virtual machine should be registered. .INDENT 7.0 .TP .B If not specified: .INDENT 7.0 .IP \(bu 2 if resourcepool is not specified, current host is used. .IP \(bu 2 if resourcepool is specified, and the target pool represents a stand\-alone host, the host is used. .IP \(bu 2 if resourcepool is specified, and the target pool represents a DRS\-enabled cluster, a host selected by DRS is used. .IP \(bu 2 if resourcepool is specified and the target pool represents a cluster without DRS enabled, an InvalidArgument exception will be thrown. .UNINDENT .UNINDENT .TP .B template: Specifies whether or not the new virtual machine should be marked as a template. Default is False. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.avail_images() Return a dict of all available VM images on the cloud provider. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.avail_locations() Return a dict of all available VM locations on the cloud provider with relevant data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.create(vm_) Create a single VM from a data dict .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.destroy(name, call=None) Destroy a node.
.sp CLI Example: .sp .nf .ft C salt\-cloud \-\-destroy mymachine .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.get_configured_provider() Return the first configured instance. .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.get_conn() Return a conn object for the passed VM data .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_clusters(kwargs=None, call=None) List the clusters for this VMware environment .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_datacenters(kwargs=None, call=None) List the datacenters for this VMware environment .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_datastores(kwargs=None, call=None) List the datastores for this VMware environment .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_folders(kwargs=None, call=None) List the folders for this VMware environment .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_hosts(kwargs=None, call=None) List the hosts for this VMware environment .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_nodes() Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_nodes_full() Return a list of the VMs that are on the provider .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_nodes_min() Return a list of the nodes in the provider, with no details .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_nodes_select() Return a list of the VMs that are on the provider, with select fields .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.list_resourcepools(kwargs=None, call=None) List the resource pools for this VMware environment .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.script(vm_) Return the script deployment object .UNINDENT .INDENT 0.0 .TP .B salt.cloud.clouds.vsphere.show_instance(name, call=None) Show the details from vSphere concerning a guest .UNINDENT .SS Configuration file examples .INDENT 0.0 .IP \(bu 2 \fI\%Example master configuration file\fP .IP \(bu 2 \fI\%Example minion configuration file\fP .UNINDENT .SS Example master configuration file .sp .nf .ft C ##### Primary configuration settings ##### ########################################## # This configuration file is used to manage the behavior of the Salt Master # Values that are commented out but have no space after the comment are # defaults that need not be set in the config. If there is a space after the # comment, the value is presented as an example and is not the default. # Per default, the master will automatically include all config files # from master.d/*.conf (master.d is a directory in the same directory # as the main master config file) #default_include: master.d/*.conf # The address of the interface to bind to #interface: 0.0.0.0 # Whether the master should listen for IPv6 connections. If this is set to True, # the interface option must be adjusted too (for example: "interface: \(aq::\(aq") #ipv6: False # The tcp port used by the publisher #publish_port: 4505 # The user under which the salt master will run. Salt will update all # permissions to allow the specified user to run the master. The exception is # the job cache, which must be deleted if this user is changed. If the # modified files cause conflicts set verify_env to False. #user: root # Max open files # Each minion connecting to the master uses AT LEAST one file descriptor, the # master subscription connection.
If enough minions connect you might start # seeing on the console(and then salt\-master crashes): # Too many open files (tcp_listener.cpp:335) # Aborted (core dumped) # # By default this value will be the one of \(gaulimit \-Hn\(ga, ie, the hard limit for # max open files. # # If you wish to set a different value than the default one, uncomment and # configure this setting. Remember that this value CANNOT be higher than the # hard limit. Raising the hard limit depends on your OS and/or distribution, # a good way to find the limit is to search the internet for(for example): # raise max open files hard limit debian # #max_open_files: 100000 # The number of worker threads to start, these threads are used to manage # return calls made from minions to the master, if the master seems to be # running slowly, increase the number of threads #worker_threads: 5 # The port used by the communication interface. The ret (return) port is the # interface used for the file server, authentication, job returnes, etc. #ret_port: 4506 # Specify the location of the daemon process ID file #pidfile: /var/run/salt\-master.pid # The root directory prepended to these options: pki_dir, cachedir, # sock_dir, log_file, autosign_file, autoreject_file, extension_modules, # key_logfile, pidfile. #root_dir: / # Directory used to store public key data #pki_dir: /etc/salt/pki/master # Directory to store job and cache data #cachedir: /var/cache/salt/master # Directory for custom modules. This directory can contain subdirectories for # each of Salt\(aqs module types such as "runners", "output", "wheel", "modules", # "states", "returners", etc. #extension_modules: # Verify and set permissions on configuration directories at startup #verify_env: True # Set the number of hours to keep old job information in the job cache #keep_jobs: 24 # Set the default timeout for the salt command and api, the default is 5 # seconds #timeout: 5 # The loop_interval option controls the seconds for the master\(aqs maintenance # process check cycle. This process updates file server backends, cleans the # job cache and executes the scheduler. #loop_interval: 60 # Set the default outputter used by the salt command. The default is "nested" #output: nested # By default output is colored, to disable colored output set the color value # to False #color: True # Do not strip off the colored output from nested results and states outputs # (true by default) # strip_colors: false # Set the directory used to hold unix sockets #sock_dir: /var/run/salt/master # The master can take a while to start up when lspci and/or dmidecode is used # to populate the grains for the master. Enable if you want to see GPU hardware # data for your master. # # enable_gpu_grains: False # The master maintains a job cache, while this is a great addition it can be # a burden on the master for larger deployments (over 5000 minions). # Disabling the job cache will make previously executed jobs unavailable to # the jobs system and is not generally recommended. # #job_cache: True # Cache minion grains and pillar data in the cachedir. #minion_data_cache: True # Passing very large events can cause the minion to consume large amounts of # memory. This value tunes the maximum size of a message allowed onto the # master event bus. The value is expressed in bytes. #max_event_size: 1048576 # The master can include configuration from other files. To enable this, # pass a list of paths to this option. 
The paths can be either relative or # absolute; if relative, they are considered to be relative to the directory # the main master configuration file lives in (this file). Paths can make use # of shell\-style globbing. If no files are matched by a path passed to this # option then the master will log a warning message. # # # Include a config file from some other path: #include: /etc/salt/extra_config # # Include config from several files and directories: #include: # \- /etc/salt/extra_config ##### Security settings ##### ########################################## # Enable "open mode", this mode still maintains encryption, but turns off # authentication, this is only intended for highly secure environments or for # the situation where your keys end up in a bad state. If you run in open mode # you do so at your own risk! #open_mode: False # Enable auto_accept, this setting will automatically accept all incoming # public keys from the minions. Note that this is insecure. #auto_accept: False # Time in minutes that a incomming public key with a matching name found in # pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys # are removed when the master checks the minion_autosign directory. # 0 equals no timeout # autosign_timeout: 120 # If the autosign_file is specified, incoming keys specified in the # autosign_file will be automatically accepted. This is insecure. Regular # expressions as well as globing lines are supported. #autosign_file: /etc/salt/autosign.conf # Works like autosign_file, but instead allows you to specify minion IDs for # which keys will automatically be rejected. Will override both membership in # the autosign_file and the auto_accept setting. #autoreject_file: /etc/salt/autoreject.conf # Enable permissive access to the salt keys. This allows you to run the # master or minion as root, but have a non\-root group be given access to # your pki_dir. To make the access explicit, root must belong to the group # you\(aqve given access to. This is potentially quite insecure. # If an autosign_file is specified, enabling permissive_pki_access will allow group access # to that specific file. #permissive_pki_access: False # Allow users on the master access to execute specific commands on minions. # This setting should be treated with care since it opens up execution # capabilities to non root users. By default this capability is completely # disabled. # #client_acl: # larry: # \- test.ping # \- network.* # # Blacklist any of the following users or modules # # This example would blacklist all non sudo users, including root from # running any commands. It would also blacklist any use of the "cmd" # module. # This is completely disabled by default. # #client_acl_blacklist: # users: # \- root # \- \(aq^(?!sudo_).*$\(aq # all non sudo users # modules: # \- cmd # The external auth system uses the Salt auth modules to authenticate and # validate users to access areas of the Salt system. # #external_auth: # pam: # fred: # \- test.* # # Time (in seconds) for a newly generated token to live. Default: 12 hours #token_expire: 43200 # Allow minions to push files to the master. This is disabled by default, for # security purposes. #file_recv: False # Set a hard\-limit on the size of the files that can be pushed to the master. # It will be interpreted as megabytes. # Default: 100 #file_recv_max_size: 100 # Signature verification on messages published from the master. 
# This causes the master to cryptographically sign all messages published to its event # bus, and minions then verify that signature before acting on the message. # # This is False by default. # # Note that to facilitate interoperability with masters and minions that are different # versions, if sign_pub_messages is True but a message is received by a minion with # no signature, it will still be accepted, and a warning message will be logged. # Conversely, if sign_pub_messages is False, but a minion receives a signed # message it will be accepted, the signature will not be checked, and a warning message # will be logged. This behavior will go away in Salt 0.17.6 (or Hydrogen RC1, whichever # comes first) and these two situations will cause minion to throw an exception and # drop the message. # # sign_pub_messages: False ##### Master Module Management ##### ########################################## # Manage how master side modules are loaded # Add any additional locations to look for master runners #runner_dirs: [] # Enable Cython for master side modules #cython_enable: False ##### State System settings ##### ########################################## # The state system uses a "top" file to tell the minions what environment to # use and what modules to use. The state_top file is defined relative to the # root of the base environment as defined in "File Server settings" below. #state_top: top.sls # The master_tops option replaces the external_nodes option by creating # a plugable system for the generation of external top data. The external_nodes # option is deprecated by the master_tops option. # To gain the capabilities of the classic external_nodes system, use the # following configuration: # master_tops: # ext_nodes: # #master_tops: {} # The external_nodes option allows Salt to gather data that would normally be # placed in a top file. The external_nodes option is the executable that will # return the ENC data. Remember that Salt will look for external nodes AND top # files and combine the results if both are enabled! #external_nodes: None # The renderer to use on the minions to render the state data #renderer: yaml_jinja # The Jinja renderer can strip extra carriage returns and whitespace # See http://jinja.pocoo.org/docs/api/#high\-level\-api # # If this is set to True the first newline after a Jinja block is removed # (block, not variable tag!). Defaults to False, corresponds to the Jinja # environment init variable "trim_blocks". # jinja_trim_blocks: False # # If this is set to True leading spaces and tabs are stripped from the start # of a line to a block. Defaults to False, corresponds to the Jinja # environment init variable "lstrip_blocks". # jinja_lstrip_blocks: False # The failhard option tells the minions to stop immediately after the first # failure detected in the state execution, defaults to False #failhard: False # The state_verbose and state_output settings can be used to change the way # state system data is printed to the display. By default all data is printed. # The state_verbose setting can be set to True or False, when set to False # all data that has a result of True and no changes will be suppressed. #state_verbose: True # The state_output setting changes if the output is the full multi line # output for each changed state if set to \(aqfull\(aq, but if set to \(aqterse\(aq # the output will be shortened to a single line. If set to \(aqmixed\(aq, the output # will be terse unless a state failed, in which case that output will be full. 
# If set to \(aqchanges\(aq, the output will be full unless the state didn\(aqt change. #state_output: full ##### File Server settings ##### ########################################## # Salt runs a lightweight file server written in zeromq to deliver files to # minions. This file server is built into the master daemon and does not # require a dedicated port. # The file server works on environments passed to the master, each environment # can have multiple root directories, the subdirectories in the multiple file # roots cannot match, otherwise the downloaded files will not be able to be # reliably ensured. A base environment is required to house the top file. # Example: # file_roots: # base: # \- /srv/salt/ # dev: # \- /srv/salt/dev/services # \- /srv/salt/dev/states # prod: # \- /srv/salt/prod/services # \- /srv/salt/prod/states #file_roots: # base: # \- /srv/salt # The hash_type is the hash to use when discovering the hash of a file on # the master server. The default is md5, but sha1, sha224, sha256, sha384 # and sha512 are also supported. # # Prior to changing this value, the master should be stopped and all Salt # caches should be cleared. # #hash_type: md5 # The buffer size in the file server can be adjusted here: #file_buffer_size: 1048576 # A regular expression (or a list of expressions) that will be matched # against the file path before syncing the modules and states to the minions. # This includes files affected by the file.recurse state. # For example, if you manage your custom modules and states in subversion # and don\(aqt want all the \(aq.svn\(aq folders and content synced to your minions, # you could set this to \(aq/\e.svn($|/)\(aq. By default nothing is ignored. # #file_ignore_regex: # \- \(aq/\e.svn($|/)\(aq # \- \(aq/\e.git($|/)\(aq # A file glob (or list of file globs) that will be matched against the file # path before syncing the modules and states to the minions. This is similar # to file_ignore_regex above, but works on globs instead of regex. By default # nothing is ignored. # # file_ignore_glob: # \- \(aq*.pyc\(aq # \- \(aq*/somefolder/*.bak\(aq # \- \(aq*.swp\(aq # File Server Backend # Salt supports a modular fileserver backend system, this system allows # the salt master to link directly to third party systems to gather and # manage the files available to minions. Multiple backends can be # configured and will be searched for the requested file in the order in which # they are defined here. The default setting only enables the standard backend # "roots" which uses the "file_roots" option. # #fileserver_backend: # \- roots # # To use multiple backends list them in the order they are searched: # #fileserver_backend: # \- git # \- roots # # Uncomment the line below if you do not want the file_server to follow # symlinks when walking the filesystem tree. This is set to True # by default. Currently this only applies to the default roots # fileserver_backend. # #fileserver_followsymlinks: False # # Uncomment the line below if you do not want symlinks to be # treated as the files they are pointing to. By default this is set to # False. By uncommenting the line below, any detected symlink while listing # files on the Master will not be returned to the Minion. # #fileserver_ignoresymlinks: True # # By default, the Salt fileserver recurses fully into all defined environments # to attempt to find files. To limit this behavior so that the fileserver only # traverses directories with SLS files and special Salt directories like _modules, # enable the option below. 
This might be useful for installations where a file root # has a very large number of files and performance is impacted. Default is False. # # fileserver_limit_traversal: False # # The fileserver can fire events off every time the fileserver is updated, # these are disabled by default, but can be easily turned on by setting this # flag to True #fileserver_events: False # # Git fileserver backend configuration # # Gitfs can be provided by one of two python modules: GitPython or pygit2. If # using pygit2, both libgit2 and git must also be installed. #gitfs_provider: gitpython # # When using the git fileserver backend at least one git remote needs to be # defined. The user running the salt master will need read access to the repo. # # The repos will be searched in order to find the file requested by a client # and the first repo to have the file will return it. # When using the git backend branches and tags are translated into salt # environments. # Note: file:// repos will be treated as a remote, so refs you want used must # exist in that repo as *local* refs. # #gitfs_remotes: # \- git://github.com/saltstack/salt\-states.git # \- file:///var/git/saltmaster # # The gitfs_ssl_verify option specifies whether to ignore ssl certificate # errors when contacting the gitfs backend. You might want to set this to # false if you\(aqre using a git backend that uses a self\-signed certificate but # keep in mind that setting this flag to anything other than the default of True # is a security concern, you may want to try using the ssh transport. #gitfs_ssl_verify: True # # # The gitfs_root option gives the ability to serve files from a subdirectory # within the repository. The path is defined relative to the root of the # repository and defaults to the repository root. #gitfs_root: somefolder/otherfolder ##### Pillar settings ##### ########################################## # Salt Pillars allow for the building of global data that can be made selectively # available to different minions based on minion grain filtering. The Salt # Pillar is laid out in the same fashion as the file server, with environments, # a top file and sls files. However, pillar data does not need to be in the # highstate format, and is generally just key/value pairs. #pillar_roots: # base: # \- /srv/pillar #ext_pillar: # \- hiera: /etc/hiera.yaml # \- cmd_yaml: cat /etc/salt/yaml # The pillar_gitfs_ssl_verify option specifies whether to ignore ssl certificate # errors when contacting the pillar gitfs backend. You might want to set this to # false if you\(aqre using a git backend that uses a self\-signed certificate but # keep in mind that setting this flag to anything other than the default of True # is a security concern, you may want to try using the ssh transport. #pillar_gitfs_ssl_verify: True # The pillar_opts option adds the master configuration file data to a dict in # the pillar called "master". This is used to set simple configurations in the # master config file that can then be used on minions. #pillar_opts: True ##### Syndic settings ##### ########################################## # The Salt syndic is used to pass commands through a master from a higher # master. Using the syndic is simple, if this is a master that will have # syndic servers(s) below it set the "order_masters" setting to True, if this # is a master that will be running a syndic daemon for passthrough the # "syndic_master" setting needs to be set to the location of the master server # to receive commands from. 
# Set the order_masters setting to True if this master will command lower # masters\(aq syndic interfaces. #order_masters: False # If this master will be running a salt syndic daemon, syndic_master tells # this master where to receive commands from. #syndic_master: masterofmaster # This is the \(aqret_port\(aq of the MasterOfMaster #syndic_master_port: 4506 # PID file of the syndic daemon #syndic_pidfile: /var/run/salt\-syndic.pid # LOG file of the syndic daemon #syndic_log_file: syndic.log ##### Peer Publish settings ##### ########################################## # Salt minions can send commands to other minions, but only if the minion is # allowed to. By default "Peer Publication" is disabled, and when enabled it # is enabled for specific minions and specific commands. This allows secure # compartmentalization of commands based on individual minions. # The configuration uses regular expressions to match minions and then a list # of regular expressions to match functions. The following will allow the # minion authenticated as foo.example.com to execute functions from the test # and pkg modules. # #peer: # foo.example.com: # \- test.* # \- pkg.* # # This will allow all minions to execute all commands: # #peer: # .*: # \- .* # # This is not recommended, since it would allow anyone who gets root on any # single minion to instantly have root on all of the minions! # Minions can also be allowed to execute runners from the salt master. # Since executing a runner from the minion could be considered a security risk, # it needs to be enabled. This setting functions just like the peer setting # except that it opens up runners instead of module functions. # # All peer runner support is turned off by default and must be enabled before # using. This will enable all peer runners for all minions: # #peer_run: # .*: # \- .* # # To enable just the manage.up runner for the minion foo.example.com: # #peer_run: # foo.example.com: # \- manage.up ##### Mine settings ##### ########################################## # Restrict mine.get access from minions. By default any minion has a full access # to get all mine data from master cache. In acl definion below, only pcre matches # are allowed. # # mine_get: # .*: # \- .* # # Example below enables minion foo.example.com to get \(aqnetwork.interfaces\(aq mine data only # , minions web* to get all network.* and disk.* mine data and all other minions won\(aqt get # any mine data. # # mine_get: # foo.example.com: # \- network.interfaces # web.*: # \- network.* # \- disk.* ##### Logging settings ##### ########################################## # The location of the master log file # The master log can be sent to a regular file, local path name, or network # location. Remote logging works best when configured to use rsyslogd(8) (e.g.: # \(ga\(gafile:///dev/log\(ga\(ga), with rsyslogd(8) configured for network logging. The URI # format is: ://:/ #log_file: /var/log/salt/master #log_file: file:///dev/log #log_file: udp://loghost:10514 #log_file: /var/log/salt/master #key_logfile: /var/log/salt/key # The level of messages to send to the console. # One of \(aqgarbage\(aq, \(aqtrace\(aq, \(aqdebug\(aq, info\(aq, \(aqwarning\(aq, \(aqerror\(aq, \(aqcritical\(aq. #log_level: warning # The level of messages to send to the log file. # One of \(aqgarbage\(aq, \(aqtrace\(aq, \(aqdebug\(aq, info\(aq, \(aqwarning\(aq, \(aqerror\(aq, \(aqcritical\(aq. #log_level_logfile: warning # The date and time format used in log messages. 
Allowed date/time formating # can be seen here: http://docs.python.org/library/time.html#time.strftime #log_datefmt: \(aq%H:%M:%S\(aq #log_datefmt_logfile: \(aq%Y\-%m\-%d %H:%M:%S\(aq # The format of the console logging messages. Allowed formatting options can # be seen here: http://docs.python.org/library/logging.html#logrecord\-attributes #log_fmt_console: \(aq[%(levelname)\-8s] %(message)s\(aq #log_fmt_logfile: \(aq%(asctime)s,%(msecs)03.0f [%(name)\-17s][%(levelname)\-8s] %(message)s\(aq # This can be used to control logging levels more specificically. This # example sets the main salt library at the \(aqwarning\(aq level, but sets # \(aqsalt.modules\(aq to log at the \(aqdebug\(aq level: # log_granular_levels: # \(aqsalt\(aq: \(aqwarning\(aq, # \(aqsalt.modules\(aq: \(aqdebug\(aq # #log_granular_levels: {} ##### Node Groups ##### ########################################## # Node groups allow for logical groupings of minion nodes. # A group consists of a group name and a compound target. # #nodegroups: # group1: \(aqL@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com\(aq # group2: \(aqG@os:Debian and foo.domain.com\(aq ##### Range Cluster settings ##### ########################################## # The range server (and optional port) that serves your cluster information # https://github.com/grierj/range/wiki/Introduction\-to\-Range\-with\-YAML\-files # #range_server: range:80 ##### Windows Software Repo settings ##### ############################################## # Location of the repo on the master #win_repo: \(aq/srv/salt/win/repo\(aq # Location of the master\(aqs repo cache file #win_repo_mastercachefile: \(aq/srv/salt/win/repo/winrepo.p\(aq # List of git repositories to include with the local repo #win_gitrepos: # \- \(aqhttps://github.com/saltstack/salt\-winrepo.git\(aq .ft P .fi .SS Example minion configuration file .sp .nf .ft C ##### Primary configuration settings ##### ########################################## # Per default the minion will automatically include all config files # from minion.d/*.conf (minion.d is a directory in the same directory # as the main minion config file). #default_include: minion.d/*.conf # Set the location of the salt master server, if the master server cannot be # resolved, then the minion will fail to start. #master: salt # If multiple masters are specified in the \(aqmaster\(aq setting, the default behavior # is to always try to connect to them in the order they are listed. If random_master is # set to True, the order will be randomized instead. This can be helpful in distributing # the load of many minions executing salt\-call requests, for example from a cron job. # If only one master is listed, this setting is ignored and a warning will be logged. #random_master: False # Set whether the minion should connect to the master via IPv6 #ipv6: False # Set the number of seconds to wait before attempting to resolve # the master hostname if name resolution fails. Defaults to 30 seconds. # Set to zero if the minion should shutdown and not retry. # retry_dns: 30 # Set the port used by the master reply and authentication server #master_port: 4506 # The user to run salt #user: root # Specify the location of the daemon process ID file #pidfile: /var/run/salt\-minion.pid # The root directory prepended to these options: pki_dir, cachedir, log_file, # sock_dir, pidfile. 
#root_dir: / # The directory to store the pki information in #pki_dir: /etc/salt/pki/minion # Explicitly declare the id for this minion to use, if left commented the id # will be the hostname as returned by the python call: socket.getfqdn() # Since salt uses detached ids it is possible to run multiple minions on the # same machine but with different ids, this can be useful for salt compute # clusters. #id: # Append a domain to a hostname in the event that it does not exist. This is # useful for systems where socket.getfqdn() does not actually result in a # FQDN (for instance, Solaris). #append_domain: # Custom static grains for this minion can be specified here and used in SLS # files just like all other grains. This example sets 4 custom grains, with # the \(aqroles\(aq grain having two values that can be matched against: #grains: # roles: # \- webserver # \- memcache # deployment: datacenter4 # cabinet: 13 # cab_u: 14\-15 # Where cache data goes #cachedir: /var/cache/salt/minion # Verify and set permissions on configuration directories at startup #verify_env: True # The minion can locally cache the return data from jobs sent to it, this # can be a good way to keep track of jobs the minion has executed # (on the minion side). By default this feature is disabled, to enable # set cache_jobs to True #cache_jobs: False # set the directory used to hold unix sockets #sock_dir: /var/run/salt/minion # Set the default outputter used by the salt\-call command. The default is # "nested" #output: nested # # By default output is colored, to disable colored output set the color value # to False #color: True # Do not strip off the colored output from nested results and states outputs # (true by default) # strip_colors: false # Backup files that are replaced by file.managed and file.recurse under # \(aqcachedir\(aq/file_backups relative to their original location and appended # with a timestamp. The only valid setting is "minion". Disabled by default. # # Alternatively this can be specified for each file in state files: # # /etc/ssh/sshd_config: # file.managed: # \- source: salt://ssh/sshd_config # \- backup: minion # #backup_mode: minion # When waiting for a master to accept the minion\(aqs public key, salt will # continuously attempt to reconnect until successful. This is the time, in # seconds, between those reconnection attempts. #acceptance_wait_time: 10 # If this is nonzero, the time between reconnection attempts will increase by # acceptance_wait_time seconds per iteration, up to this maximum. If this is # set to zero, the time between reconnection attempts will stay constant. #acceptance_wait_time_max: 0 # If the master rejects the minion\(aqs public key, retry instead exiting. Rejected keys # # will be handled the same as waiting on acceptance. #rejected_retry: False # When the master key changes, the minion will try to re\-auth itself to receive # the new master key. In larger environments this can cause a SYN flood on the # master because all minions try to re\-auth immediately. To prevent this and # have a minion wait for a random amount of time, use this optional parameter. # The wait\-time will be a random number of seconds between # 0 and the defined value. #random_reauth_delay: 60 # When waiting for a master to accept the minion\(aqs public key, salt will # continuously attempt to reconnect until successful. This is the timeout value, # in seconds, for each individual attempt. After this timeout expires, the minion # will wait for acceptance_wait_time seconds before trying again. 
# Unless your master is under unusually heavy load, this should be left at the default. #auth_timeout: 60 # Number of consecutive SaltReqTimeoutError that are acceptable when trying to authenticate. #auth_tries: 1 # If authentication failes due to SaltReqTimeoutError, continue without ending minion. #auth_safemode: True # If the minion hits an error that is recoverable, restart the minion. #restart_on_error: False # Ping Master to ensure connection is alive (minutes). # TODO: perhaps could update the scheduler to raise Exception in main thread after /mine_interval (60 minutes)/ fails #ping_interval: 0 # To auto recover Minions if Master changes IP address (DDNS) # # auth_tries: 10 # auth_safemode: False # ping_interval: 90 # restart_on_error: True # # Minions wont know master is missing untill a ping fails. After the ping fail, # the minion will attempt authentication and likly fails out and cause a restart. # When the minion restarts it will resolve the Masters IP and attempt to reconnect. # If you don\(aqt have any problems with syn\-floods, dont bother with the # three recon_* settings described below, just leave the defaults! # # The ZeroMQ pull\-socket that binds to the masters publishing interface tries # to reconnect immediately, if the socket is disconnected (for example if # the master processes are restarted). In large setups this will have all # minions reconnect immediately which might flood the master (the ZeroMQ\-default # is usually a 100ms delay). To prevent this, these three recon_* settings # can be used. # # recon_default: the interval in milliseconds that the socket should wait before # trying to reconnect to the master (1000ms = 1 second) # # recon_max: the maximum time a socket should wait. each interval the time to wait # is calculated by doubling the previous time. if recon_max is reached, # it starts again at recon_default. Short example: # # reconnect 1: the socket will wait \(aqrecon_default\(aq milliseconds # reconnect 2: \(aqrecon_default\(aq * 2 # reconnect 3: (\(aqrecon_default\(aq * 2) * 2 # reconnect 4: value from previous interval * 2 # reconnect 5: value from previous interval * 2 # reconnect x: if value >= recon_max, it starts again with recon_default # # recon_randomize: generate a random wait time on minion start. The wait time will # be a random value between recon_default and recon_default + # recon_max. Having all minions reconnect with the same recon_default # and recon_max value kind of defeats the purpose of being able to # change these settings. If all minions have the same values and your # setup is quite large (several thousand minions), they will still # flood the master. The desired behaviour is to have timeframe within # all minions try to reconnect. # Example on how to use these settings: # The goal: have all minions reconnect within a 60 second timeframe on a disconnect # # The settings: #recon_default: 1000 #recon_max: 59000 #recon_randomize: True # # Each minion will have a randomized reconnect value between \(aqrecon_default\(aq # and \(aqrecon_default + recon_max\(aq, which in this example means between 1000ms # 60000ms (or between 1 and 60 seconds). The generated random\-value will be # doubled after each attempt to reconnect. Lets say the generated random # value is 11 seconds (or 11000ms). 
# # reconnect 1: wait 11 seconds # reconnect 2: wait 22 seconds # reconnect 3: wait 33 seconds # reconnect 4: wait 44 seconds # reconnect 5: wait 55 seconds # reconnect 6: wait time is bigger than 60 seconds (recon_default + recon_max) # reconnect 7: wait 11 seconds # reconnect 8: wait 22 seconds # reconnect 9: wait 33 seconds # reconnect x: etc. # # In a setup with ~6000 thousand hosts these settings would average the reconnects # to about 100 per second and all hosts would be reconnected within 60 seconds. #recon_default: 100 #recon_max: 5000 #recon_randomize: False # The loop_interval sets how long in seconds the minion will wait between # evaluating the scheduler and running cleanup tasks. This defaults to a # sane 60 seconds, but if the minion scheduler needs to be evaluated more # often lower this value #loop_interval: 60 # The grains_refresh_every setting allows for a minion to periodically check # its grains to see if they have changed and, if so, to inform the master # of the new grains. This operation is moderately expensive, therefore # care should be taken not to set this value too low. # # Note: This value is expressed in __minutes__! # # A value of 10 minutes is a reasonable default. # # If the value is set to zero, this check is disabled. #grains_refresh_every: 1 # Cache grains on the minion. Default is False. # grains_cache: False # Grains cache expiration, in seconds. If the cache file is older than this # number of seconds then the grains cache will be dumped and fully re\-populated # with fresh data. Defaults to 5 minutes. Will have no effect if \(aqgrains_cache\(aq # is not enabled. # grains_cache_expiration: 300 # When healing, a dns_check is run. This is to make sure that the originally # resolved dns has not changed. If this is something that does not happen in # your environment, set this value to False. #dns_check: True # Windows platforms lack posix IPC and must rely on slower TCP based inter\- # process communications. Set ipc_mode to \(aqtcp\(aq on such systems #ipc_mode: ipc # # Overwrite the default tcp ports used by the minion when in tcp mode #tcp_pub_port: 4510 #tcp_pull_port: 4511 # Passing very large events can cause the minion to consume large amounts of # memory. This value tunes the maximum size of a message allowed onto the # minion event bus. The value is expressed in bytes. #max_event_size: 1048576 # The minion can include configuration from other files. To enable this, # pass a list of paths to this option. The paths can be either relative or # absolute; if relative, they are considered to be relative to the directory # the main minion configuration file lives in (this file). Paths can make use # of shell\-style globbing. If no files are matched by a path passed to this # option then the minion will log a warning message. # # # Include a config file from some other path: # include: /etc/salt/extra_config # # Include config from several files and directories: #include: # \- /etc/salt/extra_config # \- /etc/roles/webserver ##### Minion module management ##### ########################################## # Disable specific modules. This allows the admin to limit the level of # access the master has to the minion #disable_modules: [cmd,test] #disable_returners: [] # # Modules can be loaded from arbitrary paths. This enables the easy deployment # of third party modules. Modules for returners and minions can be loaded. # Specify a list of extra directories to search for minion modules and # returners. These paths must be fully qualified! 
#module_dirs: [] #returner_dirs: [] #states_dirs: [] #render_dirs: [] #utils_dirs: [] # # A module provider can be statically overwritten or extended for the minion # via the providers option, in this case the default module will be # overwritten by the specified module. In this example the pkg module will # be provided by the yumpkg5 module instead of the system default. # #providers: # pkg: yumpkg5 # # Enable Cython modules searching and loading. (Default: False) #cython_enable: False # # # # Specify a max size (in bytes) for modules on import # this feature is currently only supported on *nix OSs and requires psutil # modules_max_memory: \-1 ##### State Management Settings ##### ########################################### # The state management system executes all of the state templates on the minion # to enable more granular control of system state management. The type of # template and serialization used for state management needs to be configured # on the minion, the default renderer is yaml_jinja. This is a yaml file # rendered from a jinja template, the available options are: # yaml_jinja # yaml_mako # yaml_wempy # json_jinja # json_mako # json_wempy # #renderer: yaml_jinja # # The failhard option tells the minions to stop immediately after the first # failure detected in the state execution, defaults to False #failhard: False # # autoload_dynamic_modules Turns on automatic loading of modules found in the # environments on the master. This is turned on by default, to turn of # autoloading modules when states run set this value to False #autoload_dynamic_modules: True # # clean_dynamic_modules keeps the dynamic modules on the minion in sync with # the dynamic modules on the master, this means that if a dynamic module is # not on the master it will be deleted from the minion. By default this is # enabled and can be disabled by changing this value to False #clean_dynamic_modules: True # # Normally the minion is not isolated to any single environment on the master # when running states, but the environment can be isolated on the minion side # by statically setting it. Remember that the recommended way to manage # environments is to isolate via the top file. #environment: None # # If using the local file directory, then the state top file name needs to be # defined, by default this is top.sls. #state_top: top.sls # # Run states when the minion daemon starts. To enable, set startup_states to: # \(aqhighstate\(aq \-\- Execute state.highstate # \(aqsls\(aq \-\- Read in the sls_list option and execute the named sls files # \(aqtop\(aq \-\- Read top_file option and execute based on that file on the Master #startup_states: \(aq\(aq # # list of states to run when the minion starts up if startup_states is \(aqsls\(aq #sls_list: # \- edit.vim # \- hyper # # top file to execute if startup_states is \(aqtop\(aq #top_file: \(aq\(aq ##### File Directory Settings ##### ########################################## # The Salt Minion can redirect all file server operations to a local directory, # this allows for the same state tree that is on the master to be used if # copied completely onto the minion. This is a literal copy of the settings on # the master but used to reference a local directory on the minion. # Set the file client. The client defaults to looking on the master server for # files, but can be directed to look at the local file directory setting # defined below by setting it to local. 
#file_client: remote # The file directory works on environments passed to the minion, each environment # can have multiple root directories, the subdirectories in the multiple file # roots cannot match, otherwise the downloaded files will not be able to be # reliably ensured. A base environment is required to house the top file. # Example: # file_roots: # base: # \- /srv/salt/ # dev: # \- /srv/salt/dev/services # \- /srv/salt/dev/states # prod: # \- /srv/salt/prod/services # \- /srv/salt/prod/states # #file_roots: # base: # \- /srv/salt # By default, the Salt fileserver recurses fully into all defined environments # to attempt to find files. To limit this behavior so that the fileserver only # traverses directories with SLS files and special Salt directories like _modules, # enable the option below. This might be useful for installations where a file root # has a very large number of files and performance is negatively impacted. # # Default is False. # # fileserver_limit_traversal: False # The hash_type is the hash to use when discovering the hash of a file in # the local fileserver. The default is md5, but sha1, sha224, sha256, sha384 # and sha512 are also supported. # # Warning: Prior to changing this value, the minion should be stopped and all # Salt caches should be cleared. # #hash_type: md5 # The Salt pillar is searched for locally if file_client is set to local. If # this is the case, and pillar data is defined, then the pillar_roots need to # also be configured on the minion: #pillar_roots: # base: # \- /srv/pillar ###### Security settings ##### ########################################### # Enable "open mode", this mode still maintains encryption, but turns off # authentication, this is only intended for highly secure environments or for # the situation where your keys end up in a bad state. If you run in open mode # you do so at your own risk! #open_mode: False # Enable permissive access to the salt keys. This allows you to run the # master or minion as root, but have a non\-root group be given access to # your pki_dir. To make the access explicit, root must belong to the group # you\(aqve given access to. This is potentially quite insecure. #permissive_pki_access: False # The state_verbose and state_output settings can be used to change the way # state system data is printed to the display. By default all data is printed. # The state_verbose setting can be set to True or False, when set to False # all data that has a result of True and no changes will be suppressed. #state_verbose: True # # The state_output setting changes if the output is the full multi line # output for each changed state if set to \(aqfull\(aq, but if set to \(aqterse\(aq # the output will be shortened to a single line. #state_output: full # # Fingerprint of the master public key to double verify the master is valid, # the master fingerprint can be found by running "salt\-key \-F master" on the # salt master. #master_finger: \(aq\(aq ###### Thread settings ##### ########################################### # Disable multiprocessing support, by default when a minion receives a # publication a new process is spawned and the command is executed therein. #multiprocessing: True ##### Logging settings ##### ########################################## # The location of the minion log file # The minion log can be sent to a regular file, local path name, or network # location. 
Remote logging works best when configured to use rsyslogd(8) (e.g.: # \(ga\(gafile:///dev/log\(ga\(ga), with rsyslogd(8) configured for network logging. The URI # format is: <file|udp|tcp>://<host|socketpath>:<port\-if\-required>/<log\-facility> #log_file: /var/log/salt/minion #log_file: file:///dev/log #log_file: udp://loghost:10514 # #log_file: /var/log/salt/minion #key_logfile: /var/log/salt/key # # The level of messages to send to the console. # One of \(aqgarbage\(aq, \(aqtrace\(aq, \(aqdebug\(aq, \(aqinfo\(aq, \(aqwarning\(aq, \(aqerror\(aq, \(aqcritical\(aq. # Default: \(aqwarning\(aq #log_level: warning # # The level of messages to send to the log file. # One of \(aqgarbage\(aq, \(aqtrace\(aq, \(aqdebug\(aq, \(aqinfo\(aq, \(aqwarning\(aq, \(aqerror\(aq, \(aqcritical\(aq. # Default: \(aqwarning\(aq #log_level_logfile: # The date and time format used in log messages. Allowed date/time formatting # can be seen here: http://docs.python.org/library/time.html#time.strftime #log_datefmt: \(aq%H:%M:%S\(aq #log_datefmt_logfile: \(aq%Y\-%m\-%d %H:%M:%S\(aq # # The format of the console logging messages. Allowed formatting options can # be seen here: http://docs.python.org/library/logging.html#logrecord\-attributes #log_fmt_console: \(aq[%(levelname)\-8s] %(message)s\(aq #log_fmt_logfile: \(aq%(asctime)s,%(msecs)03.0f [%(name)\-17s][%(levelname)\-8s] %(message)s\(aq # # This can be used to control logging levels more specifically. This # example sets the main salt library at the \(aqwarning\(aq level, but sets # \(aqsalt.modules\(aq to log at the \(aqdebug\(aq level: # log_granular_levels: # \(aqsalt\(aq: \(aqwarning\(aq, # \(aqsalt.modules\(aq: \(aqdebug\(aq # #log_granular_levels: {} ###### Module configuration ##### ########################################### # Salt allows for modules to be passed arbitrary configuration data, any data # passed here in valid yaml format will be passed on to the salt minion modules # for use. It is STRONGLY recommended that a naming convention be used in which # the module name is followed by a . and then the value. Also, all top level # data must be applied via the yaml dict construct, some examples: # # You can specify that all modules should run in test mode: #test: True # # A simple value for the test module: #test.foo: foo # # A list for the test module: #test.bar: [baz,quo] # # A dict for the test module: #test.baz: {spam: sausage, cheese: bread} ###### Update settings ###### ########################################### # Using the features in Esky, a salt minion can both run as a frozen app and # be updated on the fly. These options control how the update process # (saltutil.update()) behaves. # # The URL for finding and downloading updates. Disabled by default. #update_url: False # # The list of services to restart after a successful update. Empty by default. #update_restart_services: [] ###### Keepalive settings ###### ############################################ # ZeroMQ now includes support for configuring SO_KEEPALIVE if supported by # the OS. If connections between the minion and the master pass through # a state tracking device such as a firewall or VPN gateway, there is # the risk that it could tear down the connection between the master and minion # without informing either party that their connection has been taken away. # Enabling TCP Keepalives prevents this from happening. # # Overall state of TCP Keepalives, enable (1 or True), disable (0 or False) # or leave to the OS defaults (\-1), on Linux, typically disabled. Default True, enabled.
#tcp_keepalive: True # # How long before the first keepalive should be sent in seconds. Default 300 # to send the first keepalive after 5 minutes, OS default (\-1) is typically 7200 seconds # on Linux see /proc/sys/net/ipv4/tcp_keepalive_time. #tcp_keepalive_idle: 300 # # How many lost probes are needed to consider the connection lost. Default \-1 # to use OS defaults, typically 9 on Linux, see /proc/sys/net/ipv4/tcp_keepalive_probes. #tcp_keepalive_cnt: \-1 # # How often, in seconds, to send keepalives after the first one. Default \-1 to # use OS defaults, typically 75 seconds on Linux, see # /proc/sys/net/ipv4/tcp_keepalive_intvl. #tcp_keepalive_intvl: \-1 ###### Windows Software settings ###### ############################################ # Location of the repository cache file on the master #win_repo_cachefile: \(aqsalt://win/repo/winrepo.p\(aq .ft P .fi .SS Configuring Salt .sp Salt configuration is very simple. The default configuration for the \fImaster\fP will work for most installations and the only requirement for setting up a \fIminion\fP is to set the location of the master in the minion configuration file. .sp The configuration files will be installed to \fB/etc/salt\fP and are named after the respective components, \fB/etc/salt/master\fP and \fB/etc/salt/minion\fP. .SS Master Configuration .sp By default the Salt master listens on ports 4505 and 4506 on all interfaces (0.0.0.0). To bind Salt to a specific IP, redefine the "interface" directive in the master configuration file, typically \fB/etc/salt/master\fP, as follows: .sp .nf .ft C \- #interface: 0.0.0.0 + interface: 10.0.0.1 .ft P .fi .sp After updating the configuration file, restart the Salt master. See the \fBmaster configuration reference\fP for more details about other configurable options. .SS Minion Configuration .sp Although there are many Salt Minion configuration options, configuring a Salt Minion is very simple. By default a Salt Minion will try to connect to the DNS name "salt"; if the Minion is able to resolve that name correctly, no configuration is needed. .sp If the DNS name "salt" does not resolve to point to the correct location of the Master, redefine the "master" directive in the minion configuration file, typically \fB/etc/salt/minion\fP, as follows: .sp .nf .ft C \- #master: salt + master: 10.0.0.1 .ft P .fi .sp After updating the configuration file, restart the Salt minion. See the \fBminion configuration reference\fP for more details about other configurable options. .SS Running Salt .INDENT 0.0 .IP 1. 3 Start the master in the foreground (to daemonize the process, pass the \fI\-d flag\fP): .sp .nf .ft C salt\-master .ft P .fi .IP 2. 3 Start the minion in the foreground (to daemonize the process, pass the \fI\-d flag\fP): .sp .nf .ft C salt\-minion .ft P .fi .UNINDENT .IP "Having trouble?" .sp The simplest way to troubleshoot Salt is to run the master and minion in the foreground with \fIlog level\fP set to \fBdebug\fP: .sp .nf .ft C salt\-master \-\-log\-level=debug .ft P .fi .sp For information on salt\(aqs logging system please see the \fBlogging document\fP. .RE .IP "Run as an unprivileged (non\-root) user" .sp To run Salt as another user, set the \fBuser\fP parameter in the master config file. 
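.sp For example, a minimal sketch assuming a hypothetical, pre\-created \fBsaltuser\fP account would be to set the following in the master config file: .sp .nf .ft C user: saltuser .ft P .fi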
.sp Additionally, ownership and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable): .INDENT 0.0 .IP \(bu 2 /etc/salt .IP \(bu 2 /var/cache/salt .IP \(bu 2 /var/log/salt .IP \(bu 2 /var/run/salt .UNINDENT .sp More information about running salt as a non\-privileged user can be found \fBhere\fP. .RE .sp There is also a full \fBtroubleshooting guide\fP available. .SS Key Management .sp Salt uses AES encryption for all communication between the Master and the Minion. This ensures that the commands sent to the Minions cannot be tampered with, and that communication between Master and Minion is authenticated through trusted, accepted keys. .sp Before commands can be sent to a Minion, its key must be accepted on the Master. Run the \fBsalt\-key\fP command to list the keys known to the Salt Master: .sp .nf .ft C [root@master ~]# salt\-key \-L Unaccepted Keys: alpha bravo charlie delta Accepted Keys: .ft P .fi .sp This example shows that the Salt Master is aware of four Minions, but none of the keys has been accepted. To accept the keys and allow the Minions to be controlled by the Master, again use the \fBsalt\-key\fP command: .sp .nf .ft C [root@master ~]# salt\-key \-A [root@master ~]# salt\-key \-L Unaccepted Keys: Accepted Keys: alpha bravo charlie delta .ft P .fi .sp The \fBsalt\-key\fP command allows for signing keys individually or in bulk. The example above, using \fB\-A\fP bulk\-accepts all pending keys. To accept keys individually use the lowercase of the same option, \fB\-a keyname\fP. .IP "See also" .sp \fBsalt\-key manpage\fP .RE .SS Sending Commands .sp Communication between the Master and a Minion may be verified by running the \fBtest.ping\fP command: .sp .nf .ft C [root@master ~]# salt alpha test.ping alpha: True .ft P .fi .sp Communication between the Master and all Minions may be tested in a similar way: .sp .nf .ft C [root@master ~]# salt \(aq*\(aq test.ping alpha: True bravo: True charlie: True delta: True .ft P .fi .sp Each of the Minions should send a \fBTrue\fP response as shown above. .SS What\(aqs Next? .sp Understanding \fBtargeting\fP is important. From there, depending on the way you wish to use Salt, you should also proceed to learn about \fBStates\fP and \fBExecution Modules\fP. .SS Configuring the Salt Master .sp The Salt system is amazingly simple and easy to configure, the two components of the Salt system each have a respective configuration file. The \fBsalt\-master\fP is configured via the master configuration file, and the \fBsalt\-minion\fP is configured via the minion configuration file. .IP "See also" .sp \fIexample master configuration file\fP .RE .sp The configuration file for the salt\-master is located at \fB/etc/salt/master\fP. The available options are as follows: .SS Primary Master Configuration .SS \fBinterface\fP .sp Default: \fB0.0.0.0\fP (all interfaces) .sp The local interface to bind to. .sp .nf .ft C interface: 192.168.0.1 .ft P .fi .SS \fBipv6\fP .sp Default: \fBFalse\fP .sp Whether the master should listen for IPv6 connections. 
If this is set to True, the interface option must be adjusted too (for example: "interface: \(aq::\(aq") .sp .nf .ft C ipv6: True .ft P .fi .SS \fBpublish_port\fP .sp Default: \fB4505\fP .sp The network port to set up the publication interface. .sp .nf .ft C publish_port: 4505 .ft P .fi .SS \fBuser\fP .sp Default: \fBroot\fP .sp The user to run the Salt processes. .sp .nf .ft C user: root .ft P .fi .SS \fBmax_open_files\fP .sp Default: the hard limit reported by \fIulimit \-Hn\fP .sp Each minion connecting to the master uses AT LEAST one file descriptor, the master subscription connection. If enough minions connect you might start seeing the following on the console (and then salt\-master crashes): .sp .nf .ft C Too many open files (tcp_listener.cpp:335) Aborted (core dumped) .ft P .fi .sp By default this value will be the one of \fIulimit \-Hn\fP, i.e., the hard limit for max open files. .sp If you wish to set a different value than the default one, uncomment and configure this setting. Remember that this value CANNOT be higher than the hard limit. Raising the hard limit depends on your OS and/or distribution; a good way to find the limit is to search the internet for it (for example): .sp .nf .ft C raise max open files hard limit debian .ft P .fi .sp .nf .ft C max_open_files: 100000 .ft P .fi .SS \fBworker_threads\fP .sp Default: \fB5\fP .sp The number of threads to start for receiving commands and replies from minions. If minions are stalling on replies because you have many minions, raise the worker_threads value. .sp Worker threads should not be put below 3 when using the peer system, but can drop down to 1 worker otherwise. .sp .nf .ft C worker_threads: 5 .ft P .fi .SS \fBret_port\fP .sp Default: \fB4506\fP .sp The port used by the return server; this is the server used by Salt to receive execution returns and command executions. .sp .nf .ft C ret_port: 4506 .ft P .fi .SS \fBpidfile\fP .sp Default: \fB/var/run/salt\-master.pid\fP .sp Specify the location of the master pidfile. .sp .nf .ft C pidfile: /var/run/salt\-master.pid .ft P .fi .SS \fBroot_dir\fP .sp Default: \fB/\fP .sp The system root directory to operate from; change this to make Salt run from an alternative root. .sp .nf .ft C root_dir: / .ft P .fi .IP Note This directory is prepended to the following options: \fBpki_dir\fP, \fBcachedir\fP, \fBsock_dir\fP, \fBlog_file\fP, \fBautosign_file\fP, \fBautoreject_file\fP, \fBpidfile\fP. .RE .SS \fBpki_dir\fP .sp Default: \fB/etc/salt/pki\fP .sp The directory to store the pki authentication keys. .sp .nf .ft C pki_dir: /etc/salt/pki .ft P .fi .SS \fBextension_modules\fP .sp Directory for custom modules. This directory can contain subdirectories for each of Salt\(aqs module types such as "runners", "output", "wheel", "modules", "states", "returners", etc. This path is appended to \fBroot_dir\fP. .sp .nf .ft C extension_modules: srv/modules .ft P .fi .SS \fBcachedir\fP .sp Default: \fB/var/cache/salt\fP .sp The location used to store cache information, particularly the job information for executed salt commands. .sp .nf .ft C cachedir: /var/cache/salt .ft P .fi .SS \fBverify_env\fP .sp Default: \fBTrue\fP .sp Verify and set permissions on configuration directories at startup. .sp .nf .ft C verify_env: True .ft P .fi .SS \fBkeep_jobs\fP .sp Default: \fB24\fP .sp Set the number of hours to keep old job information. .SS \fBtimeout\fP .sp Default: \fB5\fP .sp Set the default timeout for the salt command and API.
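.sp For example, to state the documented default of 5 seconds explicitly in the master config: .sp .nf .ft C timeout: 5 .ft P .fi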
.SS \fBloop_interval\fP .sp Default: \fB60\fP .sp The loop_interval option controls the number of seconds between the master\(aqs maintenance process check cycles. This process updates file server backends, cleans the job cache and executes the scheduler. .SS \fBoutput\fP .sp Default: \fBnested\fP .sp Set the default outputter used by the salt command. .SS \fBcolor\fP .sp Default: \fBTrue\fP .sp By default output is colored; to disable colored output, set the color value to False. .sp .nf .ft C color: False .ft P .fi .SS \fBsock_dir\fP .sp Default: \fB/var/run/salt/master\fP .sp Set the location to use for creating Unix sockets for master process communication. .sp .nf .ft C sock_dir: /var/run/salt/master .ft P .fi .SS \fBenable_gpu_grains\fP .sp Default: \fBFalse\fP .sp The master can take a while to start up when lspci and/or dmidecode is used to populate the grains for the master. Enable if you want to see GPU hardware data for your master. .SS \fBjob_cache\fP .sp Default: \fBTrue\fP .sp The master maintains a job cache. While this is a great addition, it can be a burden on the master for larger deployments (over 5000 minions). Disabling the job cache will make previously executed jobs unavailable to the jobs system and is not generally recommended. Normally it is wise to make sure the master has access to a faster IO system, or that a tmpfs is mounted to the jobs dir. .SS \fBminion_data_cache\fP .sp Default: \fBTrue\fP .sp The minion data cache is a cache of information about the minions stored on the master; this information is primarily the pillar and grains data. The data is cached in the Master cachedir under the name of the minion and used to predetermine which minions are expected to reply to executions. .sp .nf .ft C minion_data_cache: True .ft P .fi .SS \fBext_job_cache\fP .sp Default: \fB\(aq\(aq\fP .sp Used to specify a default returner for all minions. When this option is set, the specified returner needs to be properly configured and the minions will always default to sending returns to this returner. This will also disable the local job cache on the master. .sp .nf .ft C ext_job_cache: redis .ft P .fi .SS \fBenforce_mine_cache\fP .sp Default: False .sp By default, disabling the minion_data_cache also stops the mine from working, since the mine is based on cached data. Enabling this option explicitly keeps the cache for the mine system only. .sp .nf .ft C enforce_mine_cache: False .ft P .fi .SS \fBmax_minions\fP .sp Default: 0 .sp The number of minions the master should allow to connect. Use this to accommodate the number of minions per master if you have different types of hardware serving your minions. The default of \fB0\fP means unlimited connections. Please note that this can slow down the authentication process a bit in large setups. .sp .nf .ft C max_minions: 100 .ft P .fi .SS \fBpresence_events\fP .sp Default: False .sp When enabled, the master regularly sends events of currently connected, lost and newly connected minions on the eventbus. .sp .nf .ft C presence_events: False .ft P .fi .SS Master Security Settings .SS \fBopen_mode\fP .sp Default: \fBFalse\fP .sp Open mode is a dangerous security feature. One problem encountered with pki authentication systems is that keys can become "mixed up" and authentication begins to fail. Open mode turns off authentication and tells the master to accept all authentication. This will clean up the pki keys received from the minions. Open mode should not be turned on for general use.
Open mode should only be used for a short period of time to clean up pki keys. To turn on open mode set this value to \fBTrue\fP. .sp .nf .ft C open_mode: False .ft P .fi .SS \fBauto_accept\fP .sp Default: \fBFalse\fP .sp Enable auto_accept. This setting will automatically accept all incoming public keys from minions. .sp .nf .ft C auto_accept: False .ft P .fi .SS \fBautosign_timeout\fP .sp New in version Helium. .sp Default: \fB120\fP .sp Time in minutes that a incoming public key with a matching name found in pki_dir/minion_autosign/keyid is automatically accepted. Expired autosign keys are removed when the master checks the minion_autosign directory. This method to auto accept minions can be safer than an autosign_file because the keyid record can expire and is limited to being an exact name match. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id. .SS \fBautosign_file\fP .sp Default: \fBnot defined\fP .sp If the \fBautosign_file\fP is specified incoming keys specified in the autosign_file will be automatically accepted. Matches will be searched for first by string comparison, then by globbing, then by full\-string regex matching. This should still be considered a less than secure option, due to the fact that trust is based on just the requesting minion id. .SS \fBautoreject_file\fP .sp New in version 2014.1.0: (Hydrogen) .sp Default: \fBnot defined\fP .sp Works like \fBautosign_file\fP, but instead allows you to specify minion IDs for which keys will automatically be rejected. Will override both membership in the \fBautosign_file\fP and the \fBauto_accept\fP setting. .SS \fBclient_acl\fP .sp Default: \fB{}\fP .sp Enable user accounts on the master to execute specific modules. These modules can be expressed as regular expressions .sp .nf .ft C client_acl: fred: \- test.ping \- pkg.* .ft P .fi .SS \fBclient_acl_blacklist\fP .sp Default: \fB{}\fP .sp Blacklist users or modules .sp This example would blacklist all non sudo users, including root from running any commands. It would also blacklist any use of the "cmd" module. .sp This is completely disabled by default. .sp .nf .ft C client_acl_blacklist: users: \- root \- \(aq^(?!sudo_).*$\(aq # all non sudo users modules: \- cmd .ft P .fi .SS \fBexternal_auth\fP .sp Default: \fB{}\fP .sp The external auth system uses the Salt auth modules to authenticate and validate users to access areas of the Salt system. .sp .nf .ft C external_auth: pam: fred: \- test.* .ft P .fi .SS \fBtoken_expire\fP .sp Default: \fB43200\fP .sp Time (in seconds) for a newly generated token to live. Default: 12 hours .sp .nf .ft C token_expire: 43200 .ft P .fi .SS \fBfile_recv\fP .sp Default: \fBFalse\fP .sp Allow minions to push files to the master. This is disabled by default, for security purposes. .sp .nf .ft C file_recv: False .ft P .fi .SS Master Module Management .SS \fBrunner_dirs\fP .sp Default: \fB[]\fP .sp Set additional directories to search for runner modules .SS \fBcython_enable\fP .sp Default: \fBFalse\fP .sp Set to true to enable Cython modules (.pyx files) to be compiled on the fly on the Salt master .sp .nf .ft C cython_enable: False .ft P .fi .SS Master State System Settings .SS \fBstate_top\fP .sp Default: \fBtop.sls\fP .sp The state system uses a "top" file to tell the minions what environment to use and what modules to use. 
The state_top file is defined relative to the root of the base environment. .sp .nf .ft C state_top: top.sls .ft P .fi .SS \fBmaster_tops\fP .sp Default: \fB{}\fP .sp The master_tops option replaces the external_nodes option by creating a pluggable system for the generation of external top data. The external_nodes option is deprecated by the master_tops option. To gain the capabilities of the classic external_nodes system, use the following configuration: .sp .nf .ft C master_tops: ext_nodes: .ft P .fi .SS \fBexternal_nodes\fP .sp Default: None .sp The external_nodes option allows Salt to gather data that would normally be placed in a top file from an external node controller. The external_nodes option is the executable that will return the ENC data. Remember that Salt will look for external nodes AND top files and combine the results if both are enabled and available! .sp .nf .ft C external_nodes: cobbler\-ext\-nodes .ft P .fi .SS \fBrenderer\fP .sp Default: \fByaml_jinja\fP .sp The renderer to use on the minions to render the state data. .sp .nf .ft C renderer: yaml_jinja .ft P .fi .SS \fBfailhard\fP .sp Default: \fBFalse\fP .sp Set the global failhard flag; this informs all states to stop running states at the moment a single state fails. .sp .nf .ft C failhard: False .ft P .fi .SS \fBstate_verbose\fP .sp Default: \fBTrue\fP .sp Controls the verbosity of state runs. By default, the results of all states are returned, but setting this value to \fBFalse\fP will cause salt to only display output for states which either failed or made changes to the minion. .sp .nf .ft C state_verbose: False .ft P .fi .SS \fBstate_output\fP .sp Default: \fBfull\fP .sp The state_output setting controls whether the output for each changed state is the full multi\-line output (when set to \(aqfull\(aq) or shortened to a single line (when set to \(aqterse\(aq). If set to \(aqmixed\(aq, the output will be terse unless a state failed, in which case that output will be full. If set to \(aqchanges\(aq, the output will be full unless the state didn\(aqt change. .sp .nf .ft C state_output: full .ft P .fi .SS \fByaml_utf8\fP .sp Default: \fBFalse\fP .sp Enable extra routines for the YAML renderer used in states containing UTF\-8 characters. .sp .nf .ft C yaml_utf8: False .ft P .fi .SS \fBtest\fP .sp Default: \fBFalse\fP .sp Set all state calls to only test whether they are going to actually make changes, and just report the changes that would be made. .sp .nf .ft C test: False .ft P .fi .SS Master File Server Settings .SS \fBfileserver_backend\fP .sp Default: .sp .nf .ft C fileserver_backend: \- roots .ft P .fi .sp Salt supports a modular fileserver backend system; this system allows the salt master to link directly to third party systems to gather and manage the files available to minions. Multiple backends can be configured and will be searched for the requested file in the order in which they are defined here. The default setting only enables the standard backend \fBroots\fP, which is configured using the \fBfile_roots\fP option. .sp Example: .sp .nf .ft C fileserver_backend: \- roots \- git .ft P .fi .SS \fBhash_type\fP .sp Default: \fBmd5\fP .sp The hash_type is the hash to use when discovering the hash of a file on the master server. The default is md5, but sha1, sha224, sha256, sha384 and sha512 are also supported.
.sp .nf .ft C hash_type: md5 .ft P .fi .SS \fBfile_buffer_size\fP .sp Default: \fB1048576\fP .sp The buffer size in the file server in bytes .sp .nf .ft C file_buffer_size: 1048576 .ft P .fi .SS \fBfile_ignore_regex\fP .sp Default: \fB\(aq\(aq\fP .sp A regular expression (or a list of expressions) that will be matched against the file path before syncing the modules and states to the minions. This includes files affected by the file.recurse state. For example, if you manage your custom modules and states in subversion and don\(aqt want all the \(aq.svn\(aq folders and content synced to your minions, you could set this to \(aq/.svn($|/)\(aq. By default nothing is ignored. .sp .nf .ft C file_ignore_regex: \- \(aq/\e.svn($|/)\(aq \- \(aq/\e.git($|/)\(aq .ft P .fi .SS \fBfile_ignore_glob\fP .sp Default \fB\(aq\(aq\fP .sp A file glob (or list of file globs) that will be matched against the file path before syncing the modules and states to the minions. This is similar to file_ignore_regex above, but works on globs instead of regex. By default nothing is ignored. .sp .nf .ft C file_ignore_glob: \- \(aq\e*.pyc\(aq \- \(aq\e*/somefolder/\e*.bak\(aq \- \(aq\e*.swp\(aq .ft P .fi .SS roots: Master\(aqs Local File Server .SS \fBfile_roots\fP .sp Default: .sp .nf .ft C base: \- /srv/salt .ft P .fi .sp Salt runs a lightweight file server written in ZeroMQ to deliver files to minions. This file server is built into the master daemon and does not require a dedicated port. .sp The file server works on environments passed to the master. Each environment can have multiple root directories. The subdirectories in the multiple file roots cannot match, otherwise the downloaded files will not be able to be reliably ensured. A base environment is required to house the top file. Example: .sp .nf .ft C file_roots: base: \- /srv/salt dev: \- /srv/salt/dev/services \- /srv/salt/dev/states prod: \- /srv/salt/prod/services \- /srv/salt/prod/states .ft P .fi .SS git: Git Remote File Server Backend .SS \fBgitfs_remotes\fP .sp Default: \fB[]\fP .sp When using the \fBgit\fP fileserver backend at least one git remote needs to be defined. The user running the salt master will need read access to the repo. .sp The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and tags are translated into salt environments. .sp .nf .ft C gitfs_remotes: \- git://github.com/saltstack/salt\-states.git \- file:///var/git/saltmaster .ft P .fi .IP Note \fBfile://\fP repos will be treated as a remote, so refs you want used must exist in that repo as \fIlocal\fP refs. .RE .IP Note As of the upcoming \fBHelium\fP release (and right now in the development branch), it is possible to have per\-repo versions of the \fBgitfs_base\fP, \fBgitfs_root\fP, and \fBgitfs_mountpoint\fP parameters. For example: .sp .nf .ft C gitfs_remotes: \- https://foo.com/foo.git \- https://foo.com/bar.git: \- root: salt \- mountpoint: salt://foo/bar/baz \- base: salt\-base \- https://foo.com/baz.git: \- root: salt/states .ft P .fi .RE .sp For more information on GitFS remotes, see the \fIGitFS Backend Walkthrough\fP. .SS \fBgitfs_provider\fP .sp New in version Helium. .sp Default: \fBgitpython\fP .sp GitFS defaults to using \fI\%GitPython\fP, but this parameter allows for either \fI\%pygit2\fP or \fI\%dulwich\fP to be used instead. If using pygit2, both libgit2 and git itself must also be installed. 
More information can be found in the \fBGitFS backend documentation\fP and the \fBGitFS walkthrough\fP. .sp .nf .ft C gitfs_provider: pygit2 .ft P .fi .SS \fBgitfs_ssl_verify\fP .sp Default: \fBTrue\fP .sp The \fBgitfs_ssl_verify\fP option specifies whether to ignore SSL certificate errors when contacting the gitfs backend. You might want to set this to False if you\(aqre using a git backend that uses a self\-signed certificate, but keep in mind that setting this flag to anything other than the default of True is a security concern; you may want to try using the ssh transport instead. .sp .nf .ft C gitfs_ssl_verify: True .ft P .fi .SS \fBgitfs_mountpoint\fP .sp New in version Helium. .sp Default: \fB\(aq\(aq\fP .sp Specifies a path on the salt fileserver from which gitfs remotes are served. Can be used in conjunction with \fBgitfs_root\fP. Can also be configured on a per\-remote basis, see \fBhere\fP for more info. .sp .nf .ft C gitfs_mountpoint: salt://foo/bar .ft P .fi .IP Note The \fBsalt://\fP protocol designation can be left off (in other words, \fBfoo/bar\fP and \fBsalt://foo/bar\fP are equivalent). .RE .SS \fBgitfs_root\fP .sp Default: \fB\(aq\(aq\fP .sp Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with \fBgitfs_mountpoint\fP. .sp .nf .ft C gitfs_root: somefolder/otherfolder .ft P .fi .sp Changed in version Helium. .SS \fBgitfs_base\fP .sp Default: \fBmaster\fP .sp Defines which branch/tag should be used as the \fBbase\fP environment. .sp Changed in version Helium: Can also be configured on a per\-remote basis, see \fBhere\fP for more info. .sp .nf .ft C gitfs_base: salt .ft P .fi .SS \fBgitfs_env_whitelist\fP .sp New in version Helium. .sp Default: \fB[]\fP .sp Used to restrict which environments are made available. Can speed up state runs if your gitfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire minion ID. .sp If used, only branches/tags/SHAs which match one of the specified expressions will be exposed as fileserver environments. .sp If used in conjunction with \fBgitfs_env_blacklist\fP, then the subset of branches/tags/SHAs which match the whitelist but do \fInot\fP match the blacklist will be exposed as fileserver environments. .sp .nf .ft C gitfs_env_whitelist: \- base \- v1.* \- \(aqmybranch\ed+\(aq .ft P .fi .SS \fBgitfs_env_blacklist\fP .sp New in version Helium. .sp Default: \fB[]\fP .sp Used to restrict which environments are made available. Can speed up state runs if your gitfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire minion ID. .sp If used, branches/tags/SHAs which match one of the specified expressions will \fInot\fP be exposed as fileserver environments. .sp If used in conjunction with \fBgitfs_env_whitelist\fP, then the subset of branches/tags/SHAs which match the whitelist but do \fInot\fP match the blacklist will be exposed as fileserver environments. .sp .nf .ft C gitfs_env_blacklist: \- base \- v1.* \- \(aqmybranch\ed+\(aq .ft P .fi .SS hg: Mercurial Remote File Server Backend .SS \fBhgfs_remotes\fP .sp New in version 0.17.0. .sp Default: \fB[]\fP .sp When using the \fBhg\fP fileserver backend at least one mercurial remote needs to be defined.
The user running the salt master will need read access to the repo. .sp The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. Branches and/or bookmarks are translated into salt environments, as defined by the \fBhgfs_branch_method\fP parameter. .sp .nf .ft C hgfs_remotes: \- https://username@bitbucket.org/username/reponame .ft P .fi .IP Note As of the upcoming \fBHelium\fP release (and right now in the development branch), it is possible to have per\-repo versions of the \fBhgfs_root\fP, \fBhgfs_mountpoint\fP, \fBhgfs_base\fP, and \fBhgfs_branch_method\fP parameters. For example: .sp .nf .ft C hgfs_remotes: \- https://username@bitbucket.org/username/repo1 \- base: saltstates \- https://username@bitbucket.org/username/repo2: \- root: salt \- mountpoint: salt://foo/bar/baz \- https://username@bitbucket.org/username/repo3: \- root: salt/states \- branch_method: mixed .ft P .fi .RE .SS \fBhgfs_branch_method\fP .sp New in version 0.17.0. .sp Default: \fBbranches\fP .sp Defines the objects that will be used as fileserver environments. .INDENT 0.0 .IP \(bu 2 \fBbranches\fP \- Only branches and tags will be used .IP \(bu 2 \fBbookmarks\fP \- Only bookmarks and tags will be used .IP \(bu 2 \fBmixed\fP \- Branches, bookmarks, and tags will be used .UNINDENT .sp .nf .ft C hgfs_branch_method: mixed .ft P .fi .IP Note Starting in version 2014.1.0 (Hydrogen), the value of the \fBhgfs_base\fP parameter defines which branch is used as the \fBbase\fP environment, allowing for a \fBbase\fP environment to be used with an \fBhgfs_branch_method\fP of \fBbookmarks\fP. .sp Prior to this release, the \fBdefault\fP branch will be used as the \fBbase\fP environment. .RE .SS \fBhgfs_mountpoint\fP .sp New in version Helium. .sp Default: \fB\(aq\(aq\fP .sp Specifies a path on the salt fileserver from which hgfs remotes are served. Can be used in conjunction with \fBhgfs_root\fP. Can also be configured on a per\-remote basis, see \fBhere\fP for more info. .sp .nf .ft C hgfs_mountpoint: salt://foo/bar .ft P .fi .IP Note The \fBsalt://\fP protocol designation can be left off (in other words, \fBfoo/bar\fP and \fBsalt://foo/bar\fP are equivalent). .RE .SS \fBhgfs_root\fP .sp New in version 0.17.0. .sp Default: \fB\(aq\(aq\fP .sp Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with \fBhgfs_mountpoint\fP. .sp .nf .ft C hgfs_root: somefolder/otherfolder .ft P .fi .sp Changed in version Helium. .SS \fBhgfs_base\fP .sp New in version 2014.1.0: (Hydrogen) .sp Default: \fBdefault\fP .sp Defines which branch should be used as the \fBbase\fP environment. Change this if \fBhgfs_branch_method\fP is set to \fBbookmarks\fP to specify which bookmark should be used as the \fBbase\fP environment. .sp .nf .ft C hgfs_base: salt .ft P .fi .SS \fBhgfs_env_whitelist\fP .sp New in version Helium. .sp Default: \fB[]\fP .sp Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire minion ID. .sp If used, only branches/bookmarks/tags which match one of the specified expressions will be exposed as fileserver environments. 
.sp If used in conjunction with \fBhgfs_env_blacklist\fP, then the subset of branches/bookmarks/tags which match the whitelist but do \fInot\fP match the blacklist will be exposed as fileserver environments. .sp .nf .ft C hgfs_env_whitelist: \- base \- v1.* \- \(aqmybranch\ed+\(aq .ft P .fi .SS \fBhgfs_env_blacklist\fP .sp New in version Helium. .sp Default: \fB[]\fP .sp Used to restrict which environments are made available. Can speed up state runs if your hgfs remotes contain many branches/bookmarks/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire minion ID. .sp If used, branches/bookmarks/tags which match one of the specified expressions will \fInot\fP be exposed as fileserver environments. .sp If used in conjunction with \fBhgfs_env_whitelist\fP, then the subset of branches/bookmarks/tags which match the whitelist but do \fInot\fP match the blacklist will be exposed as fileserver environments. .sp .nf .ft C hgfs_env_blacklist: \- base \- v1.* \- \(aqmybranch\ed+\(aq .ft P .fi .SS svn: Subversion Remote File Server Backend .SS \fBsvnfs_remotes\fP .sp New in version 0.17.0. .sp Default: \fB[]\fP .sp When using the \fBsvn\fP fileserver backend at least one subversion remote needs to be defined. The user running the salt master will need read access to the repo. .sp The repos will be searched in order to find the file requested by a client and the first repo to have the file will return it. The trunk, branches, and tags become environments, with the trunk being the \fBbase\fP environment. .sp .nf .ft C svnfs_remotes: \- svn://foo.com/svn/myproject .ft P .fi .IP Note As of the upcoming \fBHelium\fP release (and right now in the development branch), it is possible to have per\-repo versions of the following configuration parameters: .INDENT 0.0 .IP \(bu 2 \fBsvnfs_root\fP .IP \(bu 2 \fBsvnfs_mountpoint\fP .IP \(bu 2 \fBsvnfs_trunk\fP .IP \(bu 2 \fBsvnfs_branches\fP .IP \(bu 2 \fBsvnfs_tags\fP .UNINDENT .sp For example: .sp .nf .ft C svnfs_remotes: \- svn://foo.com/svn/project1 \- svn://foo.com/svn/project2: \- root: salt \- mountpoint: salt://foo/bar/baz \- svn//foo.com/svn/project3: \- root: salt/states \- branches: branch \- tags: tag .ft P .fi .RE .SS \fBsvnfs_mountpoint\fP .sp New in version Helium. .sp Default: \fB\(aq\(aq\fP .sp Specifies a path on the salt fileserver from which svnfs remotes are served. Can be used in conjunction with \fBsvnfs_root\fP. Can also be configured on a per\-remote basis, see \fBhere\fP for more info. .sp .nf .ft C svnfs_mountpoint: salt://foo/bar .ft P .fi .IP Note The \fBsalt://\fP protocol designation can be left off (in other words, \fBfoo/bar\fP and \fBsalt://foo/bar\fP are equivalent). .RE .SS \fBsvnfs_root\fP .sp New in version 0.17.0. .sp Default: \fB\(aq\(aq\fP .sp Serve files from a subdirectory within the repository, instead of the root. This is useful when there are files in the repository that should not be available to the Salt fileserver. Can be used in conjunction with \fBsvnfs_mountpoint\fP. .sp .nf .ft C svnfs_root: somefolder/otherfolder .ft P .fi .sp Changed in version Helium. .SS \fBsvnfs_trunk\fP .sp New in version Helium. .sp Default: \fBtrunk\fP .sp Path relative to the root of the repository where the trunk is located. Can also be configured on a per\-remote basis, see \fBhere\fP for more info. .sp .nf .ft C svnfs_trunk: trunk .ft P .fi .SS \fBsvnfs_branches\fP .sp New in version Helium. 
.sp Default: \fBbranches\fP .sp Path relative to the root of the repository where the branches are located. Can also be configured on a per\-remote basis, see \fBhere\fP for more info. .sp .nf .ft C svnfs_branches: branches .ft P .fi .SS \fBsvnfs_tags\fP .sp New in version Helium. .sp Default: \fBtags\fP .sp Path relative to the root of the repository where the tags is located. Can also be configured on a per\-remote basis, see \fBhere\fP for more info. .sp .nf .ft C svnfs_tags: tags .ft P .fi .SS \fBsvnfs_env_whitelist\fP .sp New in version Helium. .sp Default: \fB[]\fP .sp Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire minion ID. .sp If used, only branches/tags which match one of the specified expressions will be exposed as fileserver environments. .sp If used in conjunction with \fBsvnfs_env_blacklist\fP, then the subset of branches/tags which match the whitelist but do \fInot\fP match the blacklist will be exposed as fileserver environments. .sp .nf .ft C svnfs_env_whitelist: \- base \- v1.* \- \(aqmybranch\ed+\(aq .ft P .fi .SS \fBsvnfs_env_blacklist\fP .sp New in version Helium. .sp Default: \fB[]\fP .sp Used to restrict which environments are made available. Can speed up state runs if your svnfs remotes contain many branches/tags. Full names, globs, and regular expressions are supported. If using a regular expression, the expression must match the entire minion ID. .sp If used, branches/tags which match one of the specified expressions will \fInot\fP be exposed as fileserver environments. .sp If used in conjunction with \fBsvnfs_env_whitelist\fP, then the subset of branches/tags which match the whitelist but do \fInot\fP match the blacklist will be exposed as fileserver environments. .sp .nf .ft C svnfs_env_blacklist: \- base \- v1.* \- \(aqmybranch\ed+\(aq .ft P .fi .SS minion: MinionFS Remote File Server Backend .SS \fBminionfs_env\fP .sp New in version Helium. .sp Default: \fBbase\fP .sp Environment from which MinionFS files are made available. .sp .nf .ft C minionfs_env: minionfs .ft P .fi .SS \fBminionfs_mountpoint\fP .sp New in version Helium. .sp Default: \fB\(aq\(aq\fP .sp Specifies a path on the salt fileserver from which minionfs files are served. .sp .nf .ft C minionfs_mountpoint: salt://foo/bar .ft P .fi .IP Note The \fBsalt://\fP protocol designation can be left off (in other words, \fBfoo/bar\fP and \fBsalt://foo/bar\fP are equivalent). .RE .SS \fBminionfs_whitelist\fP .sp New in version Helium. .sp Default: \fB[]\fP .sp Used to restrict which minions\(aq pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID. .sp If used, only the pushed files from minions which match one of the specified expressions will be exposed. .sp If used in conjunction with \fBminionfs_blacklist\fP, then the subset of hosts which match the whitelist but do \fInot\fP match the blacklist will be exposed. .sp .nf .ft C minionfs_whitelist: \- base \- v1.* \- \(aqmybranch\ed+\(aq .ft P .fi .SS \fBminionfs_blacklist\fP .sp New in version Helium. .sp Default: \fB[]\fP .sp Used to restrict which minions\(aq pushed files are exposed via minionfs. If using a regular expression, the expression must match the entire minion ID. 
.sp If used, the pushed files from minions which match one of the specified expressions will \fInot\fP be exposed. .sp If used in conjunction with \fBminionfs_whitelist\fP, then the subset of hosts which match the whitelist but do \fInot\fP match the blacklist will be exposed. .sp .nf .ft C minionfs_blacklist: \- base \- v1.* \- \(aqmybranch\ed+\(aq .ft P .fi .SS Pillar Configuration .SS \fBpillar_roots\fP .sp Default: .sp .nf .ft C base: \- /srv/pillar .ft P .fi .sp Set the environments and directories used to hold pillar sls data. This configuration is the same as \fBfile_roots\fP: .sp .nf .ft C pillar_roots: base: \- /srv/pillar dev: \- /srv/pillar/dev prod: \- /srv/pillar/prod .ft P .fi .SS \fBext_pillar\fP .sp The ext_pillar option allows for any number of external pillar interfaces to be called when populating pillar data. The configuration is based on ext_pillar functions. The available ext_pillar functions can be found here: .sp \fI\%https://github.com/saltstack/salt/blob/develop/salt/pillar\fP .sp By default, the ext_pillar interface is not configured to run. .sp Default: \fBNone\fP .sp .nf .ft C ext_pillar: \- hiera: /etc/hiera.yaml \- cmd_yaml: cat /etc/salt/yaml \- reclass: inventory_base_uri: /etc/reclass .ft P .fi .sp There are additional details at \fIsalt\-pillars\fP. .SS \fBpillar_source_merging_strategy\fP .sp Default: \fBsmart\fP .sp The pillar_source_merging_strategy option allows you to configure the merging strategy between different sources. It accepts three values: .INDENT 0.0 .IP \(bu 2 recurse: .sp it recursively merges mappings of data. For example, these two sources: .sp .nf .ft C foo: 42 bar: element1: True .ft P .fi .sp .nf .ft C bar: element2: True baz: quux .ft P .fi .sp will be merged as: .sp .nf .ft C foo: 42 bar: element1: True element2: True baz: quux .ft P .fi .IP \(bu 2 aggregate: .sp instructs aggregation of elements between sources that use the #!yamlex renderer. .sp For example, these two documents: .sp .nf .ft C #!yamlex foo: 42 bar: !aggregate { element1: True } baz: !aggregate quux .ft P .fi .sp .nf .ft C #!yamlex bar: !aggregate { element2: True } baz: !aggregate quux2 .ft P .fi .sp will be merged as: .sp .nf .ft C foo: 42 bar: element1: True element2: True baz: \- quux \- quux2 .ft P .fi .IP \(bu 2 smart (default): .INDENT 2.0 .INDENT 3.5 it guesses the best strategy, based on the "renderer" setting. .UNINDENT .UNINDENT .UNINDENT .SS Syndic Server Settings .sp A Salt syndic is a Salt master used to pass commands from a higher Salt master to minions below the syndic. Using the syndic is simple. If this is a master that will have syndic server(s) below it, set the "order_masters" setting to True. If this is a master that will be running a syndic daemon for passthrough, the "syndic_master" setting needs to be set to the location of the master server. .sp Do not forget that, in other words, this means that the syndic shares the local minion\(aqs ID and PKI_DIR. .SS \fBorder_masters\fP .sp Default: \fBFalse\fP .sp Extra data needs to be sent with publications if the master is controlling a lower level master via a syndic minion.
If this is the case, the order_masters value must be set to True. .sp .nf .ft C order_masters: False .ft P .fi .SS \fBsyndic_master\fP .sp Default: \fBNone\fP .sp If this master will be running a salt\-syndic to connect to a higher level master, specify the higher level master with this configuration value. .sp .nf .ft C syndic_master: masterofmasters .ft P .fi .SS \fBsyndic_master_port\fP .sp Default: \fB4506\fP .sp If this master will be running a salt\-syndic to connect to a higher level master, specify the higher level master port with this configuration value. .sp .nf .ft C syndic_master_port: 4506 .ft P .fi .SS \fBsyndic_pidfile\fP .sp Default: \fBsalt\-syndic.pid\fP .sp If this master will be running a salt\-syndic to connect to a higher level master, specify the pidfile of the syndic daemon. .sp .nf .ft C syndic_pidfile: syndic.pid .ft P .fi .SS \fBsyndic_log_file\fP .sp Default: \fBsyndic.log\fP .sp If this master will be running a salt\-syndic to connect to a higher level master, specify the log_file of the syndic daemon. .sp .nf .ft C syndic_log_file: salt\-syndic.log .ft P .fi .SS Peer Publish Settings .sp Salt minions can send commands to other minions, but only if the minion is allowed to. By default "Peer Publication" is disabled, and when enabled it is enabled for specific minions and specific commands. This allows secure compartmentalization of commands based on individual minions. .SS \fBpeer\fP .sp Default: \fB{}\fP .sp The configuration uses regular expressions to match minions and then a list of regular expressions to match functions. The following will allow the minion authenticated as foo.example.com to execute functions from the test and pkg modules: .sp .nf .ft C peer: foo.example.com: \- test.* \- pkg.* .ft P .fi .sp This will allow all minions to execute all commands: .sp .nf .ft C peer: .*: \- .* .ft P .fi .sp This is not recommended, since it would allow anyone who gets root on any single minion to instantly have root on all of the minions! .sp By adding an additional layer, you can limit the target hosts in addition to the accessible commands: .sp .nf .ft C peer: foo.example.com: \(aqdb*\(aq: \- test.* \- pkg.* .ft P .fi .SS \fBpeer_run\fP .sp Default: \fB{}\fP .sp The peer_run option is used to open up runners on the master for access from the minions. The peer_run configuration matches the format of the peer configuration. .sp The following example would allow foo.example.com to execute the manage.up runner: .sp .nf .ft C peer_run: foo.example.com: \- manage.up .ft P .fi .SS Master Logging Settings .SS \fBlog_file\fP .sp Default: \fB/var/log/salt/master\fP .sp The master log can be sent to a regular file, local path name, or network location. See also \fBlog_file\fP. .sp Examples: .sp .nf .ft C log_file: /var/log/salt/master .ft P .fi .sp .nf .ft C log_file: file:///dev/log .ft P .fi .sp .nf .ft C log_file: udp://loghost:10514 .ft P .fi .SS \fBlog_level\fP .sp Default: \fBwarning\fP .sp The level of messages to send to the console. See also \fBlog_level\fP. .sp .nf .ft C log_level: warning .ft P .fi .SS \fBlog_level_logfile\fP .sp Default: \fBwarning\fP .sp The level of messages to send to the log file. See also \fBlog_level_logfile\fP. .sp .nf .ft C log_level_logfile: warning .ft P .fi .SS \fBlog_datefmt\fP .sp Default: \fB%H:%M:%S\fP .sp The date and time format used in console log messages. See also \fBlog_datefmt\fP.
.sp .nf .ft C log_datefmt: \(aq%H:%M:%S\(aq .ft P .fi .SS \fBlog_datefmt_logfile\fP .sp Default: \fB%Y\-%m\-%d %H:%M:%S\fP .sp The date and time format used in log file messages. See also \fBlog_datefmt_logfile\fP. .sp .nf .ft C log_datefmt_logfile: \(aq%Y\-%m\-%d %H:%M:%S\(aq .ft P .fi .SS \fBlog_fmt_console\fP .sp Default: \fB[%(levelname)\-8s] %(message)s\fP .sp The format of the console logging messages. See also \fBlog_fmt_console\fP. .sp .nf .ft C log_fmt_console: \(aq[%(levelname)\-8s] %(message)s\(aq .ft P .fi .SS \fBlog_fmt_logfile\fP .sp Default: \fB%(asctime)s,%(msecs)03.0f [%(name)\-17s][%(levelname)\-8s] %(message)s\fP .sp The format of the log file logging messages. See also \fBlog_fmt_logfile\fP. .sp .nf .ft C log_fmt_logfile: \(aq%(asctime)s,%(msecs)03.0f [%(name)\-17s][%(levelname)\-8s] %(message)s\(aq .ft P .fi .SS \fBlog_granular_levels\fP .sp Default: \fB{}\fP .sp This can be used to control logging levels more specifically. See also \fBlog_granular_levels\fP. .SS Node Groups .sp Default: \fB{}\fP .sp Node groups allow for logical groupings of minion nodes. A group consists of a group name and a compound target. .sp .nf .ft C nodegroups: group1: \(aqL@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com\(aq group2: \(aqG@os:Debian and foo.domain.com\(aq .ft P .fi .SS Range Cluster Settings .SS \fBrange_server\fP .sp Default: \fB\(aq\(aq\fP .sp The range server (and optional port) that serves your cluster information: \fI\%https://github.com/grierj/range/wiki/Introduction-to-Range-with-YAML-files\fP .sp .nf .ft C range_server: range:80 .ft P .fi .SS Include Configuration .SS \fBdefault_include\fP .sp Default: \fBmaster.d/*.conf\fP .sp The master can include configuration from other files. By default the master will automatically include all config files from \fBmaster.d/*.conf\fP, where \fBmaster.d\fP is relative to the directory of the master configuration file. .SS \fBinclude\fP .sp Default: \fBnot defined\fP .sp The master can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main configuration file lives in. Paths can make use of shell\-style globbing. If no files are matched by a path passed to this option then the master will log a warning message. .sp .nf .ft C # Include files from a master.d directory in the same # directory as the master config file include: master.d/* # Include a single extra file into the configuration include: /etc/roles/webserver # Include several files and the master.d directory include: \- extra_config \- master.d/* \- /etc/roles/webserver .ft P .fi .SS Windows Software Repo Settings .SS \fBwin_repo\fP .sp Default: \fB/srv/salt/win/repo\fP .sp Location of the repo on the master. .sp .nf .ft C win_repo: \(aq/srv/salt/win/repo\(aq .ft P .fi .SS \fBwin_repo_mastercachefile\fP .sp Default: \fB/srv/salt/win/repo/winrepo.p\fP .sp .nf .ft C win_repo_mastercachefile: \(aq/srv/salt/win/repo/winrepo.p\(aq .ft P .fi .SS \fBwin_gitrepos\fP .sp Default: \fB\(aq\(aq\fP .sp List of git repositories to include with the local repo. .sp .nf .ft C win_gitrepos: \- \(aqhttps://github.com/saltstack/salt\-winrepo.git\(aq .ft P .fi .SS Configuring the Salt Minion .sp The Salt system is amazingly simple and easy to configure; the two components of the Salt system each have a respective configuration file.
The \fBsalt\-master\fP is configured via the master configuration file, and the \fBsalt\-minion\fP is configured via the minion configuration file. .IP "See also" .sp \fIexample minion configuration file\fP .RE .sp The Salt Minion configuration is very simple; typically the only value that needs to be set is the master value, so the minion can find its master. .SS Minion Primary Configuration .SS \fBmaster\fP .sp Default: \fBsalt\fP .sp The hostname or IPv4 address of the master. .sp .nf .ft C master: salt .ft P .fi .sp The option can also be set to a list of masters, enabling \fBmulti\-master\fP mode. .sp .nf .ft C master: \- address1 \- address2 .ft P .fi .sp Changed in version Helium. .SS \fBmaster_type\fP .sp New in version Helium. .sp Default: \fBstr\fP .sp The type of the \fBmaster\fP variable. Can be either \fBfunc\fP or \fBfailover\fP. .sp If the master needs to be dynamically assigned by executing a function instead of reading in the static master value, set this to \fBfunc\fP. This can be used to manage the minion\(aqs master setting from an execution module. Simply change the algorithm in the module to return a new master IP/FQDN, restart the minion, and it will connect to the new master. .sp .nf .ft C master_type: func .ft P .fi .sp If this option is set to \fBfailover\fP, \fBmaster\fP must be a list of master addresses. The minion will then try each master in the order specified in the list until it successfully connects. .sp .nf .ft C master_type: failover .ft P .fi .SS \fBmaster_shuffle\fP .sp New in version Helium. .sp Default: \fBFalse\fP .sp If \fBmaster\fP is a list of addresses, shuffle them before trying to connect to distribute the minions over all available masters. This uses Python\(aqs \fI\%random.shuffle\fP method. .sp .nf .ft C master_shuffle: True .ft P .fi .SS \fBmaster_port\fP .sp Default: \fB4506\fP .sp The port of the master ret server; this needs to coincide with the ret_port option on the Salt master. .sp .nf .ft C master_port: 4506 .ft P .fi .SS \fBuser\fP .sp Default: \fBroot\fP .sp The user to run the Salt processes. .sp .nf .ft C user: root .ft P .fi .SS \fBpidfile\fP .sp Default: \fB/var/run/salt\-minion.pid\fP .sp The location of the daemon\(aqs process ID file. .sp .nf .ft C pidfile: /var/run/salt\-minion.pid .ft P .fi .SS \fBroot_dir\fP .sp Default: \fB/\fP .sp This directory is prepended to the following options: \fBpki_dir\fP, \fBcachedir\fP, \fBlog_file\fP, \fBsock_dir\fP, and \fBpidfile\fP. .sp .nf .ft C root_dir: / .ft P .fi .SS \fBpki_dir\fP .sp Default: \fB/etc/salt/pki\fP .sp The directory used to store the minion\(aqs public and private keys. .sp .nf .ft C pki_dir: /etc/salt/pki .ft P .fi .SS \fBid\fP .sp Default: the system\(aqs hostname .IP "See also" .sp \fISalt Walkthrough\fP .sp The \fBSetting up a Salt Minion\fP section contains detailed information on how the hostname is determined. .RE .sp Explicitly declare the id for this minion to use. Since Salt uses detached ids, it is possible to run multiple minions on the same machine but with different ids. .sp .nf .ft C id: foo.bar.com .ft P .fi .SS \fBappend_domain\fP .sp Default: \fBNone\fP .sp Append a domain to a hostname in the event that it does not exist. This is useful for systems where \fBsocket.getfqdn()\fP does not actually result in a FQDN (for instance, Solaris). .sp .nf .ft C append_domain: foo.org .ft P .fi .SS \fBcachedir\fP .sp Default: \fB/var/cache/salt\fP .sp The location for minion cache data.
.sp .nf .ft C cachedir: /var/cache/salt .ft P .fi .SS \fBverify_env\fP .sp Default: \fBTrue\fP .sp Verify and set permissions on configuration directories at startup. .sp .nf .ft C verify_env: True .ft P .fi .IP Note When marked as True the verify_env option requires WRITE access to the configuration directory (/etc/salt/). In certain situations such as mounting /etc/salt/ as read\-only for templating this will create a stack trace when state.highstate is called. .RE .SS \fBcache_jobs\fP .sp Default: \fBFalse\fP .sp The minion can locally cache the return data from jobs sent to it; this can be a good way to keep track of the minion side of the jobs the minion has executed. By default this feature is disabled; to enable it, set cache_jobs to \fBTrue\fP. .sp .nf .ft C cache_jobs: False .ft P .fi .SS \fBsock_dir\fP .sp Default: \fB/var/run/salt/minion\fP .sp The directory where Unix sockets will be kept. .sp .nf .ft C sock_dir: /var/run/salt/minion .ft P .fi .SS \fBbackup_mode\fP .sp Default: \fB[]\fP .sp Backup files replaced by file.managed and file.recurse under cachedir. .sp .nf .ft C backup_mode: minion .ft P .fi .SS \fBacceptance_wait_time\fP .sp Default: \fB10\fP .sp The number of seconds to wait until attempting to re\-authenticate with the master. .sp .nf .ft C acceptance_wait_time: 10 .ft P .fi .SS \fBrandom_reauth_delay\fP .sp When the master key changes, the minion will try to re\-auth itself to receive the new master key. In larger environments this can cause a syn\-flood on the master because all minions try to re\-auth immediately. To prevent this and have a minion wait for a random amount of time, use this optional parameter. The wait\-time will be a random number of seconds between 0 and the defined value. .sp .nf .ft C random_reauth_delay: 60 .ft P .fi .SS \fBacceptance_wait_time_max\fP .sp Default: \fBNone\fP .sp The maximum number of seconds to wait until attempting to re\-authenticate with the master. If set, the wait will increase by acceptance_wait_time seconds each iteration. .sp .nf .ft C acceptance_wait_time_max: None .ft P .fi .SS \fBdns_check\fP .sp Default: \fBTrue\fP .sp When healing, a dns_check is run. This is to make sure that the originally resolved dns has not changed. If this is something that does not happen in your environment, set this value to \fBFalse\fP. .sp .nf .ft C dns_check: True .ft P .fi .SS \fBipc_mode\fP .sp Default: \fBipc\fP .sp Windows platforms lack POSIX IPC and must rely on slower TCP\-based inter\-process communications. Set ipc_mode to \fBtcp\fP on such systems. .sp .nf .ft C ipc_mode: ipc .ft P .fi .SS \fBtcp_pub_port\fP .sp Default: \fB4510\fP .sp Publish port used when \fBipc_mode\fP is set to \fBtcp\fP. .sp .nf .ft C tcp_pub_port: 4510 .ft P .fi .SS \fBtcp_pull_port\fP .sp Default: \fB4511\fP .sp Pull port used when \fBipc_mode\fP is set to \fBtcp\fP. .sp .nf .ft C tcp_pull_port: 4511 .ft P .fi .SS Minion Module Management .SS \fBdisable_modules\fP .sp Default: \fB[]\fP (all modules are enabled by default) .sp There may be cases in which the administrator does not want a minion to be able to execute a certain module. The sys module is built into the minion and cannot be disabled. .sp This setting can also be used to tune the minion: since all modules are loaded into RAM, disabling modules will lower the minion\(aqs RAM footprint.
.sp .nf .ft C disable_modules: \- test \- solr .ft P .fi .SS \fBdisable_returners\fP .sp Default: \fB[]\fP (all returners are enabled by default) .sp If certain returners should be disabled, this is the place to do it. .sp .nf .ft C disable_returners: \- mongo_return .ft P .fi .SS \fBmodule_dirs\fP .sp Default: \fB[]\fP .sp A list of extra directories to search for Salt modules. .sp .nf .ft C module_dirs: \- /var/lib/salt/modules .ft P .fi .SS \fBreturner_dirs\fP .sp Default: \fB[]\fP .sp A list of extra directories to search for Salt returners. .sp .nf .ft C returner_dirs: \- /var/lib/salt/returners .ft P .fi .SS \fBstates_dirs\fP .sp Default: \fB[]\fP .sp A list of extra directories to search for Salt states. .sp .nf .ft C states_dirs: \- /var/lib/salt/states .ft P .fi .SS \fBgrains_dirs\fP .sp Default: \fB[]\fP .sp A list of extra directories to search for Salt grains. .sp .nf .ft C grains_dirs: \- /var/lib/salt/grains .ft P .fi .SS \fBrender_dirs\fP .sp Default: \fB[]\fP .sp A list of extra directories to search for Salt renderers. .sp .nf .ft C render_dirs: \- /var/lib/salt/renderers .ft P .fi .SS \fBcython_enable\fP .sp Default: \fBFalse\fP .sp Set this value to true to enable auto\-loading and compiling of \fB.pyx\fP modules. This setting requires that \fBgcc\fP and \fBcython\fP are installed on the minion. .sp .nf .ft C cython_enable: False .ft P .fi .SS \fBproviders\fP .sp Default: (empty) .sp A module provider can be statically overwritten or extended for the minion via the \fBproviders\fP option. This can be done \fBon an individual basis in an SLS file\fP, or globally here in the minion config, like below. .sp .nf .ft C providers: service: systemd .ft P .fi .SS State Management Settings .SS \fBrenderer\fP .sp Default: \fByaml_jinja\fP .sp The default renderer used for local state executions. .sp .nf .ft C renderer: yaml_jinja .ft P .fi .SS \fBstate_verbose\fP .sp Default: \fBFalse\fP .sp state_verbose allows for the data returned from the minion to be more verbose. Normally only states that fail or states that have changes are returned, but setting state_verbose to \fBTrue\fP will return all states that were checked. .sp .nf .ft C state_verbose: True .ft P .fi .SS \fBstate_output\fP .sp Default: \fBfull\fP .sp The state_output setting controls how the output for each changed state is displayed: if set to \(aqfull\(aq, the full multi\-line output is shown, but if set to \(aqterse\(aq the output is shortened to a single line. .sp .nf .ft C state_output: full .ft P .fi .SS \fBautoload_dynamic_modules\fP .sp Default: \fBTrue\fP .sp autoload_dynamic_modules turns on automatic loading of modules found in the environments on the master. This is turned on by default; to turn off auto\-loading of modules when states run, set this value to \fBFalse\fP. .sp .nf .ft C autoload_dynamic_modules: True .ft P .fi .SS \fBclean_dynamic_modules\fP .sp Default: \fBTrue\fP .sp clean_dynamic_modules keeps the dynamic modules on the minion in sync with the dynamic modules on the master; this means that if a dynamic module is not on the master it will be deleted from the minion. By default this is enabled and can be disabled by changing this value to \fBFalse\fP. .sp .nf .ft C clean_dynamic_modules: True .ft P .fi .SS \fBenvironment\fP .sp Default: \fBNone\fP .sp Normally the minion is not isolated to any single environment on the master when running states, but the environment can be isolated on the minion side by statically setting it. Remember that the recommended way to manage environments is to isolate via the top file.
.sp .nf .ft C environment: None .ft P .fi .SS File Directory Settings .SS \fBfile_client\fP .sp Default: \fBremote\fP .sp The client defaults to looking on the master server for files, but can be directed to look on the minion by setting this parameter to \fBlocal\fP. .sp .nf .ft C file_client: remote .ft P .fi .SS \fBfile_roots\fP .sp Default: .sp .nf .ft C base: \- /srv/salt .ft P .fi .sp When using a local \fBfile_client\fP, this parameter is used to set up the fileserver\(aqs environments. This parameter operates identically to the \fBmaster config parameter of the same name\fP. .sp .nf .ft C file_roots: base: \- /srv/salt dev: \- /srv/salt/dev/services \- /srv/salt/dev/states prod: \- /srv/salt/prod/services \- /srv/salt/prod/states .ft P .fi .SS \fBhash_type\fP .sp Default: \fBmd5\fP .sp The hash_type is the hash to use when discovering the hash of a file on the local fileserver. The default is md5, but sha1, sha224, sha256, sha384, and sha512 are also supported. .sp .nf .ft C hash_type: md5 .ft P .fi .SS \fBpillar_roots\fP .sp Default: .sp .nf .ft C base: \- /srv/pillar .ft P .fi .sp When using a local \fBfile_client\fP, this parameter is used to set up the pillar environments. .sp .nf .ft C pillar_roots: base: \- /srv/pillar dev: \- /srv/pillar/dev prod: \- /srv/pillar/prod .ft P .fi .SS Security Settings .SS \fBopen_mode\fP .sp Default: \fBFalse\fP .sp Open mode can be used to clean out the PKI key received from the Salt master: turn on open mode, restart the minion, then turn off open mode and restart the minion again to clean the keys. .sp .nf .ft C open_mode: False .ft P .fi .SS Thread Settings .SS \fBmultiprocessing\fP .sp Default: \fBTrue\fP .sp Disable multiprocessing support by setting this value to \fBFalse\fP. By default, when a minion receives a publication, a new process is spawned and the command is executed therein. .sp .nf .ft C multiprocessing: True .ft P .fi .SS Minion Logging Settings .SS \fBlog_file\fP .sp Default: \fB/var/log/salt/minion\fP .sp The minion log can be sent to a regular file, local path name, or network location. See also \fBlog_file\fP. .sp Examples: .sp .nf .ft C log_file: /var/log/salt/minion .ft P .fi .sp .nf .ft C log_file: file:///dev/log .ft P .fi .sp .nf .ft C log_file: udp://loghost:10514 .ft P .fi .SS \fBlog_level\fP .sp Default: \fBwarning\fP .sp The level of messages to send to the console. See also \fBlog_level\fP. .sp .nf .ft C log_level: warning .ft P .fi .SS \fBlog_level_logfile\fP .sp Default: \fBwarning\fP .sp The level of messages to send to the log file. See also \fBlog_level_logfile\fP. .sp .nf .ft C log_level_logfile: warning .ft P .fi .SS \fBlog_datefmt\fP .sp Default: \fB%H:%M:%S\fP .sp The date and time format used in console log messages. See also \fBlog_datefmt\fP. .sp .nf .ft C log_datefmt: \(aq%H:%M:%S\(aq .ft P .fi .SS \fBlog_datefmt_logfile\fP .sp Default: \fB%Y\-%m\-%d %H:%M:%S\fP .sp The date and time format used in log file messages. See also \fBlog_datefmt_logfile\fP. .sp .nf .ft C log_datefmt_logfile: \(aq%Y\-%m\-%d %H:%M:%S\(aq .ft P .fi .SS \fBlog_fmt_console\fP .sp Default: \fB[%(levelname)\-8s] %(message)s\fP .sp The format of the console logging messages. See also \fBlog_fmt_console\fP. .sp .nf .ft C log_fmt_console: \(aq[%(levelname)\-8s] %(message)s\(aq .ft P .fi .SS \fBlog_fmt_logfile\fP .sp Default: \fB%(asctime)s,%(msecs)03.0f [%(name)\-17s][%(levelname)\-8s] %(message)s\fP .sp The format of the log file logging messages. See also \fBlog_fmt_logfile\fP.
.sp .nf .ft C log_fmt_logfile: \(aq%(asctime)s,%(msecs)03.0f [%(name)\-17s][%(levelname)\-8s] %(message)s\(aq .ft P .fi .SS \fBlog_granular_levels\fP .sp Default: \fB{}\fP .sp This can be used to control logging levels more specifically. See also \fBlog_granular_levels\fP. .SS Include Configuration .SS \fBdefault_include\fP .sp Default: \fBminion.d/*.conf\fP .sp The minion can include configuration from other files. By default the minion will automatically include all config files from \fIminion.d/*.conf\fP, where minion.d is relative to the directory of the minion configuration file. .SS \fBinclude\fP .sp Default: \fBnot defined\fP .sp The minion can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main minion configuration file lives in. Paths can make use of shell\-style globbing. If no files are matched by a path passed to this option then the minion will log a warning message. .sp .nf .ft C # Include files from a minion.d directory in the same # directory as the minion config file include: minion.d/*.conf # Include a single extra file into the configuration include: /etc/roles/webserver # Include several files and the minion.d directory include: \- extra_config \- minion.d/* \- /etc/roles/webserver .ft P .fi .SS Frozen Build Update Settings .sp These options control how \fBsalt.modules.saltutil.update()\fP works with esky frozen apps. For more information look at \fI\%https://github.com/cloudmatrix/esky/\fP. .SS \fBupdate_url\fP .sp Default: \fBFalse\fP (Update feature is disabled) .sp The URL to use when looking for application updates. Esky depends on directory listings to search for new versions. A webserver running on your Master is a good starting point for most setups. .sp .nf .ft C update_url: \(aqhttp://salt.example.com/minion\-updates\(aq .ft P .fi .SS \fBupdate_restart_services\fP .sp Default: \fB[]\fP (service restarting on update is disabled) .sp A list of services to restart when the minion software is updated. This would typically just be a list containing the minion\(aqs service name, but you may have other services that need to go with it. .sp .nf .ft C update_restart_services: [\(aqsalt\-minion\(aq] .ft P .fi .SS Running the Salt Master/Minion as an Unprivileged User .sp While the default setup runs the master and minion as the root user, some may consider it an extra measure of security to run the master as a non\-root user. Keep in mind that doing so does not change the master\(aqs capability to access minions as the user they are running as. Due to this, many feel that running the master as a non\-root user does not grant any real security advantage, which is why the master has remained root by default. .IP Note Some of Salt\(aqs operations cannot execute correctly when the master is not running as root, specifically the pam external auth system, as this system needs root access to check authentication. .RE .sp As of Salt 0.9.10 it is possible to run Salt as a non\-root user. This can be done by setting the \fBuser\fP parameter in the master configuration file and restarting the \fBsalt\-master\fP service. .sp The minion has its own \fBuser\fP parameter as well, but running the minion as an unprivileged user will keep it from making changes to things like users, installed packages, etc. unless access controls (sudo, etc.) are set up on the minion to permit the non\-root user to make the needed changes.
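.sp As a minimal sketch of the change described above (the \fBsalt\fP user name below is only an example; any existing non\-root system user can be used), the relevant line in the master configuration file might look like:
.sp
.nf
.ft C
# /etc/salt/master (sketch): run the master processes as an
# unprivileged user; "salt" is only an example user name
user: salt
.ft P
.fi
.sp The minion\(aqs own \fBuser\fP parameter is set the same way in the minion configuration file, subject to the limitations described above.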
.sp In order to allow Salt to successfully run as a non\-root user, ownership and permissions need to be set such that the desired user can read from and write to the following directories (and their subdirectories, where applicable): .INDENT 0.0 .IP \(bu 2 /etc/salt .IP \(bu 2 /var/cache/salt .IP \(bu 2 /var/log/salt .IP \(bu 2 /var/run/salt .UNINDENT .sp Ownership can be easily changed with \fBchown\fP, like so: .sp .nf .ft C # chown \-R user /etc/salt /var/cache/salt /var/log/salt /var/run/salt .ft P .fi .IP Warning Running either the master or minion with the \fBroot_dir\fP parameter specified will affect these paths, as will setting options like \fBpki_dir\fP, \fBcachedir\fP, \fBlog_file\fP, and other options that normally live in the above directories. .RE .SS Logging .sp The salt project tries to get the logging to work for you and help us solve any issues you might find along the way. .sp If you want to get some more information on the nitty\-gritty of salt\(aqs logging system, please head over to the \fBlogging development document\fP. If all you\(aqre after is salt\(aqs logging configuration, please continue reading. .SS Available Configuration Settings .SS \fBlog_file\fP .sp The log records can be sent to a regular file, local path name, or network location. Remote logging works best when configured to use rsyslogd(8) (e.g.: \fBfile:///dev/log\fP), with rsyslogd(8) configured for network logging. The format for remote addresses is: \fB<file|udp|tcp>://<host|socketpath>:<port\-if\-required>/<log\-facility>\fP. .sp Default: Dependent on the binary being executed, for example, for \fBsalt\-master\fP, \fB/var/log/salt/master\fP. .sp Examples: .sp .nf .ft C log_file: /var/log/salt/master .ft P .fi .sp .nf .ft C log_file: /var/log/salt/minion .ft P .fi .sp .nf .ft C log_file: file:///dev/log .ft P .fi .sp .nf .ft C log_file: udp://loghost:10514 .ft P .fi .SS \fBlog_level\fP .sp Default: \fBwarning\fP .sp The level of log record messages to send to the console. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBcritical\fP, \fBquiet\fP. .sp .nf .ft C log_level: warning .ft P .fi .SS \fBlog_level_logfile\fP .sp Default: \fBwarning\fP .sp The level of messages to send to the log file. One of \fBall\fP, \fBgarbage\fP, \fBtrace\fP, \fBdebug\fP, \fBinfo\fP, \fBwarning\fP, \fBerror\fP, \fBcritical\fP, \fBquiet\fP. .sp .nf .ft C log_level_logfile: warning .ft P .fi .SS \fBlog_datefmt\fP .sp Default: \fB%H:%M:%S\fP .sp The date and time format used in console log messages. Allowed date/time formatting can be seen on \fI\%time.strftime\fP. .sp .nf .ft C log_datefmt: \(aq%H:%M:%S\(aq .ft P .fi .SS \fBlog_datefmt_logfile\fP .sp Default: \fB%Y\-%m\-%d %H:%M:%S\fP .sp The date and time format used in log file messages. Allowed date/time formatting can be seen on \fI\%time.strftime\fP. .sp .nf .ft C log_datefmt_logfile: \(aq%Y\-%m\-%d %H:%M:%S\(aq .ft P .fi .SS \fBlog_fmt_console\fP .sp Default: \fB[%(levelname)\-8s] %(message)s\fP .sp The format of the console logging messages. Allowed formatting options can be seen on the \fI\%LogRecord attributes\fP. .sp .nf .ft C log_fmt_console: \(aq[%(levelname)\-8s] %(message)s\(aq .ft P .fi .SS \fBlog_fmt_logfile\fP .sp Default: \fB%(asctime)s,%(msecs)03.0f [%(name)\-17s][%(levelname)\-8s] %(message)s\fP .sp The format of the log file logging messages. Allowed formatting options can be seen on the \fI\%LogRecord attributes\fP.
.sp .nf .ft C log_fmt_logfile: \(aq%(asctime)s,%(msecs)03.0f [%(name)\-17s][%(levelname)\-8s] %(message)s\(aq .ft P .fi .SS \fBlog_granular_levels\fP .sp Default: \fB{}\fP .sp This can be used to control logging levels more specifically. The example sets the main salt library at the \(aqwarning\(aq level, but sets \fBsalt.modules\fP to log at the \fBdebug\fP level: .sp .nf .ft C log_granular_levels: \(aqsalt\(aq: \(aqwarning\(aq, \(aqsalt.modules\(aq: \(aqdebug\(aq .ft P .fi .SS External Logging Handlers .sp Besides the internal logging handlers used by salt, there are some external ones which can be used, see the \fBexternal logging handlers\fP document. .SS External Logging Handlers .TS center; |l|l|. _ T{ \fBlogstash_mod\fP T} T{ Logstash Logging Handler T} _ T{ \fBsentry_mod\fP T} T{ Sentry Logging Handler T} _ .TE .SS Logstash Logging Handler .sp New in version 0.17.0. .sp This module provides some \fI\%Logstash\fP logging handlers. .SS UDP Logging Handler .sp For versions of \fI\%Logstash\fP before 1.2.0: .sp In the salt configuration file: .sp .nf .ft C logstash_udp_handler: host: 127.0.0.1 port: 9999 version: 0 .ft P .fi .sp In the \fI\%Logstash\fP configuration file: .sp .nf .ft C input { udp { type => "udp\-type" format => "json_event" } } .ft P .fi .sp For version 1.2.0 of \fI\%Logstash\fP and newer: .sp In the salt configuration file: .sp .nf .ft C logstash_udp_handler: host: 127.0.0.1 port: 9999 version: 1 .ft P .fi .sp In the \fI\%Logstash\fP configuration file: .sp .nf .ft C input { udp { port => 9999 codec => json } } .ft P .fi .sp Please read the \fI\%UDP input\fP configuration page for additional information. .SS ZeroMQ Logging Handler .sp For versions of \fI\%Logstash\fP before 1.2.0: .sp In the salt configuration file: .sp .nf .ft C logstash_zmq_handler: address: tcp://127.0.0.1:2021 version: 0 .ft P .fi .sp In the \fI\%Logstash\fP configuration file: .sp .nf .ft C input { zeromq { type => "zeromq\-type" mode => "server" topology => "pubsub" address => "tcp://0.0.0.0:2021" charset => "UTF\-8" format => "json_event" } } .ft P .fi .sp For version 1.2.0 of \fI\%Logstash\fP and newer: .sp In the salt configuration file: .sp .nf .ft C logstash_zmq_handler: address: tcp://127.0.0.1:2021 version: 1 .ft P .fi .sp In the \fI\%Logstash\fP configuration file: .sp .nf .ft C input { zeromq { topology => "pubsub" address => "tcp://0.0.0.0:2021" codec => json } } .ft P .fi .sp Please read the \fI\%ZeroMQ input\fP configuration page for additional information. .IP "Important Logstash Setting" .sp One of the most important settings that you should not forget in your \fI\%Logstash\fP configuration file regarding these logging handlers is \fBformat\fP. Both the \fIUDP\fP and \fIZeroMQ\fP inputs need to have \fBformat\fP set to \fBjson_event\fP, which is what we send over the wire. .RE .SS Log Level .sp Both the \fBlogstash_udp_handler\fP and the \fBlogstash_zmq_handler\fP configuration sections accept an additional setting \fBlog_level\fP. If not set, the logging level used will be the one defined for \fBlog_level\fP in the global configuration file section. .SS HWM .sp The \fI\%high water mark\fP setting for the ZMQ socket. Only applicable to the \fBlogstash_zmq_handler\fP. .IP "Inspiration" .sp This work was inspired by \fI\%pylogstash\fP, \fI\%python-logstash\fP, \fI\%canary\fP, and the \fI\%PyZMQ logging handler\fP. .RE .SS Sentry Logging Handler .sp New in version 0.17.0. .sp This module provides a \fI\%Sentry\fP logging handler.
.IP "Note" .sp The \fI\%Raven\fP library needs to be installed on the system for this logging handler to be available. .RE .sp Configuring the python \fI\%Sentry\fP client, \fI\%Raven\fP, should be done under the \fBsentry_handler\fP configuration key. At the bare minimum, you need to define the \fI\%DSN\fP. As an example: .sp .nf .ft C sentry_handler: dsn: https://pub\-key:secret\-key@app.getsentry.com/app\-id .ft P .fi .sp More complex configurations can be achieved, for example: .sp .nf .ft C sentry_handler: servers: \- https://sentry.example.com \- http://192.168.1.1 project: app\-id public_key: deadbeefdeadbeefdeadbeefdeadbeef secret_key: beefdeadbeefdeadbeefdeadbeefdead .ft P .fi .sp All the client configuration keys are supported, please see the \fI\%Raven client documentation\fP. .sp The default logging level for the sentry handler is \fBERROR\fP. If you wish to define a different one, define \fBlog_level\fP under the \fBsentry_handler\fP configuration key: .sp .nf .ft C sentry_handler: dsn: https://pub\-key:secret\-key@app.getsentry.com/app\-id log_level: warning .ft P .fi .sp The available log levels are those also available for the salt \fBcli\fP tools and configuration; \fBsalt \-\-help\fP should give you the required information. .SS Threaded Transports .sp Raven\(aqs documents rightly suggest using its threaded transport for critical applications. However, don\(aqt forget that if you start having troubles with Salt after enabling the threaded transport, please try switching to a non\-threaded transport to see if that fixes your problem. .SS Salt File Server .sp Salt comes with a simple file server suitable for distributing files to the Salt minions. The file server is a stateless ZeroMQ server that is built into the Salt master. .sp The main intent of the Salt file server is to present files for use in the Salt state system. With this said, the Salt file server can be used for any general file transfer from the master to the minions. .SS File Server Backends .sp Salt version 0.12.0 introduced the ability for the Salt Master to integrate different file server backends. File server backends allows the Salt file server to act as a transparent bridge to external resources. The primary example of this is the git backend which allows for all of the Salt formulas and files to be maintained in a remote git repository. .sp The fileserver backend system can accept multiple backends as well. This makes it possible to have the environments listed in the \fBfile_roots\fP configuration available in addition to other backends, or the ability to mix multiple backends. .sp This feature is managed by the \fBfileserver_backend\fP option in the master config. The desired backend systems are listed in order of search priority: .sp .nf .ft C fileserver_backend: \- roots \- git .ft P .fi .sp With this configuration, the environments and files defined in the \fBfile_roots\fP parameter will be searched first, if the referenced environment and file is not found then the \fBgit\fP backend will be searched. .SS Environments .sp The concept of environments is followed in all backend systems. The environments in the classic \fBroots\fP backend are defined in the \fBfile_roots\fP option. Environments map differently based on the backend, for instance the git backend translated branches and tags in git to environments. This makes it easy to define environments in git by just setting a tag or forking a branch. .SS Dynamic Module Distribution .sp New in version 0.9.5. 
.sp Salt Python modules can be distributed automatically via the Salt file server. Under the root of any environment defined via the \fBfile_roots\fP option on the master server, directories corresponding to the type of module can be used. .sp The directories are prepended with an underscore: .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .IP 1. 3 \fB_modules\fP .IP 2. 3 \fB_grains\fP .IP 3. 3 \fB_renderers\fP .IP 4. 3 \fB_returners\fP .IP 5. 3 \fB_states\fP .UNINDENT .UNINDENT .UNINDENT .sp The contents of these directories need to be synced over to the minions after Python modules have been created in them. There are a number of ways to sync the modules. .SS Sync Via States .sp The minion configuration contains an option \fBautoload_dynamic_modules\fP which defaults to True. This option makes the state system refresh all dynamic modules when states are run. To disable this behavior set \fBautoload_dynamic_modules\fP to False in the minion config. .sp When dynamic modules are autoloaded via states, only modules pertinent to the environments matched in the master\(aqs top file are downloaded. .sp This is important to remember, because modules can be manually loaded from any specific environment, and environment\-specific modules will be loaded when a state run is executed. .SS Sync Via the saltutil Module .sp The saltutil module has a number of functions that can be used to sync all or specific dynamic modules. The saltutil module function \fBsaltutil.sync_all\fP will sync all module types over to a minion. For more information see: \fBsalt.modules.saltutil\fP .SS File Server Configuration .sp The Salt file server is a high performance file server written in ZeroMQ. It manages large files quickly and with little overhead, and has been optimized to handle small files in an extremely efficient manner. .sp The Salt file server is an environment\-aware file server. This means that files can be allocated within many root directories and accessed by specifying both the file path and the environment to search. The individual environments can span across multiple directory roots to create overlays and to allow for files to be organized in many flexible ways. .SS Environments .sp The Salt file server defaults to the mandatory \fBbase\fP environment. This environment \fBMUST\fP be defined and is used to download files when no environment is specified. .sp Environments allow for files and sls data to be logically separated, but environments are not isolated from each other. This allows for logical isolation of environments by the engineer using Salt, but also allows for information to be used in multiple environments. .SS Directory Overlay .sp The \fBenvironment\fP setting is a list of directories to publish files from. These directories are searched in order to find the specified file, and the first file found is returned. .sp This means that directory data is prioritized based on the order in which the directories are listed. In the case of this \fBfile_roots\fP configuration: .sp .nf .ft C file_roots: base: \- /srv/salt/base \- /srv/salt/failover .ft P .fi .sp If a file\(aqs URI is \fBsalt://httpd/httpd.conf\fP, it will first search for the file at \fB/srv/salt/base/httpd/httpd.conf\fP. If the file is found there, it will be returned. If the file is not found there, then \fB/srv/salt/failover/httpd/httpd.conf\fP will be used for the source. .sp This allows for directories to be overlaid and prioritized based on the order they are defined in the configuration.
.sp It is also possible to have a \fBfile_roots\fP configuration which supports multiple environments: .sp .nf .ft C file_roots: base: \- /srv/salt/base dev: \- /srv/salt/dev \- /srv/salt/base prod: \- /srv/salt/prod \- /srv/salt/base .ft P .fi .sp This example ensures that each environment will check the associated environment directory for files first. If a file is not found in the appropriate directory, the system will default to using the base directory. .SS Local File Server .sp New in version 0.9.8. .sp The file server can be rerouted to run from the minion. This is primarily to enable running Salt states without a Salt master. To use the local file server interface, copy the file server data to the minion and set the file_roots option on the minion to point to the directories copied from the master. Once the minion \fBfile_roots\fP option has been set, change the \fBfile_client\fP option to local to make sure that the local file server interface is used. .SS The cp Module .sp The cp module is the home of minion side file server operations. The cp module is used by the Salt state system and salt\-cp, and can be used to distribute files presented by the Salt file server. .SS Environments .sp Since the file server is made to work with the Salt state system, it supports environments. The environments are defined in the master config file, and when referencing an environment, the file specified will be based on the root directory of the environment. .SS get_file .sp The cp.get_file function can be used on the minion to download a file from the master; the syntax looks like this: .sp .nf .ft C # salt \(aq*\(aq cp.get_file salt://vimrc /etc/vimrc .ft P .fi .sp This will instruct all Salt minions to download the vimrc file and copy it to /etc/vimrc. .sp Template rendering can be enabled on both the source and destination file names like so: .sp .nf .ft C # salt \(aq*\(aq cp.get_file "salt://{{grains.os}}/vimrc" /etc/vimrc template=jinja .ft P .fi .sp This example would instruct all Salt minions to download the vimrc from a directory with the same name as their OS grain and copy it to /etc/vimrc. .sp For larger files, the cp.get_file module also supports gzip compression. Because gzip is CPU\-intensive, this should only be used in scenarios where the compression ratio is very high (e.g. pretty\-printed JSON or YAML files). .sp To use compression, use the \fBgzip\fP named argument. Valid values are integers from 1 to 9, where 1 is the lightest compression and 9 the heaviest. In other words, 1 uses the least CPU on the master (and minion), while 9 uses the most. .sp .nf .ft C # salt \(aq*\(aq cp.get_file salt://vimrc /etc/vimrc gzip=5 .ft P .fi .sp Finally, note that by default cp.get_file does \fInot\fP create new destination directories if they do not exist. To change this, use the \fBmakedirs\fP argument: .sp .nf .ft C # salt \(aq*\(aq cp.get_file salt://vimrc /etc/vim/vimrc makedirs=True .ft P .fi .sp In this example, /etc/vim/ would be created if it didn\(aqt already exist. .SS get_dir .sp The cp.get_dir function can be used on the minion to download an entire directory from the master.
The syntax is very similar to get_file: .sp .nf .ft C # salt \(aq*\(aq cp.get_dir salt://etc/apache2 /etc .ft P .fi .sp cp.get_dir supports template rendering and gzip compression arguments just like get_file: .sp .nf .ft C # salt \(aq*\(aq cp.get_dir salt://etc/{{pillar.webserver}} /etc gzip=5 template=jinja .ft P .fi .SS File Server Client API .sp A client API is available which allows for modules and applications to be written which make use of the Salt file server. .sp The file server uses the same authentication and encryption used by the rest of the Salt system for network communication. .SS FileClient Class .sp The FileClient class is used to set up the communication from the minion to the master. When creating a FileClient object, the minion configuration needs to be passed in. When using the FileClient from within a minion module, the built\-in \fB__opts__\fP data can be passed: .sp .nf .ft C
import salt.minion

def get_file(path, dest, env=\(aqbase\(aq):
    \(aq\(aq\(aq
    Used to get a single file from the Salt master

    CLI Example:

    salt \(aq*\(aq cp.get_file salt://vimrc /etc/vimrc
    \(aq\(aq\(aq
    # Create the FileClient object
    client = salt.minion.FileClient(__opts__)
    # Call get_file
    return client.get_file(path, dest, False, env)
.ft P .fi .sp When using the FileClient class outside of a minion module, where the \fB__opts__\fP data is not available, the configuration data needs to be generated: .sp .nf .ft C
import salt.minion
import salt.config

def get_file(path, dest, env=\(aqbase\(aq):
    \(aq\(aq\(aq
    Used to get a single file from the Salt master
    \(aq\(aq\(aq
    # Get the configuration data
    opts = salt.config.minion_config(\(aq/etc/salt/minion\(aq)
    # Create the FileClient object
    client = salt.minion.FileClient(opts)
    # Call get_file
    return client.get_file(path, dest, False, env)
.ft P .fi .SS Full list of builtin fileserver modules .TS center; |l|l|. _ T{ \fBgitfs\fP T} T{ Git Fileserver Backend T} _ T{ \fBhgfs\fP T} T{ Mercurial Fileserver Backend T} _ T{ \fBminionfs\fP T} T{ Fileserver backend which serves files pushed to master by \fBcp.push\fP \fBfile_recv\fP needs to be enabled in the master config file in order to use this backend, and \fBminion\fP must also be present in the \fBfileserver_backends\fP list. T} _ T{ \fBroots\fP T} T{ The default file server backend T} _ T{ \fBs3fs\fP T} T{ The backend for a fileserver based on Amazon S3 T} _ T{ \fBsvnfs\fP T} T{ Subversion Fileserver Backend T} _ .TE .SS salt.fileserver.gitfs .sp Git Fileserver Backend .sp With this backend, branches and tags in a remote git repository are exposed to salt as different environments. .sp To enable, add \fBgit\fP to the \fBfileserver_backend\fP option in the master config file. .sp As of the next feature release, the Git fileserver backend will support \fI\%GitPython\fP, \fI\%pygit2\fP, and \fI\%dulwich\fP to provide the Python interface to git. If more than one of these is present, the order of preference for which one will be chosen is the same as the order in which they were listed: GitPython, pygit2, dulwich (keep in mind, this order is subject to change). .sp \fBpygit2 and dulwich support presently exist only in the develop branch and are not yet available in an official release\fP .sp An optional master config parameter (\fBgitfs_provider\fP) can be used to specify which provider should be used. .IP Note Minimum requirements .sp Using \fI\%GitPython\fP requires a minimum GitPython version of 0.3.0, as well as git itself. Instructions for installing GitPython can be found \fIhere\fP.
.sp Using \fI\%pygit2\fP requires a minimum pygit2 version of 0.19.0. Additionally, using pygit2 as a provider requires \fI\%libgit2\fP 0.19.0 or newer, as well as git itself. pygit2 and libgit2 are developed alongside one another, so it is recommended to keep them both at the same major release to avoid unexpected behavior. .RE .IP Warning \fI\%pygit2\fP does not yet support supplying SSH credentials, so at this time only \fBhttp://\fP, \fBhttps://\fP, and \fBfile://\fP URIs are supported as valid \fBgitfs_remotes\fP entries if pygit2 is being used. .sp Additionally, \fI\%pygit2\fP does not yet support passing http/https credentials via a \fI\%.netrc\fP file. .RE .INDENT 0.0 .TP .B salt.fileserver.gitfs.dir_list(load) Return a list of all directories on the master .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.gitfs.envs(ignore_cache=False) Return a list of refs that can be used as environments .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.gitfs.file_hash(load, fnd) Return a file hash, the hash type is set in the master config file .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.gitfs.file_list(load) Return a list of all files on the file server in a specified environment .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.gitfs.file_list_emptydirs(load) Return a list of all empty directories on the master .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.gitfs.find_file(path, tgt_env=\(aqbase\(aq, **kwargs) Find the first file to match the path and ref, read the file out of git and send the path to the newly cached file .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.gitfs.init() Return the git repo object for this session .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.gitfs.purge_cache() Purge the fileserver cache .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.gitfs.serve_file(load, fnd) Return a chunk from a file based on the data received .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.gitfs.update() Execute a git fetch on all of the repos .UNINDENT .SS salt.fileserver.hgfs .sp Mercurial Fileserver Backend .sp To enable, add \fBhg\fP to the \fBfileserver_backend\fP option in the master config file. .sp After enabling this backend, branches, bookmarks, and tags in a remote mercurial repository are exposed to salt as different environments. This feature is managed by the \fBfileserver_backend\fP option in the salt master config file. .sp This fileserver has an additional option \fBhgfs_branch_method\fP that will set the desired branch method. Possible values are: \fBbranches\fP, \fBbookmarks\fP, or \fBmixed\fP. If using \fBbranches\fP or \fBmixed\fP, the \fBdefault\fP branch will be mapped to \fBbase\fP. .sp Changed in version 2014.1.0: (Hydrogen) The \fBhgfs_base\fP master config parameter was added, allowing for a branch other than \fBdefault\fP to be used for the \fBbase\fP environment, and allowing for a \fBbase\fP environment to be specified when using an \fBhgfs_branch_method\fP of \fBbookmarks\fP.
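.sp As a rough sketch of what enabling this backend might look like in the master config file (the remote URL below is only an example placeholder; \fBhgfs_remotes\fP lists the Mercurial repositories to serve from, and \fBhgfs_branch_method\fP is the option described above):
.sp
.nf
.ft C
# Sketch: enable the Mercurial fileserver backend
fileserver_backend:
  \- hg
# Example placeholder URL; replace with a real repository
hgfs_remotes:
  \- https://example.com/hg/myrepo
# Optional; "branches" is one of the values described above
hgfs_branch_method: branches
.ft P
.fi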
.INDENT 0.0 .TP .B depends .INDENT 7.0 .IP \(bu 2 mercurial .IP \(bu 2 python bindings for mercurial (\fBpython\-hglib\fP) .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.dir_list(load) Return a list of all directories on the master .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.envs(ignore_cache=False) Return a list of refs that can be used as environments .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.file_hash(load, fnd) Return a file hash, the hash type is set in the master config file .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.file_list(load) Return a list of all files on the file server in a specified environment .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.file_list_emptydirs(load) Return a list of all empty directories on the master .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.find_file(path, tgt_env=\(aqbase\(aq, **kwargs) Find the first file to match the path and ref, read the file out of hg and send the path to the newly cached file .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.init() Return a list of hglib objects for the various hgfs remotes .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.purge_cache() Purge the fileserver cache .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.serve_file(load, fnd) Return a chunk from a file based on the data received .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.hgfs.update() Execute an hg pull on all of the repos .UNINDENT .SS salt.fileserver.minionfs .sp Fileserver backend which serves files pushed to master by \fBcp.push\fP .sp \fBfile_recv\fP needs to be enabled in the master config file in order to use this backend, and \fBminion\fP must also be present in the \fBfileserver_backends\fP list. .sp Other minionfs settings include: \fBminionfs_whitelist\fP, \fBminionfs_blacklist\fP, \fBminionfs_mountpoint\fP, and \fBminionfs_env\fP. .IP "See also" .sp \fItutorial\-minionfs\fP .RE .INDENT 0.0 .TP .B salt.fileserver.minionfs.dir_list(load) Return a list of all directories on the master .sp CLI Example: .sp .nf .ft C $ salt \(aqsource\-minion\(aq cp.push /absolute/path/file # Push the file to the master $ salt \(aqdestination\-minion\(aq cp.list_master_dirs destination\-minion: \- source\-minion/absolute \- source\-minion/absolute/path .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.minionfs.envs() Returns the one environment specified for minionfs in the master configuration. .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.minionfs.file_hash(load, fnd) Return a file hash, the hash type is set in the master config file .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.minionfs.file_list(load) Return a list of all files on the file server in a specified environment .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.minionfs.find_file(path, tgt_env=\(aqbase\(aq, **kwargs) Search the environment for the relative path .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.minionfs.serve_file(load, fnd) Return a chunk from a file based on the data received .sp CLI Example: .sp .nf .ft C # Push the file to the master $ salt \(aqsource\-minion\(aq cp.push /path/to/the/file $ salt \(aqdestination\-minion\(aq cp.get_file salt://source\-minion/path/to/the/file /destination/file .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.minionfs.update() When we are asked to update (regular interval) lets reap the cache .UNINDENT .SS salt.fileserver.roots .sp The default file server backend .sp Based on the environments in the \fBfile_roots\fP configuration option. 
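.sp As a brief sketch, a master configuration that uses only this backend might look like the following (the directory path is only an illustrative example):
.sp
.nf
.ft C
# Sketch: serve the base environment from the local filesystem
fileserver_backend:
  \- roots
file_roots:
  base:
    \- /srv/salt
.ft P
.fi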
.INDENT 0.0 .TP .B salt.fileserver.roots.dir_list(load) Return a list of all directories on the master .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.roots.envs() Return the file server environments .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.roots.file_hash(load, fnd) Return a file hash, the hash type is set in the master config file .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.roots.file_list(load) Return a list of all files on the file server in a specified environment .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.roots.file_list_emptydirs(load) Return a list of all empty directories on the master .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.roots.find_file(path, saltenv=\(aqbase\(aq, env=None, **kwargs) Search the environment for the relative path .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.roots.serve_file(load, fnd) Return a chunk from a file based on the data received .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.roots.symlink_list(load) Return a dict of all symlinks based on a given path on the Master .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.roots.update() When we are asked to update (regular interval) lets reap the cache .UNINDENT .SS salt.fileserver.s3fs .sp The backend for a fileserver based on Amazon S3 .IP "See also" .sp \fB/ref/file_server/index\fP .RE .sp This backend exposes directories in S3 buckets as Salt environments. This feature is managed by the \fBfileserver_backend\fP option in the Salt Master config. .sp S3 credentials can be set in the master config file like so: .sp .nf .ft C s3.keyid: GKTADJGHEIQSXMKKRBJ08H s3.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs .ft P .fi .sp Alternatively, if on EC2, these credentials can be automatically loaded from instance metadata. .sp Additionally, \fBs3fs\fP must be included in the \fBfileserver_backend\fP config parameter in the master config file: .sp .nf .ft C fileserver_backend: \- s3fs .ft P .fi .sp This fileserver supports two modes of operation for the buckets: .INDENT 0.0 .IP 1. 3 \fBA single bucket per environment\fP .sp .nf .ft C s3.buckets: production: \- bucket1 \- bucket2 staging: \- bucket3 \- bucket4 .ft P .fi .IP 2. 3 \fBMultiple environments per bucket\fP .sp .nf .ft C s3.buckets: \- bucket1 \- bucket2 \- bucket3 \- bucket4 .ft P .fi .UNINDENT .sp Note that bucket names must be all lowercase both in the AWS console and in Salt, otherwise you may encounter \fBSignatureDoesNotMatch\fP errors. .sp A multiple\-environment bucket must adhere to the following root directory structure: .sp .nf .ft C s3://<bucket name>/<environment>/<files> .ft P .fi .IP Note This fileserver backend requires the use of the MD5 hashing algorithm. MD5 may not be compliant with all security policies. .RE .INDENT 0.0 .TP .B salt.fileserver.s3fs.dir_list(load) Return a list of all directories on the master .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.s3fs.envs() Return a list of directories within the bucket that can be used as environments. .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.s3fs.file_hash(load, fnd) Return an MD5 file hash .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.s3fs.file_list(load) Return a list of all files on the file server in a specified environment .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.s3fs.file_list_emptydirs(load) Return a list of all empty directories on the master .UNINDENT .INDENT 0.0 .TP .B salt.fileserver.s3fs.find_file(path, saltenv=\(aqbase\(aq, env=None, **kwargs) Look through the buckets cache file for a match. If the field is found, it is retrieved from S3 only if its cached version is missing, or if the MD5 does not match.
.UNINDENT .INDENT 0.0 .TP .B salt.fileserver.s3fs.serve_file(load, fnd) Return a chunk from a file based on the data received .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.s3fs.update() Update the cache file for the bucket. .UNINDENT
.SS salt.fileserver.svnfs .sp Subversion Fileserver Backend
.sp After enabling this backend, branches and tags in a remote subversion repository are exposed to salt as different environments. This feature is managed by the \fBfileserver_backend\fP option in the salt master config.
.sp This backend assumes a standard svn layout with directories for \fBbranches\fP, \fBtags\fP, and \fBtrunk\fP, at the repository root.
.INDENT 0.0 .TP .B depends .INDENT 7.0 .IP \(bu 2 subversion .IP \(bu 2 pysvn .UNINDENT .UNINDENT
.sp Changed in version Helium: The paths to the trunk, branches, and tags have been made configurable, via the config options \fBsvnfs_trunk\fP, \fBsvnfs_branches\fP, and \fBsvnfs_tags\fP. \fBsvnfs_mountpoint\fP was also added. Finally, support for per\-remote configuration parameters was added. See the \fBdocumentation\fP for more information.
.INDENT 0.0 .TP .B salt.fileserver.svnfs.dir_list(load) Return a list of all directories on the master .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.svnfs.envs(ignore_cache=False) Return a list of refs that can be used as environments .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.svnfs.file_hash(load, fnd) Return a file hash, the hash type is set in the master config file .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.svnfs.file_list(load) Return a list of all files on the file server in a specified environment .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.svnfs.file_list_emptydirs(load) Return a list of all empty directories on the master .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.svnfs.find_file(path, tgt_env=\(aqbase\(aq, **kwargs) Find the first file to match the path and ref. This operates similarly to the roots file server, but with assumptions about the directory structure based on standard svn practices. .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.svnfs.init() Return the list of svn remotes and their configuration information .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.svnfs.purge_cache() Purge the fileserver cache .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.svnfs.serve_file(load, fnd) Return a chunk from a file based on the data received .UNINDENT
.INDENT 0.0 .TP .B salt.fileserver.svnfs.update() Execute an svn update on all of the repos .UNINDENT
.SS Salt code and internals .sp Reference documentation on Salt\(aqs internal code.
.SS Contents
.SS salt.serializers
.SS salt.utils.aggregation .sp This library makes it possible to introspect a dataset and aggregate nodes when instructed to do so.
.IP Note The following examples will be expressed in YAML for the sake of convenience:
.INDENT 0.0 .IP \(bu 2 !aggr\-scalar will refer to the Scalar Python function .IP \(bu 2 !aggr\-map will refer to the Map Python object .IP \(bu 2 !aggr\-seq will refer to the Sequence Python object .UNINDENT .RE
.SS How to instruct merging
.sp This YAML document has duplicate keys:
.sp .nf .ft C
foo: !aggr\-scalar first
foo: !aggr\-scalar second
bar: !aggr\-map {first: foo}
bar: !aggr\-map {second: bar}
baz: !aggr\-scalar 42
.ft P .fi
.sp but the tagged values instruct salt that the overlapping values can be merged together:
.sp .nf .ft C
foo: !aggr\-seq [first, second]
bar: !aggr\-map {first: foo, second: bar}
baz: !aggr\-seq [42]
.ft P .fi
.SS The default merge strategy is left untouched
.sp For example, this YAML document still has duplicate keys, but does not instruct aggregation:
.sp .nf .ft C
foo: first
foo: second
bar: {first: foo}
bar: {second: bar}
baz: 42
.ft P .fi
.sp So the values found last prevail:
.sp .nf .ft C
foo: second
bar: {second: bar}
baz: 42
.ft P .fi
.SS Limitations
.sp Aggregation is permitted between tagged objects that share the same type. If not, the default merge strategy prevails.
.sp For example, these documents:
.sp .nf .ft C
foo: {first: value}
foo: !aggr\-map {second: value}
bar: !aggr\-map {first: value}
bar: 42
baz: !aggr\-seq [42]
baz: [fail]
qux: 42
qux: !aggr\-scalar fail
.ft P .fi
.sp are interpreted like this:
.sp .nf .ft C
foo: !aggr\-map {second: value}
bar: 42
baz: [fail]
qux: !aggr\-seq [fail]
.ft P .fi
.SS Introspection
.sp TODO: write this part
.INDENT 0.0 .TP .B salt.utils.aggregation.aggregate(obj_a, obj_b, level=False, map_class=Map, sequence_class=Sequence) Merge obj_b into obj_a.
.sp .nf .ft C
>>> aggregate(\(aqfirst\(aq, \(aqsecond\(aq, True) == [\(aqfirst\(aq, \(aqsecond\(aq]
True
.ft P .fi
.UNINDENT
.INDENT 0.0 .TP .B class salt.utils.aggregation.Aggregate Aggregation base. .UNINDENT
.INDENT 0.0 .TP .B class salt.utils.aggregation.Map(*args, **kwds) Map aggregation. .UNINDENT
.INDENT 0.0 .TP .B salt.utils.aggregation.Scalar(obj) Shortcut for Sequence creation
.sp .nf .ft C
>>> Scalar(\(aqfoo\(aq) == Sequence([\(aqfoo\(aq])
True
.ft P .fi
.UNINDENT
.INDENT 0.0 .TP .B class salt.utils.aggregation.Sequence Sequence aggregation. .UNINDENT
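.sp The same merge behaviour can also be exercised directly from Python. The following is a brief, hypothetical sketch (not taken from the module itself) using the helpers documented above; the exact container types returned may differ slightly from the plain dict/list shown in the comments:
.sp .nf .ft C
from salt.utils.aggregation import aggregate, Map, Scalar

# Plain values: with the third argument truthy they are aggregated
# into a list, as in the doctest above.
assert aggregate(\(aqfirst\(aq, \(aqsecond\(aq, True) == [\(aqfirst\(aq, \(aqsecond\(aq]

# Tagged values: Scalar(\(aqx\(aq) is shorthand for Sequence([\(aqx\(aq]), and two
# tagged Maps are merged key by key instead of the second one winning.
first = Map(foo=Scalar(\(aqfirst\(aq), bar=Map(first=\(aqfoo\(aq))
second = Map(foo=Scalar(\(aqsecond\(aq), bar=Map(second=\(aqbar\(aq))
merged = aggregate(first, second)
# merged is expected to resemble:
#   {\(aqfoo\(aq: [\(aqfirst\(aq, \(aqsecond\(aq], \(aqbar\(aq: {\(aqfirst\(aq: \(aqfoo\(aq, \(aqsecond\(aq: \(aqbar\(aq}}
.ft P .fi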
.SS Exceptions
.sp Salt\-specific exceptions should be thrown as often as possible so that the various interfaces to Salt (CLI, API, etc.) can handle those errors appropriately and display suitable error messages.
.TS center; |l|l|. _ T{ \fBsalt.exceptions\fP T} T{ This module is a central location for all salt exceptions T} _ .TE
.SS salt.exceptions .sp This module is a central location for all salt exceptions
.INDENT 0.0 .TP .B exception salt.exceptions.AuthenticationError If sha256 signature fails during decryption .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.AuthorizationError Thrown when runner or wheel execution fails due to permissions .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.CommandExecutionError Used when a module runs a command which returns an error and wants to show the user the output gracefully instead of dying .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.CommandNotFoundError Used in modules or grains when a required binary is not available .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.EauthAuthenticationError Thrown when eauth authentication fails .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.LoaderError Problems loading the right renderer .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.MasterExit Raised when the master exits .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.MinionError Minion problems reading uris such as salt:// or http:// .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.PkgParseError Used when one of the pkg modules cannot correctly parse the output from the CLI tool (pacman, yum, apt, aptitude, etc.) .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.SaltClientError Problem reading the master root key .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.SaltException Base exception class; all Salt\-specific exceptions should subclass this .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.SaltInvocationError Used when the wrong number of arguments are sent to modules or invalid arguments are specified on the command line .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.SaltMasterError Problem reading the master root key .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.SaltRenderError(error, line_num=None, buf=\(aq\(aq, marker=\(aq <======================\(aq, trace=None) Used when a renderer needs to raise an explicit error. If a line number and buffer string are passed, get_context will be invoked to get the location of the error. .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.SaltReqTimeoutError Thrown when a salt master request call fails to return within the timeout .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.SaltRunnerError Problem in runner .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.SaltSystemExit(code=0, msg=None) This exception is raised when an unsolvable problem is found. There\(aqs nothing else to do, salt should just exit. .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.SaltWheelError Problem in wheel .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.TimedProcTimeoutError Thrown when a timed subprocess does not terminate within the timeout, or if the specified timeout is not an int or a float .UNINDENT
.INDENT 0.0 .TP .B exception salt.exceptions.TokenAuthenticationError Thrown when token authentication fails .UNINDENT
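.sp As a brief, hypothetical sketch (not part of the module reference above), a custom execution module would typically raise these exceptions instead of returning ad\-hoc error strings, so that the CLI and API can report failures consistently. The module name and the \fBfrob\fP command used here are made up for illustration:
.sp .nf .ft C
# /srv/salt/_modules/mymod.py \-\- hypothetical custom execution module
import salt.utils
from salt.exceptions import CommandExecutionError, CommandNotFoundError

def frobnicate(target):
    # Required binary is missing on the minion
    if salt.utils.which(\(aqfrob\(aq) is None:
        raise CommandNotFoundError(\(aqfrob is not installed\(aq)
    ret = __salt__[\(aqcmd.run_all\(aq](\(aqfrob {0}\(aq.format(target))
    if ret[\(aqretcode\(aq] != 0:
        # Show the failure gracefully instead of dying
        raise CommandExecutionError(ret[\(aqstderr\(aq])
    return ret[\(aqstdout\(aq]
.ft P .fi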
.SS salt.serializers
.SS salt.utils.serializers .sp This module implements all the serializers needed by salt.
Each serializer offers the same functions and attributes:
.INDENT 0.0 .TP .B deserialize function for deserializing string or stream .TP .B serialize function for serializing a Python object .TP .B available flag that tells if the serializer is available (all dependencies are met etc.) .UNINDENT
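.sp For instance (a minimal usage sketch, not taken from the module documentation below; the data is made up), a round trip with the JSON serializer looks like this:
.sp .nf .ft C
from salt.utils.serializers import json

if json.available:  # True when all dependencies for this serializer are met
    data = {\(aqrole\(aq: \(aqwebserver\(aq, \(aqports\(aq: [80, 443]}
    text = json.serialize(data)   # e.g. \(aq{"role": "webserver", "ports": [80, 443]}\(aq
    assert json.deserialize(text) == data
.ft P .fi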
.SS salt.utils.serializers.json .sp Implements the JSON serializer. .sp It\(aqs just a wrapper around json (or simplejson if available).
.INDENT 0.0 .TP .B salt.utils.serializers.json.deserialize(stream_or_string, **options) Deserialize any string or stream\-like object into a Python data structure. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBstream_or_string\fP \-\- stream or string to deserialize. .IP \(bu 2 \fBoptions\fP \-\- options passed to the underlying json/simplejson module. .UNINDENT .UNINDENT .UNINDENT
.INDENT 0.0 .TP .B salt.utils.serializers.json.serialize(obj, **options) Serialize Python data to JSON. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBobj\fP \-\- the data structure to serialize .IP \(bu 2 \fBoptions\fP \-\- options passed to the underlying json/simplejson module. .UNINDENT .UNINDENT .UNINDENT
.SS salt.utils.serializers.yaml .sp Implements the YAML serializer. .sp Underneath, it is based on PyYAML and uses the safe dumper and loader. It also uses C bindings if they are available.
.INDENT 0.0 .TP .B salt.utils.serializers.yaml.deserialize(stream_or_string, **options) Deserialize any string or stream\-like object into a Python data structure. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBstream_or_string\fP \-\- stream or string to deserialize. .IP \(bu 2 \fBoptions\fP \-\- options passed to the underlying yaml module. .UNINDENT .UNINDENT .UNINDENT
.INDENT 0.0 .TP .B salt.utils.serializers.yaml.serialize(obj, **options) Serialize Python data to YAML. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBobj\fP \-\- the data structure to serialize .IP \(bu 2 \fBoptions\fP \-\- options passed to the underlying yaml module. .UNINDENT .UNINDENT .UNINDENT
.SS salt.utils.serializers.msgpack .sp Implements the MsgPack serializer.
.INDENT 0.0 .TP .B salt.utils.serializers.msgpack.deserialize(stream_or_string, **options) Deserialize any string or stream\-like object into a Python data structure. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBstream_or_string\fP \-\- stream or string to deserialize. .IP \(bu 2 \fBoptions\fP \-\- options passed to the underlying msgpack module. .UNINDENT .UNINDENT .UNINDENT
.INDENT 0.0 .TP .B salt.utils.serializers.msgpack.serialize(obj, **options) Serialize Python data to MsgPack. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBobj\fP \-\- the data structure to serialize .IP \(bu 2 \fBoptions\fP \-\- options passed to the underlying msgpack module. .UNINDENT .UNINDENT .UNINDENT
.SS Full list of builtin execution modules .IP "Virtual modules"
.SS salt.modules.pkg .sp \fBpkg\fP is a virtual module that is fulfilled by one of the following modules:
.INDENT 0.0 .IP \(bu 2 \fBsalt.modules.aptpkg\fP .IP \(bu 2 \fBsalt.modules.brew\fP .IP \(bu 2 \fBsalt.modules.ebuild\fP .IP \(bu 2 \fBsalt.modules.freebsdpkg\fP .IP \(bu 2 \fBsalt.modules.openbsdpkg\fP .IP \(bu 2 \fBsalt.modules.pacman\fP .IP \(bu 2 \fBsalt.modules.pkgin\fP .IP \(bu 2 \fBsalt.modules.pkgng\fP .IP \(bu 2 \fBsalt.modules.pkgutil\fP .IP \(bu 2 \fBsalt.modules.solarispkg\fP .IP \(bu 2 \fBsalt.modules.win_pkg\fP .IP \(bu 2 \fBsalt.modules.yumpkg\fP .IP \(bu 2 \fBsalt.modules.zypper\fP .UNINDENT .RE .TS center; |l|l|.
_ T{ \fBaliases\fP T} T{ Manage the information in the aliases file T} _ T{ \fBalternatives\fP T} T{ Support for Alternatives system T} _ T{ \fBapache\fP T} T{ Support for Apache T} _ T{ \fBaptpkg\fP T} T{ Support for APT (Advanced Packaging Tool) T} _ T{ \fBarchive\fP T} T{ A module to wrap archive calls .. T} _ T{ \fBat\fP T} T{ Wrapper module for at(1) T} _ T{ \fBaugeas_cfg\fP T} T{ Manages configuration files via augeas T} _ T{ \fBaws_sqs\fP T} T{ Support for the Amazon Simple Queue Service. T} _ T{ \fBblockdev\fP T} T{ Module for managing block devices .. T} _ T{ \fBbluez\fP T} T{ Support for Bluetooth (using BlueZ in Linux). T} _ T{ \fBboto_asg\fP T} T{ Connection module for Amazon Autoscale Groups T} _ T{ \fBboto_elb\fP T} T{ Connection module for Amazon ELB T} _ T{ \fBboto_iam\fP T} T{ Connection module for Amazon IAM T} _ T{ \fBboto_route53\fP T} T{ Connection module for Amazon Route53 T} _ T{ \fBboto_secgroup\fP T} T{ Connection module for Amazon Security Groups T} _ T{ \fBboto_sqs\fP T} T{ Connection module for Amazon SQS T} _ T{ \fBbrew\fP T} T{ Homebrew for Mac OS X T} _ T{ \fBbridge\fP T} T{ Module for gathering and managing bridging information T} _ T{ \fBbsd_shadow\fP T} T{ Manage the password database on BSD systems T} _ T{ \fBcassandra\fP T} T{ Cassandra NoSQL Database Module T} _ T{ \fBchef\fP T} T{ Execute chef in server or solo mode T} _ T{ \fBchocolatey\fP T} T{ A dead simple module wrapping calls to the Chocolatey package manager T} _ T{ \fBcloud\fP T} T{ Salt\-specific interface for calling Salt Cloud directly T} _ T{ \fBcmdmod\fP T} T{ A module for shelling out T} _ T{ \fBcomposer\fP T} T{ Use composer to install PHP dependencies for a directory T} _ T{ \fBconfig\fP T} T{ Return config information T} _ T{ \fBcp\fP T} T{ Minion side functions for salt\-cp T} _ T{ \fBcron\fP T} T{ Work with cron T} _ T{ \fBdaemontools\fP T} T{ daemontools service module. This module will create daemontools type T} _ T{ \fBdarwin_sysctl\fP T} T{ Module for viewing and modifying sysctl parameters T} _ T{ \fBdata\fP T} T{ Manage a local persistent data structure that can hold any arbitrary data T} _ T{ \fBddns\fP T} T{ Support for RFC 2136 dynamic DNS updates. T} _ T{ \fBdeb_apache\fP T} T{ Support for Apache T} _ T{ \fBdebconfmod\fP T} T{ Support for Debconf T} _ T{ \fBdebian_ip\fP T} T{ The networking module for Debian based distros T} _ T{ \fBdebian_service\fP T} T{ Service support for Debian systems (uses update\-rc.d and /sbin/service) T} _ T{ \fBdefaults\fP T} T{ T} _ T{ \fBdig\fP T} T{ Compendium of generic DNS utilities T} _ T{ \fBdisk\fP T} T{ Module for gathering disk information T} _ T{ \fBdjangomod\fP T} T{ Manage Django sites T} _ T{ \fBdnsmasq\fP T} T{ Module for managing dnsmasq T} _ T{ \fBdnsutil\fP T} T{ Compendium of generic DNS utilities T} _ T{ \fBdockerio\fP T} T{ Management of dockers ===================== .. T} _ T{ \fBdpkg\fP T} T{ Support for DEB packages T} _ T{ \fBebuild\fP T} T{ Support for Portage T} _ T{ \fBeix\fP T} T{ Support for Eix T} _ T{ \fBenviron\fP T} T{ Support for getting and setting the environment variables T} _ T{ \fBeselect\fP T} T{ Support for eselect, Gentoo\(aqs configuration and management tool. 
T} _ T{ \fBetcd_mod\fP T} T{ Execution module to work with etcd T} _ T{ \fBevent\fP T} T{ Use the \fBSalt Event System\fP to fire events from the T} _ T{ \fBextfs\fP T} T{ Module for managing ext2/3/4 file systems T} _ T{ \fBfile\fP T} T{ Manage information about regular files, directories, T} _ T{ \fBfreebsd_sysctl\fP T} T{ Module for viewing and modifying sysctl parameters T} _ T{ \fBfreebsdjail\fP T} T{ The jail module for FreeBSD T} _ T{ \fBfreebsdkmod\fP T} T{ Module to manage FreeBSD kernel modules T} _ T{ \fBfreebsdpkg\fP T} T{ Remote package support using \fBpkg_add(1)\fP .. T} _ T{ \fBfreebsdports\fP T} T{ Install software from the FreeBSD \fBports(7)\fP system T} _ T{ \fBfreebsdservice\fP T} T{ The service module for FreeBSD T} _ T{ \fBgem\fP T} T{ Manage ruby gems. T} _ T{ \fBgenesis\fP T} T{ Module for managing container and VM images T} _ T{ \fBgentoo_service\fP T} T{ Top level package command wrapper, used to translate the os detected by grains T} _ T{ \fBgentoolkitmod\fP T} T{ Support for Gentoolkit T} _ T{ \fBgit\fP T} T{ Support for the Git SCM T} _ T{ \fBglance\fP T} T{ Module for handling openstack glance calls. T} _ T{ \fBglusterfs\fP T} T{ Manage a glusterfs pool T} _ T{ \fBgnomedesktop\fP T} T{ GNOME implementations T} _ T{ \fBgrains\fP T} T{ Return/control aspects of the grains data T} _ T{ \fBgroupadd\fP T} T{ Manage groups on Linux and OpenBSD T} _ T{ \fBgrub_legacy\fP T} T{ Support for GRUB Legacy T} _ T{ \fBguestfs\fP T} T{ Interact with virtual machine images via libguestfs T} _ T{ \fBhadoop\fP T} T{ Support for hadoop T} _ T{ \fBhaproxyconn\fP T} T{ Support for haproxy T} _ T{ \fBhg\fP T} T{ Support for the Mercurial SCM T} _ T{ \fBhosts\fP T} T{ Manage the information in the hosts file T} _ T{ \fBhtpasswd\fP T} T{ Support for htpasswd command .. T} _ T{ \fBimg\fP T} T{ Virtual machine image management tools T} _ T{ \fBincron\fP T} T{ Work with incron T} _ T{ \fBinflux\fP T} T{ InfluxDB \- A distributed time series database T} _ T{ \fBini_manage\fP T} T{ Edit ini files T} _ T{ \fBintrospect\fP T} T{ Functions to perform introspection on a minion, and return data in a format T} _ T{ \fBipset\fP T} T{ Support for ipset T} _ T{ \fBiptables\fP T} T{ Support for iptables T} _ T{ \fBjunos\fP T} T{ Module for interfacing to Junos devices T} _ T{ \fBkey\fP T} T{ Functions to view the minion\(aqs public key information T} _ T{ \fBkeyboard\fP T} T{ Module for managing keyboards on supported POSIX\-like systems such as T} _ T{ \fBkeystone\fP T} T{ Module for handling openstack keystone calls. T} _ T{ \fBkmod\fP T} T{ Module to manage Linux kernel modules T} _ T{ \fBlaunchctl\fP T} T{ Module for the management of MacOS systems that use launchd/launchctl T} _ T{ \fBlayman\fP T} T{ Support for Layman T} _ T{ \fBldapmod\fP T} T{ Salt interface to LDAP commands T} _ T{ \fBlinux_acl\fP T} T{ Support for Linux File Access Control Lists T} _ T{ \fBlinux_lvm\fP T} T{ Support for Linux LVM2 T} _ T{ \fBlinux_sysctl\fP T} T{ Module for viewing and modifying sysctl parameters T} _ T{ \fBlocalemod\fP T} T{ Module for managing locales on POSIX\-like systems. T} _ T{ \fBlocate\fP T} T{ Module for using the locate utilities T} _ T{ \fBlogrotate\fP T} T{ Module for managing logrotate. 
T} _ T{ \fBlvs\fP T} T{ Support for LVS (Linux Virtual Server) T} _ T{ \fBlxc\fP T} T{ Control Linux Containers via Salt T} _ T{ \fBmac_group\fP T} T{ Manage groups on Mac OS 10.7+ T} _ T{ \fBmac_user\fP T} T{ Manage users on Mac OS 10.7+ T} _ T{ \fBmacports\fP T} T{ Support for MacPorts under MacOSX T} _ T{ \fBmakeconf\fP T} T{ Support for modifying make.conf under Gentoo T} _ T{ \fBmatch\fP T} T{ The match module allows for match routines to be run and determine target specs T} _ T{ \fBmdadm\fP T} T{ Salt module to manage RAID arrays with mdadm T} _ T{ \fBmemcached\fP T} T{ Module for Management of Memcached Keys T} _ T{ \fBmine\fP T} T{ The function cache system allows for data to be stored on the master so it can be easily read by other minions T} _ T{ \fBmodjk\fP T} T{ Control Modjk via the Apache Tomcat "Status" worker T} _ T{ \fBmongodb\fP T} T{ Module to provide MongoDB functionality to Salt T} _ T{ \fBmonit\fP T} T{ Monit service module. T} _ T{ \fBmoosefs\fP T} T{ Module for gathering and managing information about MooseFS T} _ T{ \fBmount\fP T} T{ Salt module to manage unix mounts and the fstab file T} _ T{ \fBmunin\fP T} T{ Run munin plugins/checks from salt and format the output as data. T} _ T{ \fBmysql\fP T} T{ Module to provide MySQL compatibility to salt. T} _ T{ \fBnagios\fP T} T{ Run nagios plugins/checks from salt and get the return as data. T} _ T{ \fBnetbsd_sysctl\fP T} T{ Module for viewing and modifying sysctl parameters T} _ T{ \fBnetbsdservice\fP T} T{ The service module for NetBSD T} _ T{ \fBnetwork\fP T} T{ Module for gathering and managing network information T} _ T{ \fBnfs3\fP T} T{ Module for managing NFS version 3. T} _ T{ \fBnftables\fP T} T{ Support for nftables T} _ T{ \fBnginx\fP T} T{ Support for nginx T} _ T{ \fBnova\fP T} T{ Module for handling OpenStack Nova calls T} _ T{ \fBnpm\fP T} T{ Manage and query NPM packages. T} _ T{ \fBomapi\fP T} T{ This module interacts with an ISC DHCP Server via OMAPI. T} _ T{ \fBopenbsdpkg\fP T} T{ Package support for OpenBSD T} _ T{ \fBopenbsdservice\fP T} T{ The service module for OpenBSD T} _ T{ \fBopenstack_config\fP T} T{ Modify, retrieve, or delete values from OpenStack configuration files. T} _ T{ \fBosxdesktop\fP T} T{ Mac OS X implementations of various commands in the "desktop" interface T} _ T{ \fBpacman\fP T} T{ A module to wrap pacman calls, since Arch is the best T} _ T{ \fBpagerduty\fP T} T{ Module for Firing Events via PagerDuty T} _ T{ \fBpam\fP T} T{ Support for pam T} _ T{ \fBparted\fP T} T{ Module for managing partitions on POSIX\-like systems. T} _ T{ \fBpecl\fP T} T{ Manage PHP pecl extensions. T} _ T{ \fBpillar\fP T} T{ Extract the pillar data for this minion T} _ T{ \fBpip\fP T} T{ Install Python packages with pip to either the system or a virtualenv T} _ T{ \fBpkg_resource\fP T} T{ Resources needed by pkg providers T} _ T{ \fBpkgin\fP T} T{ Package support for pkgin based systems, inspired from freebsdpkg module T} _ T{ \fBpkgng\fP T} T{ Support for \fBpkgng\fP, the new package manager for FreeBSD T} _ T{ \fBpkgutil\fP T} T{ Pkgutil support for Solaris T} _ T{ \fBportage_config\fP T} T{ Configure \fBportage(5)\fP T} _ T{ \fBpostfix\fP T} T{ Support for Postfix T} _ T{ \fBpostgres\fP T} T{ Module to provide Postgres compatibility to salt. T} _ T{ \fBpoudriere\fP T} T{ Support for poudriere T} _ T{ \fBpowerpath\fP T} T{ powerpath support. T} _ T{ \fBps\fP T} T{ A salt interface to psutil, a system and process library. 
T} _ T{ \fBpublish\fP T} T{ Publish a command from a minion to a target T} _ T{ \fBpuppet\fP T} T{ Execute puppet routines T} _ T{ \fBpw_group\fP T} T{ Manage groups on FreeBSD T} _ T{ \fBpw_user\fP T} T{ Manage users with the useradd command T} _ T{ \fBqemu_img\fP T} T{ Qemu\-img Command Wrapper T} _ T{ \fBqemu_nbd\fP T} T{ Qemu Command Wrapper T} _ T{ \fBquota\fP T} T{ Module for managing quotas on POSIX\-like systems. T} _ T{ \fBrabbitmq\fP T} T{ Module to provide RabbitMQ compatibility to Salt. T} _ T{ \fBrbenv\fP T} T{ Manage ruby installations with rbenv. T} _ T{ \fBrdp\fP T} T{ Manage RDP Service on Windows servers T} _ T{ \fBreg\fP T} T{ Manage the registry on Windows T} _ T{ \fBredismod\fP T} T{ Module to provide redis functionality to Salt T} _ T{ \fBrest_package\fP T} T{ Service support for the REST example T} _ T{ \fBrest_sample\fP T} T{ Module for interfacing to the REST example T} _ T{ \fBrest_service\fP T} T{ Service support for the REST example T} _ T{ \fBret\fP T} T{ Module to integrate with the returner system and retrieve data sent to a salt returner T} _ T{ \fBrh_ip\fP T} T{ The networking module for RHEL/Fedora based distros T} _ T{ \fBrh_service\fP T} T{ Service support for RHEL\-based systems, including support for both upstart and sysvinit T} _ T{ \fBriak\fP T} T{ Riak Salt Module T} _ T{ \fBrpm\fP T} T{ Support for rpm T} _ T{ \fBrsync\fP T} T{ Wrapper for rsync .. T} _ T{ \fBrvm\fP T} T{ Manage ruby installations and gemsets with RVM, the Ruby Version Manager. T} _ T{ \fBs3\fP T} T{ Connection module for Amazon S3 T} _ T{ \fBsaltcloudmod\fP T} T{ Control a salt cloud system T} _ T{ \fBsaltutil\fP T} T{ The Saltutil module is used to manage the state of the salt minion itself. It T} _ T{ \fBseed\fP T} T{ Virtual machine image management tools T} _ T{ \fBselinux\fP T} T{ Execute calls on selinux .. T} _ T{ \fBserverdensity_device\fP T} T{ Wrapper around Server Density API T} _ T{ \fBservice\fP T} T{ The default service module, if not otherwise specified salt will fall back T} _ T{ \fBshadow\fP T} T{ Manage the shadow file T} _ T{ \fBsmartos_imgadm\fP T} T{ Module for running imgadm command on SmartOS T} _ T{ \fBsmartos_vmadm\fP T} T{ Module for managing VMs on SmartOS T} _ T{ \fBsmf\fP T} T{ Service support for Solaris 10 and 11, should work with other systems T} _ T{ \fBsmtp\fP T} T{ Module for Sending Messages via SMTP T} _ T{ \fBsoftwareupdate\fP T} T{ Support for the softwareupdate command on MacOS. T} _ T{ \fBsolaris_group\fP T} T{ Manage groups on Solaris T} _ T{ \fBsolaris_shadow\fP T} T{ Manage the password database on Solaris systems T} _ T{ \fBsolaris_user\fP T} T{ Manage users with the useradd command T} _ T{ \fBsolarispkg\fP T} T{ Package support for Solaris T} _ T{ \fBsolr\fP T} T{ Apache Solr Salt Module T} _ T{ \fBsqlite3\fP T} T{ Support for SQLite3 T} _ T{ \fBssh\fP T} T{ Manage client ssh components .. T} _ T{ \fBstate\fP T} T{ Control the state system on the minion T} _ T{ \fBstatus\fP T} T{ Module for returning various status data about a minion. 
T} _ T{ \fBsupervisord\fP T} T{ Provide the service module for system supervisord or supervisord in a T} _ T{ \fBsvn\fP T} T{ Subversion SCM T} _ T{ \fBswift\fP T} T{ Module for handling OpenStack Swift calls T} _ T{ \fBsysbench\fP T} T{ The \(aqsysbench\(aq module is used to analyse the T} _ T{ \fBsysmod\fP T} T{ The sys module provides information about the available functions on the minion T} _ T{ \fBsystem\fP T} T{ Support for reboot, shutdown, etc T} _ T{ \fBsystemd\fP T} T{ Provide the service module for systemd T} _ T{ \fBtest\fP T} T{ Module for running arbitrary tests T} _ T{ \fBtimezone\fP T} T{ Module for managing timezone on POSIX\-like systems. T} _ T{ \fBtls\fP T} T{ A salt module for SSL/TLS. T} _ T{ \fBtomcat\fP T} T{ Support for Tomcat T} _ T{ \fBupstart\fP T} T{ Module for the management of upstart systems. T} _ T{ \fBuseradd\fP T} T{ Manage users with the useradd command T} _ T{ \fBuwsgi\fP T} T{ uWSGI stats server \fI\%http://uwsgi-docs.readthedocs.org/en/latest/StatsServer.html\fP T} _ T{ \fBvarnish\fP T} T{ Support for Varnish T} _ T{ \fBvirt\fP T} T{ Work with virtual machines managed by libvirt T} _ T{ \fBvirtualenv_mod\fP T} T{ Create virtualenv environments T} _ T{ \fBwin_autoruns\fP T} T{ Module for listing programs that automatically run on startup T} _ T{ \fBwin_disk\fP T} T{ Module for gathering disk information on Windows T} _ T{ \fBwin_dns_client\fP T} T{ Module for configuring DNS Client on Windows systems T} _ T{ \fBwin_file\fP T} T{ Manage information about files on the minion, set/read user, group T} _ T{ \fBwin_firewall\fP T} T{ Module for configuring Windows Firewall T} _ T{ \fBwin_groupadd\fP T} T{ Manage groups on Windows T} _ T{ \fBwin_ip\fP T} T{ The networking module for Windows based systems T} _ T{ \fBwin_network\fP T} T{ Module for gathering and managing network information T} _ T{ \fBwin_ntp\fP T} T{ Management of NTP servers on Windows T} _ T{ \fBwin_path\fP T} T{ Manage the Windows System PATH T} _ T{ \fBwin_pkg\fP T} T{ A module to manage software on Windows T} _ T{ \fBwin_repo\fP T} T{ Module to manage Windows software repo on a Standalone Minion T} _ T{ \fBwin_servermanager\fP T} T{ Manage Windows features via the ServerManager powershell module T} _ T{ \fBwin_service\fP T} T{ Windows Service module. T} _ T{ \fBwin_shadow\fP T} T{ Manage the shadow file T} _ T{ \fBwin_status\fP T} T{ Module for returning various status data about a minion. T} _ T{ \fBwin_system\fP T} T{ Support for reboot, shutdown, etc T} _ T{ \fBwin_timezone\fP T} T{ Module for managing timezone on Windows systems. T} _ T{ \fBwin_useradd\fP T} T{ Manage Windows users with the net user command T} _ T{ \fBxapi\fP T} T{ This module (mostly) uses the XenAPI to manage Xen virtual machines. T} _ T{ \fBxmpp\fP T} T{ Module for Sending Messages via XMPP (a.k.a. Jabber) T} _ T{ \fByumpkg\fP T} T{ Support for YUM T} _ T{ \fBzcbuildout\fP T} T{ Management of zc.buildout ========================= .. 
T} _ T{ \fBzfs\fP T} T{ Salt interface to ZFS commands T} _ T{ \fBznc\fP T} T{ znc \- An advanced IRC bouncer T} _ T{ \fBzpool\fP T} T{ Module for running ZFS zpool command T} _ T{ \fBzypper\fP T} T{ Package support for openSUSE via the zypper package manager T} _ .TE .SS salt.modules.aliases .sp Manage the information in the aliases file .INDENT 0.0 .TP .B salt.modules.aliases.get_target(alias) Return the target associated with an alias .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq aliases.get_target alias .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aliases.has_target(alias, target) Return true if the alias/target is set .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq aliases.has_target alias target .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aliases.list_aliases() Return the aliases found in the aliases file in this format: .sp .nf .ft C {\(aqalias\(aq: \(aqtarget\(aq} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq aliases.list_aliases .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aliases.rm_alias(alias) Remove an entry from the aliases file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq aliases.rm_alias alias .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aliases.set_target(alias, target) Set the entry in the aliases file for the given alias, this will overwrite any previous entry for the given alias or create a new one if it does not exist. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq aliases.set_target alias target .ft P .fi .UNINDENT .SS salt.modules.alternatives .sp Support for Alternatives system .INDENT 0.0 .TP .B codeauthor Radek Rada <\fI\%radek.rada@gmail.com\fP> .UNINDENT .INDENT 0.0 .TP .B salt.modules.alternatives.auto(name) Trigger alternatives to set the path for as specified by priority. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq alternatives.auto name .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.alternatives.check_installed(name, path) Check if the current highest\-priority match for a given alternatives link is set to the desired path .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq alternatives.check_installed name path .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.alternatives.display(name) Display alternatives settings for defined command name .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq alternatives.display editor .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.alternatives.install(name, link, path, priority) Install symbolic links determining default commands .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq alternatives.install editor /usr/bin/editor /usr/bin/emacs23 50 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.alternatives.remove(name, path) Remove symbolic links determining the default commands. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq alternatives.remove name path .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.alternatives.set(name, path) Manually set the alternative for . .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq alternatives.set name path .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.alternatives.show_current(name) Display the current highest\-priority alternative for a given alternatives link .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq alternatives.show_current editor .ft P .fi .UNINDENT .SS salt.modules.apache .sp Support for Apache .sp Please note: The functions in here are generic functions designed to work with all implementations of Apache. 
Debian\-specific functions have been moved into deb_apache.py, but will still load under the \fBapache\fP namespace when a Debian\-based system is detected.
.INDENT 0.0 .TP .B salt.modules.apache.config(name, config, edit=True) Create VirtualHost configuration files .INDENT 7.0 .TP .B name File for the virtual host .TP .B config VirtualHost configurations .UNINDENT .sp Note: This function is not meant to be used from the command line. Config is meant to be an ordered dict of all of the apache configs. .sp CLI Examples: .sp .nf .ft C
salt \(aq*\(aq apache.config /etc/httpd/conf.d/ports.conf config="[{\(aqListen\(aq: \(aq22\(aq}]"
.ft P .fi .UNINDENT
.INDENT 0.0 .TP .B salt.modules.apache.directives() Return a list of directives together with expected arguments and places where the directive is valid (\fBapachectl \-L\fP) .sp CLI Example: .sp .nf .ft C
salt \(aq*\(aq apache.directives
.ft P .fi .UNINDENT
.INDENT 0.0 .TP .B salt.modules.apache.fullversion() Return server version from apachectl \-V .sp CLI Example: .sp .nf .ft C
salt \(aq*\(aq apache.fullversion
.ft P .fi .UNINDENT
.INDENT 0.0 .TP .B salt.modules.apache.modules() Return a list of static and shared modules from apachectl \-M .sp CLI Example: .sp .nf .ft C
salt \(aq*\(aq apache.modules
.ft P .fi .UNINDENT
.INDENT 0.0 .TP .B salt.modules.apache.server_status(profile=\(aqdefault\(aq) Get information from the Apache server\-status handler .sp NOTE: The server\-status handler is disabled by default. In order for this function to work, it needs to be enabled. See \fI\%http://httpd.apache.org/docs/2.2/mod/mod_status.html\fP .sp The following configuration needs to exist in pillar/grains. Each entry nested in apache.server\-status is a profile of a vhost/server; this provides support for multiple Apache servers/vhosts: .sp .nf .ft C
apache.server\-status:
  \(aqdefault\(aq:
    \(aqurl\(aq: http://localhost/server\-status
    \(aquser\(aq: someuser
    \(aqpass\(aq: password
    \(aqrealm\(aq: \(aqauthentication realm for digest passwords\(aq
    \(aqtimeout\(aq: 5
.ft P .fi .sp CLI Examples: .sp .nf .ft C
salt \(aq*\(aq apache.server_status
salt \(aq*\(aq apache.server_status other\-profile
.ft P .fi .UNINDENT
.INDENT 0.0 .TP .B salt.modules.apache.servermods() Return a list of modules compiled into the server (apachectl \-l) .sp CLI Example: .sp .nf .ft C
salt \(aq*\(aq apache.servermods
.ft P .fi .UNINDENT
.INDENT 0.0 .TP .B salt.modules.apache.signal(signal=None) Signals httpd to start, restart, or stop. .sp CLI Example: .sp .nf .ft C
salt \(aq*\(aq apache.signal restart
.ft P .fi .UNINDENT
.INDENT 0.0 .TP .B salt.modules.apache.useradd(pwfile, user, password, opts=\(aq\(aq) Add an HTTP user using the htpasswd command. If the htpasswd file does not exist, it will be created. Valid options that can be passed are: .INDENT 7.0 .IP \(bu 2 \fBn\fP Don\(aqt update the file; display results on stdout. .IP \(bu 2 \fBm\fP Force MD5 encryption of the password (default). .IP \(bu 2 \fBd\fP Force CRYPT encryption of the password. .IP \(bu 2 \fBp\fP Do not encrypt the password (plaintext). .IP \(bu 2 \fBs\fP Force SHA encryption of the password. .UNINDENT .sp CLI Examples: .sp .nf .ft C
salt \(aq*\(aq apache.useradd /etc/httpd/htpasswd larry badpassword
salt \(aq*\(aq apache.useradd /etc/httpd/htpasswd larry badpass opts=ns
.ft P .fi .UNINDENT
.INDENT 0.0 .TP .B salt.modules.apache.userdel(pwfile, user) Delete an HTTP user from the specified htpasswd file.
.sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq apache.userdel /etc/httpd/htpasswd larry .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.apache.version() Return server version from apachectl \-v .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq apache.version .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.apache.vhosts() Show the settings as parsed from the config file (currently only shows the virtualhost settings). (\fBapachectl \-S\fP) Because each additional virtual host adds to the execution time, this command may require a long timeout be specified. .sp CLI Example: .sp .nf .ft C salt \-t 10 \(aq*\(aq apache.vhosts .ft P .fi .UNINDENT .SS salt.modules.aptpkg .sp Support for APT (Advanced Packaging Tool) .INDENT 0.0 .TP .B salt.modules.aptpkg.del_repo(repo, **kwargs) Delete a repo from the sources.list / sources.list.d .sp If the .list file is in the sources.list.d directory and the file that the repo exists in does not contain any other repo configuration, the file itself will be deleted. .sp The repo passed in must be a fully formed repository definition string. .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq pkg.del_repo "myrepo definition" .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.expand_repo_def(repokwargs) Take a repository definition and expand it to the full pkg repository dict that can be used for comparison. This is a helper function to make the Debian/Ubuntu apt sources sane for comparison in the pkgrepo states. .sp There is no use to calling this function via the CLI. .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.file_dict(*packages) List the files that belong to a package, grouped by package. Not specifying any packages will return a list of _every_ file on the system\(aqs package database (not generally recommended). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq pkg.file_list httpd salt \(aq*\(aq pkg.file_list httpd postfix salt \(aq*\(aq pkg.file_list .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.file_list(*packages) List the files that belong to a package. Not specifying any packages will return a list of _every_ file on the system\(aqs package database (not generally recommended). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq pkg.file_list httpd salt \(aq*\(aq pkg.file_list httpd postfix salt \(aq*\(aq pkg.file_list .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.get_repo(repo, **kwargs) Display a repo from the sources.list / sources.list.d .sp The repo passed in needs to be a complete repo entry. .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq pkg.get_repo "myrepo definition" .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.get_selections(pattern=None, state=None) View package state from the dpkg database. .sp Returns a dict of dicts containing the state, and package names: .sp .nf .ft C {\(aq\(aq: {\(aq\(aq: [\(aqpkg1\(aq, ... ] }, ... } .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.get_selections salt \(aq*\(aq pkg.get_selections \(aqpython\-*\(aq salt \(aq*\(aq pkg.get_selections state=hold salt \(aq*\(aq pkg.get_selections \(aqopenssh*\(aq state=hold .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.hold(name=None, pkgs=None, sources=None, *kwargs) New in version Helium. .sp Set package in \(aqhold\(aq state, meaning it will not be upgraded. .INDENT 7.0 .TP .B name The name of the package, e.g., \(aqtmux\(aq .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.hold .ft P .fi .TP .B pkgs A list of packages to hold. Must be passed as a python list. 
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.hold pkgs=\(aq["foo", "bar"]\(aq .ft P .fi .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.install(name=None, refresh=False, fromrepo=None, skip_verify=False, debconf=None, pkgs=None, sources=None, **kwargs) Install the passed package, add refresh=True to update the dpkg database. .INDENT 7.0 .TP .B name The name of the package to be installed. Note that this parameter is ignored if either "pkgs" or "sources" is passed. Additionally, please note that this option can only be used to install packages from a software repository. To install a package file manually, use the "sources" option. .sp 32\-bit packages can be installed on 64\-bit systems by appending the architecture designation (\fB:i386\fP, etc.) to the end of the package name. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install .ft P .fi .TP .B refresh Whether or not to refresh the package database before installing. .TP .B fromrepo Specify a package repository to install from (e.g., \fBapt\-get \-t unstable install somepackage\fP) .TP .B skip_verify Skip the GPG verification check (e.g., \fB\-\-allow\-unauthenticated\fP, or \fB\-\-force\-bad\-verify\fP for install from package file). .TP .B debconf Provide the path to a debconf answers file, processed before installation. .TP .B version Install a specific version of the package, e.g. 1.2.3~0ubuntu0. Ignored if "pkgs" or "sources" is passed. .UNINDENT .sp Multiple Package Installation Options: .INDENT 7.0 .TP .B pkgs A list of packages to install from a software repository. Must be passed as a python list. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install pkgs=\(aq["foo", "bar"]\(aq salt \(aq*\(aq pkg.install pkgs=\(aq["foo", {"bar": "1.2.3\-0ubuntu0"}]\(aq .ft P .fi .TP .B sources A list of DEB packages to install. Must be passed as a list of dicts, with the keys being package names, and the values being the source URI or local path to the package. Dependencies are automatically resolved and marked as auto\-installed. .sp 32\-bit packages can be installed on 64\-bit systems by appending the architecture designation (\fB:i386\fP, etc.) to the end of the package name. .sp Changed in version Helium. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install sources=\(aq[{"foo": "salt://foo.deb"},{"bar": "salt://bar.deb"}]\(aq .ft P .fi .TP .B force_yes Passes \fB\-\-force\-yes\fP to the apt\-get command. Don\(aqt use this unless you know what you\(aqre doing. .sp New in version 0.17.4. .UNINDENT .sp Returns a dict containing the new package names and versions: .sp .nf .ft C {\(aq\(aq: {\(aqold\(aq: \(aq\(aq, \(aqnew\(aq: \(aq\(aq}} .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.latest_version(*names, **kwargs) Return the latest version of the named package available for upgrade or installation. If more than one package name is specified, a dict of name/version pairs is returned. .sp If the latest version of a given package is already installed, an empty string will be returned for that package. .sp A specific repo can be requested using the \fBfromrepo\fP keyword argument. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.latest_version salt \(aq*\(aq pkg.latest_version fromrepo=unstable salt \(aq*\(aq pkg.latest_version ... 
.ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.list_pkgs(versions_as_list=False, removed=False, purge_desired=False, **kwargs) List the packages currently installed in a dict: .sp .nf .ft C {\(aq\(aq: \(aq\(aq} .ft P .fi .INDENT 7.0 .TP .B removed If \fBTrue\fP, then only packages which have been removed (but not purged) will be returned. .TP .B purge_desired If \fBTrue\fP, then only packages which have been marked to be purged, but can\(aqt be purged due to their status as dependencies for other installed packages, will be returned. Note that these packages will appear in installed .sp Changed in version 2014.1.1. .UNINDENT .sp External dependencies: .sp .nf .ft C Virtual package resolution requires dctrl\-tools. Without dctrl\-tools virtual packages will be reported as not installed. .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.list_pkgs salt \(aq*\(aq pkg.list_pkgs versions_as_list=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.list_repos() Lists all repos in the sources.list (and sources.lists.d) files .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.list_repos salt \(aq*\(aq pkg.list_repos disabled=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.list_upgrades(refresh=True) List all available package upgrades. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.list_upgrades .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.mod_repo(repo, saltenv=\(aqbase\(aq, **kwargs) Modify one or more values for a repo. If the repo does not exist, it will be created, so long as the definition is well formed. For Ubuntu the "ppa:/repo" format is acceptable. "ppa:" format can only be used to create a new repository. .sp The following options are available to modify a repo definition: .sp .nf .ft C comps (a comma separated list of components for the repo, e.g. "main") file (a file name to be used) keyserver (keyserver to get gpg key from) keyid (key id to load with the keyserver argument) key_url (URL to a gpg key to add to the apt gpg keyring) consolidate (if true, will attempt to de\-dup and consolidate sources) * Note: Due to the way keys are stored for apt, there is a known issue where the key wont be updated unless another change is made at the same time. Keys should be properly added on initial configuration. .ft P .fi .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq pkg.mod_repo \(aqmyrepo definition\(aq uri=http://new/uri salt \(aq*\(aq pkg.mod_repo \(aqmyrepo definition\(aq comps=main,universe .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.owner(*paths) New in version Helium. .sp Return the name of the package that owns the file. Multiple file paths can be passed. Like \fBpkg.version salt \(aq*\(aq pkg.purge ,, salt \(aq*\(aq pkg.purge pkgs=\(aq["foo", "bar"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.refresh_db() Updates the APT database to latest packages based upon repositories .sp Returns a dict, with the keys being package databases and the values being the result of the update attempt. Values can be one of the following: .INDENT 7.0 .IP \(bu 2 \fBTrue\fP: Database updated successfully .IP \(bu 2 \fBFalse\fP: Problem updating database .IP \(bu 2 \fBNone\fP: Database already up\-to\-date .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.refresh_db .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.remove(name=None, pkgs=None, **kwargs) Remove packages using \fBapt\-get remove\fP. .INDENT 7.0 .TP .B name The name of the package to be deleted. 
.UNINDENT .sp Multiple Package Options: .INDENT 7.0 .TP .B pkgs A list of packages to delete. Must be passed as a python list. The \fBname\fP parameter will be ignored if this option is passed. .UNINDENT .sp New in version 0.16.0. .sp Returns a dict containing the changes. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.remove salt \(aq*\(aq pkg.remove ,, salt \(aq*\(aq pkg.remove pkgs=\(aq["foo", "bar"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.set_selections(path=None, selection=None, clear=False, saltenv=\(aqbase\(aq) Change package state in the dpkg database. .sp The state can be any one of, documented in \fBdpkg(1)\fP: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .IP \(bu 2 install .IP \(bu 2 hold .IP \(bu 2 deinstall .IP \(bu 2 purge .UNINDENT .UNINDENT .UNINDENT .sp This command is commonly used to mark specific packages to be held from being upgraded, that is, to be kept at a certain version. When a state is changed to anything but being held, then it is typically followed by \fBapt\-get \-u dselect\-upgrade\fP. .sp Note: Be careful with the \fBclear\fP argument, since it will start with setting all packages to deinstall state. .sp Returns a dict of dicts containing the package names, and the new and old versions: .sp .nf .ft C {\(aq\(aq: {\(aq\(aq: {\(aqnew\(aq: \(aq\(aq, \(aqold\(aq: \(aq\(aq} }, ... } .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.set_selections selection=\(aq{"install": ["netcat"]}\(aq salt \(aq*\(aq pkg.set_selections selection=\(aq{"hold": ["openssh\-server", "openssh\-client"]}\(aq salt \(aq*\(aq pkg.set_selections salt://path/to/file salt \(aq*\(aq pkg.set_selections salt://path/to/file clear=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.unhold(name=None, pkgs=None, sources=None, **kwargs) New in version Helium. .sp Set package current in \(aqhold\(aq state to install state, meaning it will be upgraded. .INDENT 7.0 .TP .B name The name of the package, e.g., \(aqtmux\(aq .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.unhold .ft P .fi .TP .B pkgs A list of packages to hold. Must be passed as a python list. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.unhold pkgs=\(aq["foo", "bar"]\(aq .ft P .fi .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.upgrade(refresh=True, dist_upgrade=True, **kwargs) Upgrades all packages via \fBapt\-get dist\-upgrade\fP .sp Returns a dict containing the changes. .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B {\(aq\(aq: {\(aqold\(aq: \(aq\(aq, \(aqnew\(aq: \(aq\(aq}} .UNINDENT .UNINDENT .UNINDENT .INDENT 7.0 .TP .B dist_upgrade Whether to perform the upgrade using dist\-upgrade vs upgrade. Default is to use dist\-upgrade. .UNINDENT .sp New in version Helium. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.upgrade .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.upgrade_available(name) Check whether or not an upgrade is available for a given package .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.upgrade_available .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.version(*names, **kwargs) Returns a string representing the package version or an empty string if not installed. If more than one package name is specified, a dict of name/version pairs is returned. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.version salt \(aq*\(aq pkg.version ... .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.aptpkg.version_cmp(pkg1, pkg2) Do a cmp\-style comparison on two packages. Return \-1 if pkg1 < pkg2, 0 if pkg1 == pkg2, and 1 if pkg1 > pkg2. 
Return None if there was a problem making the comparison. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.version_cmp \(aq0.2.4\-0ubuntu1\(aq \(aq0.2.4.1\-0ubuntu1\(aq .ft P .fi .UNINDENT .SS salt.modules.archive .sp A module to wrap archive calls .sp New in version 2014.1.0: (Hydrogen) .INDENT 0.0 .TP .B salt.modules.archive.gunzip(gzipfile, template=None) Uses the gunzip command to unpack gzip files .sp CLI Example to create \fB/tmp/sourcefile.txt\fP: .sp .nf .ft C salt \(aq*\(aq archive.gunzip /tmp/sourcefile.txt.gz .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq archive.gunzip template=jinja /tmp/{{grains.id}}.txt.gz .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.archive.gzip(sourcefile, template=None) Uses the gzip command to create gzip files .sp CLI Example to create \fB/tmp/sourcefile.txt.gz\fP: .sp .nf .ft C salt \(aq*\(aq archive.gzip /tmp/sourcefile.txt .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq archive.gzip template=jinja /tmp/{{grains.id}}.txt .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.archive.rar(rarfile, sources, template=None) Uses the rar command to create rar files Uses rar for Linux from \fI\%http://www.rarlab.com/\fP .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq archive.rar /tmp/rarfile.rar /tmp/sourcefile1,/tmp/sourcefile2 .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. .sp For example: .sp .nf .ft C salt \(aq*\(aq archive.rar template=jinja /tmp/rarfile.rar /tmp/sourcefile1,/tmp/{{grains.id}}.txt .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.archive.tar(options, tarfile, sources=None, dest=None, cwd=None, template=None) .IP Note This function has changed for version 0.17.0. In prior versions, the \fBcwd\fP and \fBtemplate\fP arguments must be specified, with the source directories/files coming as a space\-separated list at the end of the command. Beginning with 0.17.0, \fBsources\fP must be a comma\-separated list, and the \fBcwd\fP and \fBtemplate\fP arguments are optional. .RE .sp Uses the tar command to pack, unpack, etc tar files .INDENT 7.0 .TP .B options: Options to pass to the \fBtar\fP binary. .TP .B tarfile: The tar filename to pack/unpack. .TP .B sources: Comma delimited list of files to \fBpack\fP into the tarfile. .TP .B dest: The destination directory to \fBunpack\fP the tarfile to. .TP .B cwd: The directory in which the tar command should be executed. .TP .B template: Template engine name to render the command arguments before execution. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq archive.tar cjvf /tmp/tarfile.tar.bz2 /tmp/file_1,/tmp/file_2 .ft P .fi .sp The template arg can be set to \fBjinja\fP or another supported template engine to render the command arguments before execution. 
For example: .sp .nf .ft C salt \(aq*\(aq archive.tar template=jinja cjvf /tmp/salt.tar.bz2 {{grains.saltpath}} .ft P .fi .sp To unpack a tarfile, for example: .sp .nf .ft C salt \(aq*\(aq archive.tar foo.tar xf dest=/target/directory .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.archive.unrar(rarfile, dest, excludes=None, template=None) Uses the unrar command to unpack rar files Uses rar for Linux from \fI\%http://www.rarlab.com/\fP .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq archive.unrar /tmp/rarfile.rar /home/strongbad/ excludes=file_1,file_2 .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. .sp For example: .sp .nf .ft C salt \(aq*\(aq archive.unrar template=jinja /tmp/rarfile.rar /tmp/{{grains.id}}/ excludes=file_1,file_2 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.archive.unzip(zipfile, dest, excludes=None, template=None, options=None) Uses the unzip command to unpack zip files .INDENT 7.0 .TP .B options: Options to pass to the \fBunzip\fP binary. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq archive.unzip /tmp/zipfile.zip /home/strongbad/ excludes=file_1,file_2 .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. .sp For example: .sp .nf .ft C salt \(aq*\(aq archive.unzip template=jinja /tmp/zipfile.zip /tmp/{{grains.id}}/ excludes=file_1,file_2 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.archive.zip(zipfile, sources, template=None) Uses the zip command to create zip files .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq archive.zip /tmp/zipfile.zip /tmp/sourcefile1,/tmp/sourcefile2 .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. .sp For example: .sp .nf .ft C salt \(aq*\(aq archive.zip template=jinja /tmp/zipfile.zip /tmp/sourcefile1,/tmp/{{grains.id}}.txt .ft P .fi .UNINDENT .SS salt.modules.at .sp Wrapper module for at(1) .sp Also, a \(aqtag\(aq feature has been added to more easily tag jobs. .INDENT 0.0 .TP .B salt.modules.at.at(*args, **kwargs) Add a job to the queue. .sp The \(aqtimespec\(aq follows the format documented in the at(1) manpage. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq at.at [tag=] [runas=] salt \(aq*\(aq at.at 12:05am \(aq/sbin/reboot\(aq tag=reboot salt \(aq*\(aq at.at \(aq3:05am +3 days\(aq \(aqbin/myscript\(aq tag=nightly runas=jim .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.at.atc(jobid) Print the at(1) script that will run for the passed job id. This is mostly for debugging so the output will just be text. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq at.atc .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.at.atq(tag=None) List all queued and running jobs or only those with an optional \(aqtag\(aq. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq at.atq salt \(aq*\(aq at.atq [tag] salt \(aq*\(aq at.atq [job number] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.at.atrm(*args) Remove jobs from the queue. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq at.atrm .. salt \(aq*\(aq at.atrm all salt \(aq*\(aq at.atrm all [tag] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.at.jobcheck(**kwargs) Check the job from queue. The kwargs dict include \(aqhour minute day month year tag runas\(aq Other parameters will be ignored. 
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq at.jobcheck runas=jam day=13 salt \(aq*\(aq at.jobcheck day=13 month=12 year=13 tag=rose .ft P .fi .UNINDENT .SS salt.modules.augeas_cfg .sp Manages configuration files via augeas .sp This module requires the \fBaugeas\fP Python module. .IP Warning Minimal installations of Debian and Ubuntu have been seen to have packaging bugs with python\-augeas, causing the augeas module to fail to import. If the minion has the augeas module installed, but the functions in this execution module fail to run due to being unavailable, first restart the salt\-minion service. If the problem persists past that, the following command can be run from the master to determine what is causing the import to fail: .sp .nf .ft C salt minion\-id cmd.run \(aqpython \-c "from augeas import Augeas"\(aq .ft P .fi .sp For affected Debian/Ubuntu hosts, installing \fBlibpython2.7\fP has been known to resolve the issue. .RE .INDENT 0.0 .TP .B salt.modules.augeas_cfg.get(path, value=\(aq\(aq) Get a value for a specific augeas path .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq augeas.get /files/etc/hosts/1/ ipaddr .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.augeas_cfg.ls(path) List the direct children of a node .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq augeas.ls /files/etc/passwd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.augeas_cfg.match(path, value=\(aq\(aq) Get matches for path expression .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq augeas.match /files/etc/services/service\-name ssh .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.augeas_cfg.remove(path) Remove a node or all nodes matching a path expression .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq augeas.remove /files/etc/sysctl.conf/net.ipv4.conf.all.log_martians .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.augeas_cfg.setvalue(*args) Set a value for a specific augeas path .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq augeas.setvalue /files/etc/hosts/1/canonical localhost .ft P .fi .sp This will set the first entry in /etc/hosts to localhost .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq augeas.setvalue /files/etc/hosts/01/ipaddr 192.168.1.1 \e /files/etc/hosts/01/canonical test .ft P .fi .sp Adds a new host to /etc/hosts with the IP address 192.168.1.1 and the hostname test .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq augeas.setvalue prefix=/files/etc/sudoers/ \e "spec[user = \(aq%wheel\(aq]/user" "%wheel" \e "spec[user = \(aq%wheel\(aq]/host_group/host" \(aqALL\(aq \e "spec[user = \(aq%wheel\(aq]/host_group/command[1]" \(aqALL\(aq \e "spec[user = \(aq%wheel\(aq]/host_group/command[1]/tag" \(aqPASSWD\(aq \e "spec[user = \(aq%wheel\(aq]/host_group/command[2]" \(aq/usr/bin/apt\-get\(aq \e "spec[user = \(aq%wheel\(aq]/host_group/command[2]/tag" NOPASSWD .ft P .fi .sp Ensures that the following line is present in /etc/sudoers: .sp .nf .ft C %wheel ALL = PASSWD : ALL , NOPASSWD : /usr/bin/apt\-get , /usr/bin/aptitude .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.augeas_cfg.tree(path) Recursively returns the complete tree of a node .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq augeas.tree /files/etc/ .ft P .fi .UNINDENT .SS salt.modules.aws_sqs .sp Support for the Amazon Simple Queue Service. .INDENT 0.0 .TP .B salt.modules.aws_sqs.create_queue(name, region, opts=None, user=None) Creates an SQS queue with the given name.
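.sp For instance, a minimal sketch (\fBmyqueue\fP is only an illustrative queue name, not one defined by this module): .sp .nf .ft C salt \(aq*\(aq aws_sqs.create_queue myqueue region=us\-east\-1 .ft P .fi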
.INDENT 7.0 .TP .B name Name of the SQS queue to create .TP .B region Region to create the SQS queue in .TP .B opts None Any additional options to add to the command line .TP .B user None Run as a user other than what the minion runs as .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.aws_sqs.delete_message(queue, region, receipthandle, opts=None, user=None) Delete one or more messages from a queue in a region .INDENT 7.0 .TP .B queue The name of the queue to delete messages from .TP .B region Region where the SQS queue exists .TP .B receipthandle The ReceiptHandle of the message to delete. The ReceiptHandle is obtained in the return from receive_message .TP .B opts None Any additional options to add to the command line .TP .B user None Run as a user other than what the minion runs as .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq aws_sqs.delete_message receipthandle=\(aq\(aq .ft P .fi .sp New in version Helium. .UNINDENT .INDENT 0.0 .TP .B salt.modules.aws_sqs.delete_queue(name, region, opts=None, user=None) Deletes a queue in the region. .INDENT 7.0 .TP .B name Name of the SQS queue to delete .TP .B region Name of the region to delete the queue from .TP .B opts None Any additional options to add to the command line .TP .B user None Run as a user other than what the minion runs as .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.aws_sqs.list_queues(region, opts=None, user=None) List the queues in the selected region. .INDENT 7.0 .TP .B region Region to list SQS queues for .TP .B opts None Any additional options to add to the command line .TP .B user None Run as a user other than what the minion runs as .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.aws_sqs.queue_exists(name, region, opts=None, user=None) Returns True or False depending on whether the queue exists in the region .INDENT 7.0 .TP .B name Name of the SQS queue to search for .TP .B region Name of the region to search for the queue in .TP .B opts None Any additional options to add to the command line .TP .B user None Run as a user other than what the minion runs as .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.aws_sqs.receive_message(queue, region, num=1, opts=None, user=None) Receive one or more messages from a queue in a region .INDENT 7.0 .TP .B queue The name of the queue to receive messages from .TP .B region Region where the SQS queue exists .TP .B num 1 The max number of messages to receive .TP .B opts None Any additional options to add to the command line .TP .B user None Run as a user other than what the minion runs as .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq aws_sqs.receive_message salt \(aq*\(aq aws_sqs.receive_message num=10 .ft P .fi .sp New in version Helium. .UNINDENT .SS salt.modules.blockdev .sp Module for managing block devices .sp New in version Helium. .INDENT 0.0 .TP .B salt.modules.blockdev.dump(device, args=None) Return all contents of dumpe2fs for a specified device .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq blockdev.dump /dev/sda1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.blockdev.tune(device, **kwargs) Set attributes for the specified device .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq blockdev.tune /dev/sda1 read\-ahead=1024 read\-write=True .ft P .fi .sp Valid options are: \fBread\-ahead\fP, \fBfilesystem\-read\-ahead\fP, \fBread\-only\fP, \fBread\-write\fP. .sp See the \fBblockdev(8)\fP manpage for a more complete description of these options.
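.sp For instance, a minimal sketch marking a device read\-only (assuming \fB/dev/sda1\fP exists on the minion): .sp .nf .ft C salt \(aq*\(aq blockdev.tune /dev/sda1 read\-only=True .ft P .fi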
.UNINDENT .INDENT 0.0 .TP .B salt.modules.blockdev.wipe(device) Remove the filesystem information .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq blockdev.wipe /dev/sda1 .ft P .fi .UNINDENT .SS salt.modules.bluez .sp Support for Bluetooth (using BlueZ in Linux). .sp The following packages are required for this module: .INDENT 0.0 .INDENT 3.5 bluez >= 5.7 bluez\-libs >= 5.7 bluez\-utils >= 5.7 pybluez >= 0.18 .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.address() Get the addresses of the Bluetooth adapters .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.address .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.block(bdaddr) Block a specific bluetooth device by BD Address .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.block DE:AD:BE:EF:CA:FE .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.discoverable(dev) Enable this bluetooth device to be discoverable. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.discoverable hci0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.noscan(dev) Turn off scanning modes on this device. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.noscan hci0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.pair(address, key) Pair the bluetooth adapter with a device .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.pair DE:AD:BE:EF:CA:FE 1234 .ft P .fi .sp Where DE:AD:BE:EF:CA:FE is the address of the device to pair with, and 1234 is the passphrase. .sp TODO: This function is currently broken, as the bluez\-simple\-agent program no longer ships with BlueZ >= 5.0. It needs to be refactored. .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.power(dev, mode) Power a bluetooth device on or off .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq bluetooth.power hci0 on salt \(aq*\(aq bluetooth.power hci0 off .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.scan() Scan for bluetooth devices in the area .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.scan .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.start() Start the bluetooth service. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.start .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.stop() Stop the bluetooth service. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.stop .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.unblock(bdaddr) Unblock a specific bluetooth device by BD Address .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.unblock DE:AD:BE:EF:CA:FE .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.unpair(address) Unpair the bluetooth adapter from a device .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.unpair DE:AD:BE:EF:CA:FE .ft P .fi .sp Where DE:AD:BE:EF:CA:FE is the address of the device to unpair. .sp TODO: This function is currently broken, as the bluez\-simple\-agent program no longer ships with BlueZ >= 5.0. It needs to be refactored. .UNINDENT .INDENT 0.0 .TP .B salt.modules.bluez.version() Return Bluez version from bluetoothd \-v .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bluetooth.version .ft P .fi .UNINDENT .SS salt.modules.boto_asg .sp Connection module for Amazon Autoscale Groups .sp New in version Helium. .INDENT 0.0 .TP .B configuration This module accepts explicit autoscale credentials but can also utilize IAM roles assigned to the instance through Instance Profiles. Dynamic credentials are then automatically obtained from the AWS API and no further configuration is necessary.
More Information available at: .sp .nf .ft C http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam\-roles\-for\-amazon\-ec2.html .ft P .fi .sp If IAM roles are not used you need to specify them either in a pillar or in the minion\(aqs config file: .sp .nf .ft C asg.keyid: GKTADJGHEIQSXMKKRBJ08H asg.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs .ft P .fi .sp A region may also be specified in the configuration: .sp .nf .ft C asg.region: us\-east\-1 .ft P .fi .sp If a region is not specified, the default is us\-east\-1. .sp It\(aqs also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B myprofile: keyid: GKTADJGHEIQSXMKKRBJ08H key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs region: us\-east\-1 .UNINDENT .UNINDENT .UNINDENT .TP .B depends boto .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_asg.create(name, launch_config_name, availability_zones, min_size, max_size, desired_capacity=None, load_balancers=None, default_cooldown=None, health_check_type=None, health_check_period=None, placement_group=None, vpc_zone_identifier=None, tags=None, termination_policies=None, region=None, key=None, keyid=None, profile=None) Create an autoscale group. .sp CLI example: .sp .nf .ft C salt myminion boto_asg.create myasg mylc \(aq["us\-east\-1a", "us\-east\-1e"]\(aq 1 10 load_balancers=\(aq["myelb", "myelb2"]\(aq tags=\(aq[{"key": "Name", value="myasg", "propagate_at_launch": True}]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_asg.create_launch_configuration(name, image_id=None, key_name=None, security_groups=None, user_data=None, instance_type=\(aqm1.small\(aq, kernel_id=None, ramdisk_id=None, block_device_mappings=None, instance_monitoring=False, spot_price=None, instance_profile_name=None, ebs_optimized=False, associate_public_ip_address=None, volume_type=None, delete_on_termination=True, iops=None, use_block_device_types=False, region=None, key=None, keyid=None, profile=None) Create a launch configuration. .sp CLI example: .sp .nf .ft C salt myminion boto_asg.create_launch_configuration mylc image_id=ami\-0b9c9f62 key_name=\(aqmykey\(aq security_groups=\(aq["mygroup"]\(aq instance_type=\(aqc3.2xlarge\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_asg.delete(name, force=False, region=None, key=None, keyid=None, profile=None) Delete an autoscale group. .sp CLI example: .sp .nf .ft C salt myminion boto_asg.delete myasg region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_asg.delete_launch_configuration(name, region=None, key=None, keyid=None, profile=None) Delete a launch configuration. .sp CLI example: .sp .nf .ft C salt myminion boto_asg.delete_launch_configuration mylc .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_asg.exists(name, region=None, key=None, keyid=None, profile=None) Check to see if an autoscale group exists. .sp CLI example: .sp .nf .ft C salt myminion boto_asg.exists myasg region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_asg.get_cloud_init_mime(cloud_init) Get a mime multipart encoded string from a cloud\-init dict. Currently supports scripts and cloud\-config. .sp CLI Example: .sp .nf .ft C salt myminion boto.get_cloud_init_mime .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_asg.get_config(name, region=None, key=None, keyid=None, profile=None) Get the configuration for an autoscale group. 
.sp CLI example: .sp .nf .ft C salt myminion boto_asg.get_config myasg region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_asg.launch_configuration_exists(name, region=None, key=None, keyid=None, profile=None) Check for a launch configuration\(aqs existence. .sp CLI example: .sp .nf .ft C salt myminion boto_asg.launch_configuration_exists mylc .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_asg.update(name, launch_config_name, availability_zones, min_size, max_size, desired_capacity=None, load_balancers=None, default_cooldown=None, health_check_type=None, health_check_period=None, placement_group=None, vpc_zone_identifier=None, tags=None, termination_policies=None, region=None, key=None, keyid=None, profile=None) Update an autoscale group. .sp CLI example: .sp .nf .ft C salt myminion boto_asg.update myasg mylc \(aq["us\-east\-1a", "us\-east\-1e"]\(aq 1 10 load_balancers=\(aq["myelb", "myelb2"]\(aq tags=\(aq[{"key": "Name", value="myasg", "propagate_at_launch": True}]\(aq .ft P .fi .UNINDENT .SS salt.modules.boto_elb .sp Connection module for Amazon ELB .sp New in version Helium. .INDENT 0.0 .TP .B configuration This module accepts explicit elb credentials but can also utilize IAM roles assigned to the instance through Instance Profiles. Dynamic credentials are then automatically obtained from the AWS API and no further configuration is necessary. More Information available at: .sp .nf .ft C http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam\-roles\-for\-amazon\-ec2.html .ft P .fi .sp If IAM roles are not used you need to specify them either in a pillar or in the minion\(aqs config file: .sp .nf .ft C elb.keyid: GKTADJGHEIQSXMKKRBJ08H elb.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs .ft P .fi .sp A region may also be specified in the configuration: .sp .nf .ft C elb.region: us\-east\-1 .ft P .fi .sp If a region is not specified, the default is us\-east\-1. .sp It\(aqs also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B myprofile: keyid: GKTADJGHEIQSXMKKRBJ08H key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs region: us\-east\-1 .UNINDENT .UNINDENT .UNINDENT .TP .B depends boto .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.attach_subnets(name, subnets, region=None, key=None, keyid=None, profile=None) Attach ELB to subnets. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.attach_subnets myelb \(aq["mysubnet"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.create(name, availability_zones, listeners=None, subnets=None, security_groups=None, scheme=\(aqinternet\-facing\(aq, region=None, key=None, keyid=None, profile=None) Create an ELB .sp CLI example to create an ELB: .sp .nf .ft C salt myminion boto_elb.create myelb \(aq["us\-east\-1a", "us\-east\-1e"]\(aq listeners=\(aq[["HTTPS", "HTTP", 443, 80, "arn:aws:iam::1111111:server\-certificate/mycert"]]\(aq region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.create_listeners(name, listeners=None, region=None, key=None, keyid=None, profile=None) Create listeners on an ELB. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.create_listeners myelb listeners=\(aq[["HTTPS", "HTTP", 443, 80, "arn:aws:iam::1111111:server\-certificate/mycert"]]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.delete(name, region=None, key=None, keyid=None, profile=None) Delete an ELB.
.sp CLI example to delete an ELB: .sp .nf .ft C salt myminion boto_elb.delete myelb region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.delete_listeners(name, ports, region=None, key=None, keyid=None, profile=None) Delete listeners on an ELB. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.delete_listeners myelb \(aq[80,443]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.detach_subnets(name, subnets, region=None, key=None, keyid=None, profile=None) Detach ELB from subnets. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.detach_subnets myelb \(aq["mysubnet"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.disable_availability_zones(name, availability_zones, region=None, key=None, keyid=None, profile=None) Disable availability zones for ELB. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.disable_availability_zones myelb \(aq["us\-east\-1a"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.enable_availability_zones(name, availability_zones, region=None, key=None, keyid=None, profile=None) Enable availability zones for ELB. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.enable_availability_zones myelb \(aq["us\-east\-1a"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.exists(name, region=None, key=None, keyid=None, profile=None) Check to see if an ELB exists. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.exists myelb region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.get_attributes(name, region=None, key=None, keyid=None, profile=None) Check to see if attributes are set on an ELB. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.get_attributes myelb .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.get_elb_config(name, region=None, key=None, keyid=None, profile=None) Get the configuration for an ELB. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.get_elb_config myelb region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.get_health_check(name, region=None, key=None, keyid=None, profile=None) Get the health check configured for this ELB. .sp CLI example: .sp .nf .ft C salt myminion boto_elb.get_health_check myelb .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.set_attributes(name, attributes, region=None, key=None, keyid=None, profile=None) Set attributes on an ELB. .sp CLI example to set attributes on an ELB: .sp .nf .ft C salt myminion boto_elb.set_attributes myelb \(aq{"access_log": {"enabled": "true", "s3_bucket_name": "mybucket", "s3_bucket_prefix": "mylogs/", "emit_interval": "5"}}\(aq region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_elb.set_health_check(name, health_check, region=None, key=None, keyid=None, profile=None) Set the health check configuration for an ELB. .sp CLI example to set the health check on an ELB: .sp .nf .ft C salt myminion boto_elb.set_health_check myelb \(aq{"target": "HTTP:80/"}\(aq .ft P .fi .UNINDENT .SS salt.modules.boto_iam .sp Connection module for Amazon IAM .sp New in version Helium. .INDENT 0.0 .TP .B configuration This module accepts explicit iam credentials but can also utilize IAM roles assigned to the instance through Instance Profiles. Dynamic credentials are then automatically obtained from the AWS API and no further configuration is necessary.
More Information available at: .sp .nf .ft C http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam\-roles\-for\-amazon\-ec2.html .ft P .fi .sp If IAM roles are not used you need to specify them either in a pillar or in the minion\(aqs config file: .sp .nf .ft C iam.keyid: GKTADJGHEIQSXMKKRBJ08H iam.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs iam.region: us\-east\-1 .ft P .fi .sp It\(aqs also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B myprofile: keyid: GKTADJGHEIQSXMKKRBJ08H key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs region: us\-east\-1 .UNINDENT .UNINDENT .UNINDENT .TP .B depends boto .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.associate_profile_to_role(profile_name, role_name, region=None, key=None, keyid=None, profile=None) Associate an instance profile with an IAM role. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.associate_profile_to_role myirole myiprofile .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.create_instance_profile(name, region=None, key=None, keyid=None, profile=None) Create an instance profile. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.create_instance_profile myiprofile .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.create_role(name, policy_document=None, path=None, region=None, key=None, keyid=None, profile=None) Create an IAM role. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.create_role myrole .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.create_role_policy(role_name, policy_name, policy, region=None, key=None, keyid=None, profile=None) Create or modify a role policy. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.create_role_policy myirole mypolicy \(aq{"Statement": [{"Action": ["sqs:*"], "Effect": "Allow", "Resource": ["arn:aws:sqs:*:*:*"], "Sid": "MyPolicySqs1"}]}\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.delete_instance_profile(name, region=None, key=None, keyid=None, profile=None) Delete an instance profile. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.delete_instance_profile myiprofile .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.delete_role(name, region=None, key=None, keyid=None, profile=None) Delete an IAM role. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.delete_role myirole .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.delete_role_policy(role_name, policy_name, region=None, key=None, keyid=None, profile=None) Delete a role policy. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.delete_role_policy myirole mypolicy .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.disassociate_profile_from_role(profile_name, role_name, region=None, key=None, keyid=None, profile=None) Disassociate an instance profile from an IAM role. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.disassociate_profile_from_role myirole myiprofile .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.get_role_policy(role_name, policy_name, region=None, key=None, keyid=None, profile=None) Get a role policy. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.get_role_policy myirole mypolicy .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.instance_profile_exists(name, region=None, key=None, keyid=None, profile=None) Check to see if an instance profile exists.
.sp CLI example: .sp .nf .ft C salt myminion boto_iam.instance_profile_exists myiprofile .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.list_role_policies(role_name, region=None, key=None, keyid=None, profile=None) Get a list of policy names from a role. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.list_role_policies myirole .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.profile_associated(role_name, profile_name, region, key, keyid, profile) Check to see if an instance profile is associated with an IAM role. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.profile_associated myirole myiprofile .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_iam.role_exists(name, region=None, key=None, keyid=None, profile=None) Check to see if an IAM role exists. .sp CLI example: .sp .nf .ft C salt myminion boto_iam.role_exists myirole .ft P .fi .UNINDENT .SS salt.modules.boto_route53 .sp Connection module for Amazon Route53 .sp New in version Helium. .INDENT 0.0 .TP .B configuration This module accepts explicit route53 credentials but can also utilize IAM roles assigned to the instance through Instance Profiles. Dynamic credentials are then automatically obtained from the AWS API and no further configuration is necessary. More Information available at: .sp .nf .ft C http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam\-roles\-for\-amazon\-ec2.html .ft P .fi .sp If IAM roles are not used you need to specify them either in a pillar or in the minion\(aqs config file: .sp .nf .ft C route53.keyid: GKTADJGHEIQSXMKKRBJ08H route53.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs .ft P .fi .sp A region may also be specified in the configuration: .sp .nf .ft C route53.region: us\-east\-1 .ft P .fi .sp If a region is not specified, the default is us\-east\-1. .sp It\(aqs also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B myprofile: keyid: GKTADJGHEIQSXMKKRBJ08H key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs region: us\-east\-1 .UNINDENT .UNINDENT .UNINDENT .TP .B depends boto .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_route53.add_record(name, value, zone, record_type, identifier=None, ttl=None, region=None, key=None, keyid=None, profile=None) Add a record to a zone. .sp CLI example: .sp .nf .ft C salt myminion boto_route53.add_record test.example.org 1.1.1.1 example.org A .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_route53.delete_record(name, zone, record_type, identifier=None, all_records=False, region=None, key=None, keyid=None, profile=None) Delete a record from a zone. .sp CLI example: .sp .nf .ft C salt myminion boto_route53.delete_record test.example.org example.org A .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_route53.get_record(name, zone, record_type, fetch_all=False, region=None, key=None, keyid=None, profile=None) Get a record from a zone. .sp CLI example: .sp .nf .ft C salt myminion boto_route53.get_record test.example.org example.org A .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_route53.update_record(name, value, zone, record_type, identifier=None, ttl=None, region=None, key=None, keyid=None, profile=None) Modify a record in a zone. .sp CLI example: .sp .nf .ft C salt myminion boto_route53.update_record test.example.org 1.1.1.1 example.org A .ft P .fi .UNINDENT .SS salt.modules.boto_secgroup .sp Connection module for Amazon Security Groups .sp New in version Helium.
.INDENT 0.0 .TP .B configuration This module accepts explicit ec2 credentials but can also utilize IAM roles assigned to the instance through Instance Profiles. Dynamic credentials are then automatically obtained from the AWS API and no further configuration is necessary. More Information available at: .sp .nf .ft C http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam\-roles\-for\-amazon\-ec2.html .ft P .fi .sp If IAM roles are not used you need to specify them either in a pillar or in the minion\(aqs config file: .sp .nf .ft C secgroup.keyid: GKTADJGHEIQSXMKKRBJ08H secgroup.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs .ft P .fi .sp A region may also be specified in the configuration: .sp .nf .ft C secgroup.region: us\-east\-1 .ft P .fi .sp If a region is not specified, the default is us\-east\-1. .sp It\(aqs also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B myprofile: keyid: GKTADJGHEIQSXMKKRBJ08H key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs region: us\-east\-1 .UNINDENT .UNINDENT .UNINDENT .TP .B depends boto .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_secgroup.authorize(name, source_group_name=None, source_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, group_id=None, source_group_group_id=None, region=None, key=None, keyid=None, profile=None) Add a new rule to an existing security group. .sp CLI example: .sp .nf .ft C salt myminion boto_secgroup.authorize mysecgroup ip_protocol=tcp from_port=80 to_port=80 cidr_ip=\(aq["10.0.0.0/0", "192.168.0.0/0"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_secgroup.create(name, description, vpc_id=None, region=None, key=None, keyid=None, profile=None) Create a security group. .sp CLI example: .sp .nf .ft C salt myminion boto_secgroup.create mysecgroup \(aqMy Security Group\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_secgroup.delete(name, group_id=None, region=None, key=None, keyid=None, profile=None) Delete a security group. .sp CLI example: .sp .nf .ft C salt myminion boto_secgroup.delete mysecgroup .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_secgroup.exists(name, region=None, key=None, keyid=None, profile=None) Check to see if a security group exists. .sp CLI example: .sp .nf .ft C salt myminion boto_secgroup.exists mysecgroup .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_secgroup.get_config(name=None, group_id=None, region=None, key=None, keyid=None, profile=None) Get the configuration for a security group. .sp CLI example: .sp .nf .ft C salt myminion boto_secgroup.get_config mysecgroup .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_secgroup.revoke(name, source_group_name=None, source_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, group_id=None, source_group_group_id=None, region=None, key=None, keyid=None, profile=None) Remove a rule from an existing security group. .sp CLI example: .sp .nf .ft C salt myminion boto_secgroup.revoke mysecgroup ip_protocol=tcp from_port=80 to_port=80 cidr_ip=\(aq["10.0.0.0/0", "192.168.0.0/0"]\(aq .ft P .fi .UNINDENT .SS salt.modules.boto_sqs .sp Connection module for Amazon SQS .sp New in version Helium. .INDENT 0.0 .TP .B configuration This module accepts explicit sqs credentials but can also utilize IAM roles assigned to the instance through Instance Profiles.
Dynamic credentials are then automatically obtained from the AWS API and no further configuration is necessary. More Information available at: .sp .nf .ft C http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam\-roles\-for\-amazon\-ec2.html .ft P .fi .sp If IAM roles are not used you need to specify them either in a pillar or in the minion\(aqs config file: .sp .nf .ft C sqs.keyid: GKTADJGHEIQSXMKKRBJ08H sqs.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs .ft P .fi .sp A region may also be specified in the configuration: .sp .nf .ft C sqs.region: us\-east\-1 .ft P .fi .sp If a region is not specified, the default is us\-east\-1. .sp It\(aqs also possible to specify key, keyid and region via a profile, either as a passed in dict, or as a string to pull from pillars or minion config: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .TP .B myprofile: keyid: GKTADJGHEIQSXMKKRBJ08H key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs region: us\-east\-1 .UNINDENT .UNINDENT .UNINDENT .TP .B depends boto .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_sqs.create(name, region=None, key=None, keyid=None, profile=None) Create an SQS queue. .sp CLI example to create a queue: .sp .nf .ft C salt myminion boto_sqs.create myqueue region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_sqs.delete(name, region=None, key=None, keyid=None, profile=None) Delete an SQS queue. .sp CLI example to delete a queue: .sp .nf .ft C salt myminion boto_sqs.delete myqueue region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_sqs.exists(name, region=None, key=None, keyid=None, profile=None) Check to see if a queue exists. .sp CLI example: .sp .nf .ft C salt myminion boto_sqs.exists myqueue region=us\-east\-1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_sqs.get_attributes(name, region=None, key=None, keyid=None, profile=None) Check to see if attributes are set on an SQS queue. .sp CLI example: .sp .nf .ft C salt myminion boto_sqs.get_attributes myqueue .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.boto_sqs.set_attributes(name, attributes, region=None, key=None, keyid=None, profile=None) Set attributes on an SQS queue. .sp CLI example to set attributes on a queue: .sp .nf .ft C salt myminion boto_sqs.set_attributes myqueue \(aq{ReceiveMessageWaitTimeSeconds: 20}\(aq region=us\-east\-1 .ft P .fi .UNINDENT .SS salt.modules.brew .sp Homebrew for Mac OS X .INDENT 0.0 .TP .B salt.modules.brew.install(name=None, pkgs=None, taps=None, options=None, **kwargs) Install the passed package(s) with \fBbrew install\fP .INDENT 7.0 .TP .B name The name of the formula to be installed. Note that this parameter is ignored if "pkgs" is passed. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install .ft P .fi .TP .B taps Unofficial Github repos to use when updating and installing formulas. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install tap=\(aq\(aq salt \(aq*\(aq pkg.install zlib taps=\(aqhomebrew/dupes\(aq salt \(aq*\(aq pkg.install php54 taps=\(aq["josegonzalez/php", "homebrew/dupes"]\(aq .ft P .fi .TP .B options Options to pass to brew. Only applies to initial install. Due to how brew works, modifying chosen options requires a full uninstall followed by a fresh install. Note that if "pkgs" is used, all options will be passed to all packages. Unrecognized options for a package will be silently ignored by brew.
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install tap=\(aq\(aq salt \(aq*\(aq pkg.install php54 taps=\(aq["josegonzalez/php", "homebrew/dupes"]\(aq options=\(aq["\-\-with\-fpm"]\(aq .ft P .fi .UNINDENT .sp Multiple Package Installation Options: .INDENT 7.0 .TP .B pkgs A list of formulas to install. Must be passed as a python list. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install pkgs=\(aq["foo","bar"]\(aq .ft P .fi .UNINDENT .sp Returns a dict containing the new package names and versions: .sp .nf .ft C {\(aq\(aq: {\(aqold\(aq: \(aq\(aq, \(aqnew\(aq: \(aq\(aq}} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install \(aqpackage package package\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.brew.latest_version(*names, **kwargs) Return the latest version of the named package available for upgrade or installation .sp Note that this is currently not fully implemented, but needs to return something to avoid a traceback when calling pkg.latest. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.latest_version salt \(aq*\(aq pkg.latest_version .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.brew.list_pkgs(versions_as_list=False, **kwargs) List the packages currently installed in a dict: .sp .nf .ft C {\(aq\(aq: \(aq\(aq} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.list_pkgs .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.brew.list_upgrades() Check whether or not an upgrade is available for all packages .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.list_upgrades .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.brew.refresh_db() Update the homebrew package repository. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.refresh_db .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.brew.remove(name=None, pkgs=None, **kwargs) Removes packages with \fBbrew uninstall\fP. .INDENT 7.0 .TP .B name The name of the package to be deleted. .UNINDENT .sp Multiple Package Options: .INDENT 7.0 .TP .B pkgs A list of packages to delete. Must be passed as a python list. The \fBname\fP parameter will be ignored if this option is passed. .UNINDENT .sp New in version 0.16.0. .sp Returns a dict containing the changes. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.remove salt \(aq*\(aq pkg.remove ,, salt \(aq*\(aq pkg.remove pkgs=\(aq["foo", "bar"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.brew.upgrade(refresh=True) Upgrade outdated, unpinned brews. .INDENT 7.0 .TP .B refresh Fetch the newest version of Homebrew and all formulae from GitHub before installing. .UNINDENT .sp Return a dict containing the new package names and versions: .sp .nf .ft C {\(aq\(aq: {\(aqold\(aq: \(aq\(aq, \(aqnew\(aq: \(aq\(aq}} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.upgrade .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.brew.upgrade_available(pkg) Check whether or not an upgrade is available for a given package .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.upgrade_available .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.brew.version(*names, **kwargs) Returns a string representing the package version or an empty string if not installed. If more than one package name is specified, a dict of name/version pairs is returned.
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.version salt \(aq*\(aq pkg.version .ft P .fi .UNINDENT .SS salt.modules.bridge .sp Module for gathering and managing bridging information .INDENT 0.0 .TP .B salt.modules.bridge.add(br=None) Creates a bridge .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.add br0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bridge.addif(br=None, iface=None) Adds an interface to a bridge .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.addif br0 eth0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bridge.delete(br=None) Deletes a bridge .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.delete br0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bridge.delif(br=None, iface=None) Removes an interface from a bridge .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.delif br0 eth0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bridge.find_interfaces(*args) Returns the bridge to which the interfaces are bound .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.find_interfaces eth0 [eth1...] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bridge.interfaces(br=None) Returns interfaces attached to a bridge .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.interfaces br0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bridge.list() Returns the machine\(aqs bridge list .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.list .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bridge.show(br=None) Returns bridge interfaces along with enslaved physical interfaces. If no bridge is given, all bridges are shown, else only the specified bridge values are returned. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.show salt \(aq*\(aq bridge.show br0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bridge.stp(br=None, state=\(aqdisable\(aq, iface=None) Sets Spanning Tree Protocol state for a bridge .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.stp br0 enable salt \(aq*\(aq bridge.stp br0 disable .ft P .fi .sp On BSD\-like operating systems, the interface on which to enable STP must also be specified. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq bridge.stp bridge0 enable fxp0 salt \(aq*\(aq bridge.stp bridge0 disable fxp0 .ft P .fi .UNINDENT .SS salt.modules.bsd_shadow .sp Manage the password database on BSD systems .INDENT 0.0 .TP .B salt.modules.bsd_shadow.default_hash() Returns the default hash used for unset passwords .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq shadow.default_hash .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bsd_shadow.info(name) Return information for the specified user .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq shadow.info someuser .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.bsd_shadow.set_password(name, password) Set the password for a named user. The password must be a properly defined hash. The password hash can be generated with this command: .sp \fBpython \-c "import crypt; print crypt.crypt(\(aqpassword\(aq, ciphersalt)"\fP .sp \fBNOTE:\fP When constructing the \fBciphersalt\fP string, you must escape any dollar signs, to avoid them being interpolated by the shell. .sp \fB\(aqpassword\(aq\fP is, of course, the password for which you want to generate a hash. .sp \fBciphersalt\fP is a combination of a cipher identifier, an optional number of rounds, and the cryptographic salt. The arrangement and format of these fields depends on the cipher and which flavor of BSD you are using. For more information on this, see the manpage for \fBcrypt(3)\fP.
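.sp For example, a minimal sketch assuming a SHA\-512 style ciphersalt (cipher identifier \fB6\fP; \fBsomesalt\fP is only an illustrative salt string, and the exact format may differ on your BSD flavor): .sp .nf .ft C python \-c "import crypt; print crypt.crypt(\(aqpassword\(aq, \(aq\e$6\e$somesalt\e$\(aq)" .ft P .fi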
On NetBSD, additional information is available in \fBpasswd.conf(5)\fP. .sp It is important to make sure that a supported cipher is used. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq shadow.set_password someuser \(aq$1$UYCIxa628.9qXjpQCjM4a..\(aq .ft P .fi .UNINDENT .SS salt.modules.cassandra .sp Cassandra NoSQL Database Module .INDENT 0.0 .TP .B depends .INDENT 7.0 .IP \(bu 2 pycassa Cassandra Python adapter .UNINDENT .TP .B configuration The location of the \(aqnodetool\(aq command, host, and thrift port needs to be specified via pillar: .sp .nf .ft C cassandra.nodetool: /usr/local/bin/nodetool cassandra.host: localhost cassandra.thrift_port: 9160 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cassandra.column_families(keyspace=None) Return existing column families for all keyspaces or just the provided one. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cassandra.column_families salt \(aq*\(aq cassandra.column_families .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cassandra.column_family_definition(keyspace=None, column_family=None) Return a dictionary of column family definitions for the given keyspace/column_family .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cassandra.column_family_definition .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cassandra.compactionstats() Return compactionstats info .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cassandra.compactionstats .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cassandra.info() Return cassandra node info .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cassandra.info .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cassandra.keyspaces() Return existing keyspaces .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cassandra.keyspaces .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cassandra.netstats() Return netstats info .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cassandra.netstats .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cassandra.ring() Return cassandra ring info .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cassandra.ring .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cassandra.tpstats() Return tpstats info .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cassandra.tpstats .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cassandra.version() Return the cassandra version .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cassandra.version .ft P .fi .UNINDENT .SS salt.modules.chef .sp Execute chef in server or solo mode .INDENT 0.0 .TP .B salt.modules.chef.client(*args, **kwargs) Execute a chef client run and return a dict with the stderr, stdout, return code, etc. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chef.client server=https://localhost \-l debug .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chef.ohai(*args, **kwargs) Execute a ohai and return a dict with the stderr, stdout, return code, etc. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chef.ohai .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chef.solo(*args, **kwargs) Execute a chef solo run and return a dict with the stderr, stdout, return code, etc. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chef.solo config=/etc/chef/solo.rb \-l debug .ft P .fi .UNINDENT .SS salt.modules.chocolatey .sp A dead simple module wrapping calls to the Chocolatey package manager (\fI\%http://chocolatey.org\fP) .sp New in version 2014.1.0: (Hydrogen) .INDENT 0.0 .TP .B salt.modules.chocolatey.bootstrap(force=False) Download and install the latest version of the Chocolatey package manager via the official bootstrap. 
.sp Chocolatey requires Windows PowerShell and the .NET v4.0 runtime. Depending on the host\(aqs version of Windows, chocolatey.bootstrap will attempt to ensure these prerequisites are met by downloading and executing the appropriate installers from Microsoft. .sp Note that if PowerShell is installed, you may have to restart the host machine for Chocolatey to work. .INDENT 7.0 .TP .B force Run the bootstrap process even if Chocolatey is found in the path. .UNINDENT .sp .nf .ft C salt \(aq*\(aq chocolatey.bootstrap salt \(aq*\(aq chocolatey.bootstrap force=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.install(name, version=None, source=None, force=False) Instructs Chocolatey to install a package. .INDENT 7.0 .TP .B name The name of the package to be installed. Only accepts a single argument. .TP .B version Install a specific version of the package. Defaults to latest version. .TP .B source Chocolatey repository (directory, share or remote URL feed) the package comes from. Defaults to the official Chocolatey feed. .TP .B force Reinstall the current version of an existing package. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.install salt \(aq*\(aq chocolatey.install version= .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.install_cygwin(name) Instructs Chocolatey to install a package via Cygwin. .INDENT 7.0 .TP .B name The name of the package to be installed. Only accepts a single argument. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.install_cygwin .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.install_gem(name, version=None) Instructs Chocolatey to install a package via Ruby\(aqs Gems. .INDENT 7.0 .TP .B name The name of the package to be installed. Only accepts a single argument. .TP .B version Install a specific version of the package. Defaults to latest version available. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.install_gem salt \(aq*\(aq chocolatey.install_gem version= .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.install_missing(name, version=None, source=None) Instructs Chocolatey to install a package if it doesn\(aqt already exist. .INDENT 7.0 .TP .B name The name of the package to be installed. Only accepts a single argument. .TP .B version Install a specific version of the package. Defaults to latest version available. .TP .B source Chocolatey repository (directory, share or remote URL feed) the package comes from. Defaults to the official Chocolatey feed. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.install_missing salt \(aq*\(aq chocolatey.install_missing version= .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.install_python(name, version=None) Instructs Chocolatey to install a package via Python\(aqs easy_install. .INDENT 7.0 .TP .B name The name of the package to be installed. Only accepts a single argument. .TP .B version Install a specific version of the package. Defaults to latest version available. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.install_python salt \(aq*\(aq chocolatey.install_python version= .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.install_webpi(name) Instructs Chocolatey to install a package via the Microsoft Web PI service. .INDENT 7.0 .TP .B name The name of the package to be installed. Only accepts a single argument. 
.UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.install_webpi .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.install_windowsfeatures(name) Instructs Chocolatey to install a Windows Feature via the Deployment Image Servicing and Management tool. .INDENT 7.0 .TP .B name The name of the feature to be installed. Only accepts a single argument. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.install_windowsfeatures .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.list(filter, all_versions=False, pre_versions=False, source=None) Instructs Chocolatey to pull a vague package list from the repository. .INDENT 7.0 .TP .B filter Term used to filter down results. Searches against name/description/tag. .TP .B all_versions Display all available package versions in results. Defaults to False. .TP .B pre_versions Display pre\-release packages in results. Defaults to False. .TP .B source Chocolatey repository (directory, share or remote URL feed) the package comes from. Defaults to the official Chocolatey feed. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.list salt \(aq*\(aq chocolatey.list all_versions=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.list_webpi() Instructs Chocolatey to pull a full package list from the Microsoft Web PI repository. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.list_webpi .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.list_windowsfeatures() Instructs Chocolatey to pull a full package list from the Windows Features list, via the Deployment Image Servicing and Management tool. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.list_windowsfeatures .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.uninstall(name, version=None) Instructs Chocolatey to uninstall a package. .INDENT 7.0 .TP .B name The name of the package to be uninstalled. Only accepts a single argument. .TP .B version Uninstalls a specific version of the package. Defaults to latest version installed. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq chocolatey.uninstall salt \(aq*\(aq chocolatey.uninstall version= .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.update(name, source=None, pre_versions=False) Instructs Chocolatey to update packages on the system. .INDENT 7.0 .TP .B name The name of the package to update, or "all" to update everything installed on the system. .TP .B source Chocolatey repository (directory, share or remote URL feed) the package comes from. Defaults to the official Chocolatey feed. .TP .B pre_versions Include pre\-release packages in comparison. Defaults to False. .UNINDENT .sp CLI Example: .sp .nf .ft C salt "*" chocolatey.update all salt "*" chocolatey.update pre_versions=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.chocolatey.version(name, check_remote=False, source=None, pre_versions=False) Instructs Chocolatey to check an installed package version, and optionally compare it to one available from a remote feed. .INDENT 7.0 .TP .B name The name of the package to check. .TP .B check_remote Get the version number of the latest package from the remote feed. Defaults to False. .TP .B source Chocolatey repository (directory, share or remote URL feed) the package comes from. Defaults to the official Chocolatey feed. .TP .B pre_versions Include pre\-release packages in comparison. Defaults to False. 
.UNINDENT .sp CLI Example: .sp .nf .ft C salt "*" chocolatey.version salt "*" chocolatey.version check_remote=True .ft P .fi .UNINDENT .SS salt.modules.cloud .sp Salt\-specific interface for calling Salt Cloud directly .INDENT 0.0 .TP .B salt.modules.cloud.action(fun=None, cloudmap=None, names=None, provider=None, instance=None, **kwargs) Execute a single action on the given provider/instance .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cloud.action start instance=myinstance salt \(aq*\(aq cloud.action stop instance=myinstance salt \(aq*\(aq cloud.action show_image provider=my\-ec2\-config image=ami\-1624987f .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.create(provider, names, **kwargs) Create an instance using Salt Cloud .sp CLI Example: .sp .nf .ft C salt minionname cloud.create my\-ec2\-config myinstance image=ami\-1624987f size=\(aqMicro Instance\(aq ssh_username=ec2\-user securitygroup=default delvol_on_destroy=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.destroy(names) Destroy the named VM(s) .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cloud.destroy myinstance .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.full_query(query_type=\(aqlist_nodes_full\(aq) List all available cloud provider data .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cloud.full_query .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.list_images(provider=\(aqall\(aq) List cloud provider images for the given providers .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cloud.list_images my\-gce\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.list_locations(provider=\(aqall\(aq) List cloud provider locations for the given providers .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cloud.list_locations my\-gce\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.list_sizes(provider=\(aqall\(aq) List cloud provider sizes for the given providers .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cloud.list_sizes my\-gce\-config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.network_create(provider, names, **kwargs) Create private network .sp CLI Example: .sp .nf .ft C salt minionname cloud.network_create my\-nova names=[\(aqsalt\(aq] cidr=\(aq192.168.100.0/24\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.network_list(provider) List private networks .sp CLI Example: .sp .nf .ft C salt minionname cloud.network_list my\-nova .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.profile(profile, names, vm_overrides=None, **kwargs) Spin up an instance using Salt Cloud .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cloud.profile my\-gce\-config myinstance .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.query(query_type=\(aqlist_nodes\(aq) List cloud provider data for all providers .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq cloud.query salt \(aq*\(aq cloud.query list_nodes_full salt \(aq*\(aq cloud.query list_nodes_select .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.select_query(query_type=\(aqlist_nodes_select\(aq) List selected nodes .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cloud.select_query .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.virtual_interface_create(provider, names, **kwargs) Attach private interfaces to a server .sp CLI Example: .sp .nf .ft C salt minionname cloud.virtual_interface_create my\-nova names=[\(aqsalt\-master\(aq] net_name=\(aqsalt\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.virtual_interface_list(provider, names, **kwargs) List virtual 
interfaces on a server .sp CLI Example: .sp .nf .ft C salt minionname cloud.virtual_interface_list my\-nova names=[\(aqsalt\-master\(aq] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.volume_attach(provider, names, **kwargs) Attach volume to a server .sp CLI Example: .sp .nf .ft C salt minionname cloud.volume_attach my\-nova myblock server_name=myserver device=\(aq/dev/xvdf\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.volume_create(provider, names, **kwargs) Create volume .sp CLI Example: .sp .nf .ft C salt minionname cloud.volume_create my\-nova myblock size=100 voltype=SSD .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.volume_delete(provider, names, **kwargs) Delete volume .sp CLI Example: .sp .nf .ft C salt minionname cloud.volume_delete my\-nova myblock .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.volume_detach(provider, names, **kwargs) Detach volume from a server .sp CLI Example: .sp .nf .ft C salt minionname cloud.volume_detach my\-nova myblock server_name=myserver .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cloud.volume_list(provider) List block storage volumes .sp CLI Example: .sp .nf .ft C salt minionname cloud.volume_list my\-nova .ft P .fi .UNINDENT .SS salt.modules.cmdmod .sp A module for shelling out .sp Keep in mind that this module is insecure, in that it can give whoever has access to the master root execution access to all salt minions .INDENT 0.0 .TP .B salt.modules.cmdmod.exec_code(lang, code, cwd=None) Pass in two strings: the first names the executable language (for example python2, python3, ruby, perl, or lua), and the second contains the code you wish to execute. The stdout and stderr will be returned .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.exec_code ruby \(aqputs "cheese"\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.has_exec(cmd) Returns true if the executable is available on the minion, false otherwise .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.has_exec cat .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.retcode(cmd, cwd=None, stdin=None, runas=None, shell=\(aq/bin/bash\(aq, python_shell=True, env=None, clean_env=False, template=None, umask=None, output_loglevel=\(aqdebug\(aq, quiet=False, timeout=None, reset_system_locale=True, ignore_retcode=False, saltenv=\(aqbase\(aq, use_vt=False, **kwargs) Execute a shell command and return the command\(aqs return code. .sp Note that \fBenv\fP represents the environment variables for the command, and should be formatted as a dict, or a YAML string which resolves to a dict. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.retcode "file /bin/bash" .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. For example: .sp .nf .ft C salt \(aq*\(aq cmd.retcode template=jinja "file {{grains.pythonpath[0]}}/python" .ft P .fi .sp A string of standard input can be specified for the command to be run using the \fBstdin\fP parameter.
This can be useful in cases where sensitive information must be read from standard input.: .sp .nf .ft C salt \(aq*\(aq cmd.retcode "grep f" stdin=\(aqone\entwo\enthree\enfour\enfive\en\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.run(cmd, cwd=None, stdin=None, runas=None, shell=\(aq/bin/bash\(aq, python_shell=True, env=None, clean_env=False, template=None, rstrip=True, umask=None, output_loglevel=\(aqdebug\(aq, quiet=False, timeout=None, reset_system_locale=True, ignore_retcode=False, saltenv=\(aqbase\(aq, use_vt=False, **kwargs) Execute the passed command and return the output as a string .sp Note that \fBenv\fP represents the environment variables for the command, and should be formatted as a dict, or a YAML string which resolves to a dict. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.run "ls \-l | awk \(aq/foo/{print \e$2}\(aq" .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. For example: .sp .nf .ft C salt \(aq*\(aq cmd.run template=jinja "ls \-l /tmp/{{grains.id}} | awk \(aq/foo/{print \e$2}\(aq" .ft P .fi .sp Specify an alternate shell with the shell parameter: .sp .nf .ft C salt \(aq*\(aq cmd.run "Get\-ChildItem C:\e " shell=\(aqpowershell\(aq .ft P .fi .sp A string of standard input can be specified for the command to be run using the \fBstdin\fP parameter. This can be useful in cases where sensitive information must be read from standard input.: .sp .nf .ft C salt \(aq*\(aq cmd.run "grep f" stdin=\(aqone\entwo\enthree\enfour\enfive\en\(aq .ft P .fi .sp If an equal sign (\fB=\fP) appears in an argument to a Salt command it is interpreted as a keyword argument in the format \fBkey=val\fP. That processing can be bypassed in order to pass an equal sign through to the remote shell command by manually specifying the kwarg: .sp .nf .ft C salt \(aq*\(aq cmd.run cmd=\(aqsed \-e s/=/:/g\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.run_all(cmd, cwd=None, stdin=None, runas=None, shell=\(aq/bin/bash\(aq, python_shell=True, env=None, clean_env=False, template=None, rstrip=True, umask=None, output_loglevel=\(aqdebug\(aq, quiet=False, timeout=None, reset_system_locale=True, ignore_retcode=False, saltenv=\(aqbase\(aq, use_vt=False, **kwargs) Execute the passed command and return a dict of return data .sp Note that \fBenv\fP represents the environment variables for the command, and should be formatted as a dict, or a YAML string which resolves to a dict. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.run_all "ls \-l | awk \(aq/foo/{print \e$2}\(aq" .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. For example: .sp .nf .ft C salt \(aq*\(aq cmd.run_all template=jinja "ls \-l /tmp/{{grains.id}} | awk \(aq/foo/{print \e$2}\(aq" .ft P .fi .sp A string of standard input can be specified for the command to be run using the \fBstdin\fP parameter. This can be useful in cases where sensitive information must be read from standard input.: .sp .nf .ft C salt \(aq*\(aq cmd.run_all "grep f" stdin=\(aqone\entwo\enthree\enfour\enfive\en\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.run_chroot(root, cmd) New in version Helium. 
.sp This function runs \fBcmd.run_all\fP wrapped within a chroot, with dev and proc mounted in the chroot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.run_chroot /var/lib/lxc/container_name/rootfs \(aqsh /tmp/bootstrap.sh\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.run_stderr(cmd, cwd=None, stdin=None, runas=None, shell=\(aq/bin/bash\(aq, python_shell=True, env=None, clean_env=False, template=None, rstrip=True, umask=None, output_loglevel=\(aqdebug\(aq, quiet=False, timeout=None, reset_system_locale=True, ignore_retcode=False, saltenv=\(aqbase\(aq, use_vt=False, **kwargs) Execute a command and only return the standard error .sp Note that \fBenv\fP represents the environment variables for the command, and should be formatted as a dict, or a YAML string which resolves to a dict. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.run_stderr "ls \-l | awk \(aq/foo/{print \e$2}\(aq" .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. For example: .sp .nf .ft C salt \(aq*\(aq cmd.run_stderr template=jinja "ls \-l /tmp/{{grains.id}} | awk \(aq/foo/{print \e$2}\(aq" .ft P .fi .sp A string of standard input can be specified for the command to be run using the \fBstdin\fP parameter. This can be useful in cases where sensitive information must be read from standard input.: .sp .nf .ft C salt \(aq*\(aq cmd.run_stderr "grep f" stdin=\(aqone\entwo\enthree\enfour\enfive\en\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.run_stdout(cmd, cwd=None, stdin=None, runas=None, shell=\(aq/bin/bash\(aq, python_shell=True, env=None, clean_env=False, template=None, rstrip=True, umask=None, output_loglevel=\(aqdebug\(aq, quiet=False, timeout=None, reset_system_locale=True, ignore_retcode=False, saltenv=\(aqbase\(aq, use_vt=False, **kwargs) Execute a command, and only return the standard out .sp Note that \fBenv\fP represents the environment variables for the command, and should be formatted as a dict, or a YAML string which resolves to a dict. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.run_stdout "ls \-l | awk \(aq/foo/{print \e$2}\(aq" .ft P .fi .sp The template arg can be set to \(aqjinja\(aq or another supported template engine to render the command arguments before execution. For example: .sp .nf .ft C salt \(aq*\(aq cmd.run_stdout template=jinja "ls \-l /tmp/{{grains.id}} | awk \(aq/foo/{print \e$2}\(aq" .ft P .fi .sp A string of standard input can be specified for the command to be run using the \fBstdin\fP parameter. This can be useful in cases where sensitive information must be read from standard input.: .sp .nf .ft C salt \(aq*\(aq cmd.run_stdout "grep f" stdin=\(aqone\entwo\enthree\enfour\enfive\en\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.script(source, args=None, cwd=None, stdin=None, runas=None, shell=\(aq/bin/bash\(aq, python_shell=True, env=None, template=\(aqjinja\(aq, umask=None, output_loglevel=\(aqdebug\(aq, quiet=False, timeout=None, reset_system_locale=True, __env__=None, saltenv=\(aqbase\(aq, use_vt=False, **kwargs) Download a script from a remote location and execute the script locally. The script can be located on the salt master file server or on an HTTP/FTP server. .sp The script will be executed directly, so it can be written in any available programming language. .sp The script can also be formatted as a template, the default is jinja. Arguments for the script can be specified as well. 
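.sp Because the script may live on an HTTP or FTP server as well as on the master file server, a plain URL can be passed as the source. A minimal, hedged sketch (the URL below is hypothetical): .sp .nf .ft C salt \(aq*\(aq cmd.script http://example.com/scripts/runme.sh .ft P .fi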
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.script salt://scripts/runme.sh salt \(aq*\(aq cmd.script salt://scripts/runme.sh \(aqarg1 arg2 "arg 3"\(aq salt \(aq*\(aq cmd.script salt://scripts/windows_task.ps1 args=\(aq \-Input c:\etmp\einfile.txt\(aq shell=\(aqpowershell\(aq .ft P .fi .sp A string of standard input can be specified for the command to be run using the \fBstdin\fP parameter. This can be useful in cases where sensitive information must be read from standard input.: .sp .nf .ft C salt \(aq*\(aq cmd.script salt://scripts/runme.sh stdin=\(aqone\entwo\enthree\enfour\enfive\en\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.script_retcode(source, cwd=None, stdin=None, runas=None, shell=\(aq/bin/bash\(aq, python_shell=True, env=None, template=\(aqjinja\(aq, umask=None, timeout=None, reset_system_locale=True, __env__=None, saltenv=\(aqbase\(aq, output_loglevel=\(aqdebug\(aq, use_vt=False, **kwargs) Download a script from a remote location and execute the script locally. The script can be located on the salt master file server or on an HTTP/FTP server. .sp The script will be executed directly, so it can be written in any available programming language. .sp The script can also be formatted as a template, the default is jinja. .sp Only evaluate the script return code and do not block for terminal output .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.script_retcode salt://scripts/runme.sh .ft P .fi .sp A string of standard input can be specified for the command to be run using the \fBstdin\fP parameter. This can be useful in cases where sensitive information must be read from standard input.: .sp .nf .ft C salt \(aq*\(aq cmd.script_retcode salt://scripts/runme.sh stdin=\(aqone\entwo\enthree\enfour\enfive\en\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.which(cmd) Returns the path of an executable available on the minion, None otherwise .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.which cat .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cmdmod.which_bin(cmds) Returns the first command found in a list of commands .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cmd.which_bin \(aq[pip2, pip, pip\-python]\(aq .ft P .fi .UNINDENT .SS salt.modules.composer .sp Use composer to install PHP dependencies for a directory .INDENT 0.0 .TP .B salt.modules.composer.install(dir, composer=None, php=None, runas=None, prefer_source=None, prefer_dist=None, no_scripts=None, no_plugins=None, optimize=None, no_dev=None, quiet=False, composer_home=\(aq/root\(aq) Install composer dependencies for a directory. .sp If composer has not been installed globally making it available in the system PATH & making it executible, the \fBcomposer\fP and \fBphp\fP parameters will need to be set to the location of the executables. .INDENT 7.0 .TP .B dir Directory location of the composer.json file. .TP .B composer Location of the composer.phar file. If not set composer will just execute "composer" as if it is installed globally. (i.e. /path/to/composer.phar) .TP .B php Location of the php executible to use with composer. (i.e. /usr/bin/php) .TP .B runas Which system user to run composer as. .TP .B prefer_source \-\-prefer\-source option of composer. .TP .B prefer_dist \-\-prefer\-dist option of composer. .TP .B no_scripts \-\-no\-scripts option of composer. .TP .B no_plugins \-\-no\-plugins option of composer. .TP .B optimize \-\-optimize\-autoloader option of composer. Recommended for production. .TP .B no_dev \-\-no\-dev option for composer. Recommended for production. 
.TP .B quiet \-\-quiet option for composer. Whether or not to return output from composer. .TP .B composer_home $COMPOSER_HOME environment variable .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq composer.install /var/www/application salt \(aq*\(aq composer.install /var/www/application no_dev=True optimize=True .ft P .fi .UNINDENT .SS salt.modules.config .sp Return config information .INDENT 0.0 .TP .B salt.modules.config.backup_mode(backup=\(aq\(aq) Return the backup mode .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq config.backup_mode .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.config.dot_vals(value) Pass in a configuration value that should be preceded by the module name and a dot, this will return a list of all read key/value pairs .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq config.dot_vals host .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.config.gather_bootstrap_script(bootstrap=None) Download the salt\-bootstrap script, and return the first location downloaded to. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq config.gather_bootstrap_script .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.config.get(key, default=\(aq\(aq) Attempt to retrieve the named value from opts, pillar, grains or the master config, if the named value is not available return the passed default. The default return is an empty string. .sp The value can also represent a value in a nested dict using a ":" delimiter for the dict. This means that if a dict looks like this: .sp .nf .ft C {\(aqpkg\(aq: {\(aqapache\(aq: \(aqhttpd\(aq}} .ft P .fi .sp To retrieve the value associated with the apache key in the pkg dict this key can be passed: .sp .nf .ft C pkg:apache .ft P .fi .sp This routine traverses these data stores in this order: .INDENT 7.0 .IP \(bu 2 Local minion config (opts) .IP \(bu 2 Minion\(aqs grains .IP \(bu 2 Minion\(aqs pillar .IP \(bu 2 Master config .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq config.get pkg:apache .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.config.manage_mode(mode) Return a mode value, normalized to a string .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq config.manage_mode .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.config.merge(value, default=\(aq\(aq, omit_opts=False, omit_master=False, omit_pillar=False) Retrieves an option based on key, merging all matches. .sp Same as \fBoption()\fP except that it merges all matches, rather than taking the first match. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq config.merge schedule .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.config.option(value, default=\(aq\(aq, omit_opts=False, omit_master=False, omit_pillar=False) Pass in a generic option and receive the value that will be assigned .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq config.option redis.host .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.config.valid_fileproto(uri) Returns a boolean value based on whether or not the URI passed has a valid remote file protocol designation .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq config.valid_fileproto salt://path/to/file .ft P .fi .UNINDENT .SS salt.modules.cp .sp Minion side functions for salt\-cp .INDENT 0.0 .TP .B salt.modules.cp.cache_dir(path, saltenv=\(aqbase\(aq, include_empty=False, include_pat=None, exclude_pat=None, env=None) Download and cache everything under a directory from the master .INDENT 7.0 .TP .B include_pat None Glob or regex to narrow down the files cached from the given path. 
If matching with a regex, the regex must be prefixed with \fBE@\fP, otherwise the expression will be interpreted as a glob. .sp New in version Helium. .TP .B exclude_pat None Glob or regex to exclude certain files from being cached from the given path. If matching with a regex, the regex must be prefixed with \fBE@\fP, otherwise the expression will be interpreted as a glob. .IP Note If used with \fBinclude_pat\fP, files matching this pattern will be excluded from the subset of files defined by \fBinclude_pat\fP. .RE .sp New in version Helium. .UNINDENT .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq cp.cache_dir salt://path/to/dir salt \(aq*\(aq cp.cache_dir salt://path/to/dir include_pat=\(aqE@*.py$\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.cache_file(path, saltenv=\(aqbase\(aq, env=None) Used to cache a single file in the local salt\-master file cache. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.cache_file salt://path/to/file .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.cache_files(paths, saltenv=\(aqbase\(aq, env=None) Used to gather many files from the master, the gathered files will be saved in the minion cachedir reflective to the paths retrieved from the master. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.cache_files salt://pathto/file1,salt://pathto/file1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.cache_local_file(path) Cache a local file on the minion in the localfiles cache .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.cache_local_file /etc/hosts .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.cache_master(saltenv=\(aqbase\(aq, env=None) Retrieve all of the files on the master and cache them locally .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.cache_master .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.get_dir(path, dest, saltenv=\(aqbase\(aq, template=None, gzip=None, env=None) Used to recursively copy a directory from the salt master .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.get_dir salt://path/to/dir/ /minion/dest .ft P .fi .sp get_dir supports the same template and gzip arguments as get_file. .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.get_file(path, dest, saltenv=\(aqbase\(aq, makedirs=False, template=None, gzip=None, env=None) Used to get a single file from the salt master .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.get_file salt://path/to/file /minion/dest .ft P .fi .sp Template rendering can be enabled on both the source and destination file names like so: .sp .nf .ft C salt \(aq*\(aq cp.get_file "salt://{{grains.os}}/vimrc" /etc/vimrc template=jinja .ft P .fi .sp This example would instruct all Salt minions to download the vimrc from a directory with the same name as their os grain and copy it to /etc/vimrc .sp For larger files, the cp.get_file module also supports gzip compression. Because gzip is CPU\-intensive, this should only be used in scenarios where the compression ratio is very high (e.g. pretty\-printed JSON or YAML files). .sp Use the \fIgzip\fP named argument to enable it. Valid values are 1..9, where 1 is the lightest compression and 9 the heaviest. 1 uses the least CPU on the master (and minion), 9 uses the most. 
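.sp For instance, a large pretty\-printed JSON file could be fetched with the heaviest compression, as a rough illustration (the source and destination paths here are hypothetical): .sp .nf .ft C salt \(aq*\(aq cp.get_file salt://data/app.json /tmp/app.json gzip=9 .ft P .fi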
.UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.get_file_str(path, saltenv=\(aqbase\(aq, env=None) Return the contents of a file from a URL .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.get_file_str salt://my/file .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.get_template(path, dest, template=\(aqjinja\(aq, saltenv=\(aqbase\(aq, env=None, makedirs=False, **kwargs) Render a file as a template before setting it down. Warning, order is not the same as in fileclient.cp for non breaking old API. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.get_template salt://path/to/template /minion/dest .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.get_url(path, dest, saltenv=\(aqbase\(aq, env=None) Used to get a single file from a URL. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.get_url salt://my/file /tmp/mine salt \(aq*\(aq cp.get_url http://www.slashdot.org /tmp/index.html .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.hash_file(path, saltenv=\(aqbase\(aq, env=None) Return the hash of a file, to get the hash of a file on the salt master file server prepend the path with salt:// otherwise, prepend the file with / for a local file. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.hash_file salt://path/to/file .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.is_cached(path, saltenv=\(aqbase\(aq, env=None) Return a boolean if the given path on the master has been cached on the minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.is_cached salt://path/to/file .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.list_master(saltenv=\(aqbase\(aq, prefix=\(aq\(aq, env=None) List all of the files stored on the master .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.list_master .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.list_master_dirs(saltenv=\(aqbase\(aq, prefix=\(aq\(aq, env=None) List all of the directories stored on the master .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.list_master_dirs .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.list_master_symlinks(saltenv=\(aqbase\(aq, prefix=\(aq\(aq, env=None) List all of the symlinks stored on the master .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.list_master_symlinks .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.list_minion(saltenv=\(aqbase\(aq, env=None) List all of the files cached on the minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.list_minion .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.list_states(saltenv=\(aqbase\(aq, env=None) List all of the available state modules in an environment .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.list_states .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.push(path) Push a file from the minion up to the master, the file will be saved to the salt master in the master\(aqs minion files cachedir (defaults to \fB/var/cache/salt/master/minions/minion\-id/files\fP) .sp Since this feature allows a minion to push a file up to the master server it is disabled by default for security purposes. To enable, set \fBfile_recv\fP to \fBTrue\fP in the master configuration file, and restart the master. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.push /etc/fstab .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.push_dir(path, glob=None) Push a directory from the minion up to the master, the files will be saved to the salt master in the master\(aqs minion files cachedir (defaults to \fB/var/cache/salt/master/minions/minion\-id/files\fP). It also has a glob for matching specific files using globbing. 
.sp New in version Helium. .sp Since this feature allows a minion to push files up to the master server it is disabled by default for security purposes. To enable, set \fBfile_recv\fP to \fBTrue\fP in the master configuration file, and restart the master. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cp.push /usr/lib/mysql salt \(aq*\(aq cp.push_dir /etc/modprobe.d/ glob=\(aq*.conf\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cp.recv(files, dest) Used with salt\-cp, pass the files dict, and the destination. .sp This function receives small fast copy files from the master via salt\-cp. It does not work via the CLI. .UNINDENT .SS salt.modules.cron .sp Work with cron .INDENT 0.0 .TP .B salt.modules.cron.list_tab(user) Return the contents of the specified user\(aqs crontab .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.list_tab root .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.ls(user) Return the contents of the specified user\(aqs crontab .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.list_tab root .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.raw_cron(user) Return the contents of the user\(aqs crontab .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.raw_cron root .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.rm(user, cmd, minute=None, hour=None, daymonth=None, month=None, dayweek=None, identifier=None) Remove a cron job for a specified user. If any of the day/time params are specified, the job will only be removed if the specified params match. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.rm_job root /usr/local/weekly salt \(aq*\(aq cron.rm_job root /usr/bin/foo dayweek=1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.rm_env(user, name) Remove cron environment variable for a specified user. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.rm_env root MAILTO .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.rm_job(user, cmd, minute=None, hour=None, daymonth=None, month=None, dayweek=None, identifier=None) Remove a cron job for a specified user. If any of the day/time params are specified, the job will only be removed if the specified params match. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.rm_job root /usr/local/weekly salt \(aq*\(aq cron.rm_job root /usr/bin/foo dayweek=1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.set_env(user, name, value=None) Set up an environment variable in the crontab. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.set_env root MAILTO user@example.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.set_job(user, minute, hour, daymonth, month, dayweek, cmd, comment=None, identifier=None) Sets a cron job up for a specified user. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.set_job root \(aq*\(aq \(aq*\(aq \(aq*\(aq \(aq*\(aq 1 /usr/local/weekly .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.set_special(user, special, cmd) Set up a special command in the crontab. 
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.set_special root @hourly \(aqecho foobar\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.write_cron_file(user, path) Writes the contents of a file to a user\(aqs crontab .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.write_cron_file root /tmp/new_cron .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.cron.write_cron_file_verbose(user, path) Writes the contents of a file to a user\(aqs crontab and return error message on error .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq cron.write_cron_file_verbose root /tmp/new_cron .ft P .fi .UNINDENT .SS salt.modules.daemontools .sp daemontools service module. This module will create daemontools type service watcher. .sp This module is compatible with the \fBservice\fP states, so it can be used to maintain services using the \fBprovider\fP argument: .sp .nf .ft C myservice: service: \- running \- provider: daemontools .ft P .fi .INDENT 0.0 .TP .B salt.modules.daemontools.available(name) Returns \fBTrue\fP if the specified service is available, otherwise returns \fBFalse\fP. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.available foo .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.daemontools.full_restart(name) Calls daemontools.restart() function .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.full_restart .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.daemontools.get_all() Return a list of all available services .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.get_all .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.daemontools.missing(name) The inverse of daemontools.available. Returns \fBTrue\fP if the specified service is not available, otherwise returns \fBFalse\fP. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.missing foo .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.daemontools.reload(name) Wrapper for term() .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.reload .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.daemontools.restart(name) Restart service via daemontools. This will stop/start service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.restart .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.daemontools.start(name) Starts service via daemontools .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.start .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.daemontools.status(name, sig=None) Return the status for a service via daemontools, return pid if running .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.status .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.daemontools.stop(name) Stops service via daemontools .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.stop .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.daemontools.term(name) Send a TERM to service via daemontools .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq daemontools.term .ft P .fi .UNINDENT .SS salt.modules.darwin_sysctl .sp Module for viewing and modifying sysctl parameters .INDENT 0.0 .TP .B salt.modules.darwin_sysctl.assign(name, value) Assign a single sysctl parameter for this minion .INDENT 7.0 .TP .B name The name of ths sysctl value to edit. .TP .B value The sysctl value to apply. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq sysctl.assign net.inet.icmp.icmplim 50 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.darwin_sysctl.get(name) Return a single sysctl parameter for this minion .INDENT 7.0 .TP .B name The name of the sysctl value to display. 
.UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq sysctl.get hw.physmem .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.darwin_sysctl.persist(name, value, config=\(aq/etc/sysctl.conf\(aq, apply_change=False) Assign and persist a simple sysctl parameter for this minion .INDENT 7.0 .TP .B name The name of the sysctl value to edit. .TP .B value The sysctl value to apply. .TP .B config The location of the sysctl configuration file. .TP .B apply_change Default is False; Default behavior only creates or edits the sysctl.conf file. If apply is set to True, the changes are applied to the system. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq sysctl.persist net.inet.icmp.icmplim 50 salt \(aq*\(aq sysctl.persist coretemp_load NO config=/etc/sysctl.conf .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.darwin_sysctl.show() Return a list of sysctl parameters for this minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq sysctl.show .ft P .fi .UNINDENT .SS salt.modules.data .sp Manage a local persistent data structure that can hold any arbitrary data specific to the minion .INDENT 0.0 .TP .B salt.modules.data.cas(key, value, old_value) Check and set a value in the minion datastore .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq data.cas .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.data.clear() Clear out all of the data in the minion datastore, this function is destructive! .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq data.clear .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.data.dump(new_data) Replace the entire datastore with a passed data structure .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq data.dump \(aq{\(aqeggs\(aq: \(aqspam\(aq}\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.data.getval(key) Get a value from the minion datastore .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq data.getval .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.data.getvals(*keys) Get values from the minion datastore .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq data.getvals [ ...] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.data.load() Return all of the data in the minion datastore .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq data.load .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.data.update(key, value) Update a key with a value in the minion datastore .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq data.update .ft P .fi .UNINDENT .SS salt.modules.ddns .sp Support for RFC 2136 dynamic DNS updates. .INDENT 0.0 .TP .B depends .INDENT 7.0 .IP \(bu 2 dnspython Python module .UNINDENT .TP .B configuration If you want to use TSIG authentication for the server, there are a couple of optional configuration parameters made available to support this (the keyname is only needed if the keyring contains more than one key): .sp .nf .ft C keyring: keyring file (default=None) keyname: key name in file (default=None) .ft P .fi .sp The keyring file needs to be in json format and the key name needs to end with an extra period in the file, similar to this: .sp .nf .ft C {"keyname.": "keycontent"} .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ddns.add_host(zone, name, ttl, ip, nameserver=\(aq127.0.0.1\(aq, replace=True, **kwargs) Add, replace, or update the A and PTR (reverse) records for a host. .sp CLI Example: .sp .nf .ft C salt ns1 ddns.add_host example.com host1 60 10.1.1.1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ddns.delete(zone, name, rdtype=None, data=None, nameserver=\(aq127.0.0.1\(aq, **kwargs) Delete a DNS record. 
.sp CLI Example: .sp .nf .ft C salt ns1 ddns.delete example.com host1 A .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ddns.delete_host(zone, name, nameserver=\(aq127.0.0.1\(aq, **kwargs) Delete the forward and reverse records for a host. .sp Returns true if any records are deleted. .sp CLI Example: .sp .nf .ft C salt ns1 ddns.delete_host example.com host1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ddns.update(zone, name, ttl, rdtype, data, nameserver=\(aq127.0.0.1\(aq, replace=False, **kwargs) Add, replace, or update a DNS record. nameserver must be an IP address and the minion running this module must have update privileges on that server. If replace is true, first deletes all records for this name and type. .sp CLI Example: .sp .nf .ft C salt ns1 ddns.update example.com host1 60 A 10.0.0.1 .ft P .fi .UNINDENT .SS salt.modules.deb_apache .sp Support for Apache .sp Please note: The functions in here are Debian\-specific. Placing them in this separate file will allow them to load only on Debian\-based systems, while still loading under the \fBapache\fP namespace. .INDENT 0.0 .TP .B salt.modules.deb_apache.a2dismod(mod) Runs a2dismod for the given mod. .sp This will only be functional on Debian\-based operating systems (Ubuntu, Mint, etc). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq apache.a2dismod vhost_alias .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.deb_apache.a2dissite(site) Runs a2dissite for the given site. .sp This will only be functional on Debian\-based operating systems (Ubuntu, Mint, etc). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq apache.a2dissite example.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.deb_apache.a2enmod(mod) Runs a2enmod for the given mod. .sp This will only be functional on Debian\-based operating systems (Ubuntu, Mint, etc). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq apache.a2enmod vhost_alias .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.deb_apache.a2ensite(site) Runs a2ensite for the given site. .sp This will only be functional on Debian\-based operating systems (Ubuntu, Mint, etc). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq apache.a2ensite example.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.deb_apache.check_mod_enabled(mod) Checks to see if the specific mod symlink is in /etc/apache2/mods\-enabled. .sp This will only be functional on Debian\-based operating systems (Ubuntu, Mint, etc). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq apache.check_mod_enabled status.conf salt \(aq*\(aq apache.check_mod_enabled status.load .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.deb_apache.check_site_enabled(site) Checks to see if the specific Site symlink is in /etc/apache2/sites\-enabled. .sp This will only be functional on Debian\-based operating systems (Ubuntu, Mint, etc). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq apache.check_site_enabled example.com .ft P .fi .UNINDENT .SS salt.modules.debconfmod .sp Support for Debconf .INDENT 0.0 .TP .B salt.modules.debconfmod.get_selections(fetchempty=True) Answers to debconf questions for all packages in the following format: .sp .nf .ft C {\(aqpackage\(aq: [[\(aqquestion\(aq, \(aqtype\(aq, \(aqvalue\(aq], ...]} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq debconf.get_selections .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debconfmod.set(package, question, type, value, *extra) Set answers to debconf questions for a package. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq debconf.set [ ...] 
.ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debconfmod.set_file(path, saltenv=\(aqbase\(aq, **kwargs) Set answers to debconf questions from a file. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq debconf.set_file salt://pathto/pkg.selections .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debconfmod.show(name) Answers to debconf questions for a package in the following format: .sp .nf .ft C [[\(aqquestion\(aq, \(aqtype\(aq, \(aqvalue\(aq], ...] .ft P .fi .sp If debconf doesn\(aqt know about a package, we return None. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq debconf.show .ft P .fi .UNINDENT .SS salt.modules.debian_ip .sp The networking module for Debian based distros .sp References: .INDENT 0.0 .IP \(bu 2 \fI\%http://www.debian.org/doc/manuals/debian-reference/ch05.en.html\fP .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.apply_network_settings(**settings) Apply global network configuration. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.apply_network_settings .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.build_bond(iface, **settings) Create a bond script in /etc/modprobe.d with the passed settings and load the bonding kernel module. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.build_bond bond0 mode=balance\-alb .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.build_interface(iface, iface_type, enabled, **settings) Build an interface script for a network interface. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.build_interface eth0 eth .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.build_network_settings(**settings) Build the global network script. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.build_network_settings .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.build_routes(iface, **settings) Add route scripts for a network interface using up commands. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.build_routes eth0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.down(iface, iface_type) Shutdown a network interface .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.down eth0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.get_bond(iface) Return the content of a bond script .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.get_bond bond0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.get_interface(iface) Return the contents of an interface script .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.get_interface eth0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.get_network_settings() Return the contents of the global network script. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.get_network_settings .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.get_routes(iface) Return the routes for the interface .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.get_interface eth0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_ip.up(iface, iface_type) Start up a network interface .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ip.up eth0 .ft P .fi .UNINDENT .SS salt.modules.debian_service .sp Service support for Debian systems (uses update\-rc.d and /sbin/service) .INDENT 0.0 .TP .B salt.modules.debian_service.available(name) Returns \fBTrue\fP if the specified service is available, otherwise returns \fBFalse\fP. 
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.available sshd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.disable(name, **kwargs) Disable the named service to start at boot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.disable .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.disabled(name) Return True if the named service is disabled, False otherwise .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.disabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.enable(name, **kwargs) Enable the named service to start at boot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.enable .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.enabled(name) Return True if the named service is enabled, False otherwise .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.enabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.force_reload(name) Force\-reload the named service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.force_reload .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.get_all() Return all available boot services .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.get_all .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.get_disabled() Return a set of services that are installed but disabled .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.get_disabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.get_enabled() Return a list of services that are enabled on boot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.get_enabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.missing(name) The inverse of service.available. Returns \fBTrue\fP if the specified service is not available, otherwise returns \fBFalse\fP. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.missing sshd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.reload(name) Reload the named service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.reload .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.restart(name) Restart the named service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.restart .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.start(name) Start the specified service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.start .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.status(name, sig=None) Return the status for a service; pass a signature to use to find the service via ps .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.status .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.debian_service.stop(name) Stop the specified service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.stop .ft P .fi .UNINDENT .SS salt.modules.defaults .INDENT 0.0 .TP .B salt.modules.defaults.get(key, default=\(aq\(aq) defaults.get is used much like pillar.get except that it will read a default value for a pillar from defaults.json or defaults.yaml files that are stored in the root of a salt formula. .sp When called from the CLI it works exactly like pillar.get. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq defaults.get core:users:root .ft P .fi .sp When called from an SLS file, it works by first reading a defaults.json and second a defaults.yaml file. If the key exists in these files and does not exist in a pillar named after the formula, the value from the defaults file is used.
.sp Example core/defaults.json file for the \(aqcore\(aq formula: .sp .nf .ft C { "users": { "root": 0 } } .ft P .fi .sp With this, from a state file you can use salt[\(aqdefaults.get\(aq](\(aqusers:root\(aq) to read the \(aq0\(aq value from defaults.json if a core:users:root pillar key is not defined. .UNINDENT .SS salt.modules.dig .sp Compendium of generic DNS utilities .INDENT 0.0 .TP .B salt.modules.dig.A(host, nameserver=None) Return the A record for \fBhost\fP. .sp Always returns a list. .sp CLI Example: .sp .nf .ft C salt ns1 dig.A www.google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dig.AAAA(host, nameserver=None) Return the AAAA record for \fBhost\fP. .sp Always returns a list. .sp CLI Example: .sp .nf .ft C salt ns1 dig.AAAA www.google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dig.MX(domain, resolve=False, nameserver=None) Return a list of lists for the MX of \fBdomain\fP. .sp If the \fBresolve\fP argument is True, resolve IPs for the servers. .sp It\(aqs limited to one IP, because although in practice it\(aqs very rarely a round robin, it is an acceptable configuration and pulling just one IP lets the data be similar to the non\-resolved version. If you think an MX has multiple IPs, don\(aqt use the resolver here, resolve them in a separate step. .sp CLI Example: .sp .nf .ft C salt ns1 dig.MX google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dig.NS(domain, resolve=True, nameserver=None) Return a list of IPs of the nameservers for \fBdomain\fP .sp If \fBresolve\fP is False, don\(aqt resolve names. .sp CLI Example: .sp .nf .ft C salt ns1 dig.NS google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dig.SPF(domain, record=\(aqSPF\(aq, nameserver=None) Return the allowed IPv4 ranges in the SPF record for \fBdomain\fP. .sp If record is \fBSPF\fP and the SPF record is empty, the TXT record will be searched automatically. If you know the domain uses TXT and not SPF, specifying that will save a lookup. .sp CLI Example: .sp .nf .ft C salt ns1 dig.SPF google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dig.TXT(host, nameserver=None) Return the TXT record for \fBhost\fP. .sp Always returns a list. .sp CLI Example: .sp .nf .ft C salt ns1 dig.TXT google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dig.check_ip(addr) Check if address is a valid IP. returns True if valid, otherwise False. .sp CLI Example: .sp .nf .ft C salt ns1 dig.check_ip 127.0.0.1 salt ns1 dig.check_ip 1111:2222:3333:4444:5555:6666:7777:8888 .ft P .fi .UNINDENT .SS salt.modules.disk .sp Module for gathering disk information .INDENT 0.0 .TP .B salt.modules.disk.blkid(device=None) Return block device attributes: UUID, LABEL, etc. 
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq disk.blkid salt \(aq*\(aq disk.blkid /dev/sda .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.disk.inodeusage(args=None) Return inode usage information for volumes mounted on this minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq disk.inodeusage .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.disk.percent(args=None) Return partion information for volumes mounted on this minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq disk.percent /var .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.disk.usage(args=None) Return usage information for volumes mounted on this minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq disk.usage .ft P .fi .UNINDENT .SS salt.modules.djangomod .sp Manage Django sites .INDENT 0.0 .TP .B salt.modules.djangomod.collectstatic(settings_module, bin_env=None, no_post_process=False, ignore=None, dry_run=False, clear=False, link=False, no_default_ignore=False, pythonpath=None, env=None) Collect static files from each of your applications into a single location that can easily be served in production. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq django.collectstatic .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.djangomod.command(settings_module, command, bin_env=None, pythonpath=None, env=None, *args, **kwargs) Run arbitrary django management command .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq django.command .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.djangomod.createsuperuser(settings_module, username, email, bin_env=None, database=None, pythonpath=None, env=None) Create a super user for the database. This function defaults to use the \fB\-\-noinput\fP flag which prevents the creation of a password for the superuser. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq django.createsuperuser user user@example.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.djangomod.loaddata(settings_module, fixtures, bin_env=None, database=None, pythonpath=None, env=None) Load fixture data .INDENT 7.0 .TP .B Fixtures: comma separated list of fixtures to load .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq django.loaddata .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.djangomod.syncdb(settings_module, bin_env=None, migrate=False, database=None, pythonpath=None, env=None, noinput=True) Run syncdb .sp Execute the Django\-Admin syncdb command, if South is available on the minion the \fBmigrate\fP option can be passed as \fBTrue\fP calling the migrations to run after the syncdb completes .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq django.syncdb .ft P .fi .UNINDENT .SS salt.modules.dnsmasq .sp Module for managing dnsmasq .INDENT 0.0 .TP .B salt.modules.dnsmasq.fullversion() Shows installed version of dnsmasq and compile options. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq dnsmasq.version .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsmasq.get_config(config_file=\(aq/etc/dnsmasq.conf\(aq) Dumps all options from the config file. .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq dnsmasq.get_config salt \(aq*\(aq dnsmasq.get_config file=/etc/dnsmasq.conf .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsmasq.set_config(config_file=\(aq/etc/dnsmasq.conf\(aq, follow=True, **kwargs) Sets a value or a set of values in the specified file. By default, if conf\-dir is configured in this file, salt will attempt to set the option in any file inside the conf\-dir where it has already been enabled. If it does not find it inside any files, it will append it to the main config file. 
Setting follow to False will turn off this behavior. .sp If a config option currently appears multiple times (such as dhcp\-host, which is specified at least once per host), the new option will be added to the end of the main config file (and not to any includes). If you need an option added to a specific include file, specify it as the config_file. .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq dnsmasq.set_config domain=mydomain.com salt \(aq*\(aq dnsmasq.set_config follow=False domain=mydomain.com salt \(aq*\(aq dnsmasq.set_config file=/etc/dnsmasq.conf domain=mydomain.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsmasq.version() Shows installed version of dnsmasq. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq dnsmasq.version .ft P .fi .UNINDENT .SS salt.modules.dnsutil .sp Compendium of generic DNS utilities .INDENT 0.0 .TP .B salt.modules.dnsutil.A(host, nameserver=None) Return the A record for \(aqhost\(aq. .sp Always returns a list. .sp CLI Example: .sp .nf .ft C salt ns1 dnsutil.A www.google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsutil.MX(domain, resolve=False, nameserver=None) Return a list of lists for the MX of \fBdomain\fP. .sp If the \(aqresolve\(aq argument is True, resolve IPs for the servers. .sp It\(aqs limited to one IP, because although in practice it\(aqs very rarely a round robin, it is an acceptable configuration and pulling just one IP lets the data be similar to the non\-resolved version. If you think an MX has multiple IPs, don\(aqt use the resolver here, resolve them in a separate step. .sp CLI Example: .sp .nf .ft C salt ns1 dnsutil.MX google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsutil.NS(domain, resolve=True, nameserver=None) Return a list of IPs of the nameservers for \fBdomain\fP .sp If \(aqresolve\(aq is False, don\(aqt resolve names. .sp CLI Example: .sp .nf .ft C salt ns1 dnsutil.NS google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsutil.SPF(domain, record=\(aqSPF\(aq, nameserver=None) Return the allowed IPv4 ranges in the SPF record for \fBdomain\fP. .sp If record is \fBSPF\fP and the SPF record is empty, the TXT record will be searched automatically. If you know the domain uses TXT and not SPF, specifying that will save a lookup. .sp CLI Example: .sp .nf .ft C salt ns1 dnsutil.SPF google.com .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsutil.check_ip(ip_addr) Check that the string ip_addr is a valid IP address .sp CLI Example: .sp .nf .ft C salt ns1 dnsutil.check_ip 127.0.0.1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsutil.hosts_append(hostsfile=\(aq/etc/hosts\(aq, ip_addr=None, entries=None) Append a single line to the /etc/hosts file. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq dnsutil.hosts_append /etc/hosts 127.0.0.1 ad1.yuk.co,ad2.yuk.co .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsutil.hosts_remove(hostsfile=\(aq/etc/hosts\(aq, entries=None) Remove a host from the /etc/hosts file. If doing so will leave a line containing only an IP address, then the line will be deleted. This function will leave comments and blank lines intact. .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq dnsutil.hosts_remove /etc/hosts ad1.yuk.co salt \(aq*\(aq dnsutil.hosts_remove /etc/hosts ad2.yuk.co,ad1.yuk.co .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsutil.parse_hosts(hostsfile=\(aq/etc/hosts\(aq, hosts=None) Parse /etc/hosts file.
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq dnsutil.parse_hosts .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dnsutil.parse_zone(zonefile=None, zone=None) Parses a zone file. Can be passed raw zone data on the API level. .sp CLI Example: .sp .nf .ft C salt ns1 dnsutil.parse_zone /var/lib/named/example.com.zone .ft P .fi .UNINDENT .SS salt.modules.dockerio .SS Management of dockers .sp New in version 2014.1.0: (Hydrogen) .IP Note The DockerIO integration is still in beta; the API is subject to change .RE .SS General notes .INDENT 0.0 .IP \(bu 2 As we use states, we don\(aqt want to be continuously popping dockers, so we will map each container id (or image) with a grain whenever it is relevant. .IP \(bu 2 As a corollary, we will resolve a container id either directly by the id or try to find a container id matching something stocked in grain. .UNINDENT .SS Installation prerequisites .INDENT 0.0 .IP \(bu 2 You will need the \(aqdocker\-py\(aq python package in your python installation running salt. The version of docker\-py should support \fI\%version 1.12 of docker remote API.\fP. .IP \(bu 2 For now, you need docker\-py 0.3.2 .INDENT 2.0 .INDENT 3.5 pip install docker\-py==0.3.2 .UNINDENT .UNINDENT .UNINDENT .SS Prerequisite pillar configuration for authentication .INDENT 0.0 .IP \(bu 2 To push or pull you will need to be authenticated as the docker\-py bindings require it .IP \(bu 2 For this to happen, you will need to configure a mapping in the pillar representing your per URL authentication bits: .sp .nf .ft C docker\-registries: registry_url: email: foo@foo.com password: s3cr3t username: foo .ft P .fi .IP \(bu 2 You need at least an entry to the default docker index: .sp .nf .ft C docker\-registries: https://index.docker.io/v1: email: foo@foo.com password: s3cr3t username: foo .ft P .fi .UNINDENT .sp you can define multiple registries blocks for them to be aggregated, their id just must finish with \-docker\-registries: .sp .nf .ft C ac\-docker\-registries: https://index.bar.io/v1: email: foo@foo.com password: s3cr3t username: foo ab\-docker\-registries: https://index.foo.io/v1: email: foo@foo.com password: s3cr3t username: foo .ft P .fi .sp Would be the equivalent to: .sp .nf .ft C docker\-registries: https://index.bar.io/v1: email: foo@foo.com password: s3cr3t username: foo https://index.foo.io/v1: email: foo@foo.com password: s3cr3t username: foo .ft P .fi .SS Registry dialog methods .INDENT 0.0 .IP \(bu 2 login .IP \(bu 2 push .IP \(bu 2 pull .UNINDENT .SS Docker management .INDENT 0.0 .IP \(bu 2 version .IP \(bu 2 info .UNINDENT .SS Image management .sp You have those methods: .INDENT 0.0 .IP \(bu 2 search .IP \(bu 2 inspect_image .IP \(bu 2 get_images .IP \(bu 2 remove_image .IP \(bu 2 import_image .IP \(bu 2 build .IP \(bu 2 tag .UNINDENT .SS Container management .sp You have those methods: .INDENT 0.0 .IP \(bu 2 start .IP \(bu 2 stop .IP \(bu 2 restart .IP \(bu 2 kill .IP \(bu 2 wait .IP \(bu 2 get_containers .IP \(bu 2 inspect_container .IP \(bu 2 remove_container .IP \(bu 2 is_running .IP \(bu 2 top .IP \(bu 2 ports .IP \(bu 2 logs .IP \(bu 2 diff .IP \(bu 2 commit .IP \(bu 2 create_container .IP \(bu 2 export .IP \(bu 2 get_container_root .UNINDENT .SS Runtime execution within a specific already existing and running container .INDENT 0.0 .IP \(bu 2 Idea is to use lxc\-attach to execute inside the container context. .IP \(bu 2 We do not use a "docker run command" but want to execute something inside a running container. 
.UNINDENT .sp You have those methods: .INDENT 0.0 .IP \(bu 2 retcode .IP \(bu 2 run .IP \(bu 2 run_all .IP \(bu 2 run_stderr .IP \(bu 2 run_stdout .IP \(bu 2 script .IP \(bu 2 script_retcode .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.build(path=None, tag=None, quiet=False, fileobj=None, nocache=False, rm=True, timeout=None) Build a docker image from a dockerfile or an URL .sp You can either: .INDENT 7.0 .INDENT 3.5 .INDENT 0.0 .IP \(bu 2 give the url/branch/docker_dir .IP \(bu 2 give a path on the file system .UNINDENT .UNINDENT .UNINDENT .INDENT 7.0 .TP .B path URL or path in the filesystem to the dockerfile .TP .B tag Tag of the image .TP .B quiet quiet mode .TP .B nocache do not use docker image cache .TP .B rm remove intermediate commits .TP .B timeout timeout is seconds before aborting .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.build .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.commit(container, repository=None, tag=None, message=None, author=None, conf=None) Commit a container (promotes it to an image) .INDENT 7.0 .TP .B container container id .TP .B repository repository/imageName to commit to .TP .B tag optional tag .TP .B message optional commit message .TP .B author optional author .TP .B conf optional conf .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.commit .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.create_container(image, command=None, hostname=None, user=None, detach=True, stdin_open=False, tty=False, mem_limit=0, ports=None, environment=None, dns=None, volumes=None, volumes_from=None, name=None) Create a new container .INDENT 7.0 .TP .B image image to create the container from .TP .B command command to execute while starting .TP .B hostname hostname of the container .TP .B user user to run docker as .TP .B detach daemon mode .TP .B environment environment variable mapping ({\(aqfoo\(aq:\(aqBAR\(aq}) .TP .B ports ports redirections ({\(aq222\(aq: {}}) .TP .B volumes list of volumes mapping: .sp .nf .ft C ([\(aq/mountpoint/in/container:/guest/foo\(aq, \(aq/same/path/mounted/point\(aq]) .ft P .fi .TP .B tty attach ttys .TP .B stdin_open let stdin open .TP .B name name given to container .UNINDENT .sp EG: .INDENT 7.0 .INDENT 3.5 salt\-call docker.create_container o/ubuntu volumes="[\(aq/s\(aq,\(aq/m:/f\(aq]" .UNINDENT .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.create_container .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.diff(container) Get container diffs .INDENT 7.0 .TP .B container container id .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.diff .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.exists(container) Check if a given container exists .INDENT 7.0 .TP .B Parameters \fBcontainer\fP (\fI\%string\fP) \-\- Container id .TP .B Return type boolean: .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.exists .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.export(container, path) Export a container to a file .INDENT 7.0 .TP .B container container id .TP .B path path to the export .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.export .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.get_container_root(container) Get the container rootfs path .INDENT 7.0 .TP .B container container id or grain .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.get_container_root .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.get_containers(all=True, trunc=False, since=None, 
before=None, limit=\-1, host=False) Get a list of mappings representing all containers .INDENT 7.0 .TP .B all Return all containers .TP .B trunc Set it to True to have the short ID .TP .B host Include the Docker host\(aqs ipv4 and ipv6 address in return .UNINDENT .sp Returns a mapping of something which looks like container .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.get_containers salt \(aq*\(aq docker.get_containers host=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.get_images(name=None, quiet=False, all=True) List docker images .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBname\fP (\fI\%string\fP) \-\- A repository name to filter on .IP \(bu 2 \fBquiet\fP (\fIboolean\fP) \-\- Only show image ids .IP \(bu 2 \fBall\fP (\fIboolean\fP) \-\- Show all images .UNINDENT .TP .B Return type dict .TP .B Returns A status message with the command output .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.get_images [name] [quiet=True|False] [all=True|False] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.import_image(src, repo, tag=None) Import content from a local tarball or a URL to a docker image .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBsrc\fP (\fI\%string\fP) \-\- The content to import (URL, absolute path to a tarball) .IP \(bu 2 \fBrepo\fP (\fI\%string\fP) \-\- The repository to import to .IP \(bu 2 \fBtag\fP (\fI\%string\fP) \-\- An optional tag to set .UNINDENT .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.import_image [tag] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.info() Get the version information about docker .INDENT 7.0 .TP .B Return type dict .TP .B Returns A status message with the command output .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.info .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.inspect_container(container) Get container information. This is similar to the docker inspect command. 
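.sp
Assuming the inspect data is returned under the \(aqout\(aq key of the usual status mapping (as described for the other functions in this module), it can be consumed from Jinja. A minimal sketch; the container name is a placeholder and the \(aqNetworkSettings\(aq layout follows the Docker remote API, so it may differ between Docker versions:
.sp
.nf
.ft C
{% set info = salt[\(aqdocker.inspect_container\(aq](\(aqmy_container\(aq) %}
{% set ip = info[\(aqout\(aq][\(aqNetworkSettings\(aq][\(aqIPAddress\(aq] %}
.ft P
.fi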
.INDENT 7.0 .TP .B Parameters \fBcontainer\fP (\fI\%string\fP) \-\- The id of the container to inspect .TP .B Return type dict .TP .B Returns A status message with the command output .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.inspect_container .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.inspect_image(image) Inspect the status of an image and return relative data .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.inspect_image .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.is_running(container) Is this container running .INDENT 7.0 .TP .B container Container id .UNINDENT .sp Return boolean .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.is_running .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.kill(container) Kill a running container .INDENT 7.0 .TP .B Parameters \fBcontainer\fP (\fI\%string\fP) \-\- The container id to kill .TP .B Return type dict .TP .B Returns A status message with the command output ex: .sp .nf .ft C {\(aqid\(aq: \(aqabcdef123456789\(aq, \(aqstatus\(aq: True} .ft P .fi .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.kill .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.login(url=None, username=None, password=None, email=None) Wrapper to the docker.py login method, does not do much yet .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.login .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.logs(container) Return logs for a specified container .INDENT 7.0 .TP .B container container id .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.logs .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.port(container, private_port) Private/Public for a specific port mapping allocation information This method is broken on docker\-py side Just use the result of inspect to mangle port allocation .INDENT 7.0 .TP .B container container id .TP .B private_port private port on the container to query for .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.port .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.pull(repo, tag=None) Pulls an image from any registry. See above documentation for how to configure authenticated access. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBrepo\fP (\fI\%string\fP) \-\- .sp The repository to pull. 
[registryurl://]REPOSITORY_NAME_image eg: .sp .nf .ft C index.docker.io:MyRepo/image superaddress.cdn:MyRepo/image MyRepo/image .ft P .fi .IP \(bu 2 \fBtag\fP (\fI\%string\fP) \-\- The specific tag to pull .UNINDENT .TP .B Return type dict .TP .B Returns A status message with the command output Example: .sp .nf .ft C \-\-\-\-\-\-\-\-\-\- comment: Image NAME was pulled (ID id: None out: \-\-\-\-\-\-\-\-\-\- \- id: 2c80228370c9 \- status: Download complete \-\-\-\-\-\-\-\-\-\- \- id: 2c80228370c9 \- progress: [=========================> ] \- status: Downloading \-\-\-\-\-\-\-\-\-\- \- id: 2c80228370c9 \- status Pulling image (latest) from foo/ubuntubox \-\-\-\-\-\-\-\-\-\- \- status: Pulling repository foo/ubuntubox status: True .ft P .fi .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.pull [tag] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.push(repo) Pushes an image from any registry See this top level documentation to know how to configure authenticated access .INDENT 7.0 .TP .B repo [registryurl://]REPOSITORY_NAME_image eg: .sp .nf .ft C index.docker.io:MyRepo/image superaddress.cdn:MyRepo/image MyRepo/image .ft P .fi .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.push .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.remove_container(container, force=False, v=False) Removes a container from a docker installation .INDENT 7.0 .TP .B container Container id to remove .TP .B force By default, do not remove a running container, set this to remove it unconditionally .TP .B v verbose mode .UNINDENT .sp Return True or False in the status mapping and also any information about docker in status[\(aqout\(aq] .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.remove_container .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.remove_image(image) Remove an image from a system. .INDENT 7.0 .TP .B Parameters \fBimage\fP (\fI\%string\fP) \-\- The image to remove .TP .B Return type string .TP .B Returns A status message. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.remove_image .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.restart(container, timeout=10) Restart a running container .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBcontainer\fP (\fI\%string\fP) \-\- The container id to restart .IP \(bu 2 \fBtimeout\fP \-\- Wait for a timeout to let the container exit gracefully before killing it .UNINDENT .TP .B Return type dict .TP .B Returns A status message with the command output ex: .sp .nf .ft C {\(aqid\(aq: \(aqabcdef123456789\(aq, \(aqstatus\(aq: True} .ft P .fi .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.restart .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.retcode(container, cmd) Wrapper for cmdmod.retcode inside a container context .INDENT 7.0 .TP .B container container id (or grain) .TP .B Other params: See cmdmod documentation .UNINDENT .sp The return is a bit different as we use the docker struct, The output of the command is in \(aqout\(aq The result is false if command failed .INDENT 7.0 .TP .B WARNING: Be advised that this function allows for raw shell access to the named container! If allowing users to execute this directly it may allow more rights than intended! 
.UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.retcode \(aqls \-l /etc\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.run(container, cmd) Wrapper for cmdmod.run inside a container context .INDENT 7.0 .TP .B container container id (or grain) .TP .B Other params: See cmdmod documentation .UNINDENT .sp The return is a bit different as we use the docker struct, The output of the command is in \(aqout\(aq The result is always True .INDENT 7.0 .TP .B WARNING: Be advised that this function allows for raw shell access to the named container! If allowing users to execute this directly it may allow more rights than intended! .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.run \(aqls \-l /etc\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.run_all(container, cmd) Wrapper for cmdmod.run_all inside a container context .INDENT 7.0 .TP .B container container id (or grain) .TP .B Other params: See cmdmod documentation .UNINDENT .sp The return is a bit different as we use the docker struct, The output of the command is in \(aqout\(aq The result if false if command failed .INDENT 7.0 .TP .B WARNING: Be advised that this function allows for raw shell access to the named container! If allowing users to execute this directly it may allow more rights than intended! .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.run_all \(aqls \-l /etc\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.run_stderr(container, cmd) Wrapper for cmdmod.run_stderr inside a container context .INDENT 7.0 .TP .B container container id (or grain) .TP .B Other params: See cmdmod documentation .UNINDENT .sp The return is a bit different as we use the docker struct, The output of the command is in \(aqout\(aq The result is always True .INDENT 7.0 .TP .B WARNING: Be advised that this function allows for raw shell access to the named container! If allowing users to execute this directly it may allow more rights than intended! .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.run_stderr \(aqls \-l /etc\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.run_stdout(container, cmd) Wrapper for cmdmod.run_stdout inside a container context .INDENT 7.0 .TP .B container container id (or grain) .TP .B Other params: See cmdmod documentation .UNINDENT .sp The return is a bit different as we use the docker struct, The output of the command is in \(aqout\(aq The result is always True .INDENT 7.0 .TP .B WARNING: Be advised that this function allows for raw shell access to the named container! If allowing users to execute this directly it may allow more rights than intended! .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.run_stdout \(aqls \-l /etc\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.script(container, source, args=None, cwd=None, stdin=None, runas=None, shell=\(aq/bin/bash\(aq, env=None, template=\(aqjinja\(aq, umask=None, timeout=None, reset_system_locale=True, no_clean=False, saltenv=\(aqbase\(aq) Same usage as cmd.script but running inside a container context .INDENT 7.0 .TP .B container container id or grain .TP .B others params and documentation See cmd.retcode .TP .B WARNING: Be advised that this function allows for raw shell access to the named container! If allowing users to execute this directly it may allow more rights than intended! 
.UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.script salt://docker_script.py .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.script_retcode(container, source, cwd=None, stdin=None, runas=None, shell=\(aq/bin/bash\(aq, env=None, template=\(aqjinja\(aq, umask=None, timeout=None, reset_system_locale=True, no_clean=False, saltenv=\(aqbase\(aq) Same usage as cmd.script_retcode but running inside a container context .INDENT 7.0 .TP .B container container id or grain .TP .B others params and documentation See cmd.retcode .TP .B WARNING: Be advised that this function allows for raw shell access to the named container! If allowing users to execute this directly it may allow more rights than intended! .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.script_retcode salt://docker_script.py .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.search(term) Search for an image on the registry .INDENT 7.0 .TP .B Parameters \fBterm\fP (\fI\%string\fP) \-\- The search keyword to query .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.search .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.start(container, binds=None, port_bindings=None, lxc_conf=None, publish_all_ports=None, links=None, privileged=False, dns=None, volumes_from=None, network_mode=None) Restart the specified container .INDENT 7.0 .TP .B container Container id .TP .B Returns the status mapping as usual .INDENT 7.0 .TP .B {\(aqid\(aq: id of the container, \(aqstatus\(aq: True if started } .UNINDENT .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.start .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.stop(container, timeout=10) Stop a running container .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBcontainer\fP (\fI\%string\fP) \-\- The container id to stop .IP \(bu 2 \fBtimeout\fP (\fI\%int\fP) \-\- Wait for a timeout to let the container exit gracefully before killing it .UNINDENT .TP .B Return type dict .TP .B Returns A status message with the command output ex: .sp .nf .ft C {\(aqid\(aq: \(aqabcdef123456789\(aq, \(aqstatus\(aq: True} .ft P .fi .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.stop .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.tag(image, repository, tag=None, force=False) Tag an image into a repository .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBimage\fP (\fI\%string\fP) \-\- The image to tag .IP \(bu 2 \fBrepository\fP (\fI\%string\fP) \-\- The repository to tag the image .IP \(bu 2 \fBtag\fP (\fI\%string\fP) \-\- The tag to apply .IP \(bu 2 \fBforce\fP (\fIboolean\fP) \-\- Forces application of the tag .UNINDENT .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.tag [tag] [force=(True|False)] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.top(container) Run the docker top command on a specific container .INDENT 7.0 .TP .B container Container id .UNINDENT .sp Returns in the \(aqout\(aq status mapping a mapping for those running processes: .sp .nf .ft C { \(aqTitles\(aq: top titles list, \(aqprocesses\(aq: list of ordered by titles processes information, \(aqmprocesses\(aq: list of mappings processes information constructed above the upon information } .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.top .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.version() Get docker version .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.version .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dockerio.wait(container) Blocking 
wait for a container exit gracefully without timeout killing it .INDENT 7.0 .TP .B container Container id .TP .B Return container id if successful .INDENT 7.0 .TP .B {\(aqid\(aq: id of the container, \(aqstatus\(aq: True if stopped } .UNINDENT .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq docker.wait .ft P .fi .UNINDENT .SS salt.modules.dpkg .sp Support for DEB packages .INDENT 0.0 .TP .B salt.modules.dpkg.file_dict(*packages) List the files that belong to a package, grouped by package. Not specifying any packages will return a list of _every_ file on the system\(aqs package database (not generally recommended). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq lowpkg.file_list httpd salt \(aq*\(aq lowpkg.file_list httpd postfix salt \(aq*\(aq lowpkg.file_list .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dpkg.file_list(*packages) List the files that belong to a package. Not specifying any packages will return a list of _every_ file on the system\(aqs package database (not generally recommended). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq lowpkg.file_list httpd salt \(aq*\(aq lowpkg.file_list httpd postfix salt \(aq*\(aq lowpkg.file_list .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dpkg.list_pkgs(*packages) List the packages currently installed in a dict: .sp .nf .ft C {\(aq\(aq: \(aq\(aq} .ft P .fi .sp External dependencies: .sp .nf .ft C Virtual package resolution requires aptitude. Because this function uses dpkg, virtual packages will be reported as not installed. .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq lowpkg.list_pkgs salt \(aq*\(aq lowpkg.list_pkgs httpd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.dpkg.unpurge(*packages) Change package selection for each package specified to \(aqinstall\(aq .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq lowpkg.unpurge curl .ft P .fi .UNINDENT .SS salt.modules.ebuild .sp Support for Portage .INDENT 0.0 .TP .B optdepends .INDENT 7.0 .IP \(bu 2 portage Python adapter .UNINDENT .UNINDENT .sp For now all package names \fIMUST\fP include the package category, i.e. \fB\(aqvim\(aq\fP will not work, \fB\(aqapp\-editors/vim\(aq\fP will. .INDENT 0.0 .TP .B salt.modules.ebuild.check_db(*names, **kwargs) New in version 0.17.0. .sp Returns a dict containing the following information for each specified package: .INDENT 7.0 .IP 1. 3 A key \fBfound\fP, which will be a boolean value denoting if a match was found in the package database. .IP 2. 3 If \fBfound\fP is \fBFalse\fP, then a second key called \fBsuggestions\fP will be present, which will contain a list of possible matches. This list will be empty if the package name was specified in \fBcategory/pkgname\fP format, since the suggestions are only intended to disambiguate ambiguous package names (ones submitted without a category). .UNINDENT .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq pkg.check_db .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.check_extra_requirements(pkgname, pkgver) Check if the installed package already has the given requirements. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.check_extra_requirements \(aqsys\-devel/gcc\(aq \(aq~>4.1.2:4.1::gentoo[nls,fortran]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.depclean(name=None, slot=None, fromrepo=None, pkgs=None) Portage has a function to remove unused dependencies. If a package is provided, it will only removed the package if no other package depends on it. .INDENT 7.0 .TP .B name The name of the package to be cleaned. 
.TP
.B slot
Restrict the removal to a specific slot. Ignored if \fBname\fP is None.
.TP
.B fromrepo
Restrict the removal to a specific repository. Ignored if \fBname\fP is None.
.TP
.B pkgs
Clean multiple packages. The \fBslot\fP and \fBfromrepo\fP arguments are ignored if this argument is present. Must be passed as a python list.
.UNINDENT
.sp
Returns a list containing the removed packages.
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq pkg.depclean
.ft P
.fi
.UNINDENT
.INDENT 0.0
.TP
.B salt.modules.ebuild.ex_mod_init(low)
If the config option \fBebuild.enforce_nice_config\fP is set to True, this module will enforce a nice tree structure for /etc/portage/package.* configuration files.
.sp
New in version 0.17.0: Initial automatic enforcement added when pkg is used on a Gentoo system.
.sp
Changed in version 2014.1.0 (Hydrogen): Configuration option added to make this behaviour optional, defaulting to off.
.IP "See also"
.sp
\fBebuild.ex_mod_init\fP is called automatically when a state invokes a pkg state on a Gentoo system; see \fBsalt.states.pkg.mod_init()\fP.
.sp
\fBebuild.ex_mod_init\fP uses \fBsalt.modules.portage_config.enforce_nice_config()\fP to do the heavy lifting.
.RE
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq pkg.ex_mod_init
.ft P
.fi
.UNINDENT
.INDENT 0.0
.TP
.B salt.modules.ebuild.install(name=None, refresh=False, pkgs=None, sources=None, slot=None, fromrepo=None, uses=None, **kwargs)
Install the passed package(s). Add refresh=True to sync the portage tree before the package is installed.
.INDENT 7.0
.TP
.B name
The name of the package to be installed. Note that this parameter is ignored if either "pkgs" or "sources" is passed. Additionally, please note that this option can only be used to emerge a package from the portage tree. To install a tbz2 package manually, use the "sources" option described below.
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq pkg.install <package name>
.ft P
.fi
.TP
.B refresh
Whether or not to sync the portage tree before installing.
.TP
.B version
Install a specific version of the package, e.g. 1.0.9\-r1. Ignored if "pkgs" or "sources" is passed.
.TP
.B slot
Similar to version, but specifies a valid slot to be installed. It will install the latest available version in the specified slot. Ignored if "pkgs", "sources" or "version" is passed.
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq pkg.install sys\-devel/gcc slot=\(aq4.4\(aq
.ft P
.fi
.TP
.B fromrepo
Similar to slot, but specifies the repository from which the package will be installed. It will install the latest available version in the specified repository. Ignored if "pkgs", "sources" or "version" is passed.
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq pkg.install salt fromrepo=\(aqgentoo\(aq
.ft P
.fi
.TP
.B uses
Similar to slot, but specifies a list of USE flags. Ignored if "pkgs", "sources" or "version" is passed.
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq pkg.install sys\-devel/gcc uses=\(aq["nptl","\-nossp"]\(aq
.ft P
.fi
.UNINDENT
.sp
Multiple Package Installation Options:
.INDENT 7.0
.TP
.B pkgs
A list of packages to install from the portage tree. Must be passed as a python list.
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq pkg.install pkgs=\(aq["foo","bar","~category/package:slot::repository[use]"]\(aq
.ft P
.fi
.TP
.B sources
A list of tbz2 packages to install. Must be passed as a list of dicts, with the keys being package names, and the values being the source URI or local path to the package.
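.sp
When driven from a state rather than the CLI, the same list\-of\-dicts form is used. A minimal SLS sketch, assuming a tbz2 package served from the Salt fileserver (the package name and path are placeholders):
.sp
.nf
.ft C
mypkg:
  pkg.installed:
    \- sources:
      \- mypkg: salt://pkgs/mypkg\-1.0.tbz2
.ft P
.fi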
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install sources=\(aq[{"foo": "salt://foo.tbz2"},{"bar": "salt://bar.tbz2"}]\(aq .ft P .fi .UNINDENT .sp Returns a dict containing the new package names and versions: .sp .nf .ft C {\(aq\(aq: {\(aqold\(aq: \(aq\(aq, \(aqnew\(aq: \(aq\(aq}} .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.latest_version(*names, **kwargs) Return the latest version of the named package available for upgrade or installation. If more than one package name is specified, a dict of name/version pairs is returned. .sp If the latest version of a given package is already installed, an empty string will be returned for that package. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.latest_version salt \(aq*\(aq pkg.latest_version ... .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.list_pkgs(versions_as_list=False, **kwargs) List the packages currently installed in a dict: .sp .nf .ft C {\(aq\(aq: \(aq\(aq} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.list_pkgs .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.list_upgrades(refresh=True) List all available package upgrades. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.list_upgrades .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.porttree_matches(name) Returns a list containing the matches for a given package name from the portage tree. Note that the specific version of the package will not be provided for packages that have several versions in the portage tree, but rather the name of the package (i.e. "dev\-python/paramiko"). .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.purge(name=None, slot=None, fromrepo=None, pkgs=None, **kwargs) Portage does not have a purge, this function calls remove followed by depclean to emulate a purge process .INDENT 7.0 .TP .B name The name of the package to be deleted. .TP .B slot Restrict the remove to a specific slot. Ignored if name is None. .TP .B fromrepo Restrict the remove to a specific slot. Ignored if \fBname\fP is None. .UNINDENT .sp Multiple Package Options: .INDENT 7.0 .TP .B pkgs Uninstall multiple packages. \fBslot\fP and \fBfromrepo\fP arguments are ignored if this argument is present. Must be passed as a python list. .UNINDENT .sp New in version 0.16.0. .sp Returns a dict containing the changes. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.purge salt \(aq*\(aq pkg.purge slot=4.4 salt \(aq*\(aq pkg.purge ,, salt \(aq*\(aq pkg.purge pkgs=\(aq["foo", "bar"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.refresh_db() Updates the portage tree (emerge \-\-sync). Uses eix\-sync if available. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.refresh_db .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.remove(name=None, slot=None, fromrepo=None, pkgs=None, **kwargs) Remove packages via emerge \-\-unmerge. .INDENT 7.0 .TP .B name The name of the package to be deleted. .TP .B slot Restrict the remove to a specific slot. Ignored if \fBname\fP is None. .TP .B fromrepo Restrict the remove to a specific slot. Ignored if \fBname\fP is None. .UNINDENT .sp Multiple Package Options: .INDENT 7.0 .TP .B pkgs Uninstall multiple packages. \fBslot\fP and \fBfromrepo\fP arguments are ignored if this argument is present. Must be passed as a python list. .UNINDENT .sp New in version 0.16.0. .sp Returns a dict containing the changes. 
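.sp
The changes mapping follows the same old/new layout used by the install and upgrade functions. An illustrative (not literal) return after removing a single package might look like:
.sp
.nf
.ft C
{\(aqapp\-misc/foo\(aq: {\(aqold\(aq: \(aq1.2.3\(aq, \(aqnew\(aq: \(aq\(aq}}
.ft P
.fi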
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.remove salt \(aq*\(aq pkg.remove slot=4.4 fromrepo=gentoo salt \(aq*\(aq pkg.remove ,, salt \(aq*\(aq pkg.remove pkgs=\(aq["foo", "bar"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.update(pkg, slot=None, fromrepo=None, refresh=False) Updates the passed package (emerge \-\-update package) .INDENT 7.0 .TP .B slot Restrict the update to a particular slot. It will update to the latest version within the slot. .TP .B fromrepo Restrict the update to a particular repository. It will update to the latest version within the repository. .UNINDENT .sp Return a dict containing the new package names and versions: .sp .nf .ft C {\(aq\(aq: {\(aqold\(aq: \(aq\(aq, \(aqnew\(aq: \(aq\(aq}} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.update .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.upgrade(refresh=True) Run a full system upgrade (emerge \-\-update world) .sp Return a dict containing the new package names and versions: .sp .nf .ft C {\(aq\(aq: {\(aqold\(aq: \(aq\(aq, \(aqnew\(aq: \(aq\(aq}} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.upgrade .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.upgrade_available(name) Check whether or not an upgrade is available for a given package .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.upgrade_available .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.version(*names, **kwargs) Returns a string representing the package version or an empty string if not installed. If more than one package name is specified, a dict of name/version pairs is returned. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.version salt \(aq*\(aq pkg.version ... .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.version_clean(version) Clean the version string removing extra data. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.version_clean .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.ebuild.version_cmp(pkg1, pkg2) Do a cmp\-style comparison on two packages. Return \-1 if pkg1 < pkg2, 0 if pkg1 == pkg2, and 1 if pkg1 > pkg2. Return None if there was a problem making the comparison. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.version_cmp \(aq0.2.4\-0\(aq \(aq0.2.4.1\-0\(aq .ft P .fi .UNINDENT .SS salt.modules.eix .sp Support for Eix .INDENT 0.0 .TP .B salt.modules.eix.sync() Sync portage/overlay trees and update the eix database .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq eix.sync .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.eix.update() Update the eix database .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq eix.update .ft P .fi .UNINDENT .SS salt.modules.environ .sp Support for getting and setting the environment variables of the current salt process. .INDENT 0.0 .TP .B salt.modules.environ.get(key, default=\(aq\(aq) Get a single salt process environment variable. .INDENT 7.0 .TP .B key String used as the key for environment lookup. .TP .B default If the key is not found in the enironment, return this value. Default: \(aq\(aq .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq environ.get foo salt \(aq*\(aq environ.get baz default=False .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.environ.has_value(key, value=None) Determine whether the key exists in the current salt process environment dictionary. Optionally compare the current value of the environment against the supplied value string. .INDENT 7.0 .TP .B key Must be a string. Used as key for environment lookup. .TP .B value: Optional. 
If key exists in the environment, compare the current value with this value. Return True if they are equal. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq environ.has_value foo .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.environ.item(keys, default=\(aq\(aq) Get one or more salt process environment variables. Returns a dict. .INDENT 7.0 .TP .B keys Either a string or a list of strings that will be used as the keys for environment lookup. .TP .B default If the key is not found in the enironment, return this value. Default: \(aq\(aq .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq environ.item foo salt \(aq*\(aq environ.item \(aq[foo, baz]\(aq default=None .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.environ.items() Return a dict of the entire environment set for the salt process .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq environ.items .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.environ.setenv(environ, false_unsets=False, clear_all=False, update_minion=False) Set multiple salt process environment variables from a dict. Returns a dict. .INDENT 7.0 .TP .B environ Must be a dict. The top\-level keys of the dict are the names of the environment variables to set. Each key\(aqs value must be a string or False. Refer to the \(aqfalse_unsets\(aq parameter for behavior when a value set to False. .TP .B false_unsets If a key\(aqs value is False and false_unsets is True, then the key will be removed from the salt processes environment dict entirely. If a key\(aqs value is Flase and false_unsets is not True, then the key\(aqs value will be set to an empty string. Default: False .TP .B clear_all USE WITH CAUTION! This option can unset environment variables needed for salt to function properly. If clear_all is True, then any environment variables not defined in the environ dict will be deleted. Default: False .TP .B update_minion If True, apply these environ changes to the main salt\-minion process. If False, the environ changes will only affect the current salt subprocess. Default: False .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq environ.setenv \(aq{"foo": "bar", "baz": "quux"}\(aq salt \(aq*\(aq environ.setenv \(aq{"a": "b", "c": False}\(aq false_unsets=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.environ.setval(key, val, false_unsets=False) Set a single salt process environment variable. Returns True on success. .INDENT 7.0 .TP .B key The environment key to set. Must be a string. .TP .B val The value to set. Must be a string or False. Refer to the \(aqfalse_unsets\(aq parameter for behavior when set to False. .TP .B false_unsets If val is False and false_unsets is True, then the key will be removed from the salt processes environment dict entirely. If val is False and false_unsets is not True, then the key\(aqs value will be set to an empty string. Default: False. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq environ.setval foo bar salt \(aq*\(aq environ.setval baz val=False false_unsets=True .ft P .fi .UNINDENT .SS salt.modules.eselect .sp Support for eselect, Gentoo\(aqs configuration and management tool. .INDENT 0.0 .TP .B salt.modules.eselect.exec_action(module, action, parameter=\(aq\(aq, state_only=False) Execute an arbitrary action on a module. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq eselect.exec_action [parameter] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.eselect.get_current_target(module) Get the currently selected target for the given module. 
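.sp
Because the current target is just a name, it is convenient to reference from Jinja. A minimal sketch, assuming a \(aqkernel\(aq eselect module is available on the minion:
.sp
.nf
.ft C
{% set kernel_target = salt[\(aqeselect.get_current_target\(aq](\(aqkernel\(aq) %}
.ft P
.fi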
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq eselect.get_current_target <module name>
.ft P
.fi
.UNINDENT
.INDENT 0.0
.TP
.B salt.modules.eselect.get_modules()
Get the list of available eselect modules.
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq eselect.get_modules
.ft P
.fi
.UNINDENT
.INDENT 0.0
.TP
.B salt.modules.eselect.get_target_list(module)
Get the list of available targets for the given module.
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq eselect.get_target_list <module name>
.ft P
.fi
.UNINDENT
.INDENT 0.0
.TP
.B salt.modules.eselect.set_target(module, target)
Set the target for the given module. The target can be specified by index or name.
.sp
CLI Example:
.sp
.nf
.ft C
salt \(aq*\(aq eselect.set_target <module name> <target>
.ft P
.fi
.UNINDENT
.SS salt.modules.etcd_mod
.sp
Execution module to work with etcd
.INDENT 0.0
.TP
.B depends
.INDENT 7.0
.IP \(bu 2
python\-etcd
.UNINDENT
.UNINDENT
.sp
In order to use an etcd server, a profile should be created in the master configuration file:
.sp
.nf
.ft C
my_etcd_config:
  etcd.host: 127.0.0.1
  etcd.port: 4001
.ft P
.fi
.sp
It is technically possible to configure etcd without using a profile, but this is not considered to be a best practice, especially when multiple etcd servers or clusters are available.
.sp
.nf
.ft C
etcd.host: 127.0.0.1
etcd.port: 4001
.ft P
.fi
.INDENT 0.0
.TP
.B salt.modules.etcd_mod.get(key, recurse=False, profile=None)
New in version Helium.
.sp
Get a value from etcd, by direct path
.sp
CLI Examples:
.sp
.nf
.ft C
salt myminion etcd.get /path/to/key
salt myminion etcd.get /path/to/key profile=my_etcd_config
salt myminion etcd.get /path/to/key recurse=True profile=my_etcd_config
.ft P
.fi
.UNINDENT
.INDENT 0.0
.TP
.B salt.modules.etcd_mod.ls(path=\(aq/\(aq, profile=None)
New in version Helium.
.sp
Return all keys and dirs inside a specific path
.sp
CLI Example:
.sp
.nf
.ft C
salt myminion etcd.ls /path/to/dir/
salt myminion etcd.ls /path/to/dir/ profile=my_etcd_config
.ft P
.fi
.UNINDENT
.INDENT 0.0
.TP
.B salt.modules.etcd_mod.rm(key, recurse=False, profile=None)
New in version Helium.
.sp
Delete a key from etcd
.sp
CLI Example:
.sp
.nf
.ft C
salt myminion etcd.rm /path/to/key
salt myminion etcd.rm /path/to/key profile=my_etcd_config
salt myminion etcd.rm /path/to/dir recurse=True profile=my_etcd_config
.ft P
.fi
.UNINDENT
.INDENT 0.0
.TP
.B salt.modules.etcd_mod.set(key, value, profile=None)
New in version Helium.
.sp
Set a value in etcd, by direct path
.sp
CLI Example:
.sp
.nf
.ft C
salt myminion etcd.set /path/to/key value
salt myminion etcd.set /path/to/key value profile=my_etcd_config
.ft P
.fi
.UNINDENT
.INDENT 0.0
.TP
.B salt.modules.etcd_mod.tree(path=\(aq/\(aq, profile=None)
New in version Helium.
.sp
Recurse through etcd and return all values
.sp
CLI Example:
.sp
.nf
.ft C
salt myminion etcd.tree
salt myminion etcd.tree profile=my_etcd_config
salt myminion etcd.tree /path/to/keys profile=my_etcd_config
.ft P
.fi
.UNINDENT
.SS salt.modules.event
.sp
Use the \fBSalt Event System\fP to fire events from the master to the minion and vice\-versa.
.INDENT 0.0
.TP
.B salt.modules.event.fire(data, tag)
Fire an event on the local minion event bus. Data must be formed as a dict.
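.sp
Events that reach the master (via fire_master, documented next) are commonly matched by the master\(aqs reactor system. A minimal sketch of a master configuration entry, using a hypothetical tag and reactor file:
.sp
.nf
.ft C
reactor:
  \- \(aqmyco/custom/tag\(aq:
    \- /srv/reactor/handle_custom.sls
.ft P
.fi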
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq event.fire \(aq{"data":"my event data"}\(aq \(aqtag\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.event.fire_master(data, tag, preload=None) Fire an event off up to the master server .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq event.fire_master \(aq{"data":"my event data"}\(aq \(aqtag\(aq .ft P .fi .UNINDENT .SS salt.modules.extfs .sp Module for managing ext2/3/4 file systems .INDENT 0.0 .TP .B salt.modules.extfs.attributes(device, args=None) Return attributes from dumpe2fs for a specified device .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq extfs.attributes /dev/sda1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.extfs.blocks(device, args=None) Return block and inode info from dumpe2fs for a specified device .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq extfs.blocks /dev/sda1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.extfs.dump(device, args=None) Return all contents of dumpe2fs for a specified device .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq extfs.dump /dev/sda1 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.extfs.mkfs(device, fs_type, **kwargs) Create a file system on the specified device .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq extfs.mkfs /dev/sda1 fs_type=ext4 opts=\(aqacl,noexec\(aq .ft P .fi .sp Valid options are: .sp .nf .ft C block_size: 1024, 2048 or 4096 check: check for bad blocks direct: use direct IO ext_opts: extended file system options (comma\-separated) fragment_size: size of fragments force: setting force to True will cause mke2fs to specify the \-F option twice (it is already set once); this is truly dangerous blocks_per_group: number of blocks in a block group number_of_groups: ext4 option for a virtual block group bytes_per_inode: set the bytes/inode ratio inode_size: size of the inode journal: set to True to create a journal (default on ext3/4) journal_opts: options for the fs journal (comma separated) blocks_file: read bad blocks from file label: label to apply to the file system reserved: percentage of blocks reserved for super\-user last_dir: last mounted directory test: set to True to not actually create the file system (mke2fs \-n) number_of_inodes: override default number of inodes creator_os: override "creator operating system" field opts: mount options (comma separated) revision: set the filesystem revision (default 1) super: write superblock and group descriptors only fs_type: set the filesystem type (REQUIRED) usage_type: how the filesystem is going to be used uuid: set the UUID for the file system .ft P .fi .sp See the \fBmke2fs(8)\fP manpage for a more complete description of these options. 
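.sp
Several of the options above can be combined in a single call. An illustrative example only; the device name is a placeholder:
.sp
.nf
.ft C
salt \(aq*\(aq extfs.mkfs /dev/sdb1 fs_type=ext4 block_size=4096 label=data reserved=1
.ft P
.fi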
.UNINDENT .INDENT 0.0 .TP .B salt.modules.extfs.tune(device, **kwargs) Set attributes for the specified device (using tune2fs) .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq extfs.tune /dev/sda1 force=True label=wildstallyns opts=\(aqacl,noexec\(aq .ft P .fi .sp Valid options are: .sp .nf .ft C max: max mount count count: mount count error: error behavior extended_opts: extended options (comma separated) force: force, even if there are errors (set to True) group: group name or gid that can use the reserved blocks interval: interval between checks journal: set to True to create a journal (default on ext3/4) journal_opts: options for the fs journal (comma separated) label: label to apply to the file system reserved: percentage of blocks reserved for super\-user last_dir: last mounted directory opts: mount options (comma separated) feature: set or clear a feature (comma separated) mmp_check: mmp check interval reserved: reserved blocks count quota_opts: quota options (comma separated) time: time last checked user: user or uid who can use the reserved blocks uuid: set the UUID for the file system .ft P .fi .sp See the \fBmke2fs(8)\fP manpage for a more complete description of these options. .UNINDENT .SS salt.modules.file .sp Manage information about regular files, directories, and special files on the minion, set/read user, group, mode, and data .INDENT 0.0 .TP .B salt.modules.file.access(path, mode) New in version 2014.1.0: (Hydrogen) .sp Test whether the Salt process has the specified access to the file. One of the following modes must be specified: .INDENT 7.0 .INDENT 3.5 f: Test the existence of the path r: Test the readability of the path w: Test the writability of the path x: Test whether the path can be executed .UNINDENT .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.access /path/to/file f salt \(aq*\(aq file.access /path/to/file x .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.append(path, *args) New in version 0.9.5. .sp Append text to the end of a file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.append /etc/motd \e "With all thine offerings thou shalt offer salt." \e "Salt is what makes things taste bad when it isn\(aqt in them." .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.blockreplace(path, marker_start=\(aq#\-\- start managed zone \-\-\(aq, marker_end=\(aq#\-\- end managed zone \-\-\(aq, content=\(aq\(aq, append_if_not_found=False, prepend_if_not_found=False, backup=\(aq.bak\(aq, dry_run=False, show_changes=True) New in version 2014.1.0: (Hydrogen) .sp Replace content of a text block in a file, delimited by line markers .sp A block of content delimited by comments can help you manage several lines entries without worrying about old entries removal. .IP Note This function will store two copies of the file in\-memory (the original version and the edited version) in order to detect changes and only edit the targeted file if necessary. .RE .INDENT 7.0 .TP .B path Filesystem path to the file to be edited .TP .B marker_start The line content identifying a line as the start of the content block. Note that the whole line containing this marker will be considered, so whitespaces or extra content before or after the marker is included in final output .TP .B marker_end The line content identifying a line as the end of the content block. 
Note that the whole line containing this marker will be considered, so whitespaces or extra content before or after the marker is included in final output .TP .B content The content to be used between the two lines identified by marker_start and marker_stop. .TP .B append_if_not_found False If markers are not found and set to \fBTrue\fP then, the markers and content will be appended to the file. .TP .B prepend_if_not_found False If markers are not found and set to \fBTrue\fP then, the markers and content will be prepended to the file. .TP .B backup The file extension to use for a backup of the file if any edit is made. Set to \fBFalse\fP to skip making a backup. .TP .B dry_run Don\(aqt make any edits to the file. .TP .B show_changes Output a unified diff of the old file and the new file. If \fBFalse\fP, return a boolean if any changes were made. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.blockreplace /etc/hosts \(aq#\-\- start managed zone foobar : DO NOT EDIT \-\-\(aq \e \(aq#\-\- end managed zone foobar \-\-\(aq $\(aq10.0.1.1 foo.foobar\en10.0.1.2 bar.foobar\(aq True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.check_file_meta(name, sfn, source, source_sum, user, group, mode, saltenv, template=None, contents=None) Check for the changes in the file metadata. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.check_file_meta /etc/httpd/conf.d/httpd.conf salt://http/httpd.conf \(aq{hash_type: \(aqmd5\(aq, \(aqhsum\(aq: }\(aq root, root, \(aq755\(aq base .ft P .fi .IP Note Supported hash types include sha512, sha384, sha256, sha224, sha1, and md5. .RE .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.check_hash(path, file_hash) Check if a file matches the given hash string .sp Returns true if the hash matched, otherwise false. Raises ValueError if the hash was not formatted correctly. .INDENT 7.0 .TP .B path A file path .TP .B hash A string in the form :. For example: \fBmd5:e138491e9d5b97023cea823fe17bac22\fP .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.check_hash /etc/fstab md5: .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.check_managed(name, source, source_hash, user, group, mode, template, context, defaults, saltenv, contents=None, **kwargs) Check to see what changes need to be made for a file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.check_managed /etc/httpd/conf.d/httpd.conf salt://http/httpd.conf \(aq{hash_type: \(aqmd5\(aq, \(aqhsum\(aq: }\(aq root, root, \(aq755\(aq jinja True None None base .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.check_perms(name, ret, user, group, mode, follow_symlinks=False) Check the permissions on files and chown if needed .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.check_perms /etc/sudoers \(aq{}\(aq root root 400 .ft P .fi .sp Changed in version 2014.1.3: \fBfollow_symlinks\fP option added .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.chgrp(path, group) Change the group of a file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.chgrp /etc/passwd root .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.chown(path, user, group) Chown a file, pass the file the desired user and group .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.chown /etc/passwd root root .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.comment(path, regex, char=\(aq#\(aq, backup=\(aq.bak\(aq) Deprecated since version 0.17.0: Use \fBreplace()\fP instead. 
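.sp
As the deprecation note suggests, the same effect can usually be achieved with replace(), documented later in this module. A rough, illustrative equivalent of the CLI example shown below:
.sp
.nf
.ft C
salt \(aq*\(aq file.replace /etc/modules pattern=\(aq^pcspkr\(aq repl=\(aq#pcspkr\(aq
.ft P
.fi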
.sp Comment out specified lines in a file .INDENT 7.0 .TP .B path The full path to the file to be edited .TP .B regex A regular expression used to find the lines that are to be commented; this pattern will be wrapped in parenthesis and will move any preceding/trailing \fB^\fP or \fB$\fP characters outside the parenthesis (e.g., the pattern \fB^foo$\fP will be rewritten as \fB^(foo)$\fP) .TP .B char \fB#\fP The character to be inserted at the beginning of a line in order to comment it out .TP .B backup \fB.bak\fP The file will be backed up before edit with this file extension .IP Warning This backup will be overwritten each time \fBsed\fP / \fBcomment\fP / \fBuncomment\fP is called. Meaning the backup will only be useful after the first invocation. .RE .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.comment /etc/modules pcspkr .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.contains(path, text) Deprecated since version 0.17.0: Use \fBsearch()\fP instead. .sp Return \fBTrue\fP if the file at \fBpath\fP contains \fBtext\fP .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.contains /etc/crontab \(aqmymaintenance.sh\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.contains_glob(path, glob_expr) Deprecated since version 0.17.0: Use \fBsearch()\fP instead. .sp Return \fBTrue\fP if the given glob matches a string in the named file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.contains_glob /etc/foobar \(aq*cheese*\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.contains_regex(path, regex, lchar=\(aq\(aq) Deprecated since version 0.17.0: Use \fBsearch()\fP instead. .sp Return True if the given regular expression matches on any line in the text of a given file. .sp If the lchar argument (leading char) is specified, it will strip \fIlchar\fP from the left side of each line before trying to match .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.contains_regex /etc/crontab .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.contains_regex_multiline(path, regex) Deprecated since version 0.17.0: Use \fBsearch()\fP instead. .sp Return True if the given regular expression matches anything in the text of a given file .sp Traverses multiple lines at a time, via the salt BufferedReader (reads in chunks) .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.contains_regex_multiline /etc/crontab \(aq^maint\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.copy(src, dst, recurse=False, remove_existing=False) Copy a file or directory from source to dst .sp In order to copy a directory, the recurse flag is required, and will by default overwrite files in the destination with the same path, and retain all other existing files. (similar to cp \-r on unix) .sp remove_existing will remove all files in the target directory, and then copy files from the source. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.copy /path/to/src /path/to/dst salt \(aq*\(aq file.copy /path/to/src_dir /path/to/dst_dir recurse=True salt \(aq*\(aq file.copy /path/to/src_dir /path/to/dst_dir recurse=True remove_existing=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.delete_backup(path, backup_id) New in version 0.17.0. .sp Delete a previous version of a file that was backed up using Salt\(aqs \fBfile state backup\fP system. 
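.sp
The numeric backup_id is most easily obtained from file.list_backups, documented later in this module. An illustrative pairing:
.sp
.nf
.ft C
salt \(aq*\(aq file.list_backups /foo/bar/baz.txt
salt \(aq*\(aq file.delete_backup /foo/bar/baz.txt 0
.ft P
.fi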
.INDENT 7.0 .TP .B path The path on the minion to check for backups .TP .B backup_id The numeric id for the backup you wish to delete, as found using \fBfile.list_backups\fP .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.restore_backup /foo/bar/baz.txt 0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.directory_exists(path) Tests to see if path is a valid directory. Returns True/False. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.directory_exists /etc .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.extract_hash(hash_fn, hash_type=\(aqmd5\(aq, file_name=\(aq\(aq) This routine is called from the \fBfile.managed\fP state to pull a hash from a remote file. Regular expressions are used line by line on the \fBsource_hash\fP file, to find a potential candidate of the indicated hash type. This avoids many problems of arbitrary file lay out rules. It specifically permits pulling hash codes from debian \fB*.dsc\fP files. .sp For example: .sp .nf .ft C openerp_7.0\-latest\-1.tar.gz: file.managed: \- name: /tmp/openerp_7.0\-20121227\-075624\-1_all.deb \- source: http://nightly.openerp.com/7.0/nightly/deb/openerp_7.0\-20121227\-075624\-1.tar.gz \- source_hash: http://nightly.openerp.com/7.0/nightly/deb/openerp_7.0\-20121227\-075624\-1.dsc .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.extract_hash /etc/foo sha512 /path/to/hash/file .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.file_exists(path) Tests to see if path is a valid file. Returns True/False. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.file_exists /etc/passwd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.find(path, **kwargs) Approximate the Unix \fBfind(1)\fP command and return a list of paths that meet the specified criteria. .sp The options include match criteria: .sp .nf .ft C name = path\-glob # case sensitive iname = path\-glob # case insensitive regex = path\-regex # case sensitive iregex = path\-regex # case insensitive type = file\-types # match any listed type user = users # match any listed user group = groups # match any listed group size = [+\-]number[size\-unit] # default unit = byte mtime = interval # modified since date grep = regex # search file contents .ft P .fi .sp and/or actions: .sp .nf .ft C delete [= file\-types] # default type = \(aqf\(aq exec = command [arg ...] # where {} is replaced by pathname print [= print\-opts] .ft P .fi .sp The default action is \(aqprint=path\(aq. .sp file\-glob: .sp .nf .ft C * = match zero or more chars ? 
= match any char [abc] = match a, b, or c [!abc] or [^abc] = match anything except a, b, and c [x\-y] = match chars x through y [!x\-y] or [^x\-y] = match anything except chars x through y {a,b,c} = match a or b or c .ft P .fi .sp path\-regex: a Python re (regular expression) pattern to match pathnames .sp file\-types: a string of one or more of the following: .sp .nf .ft C a: all file types b: block device c: character device d: directory p: FIFO (named pipe) f: plain file l: symlink s: socket .ft P .fi .sp users: a space and/or comma separated list of user names and/or uids .sp groups: a space and/or comma separated list of group names and/or gids .sp size\-unit: .sp .nf .ft C b: bytes k: kilobytes m: megabytes g: gigabytes t: terabytes .ft P .fi .sp interval: .sp .nf .ft C [w] [d] [h] [m] [s] where: w: week d: day h: hour m: minute s: second .ft P .fi .sp print\-opts: a comma and/or space separated list of one or more of the following: .sp .nf .ft C group: group name md5: MD5 digest of file contents mode: file permissions (as integer) mtime: last modification time (as time_t) name: file basename path: file absolute path size: file size in bytes type: file type user: user name .ft P .fi .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq file.find / type=f name=\e*.bak size=+10m salt \(aq*\(aq file.find /var mtime=+30d size=+10m print=path,size,mtime salt \(aq*\(aq file.find /var/log name=\e*.[0\-9] mtime=+30d size=+10m delete .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_devmm(name) Get major/minor info from a device .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_devmm /dev/chr .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_diff(minionfile, masterfile, env=None, saltenv=\(aqbase\(aq) Return unified diff of file compared to file on master .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_diff /home/fred/.vimrc salt://users/fred/.vimrc .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_gid(path, follow_symlinks=True) Return the id of the group that owns a given file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_gid /etc/passwd .ft P .fi .sp Changed in version 0.16.4: \fBfollow_symlinks\fP option added .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_group(path, follow_symlinks=True) Return the group that owns a given file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_group /etc/passwd .ft P .fi .sp Changed in version 0.16.4: \fBfollow_symlinks\fP option added .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_hash(path, form=\(aqmd5\(aq, chunk_size=4096) Get the hash sum of a file .INDENT 7.0 .TP .B This is better than \fBget_sum\fP for the following reasons: .INDENT 7.0 .IP \(bu 2 It does not read the entire file into memory. .IP \(bu 2 .INDENT 2.0 .TP .B It does not return a string on error. 
The returned value of \fBget_sum\fP cannot really be trusted since it is vulnerable to collisions: \fBget_sum(..., \(aqxyz\(aq) == \(aqHash xyz not supported\(aq\fP .UNINDENT .UNINDENT .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_hash /etc/shadow .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_managed(name, template, source, source_hash, user, group, mode, saltenv, context, defaults, **kwargs) Return the managed file data for file.managed .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_managed /etc/httpd/conf.d/httpd.conf jinja salt://http/httpd.conf \(aq{hash_type: \(aqmd5\(aq, \(aqhsum\(aq: }\(aq root root \(aq755\(aq base None None .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_mode(path, follow_symlinks=True) Return the mode of a file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_mode /etc/passwd .ft P .fi .sp Changed in version 2014.1.0: \fBfollow_symlinks\fP option added .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_selinux_context(path) Get an SELinux context from a given path .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_selinux_context /etc/hosts .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_sum(path, form=\(aqmd5\(aq) Return the sum for the given file, default is md5, sha1, sha224, sha256, sha384, sha512 are supported .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_sum /etc/passwd sha512 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_uid(path, follow_symlinks=True) Return the id of the user that owns a given file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_uid /etc/passwd .ft P .fi .sp Changed in version 0.16.4: \fBfollow_symlinks\fP option added .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.get_user(path, follow_symlinks=True) Return the user that owns a given file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.get_user /etc/passwd .ft P .fi .sp Changed in version 0.16.4: \fBfollow_symlinks\fP option added .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.gid_to_group(gid) Convert the group id to the group name on this system .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.gid_to_group 0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.grep(path, pattern, *args) Grep for a string in the specified file .IP Note This function\(aqs return value is slated for refinement in future versions of Salt .RE .INDENT 7.0 .TP .B path A file path .TP .B pattern A string. For example: \fBtest\fP \fBa[0\-5]\fP .TP .B args grep options. For example: \fB" \-v"\fP \fB" \-i \-B2"\fP .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.grep /etc/passwd nobody salt \(aq*\(aq file.grep /etc/sysconfig/network\-scripts/ifcfg\-eth0 ipaddr " \-i" salt \(aq*\(aq file.grep /etc/sysconfig/network\-scripts/ifcfg\-eth0 ipaddr " \-i \-B2" salt \(aq*\(aq file.grep "/etc/sysconfig/network\-scripts/*" ipaddr " \-i \-l" .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.group_to_gid(group) Convert the group to the gid on this system .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.group_to_gid root .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.is_blkdev(name) Check if a file exists and is a block device. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.is_blkdev /dev/blk .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.is_chrdev(name) Check if a file exists and is a character device. 
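.sp
This check pairs naturally with file.mknod_chrdev, documented later in this module. An illustrative sequence; the device path and major/minor numbers are placeholders:
.sp
.nf
.ft C
salt \(aq*\(aq file.mknod_chrdev /dev/mychr 180 31
salt \(aq*\(aq file.is_chrdev /dev/mychr
.ft P
.fi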
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.is_chrdev /dev/chr .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.is_fifo(name) Check if a file exists and is a FIFO. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.is_fifo /dev/fifo .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.is_link(path) Check if the path is a symlink .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.is_link /path/to/link .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.join(*args) Return a normalized file system path for the underlying OS .sp New in version Helium. .sp This can be useful at the CLI but is frequently useful when scripting combining path variables: .sp .nf .ft C {% set www_root = \(aq/var\(aq %} {% set app_dir = \(aqmyapp\(aq %} myapp_config: file: \- managed \- name: {{ salt[\(aqfile.join\(aq](www_root, app_dir, \(aqconfig.yaml\(aq) }} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.join \(aq/\(aq \(aqusr\(aq \(aqlocal\(aq \(aqbin\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.lchown(path, user, group) Chown a file, pass the file the desired user and group without following symlinks. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.chown /etc/passwd root root .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.link(src, path) New in version 2014.1.0: (Hydrogen) .sp Create a hard link to a file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.link /path/to/file /path/to/link .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.list_backups(path, limit=None) New in version 0.17.0. .sp Lists the previous versions of a file backed up using Salt\(aqs \fBfile state backup\fP system. .INDENT 7.0 .TP .B path The path on the minion to check for backups .TP .B limit Limit the number of results to the most recent N backups .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.list_backups /foo/bar/baz.txt .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.lstat(path) New in version 2014.1.0: (Hydrogen) .sp Returns the lstat attributes for the given file or dir. Does not support symbolic links. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.lstat /path/to/file .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.makedirs(path, user=None, group=None, mode=None) Ensure that the directory containing this path is available. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.makedirs /opt/code .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.makedirs_perms(name, user=None, group=None, mode=\(aq0755\(aq) Taken and modified from os.makedirs to set user, group and mode for each directory created. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.makedirs_perms /opt/code .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.manage_file(name, sfn, ret, source, source_sum, user, group, mode, saltenv, backup, makedirs=False, template=None, show_diff=True, contents=None, dir_mode=None, follow_symlinks=True) Checks the destination against what was retrieved with get_managed and makes the appropriate modifications (if necessary). .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.manage_file /etc/httpd/conf.d/httpd.conf \(aq{}\(aq salt://http/httpd.conf \(aq{hash_type: \(aqmd5\(aq, \(aqhsum\(aq: }\(aq root root \(aq755\(aq base \(aq\(aq .ft P .fi .sp Changed in version Helium: \fBfollow_symlinks\fP option added .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.mkdir(dir_path, user=None, group=None, mode=None) Ensure that a directory is available. 
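.sp
The optional user, group, and mode arguments work like those of file.makedirs. An illustrative call; the user and group are assumed to exist on the minion:
.sp
.nf
.ft C
salt \(aq*\(aq file.mkdir /srv/app/conf user=appuser group=appgroup mode=755
.ft P
.fi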
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.mkdir /opt/jetty/context .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.mknod(name, ntype, major=0, minor=0, user=None, group=None, mode=\(aq0600\(aq) New in version 0.17.0. .sp Create a block device, character device, or FIFO pipe. .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq file.mknod /dev/chr c 180 31 salt \(aq*\(aq file.mknod /dev/blk b 8 999 salt \(aq*\(aq file.mknod /dev/fifo p .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.mknod_blkdev(name, major, minor, user=None, group=None, mode=\(aq0660\(aq) New in version 0.17.0. .sp Create a block device. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.mknod_blkdev /dev/blk 8 999 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.mknod_chrdev(name, major, minor, user=None, group=None, mode=\(aq0660\(aq) New in version 0.17.0. .sp Create a character device. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.mknod_chrdev /dev/chr 180 31 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.mknod_fifo(name, user=None, group=None, mode=\(aq0660\(aq) New in version 0.17.0. .sp Create a FIFO pipe. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.mknod_fifo /dev/fifo .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.open_files(by_pid=False) Return a list of all physical open files on the system. .sp CLI Examples: .INDENT 7.0 .INDENT 3.5 salt \(aq*\(aq file.open_files salt \(aq*\(aq file.open_files by_pid=True .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.pardir() Return the relative parent directory path symbol for underlying OS .sp New in version Helium. .sp This can be useful when constructing Salt Formulas. .sp .nf .ft C {% set pardir = salt[\(aqfile.pardir\(aq]() %} {% set final_path = salt[\(aqfile.join\(aq](\(aqsubdir\(aq, pardir, \(aqconfdir\(aq) %} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.pardir .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.patch(originalfile, patchfile, options=\(aq\(aq, dry_run=False) New in version 0.10.4. .sp Apply a patch to a file .sp Equivalent to: .sp .nf .ft C patch <options> <originalfile> <patchfile> .ft P .fi .INDENT 7.0 .TP .B originalfile The full path to the file or directory to be patched .TP .B patchfile A patch file to apply to \fBoriginalfile\fP .TP .B options Options to pass to patch. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.patch /opt/file.txt /tmp/file.txt.patch .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.path_exists_glob(path) Tests to see if path after expansion is a valid path (file or directory). Expansion allows usage of \fB?\fP, \fB*\fP and character ranges \fB[]\fP. Tilde expansion is not supported. Returns True/False. .sp New in version Helium. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.path_exists_glob /etc/pam*/pass* .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.prepend(path, *args) New in version Helium. .sp Prepend text to the beginning of a file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.prepend /etc/motd \e "With all thine offerings thou shalt offer salt." \e "Salt is what makes things taste bad when it isn\(aqt in them." .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.psed(path, before, after, limit=\(aq\(aq, backup=\(aq.bak\(aq, flags=\(aqgMS\(aq, escape_all=False, multi=False) Deprecated since version 0.17.0: Use \fBreplace()\fP instead.
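.sp For example, the LogLevel edit shown in the CLI example below can be expressed with \fBreplace()\fP (documented below) instead; a sketch: .sp .nf .ft C salt \(aq*\(aq file.replace /etc/httpd/httpd.conf \(aqLogLevel warn\(aq \(aqLogLevel info\(aq .ft P .fi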
.sp Make a simple edit to a file (pure Python version) .sp Equivalent to: .sp .nf .ft C sed <backup> <options> "/<limit>/ s/<before>/<after>/<flags> <file>" .ft P .fi .INDENT 7.0 .TP .B path The full path to the file to be edited .TP .B before A pattern to find in order to replace with \fBafter\fP .TP .B after Text that will replace \fBbefore\fP .TP .B limit \fB\(aq\(aq\fP An initial pattern to search for before searching for \fBbefore\fP .TP .B backup \fB.bak\fP The file will be backed up before edit with this file extension; \fBWARNING:\fP each time \fBsed\fP/\fBcomment\fP/\fBuncomment\fP is called will overwrite this backup .TP .B flags \fBgMS\fP Flags to modify the search. Valid values are: .INDENT 7.0 .IP \(bu 2 \fBg\fP: Replace all occurrences of the pattern, not just the first. .IP \(bu 2 \fBI\fP: Ignore case. .IP \(bu 2 \fBL\fP: Make \fB\ew\fP, \fB\eW\fP, \fB\eb\fP, \fB\eB\fP, \fB\es\fP and \fB\eS\fP dependent on the locale. .IP \(bu 2 \fBM\fP: Treat multiple lines as a single line. .IP \(bu 2 \fBS\fP: Make \fI.\fP match all characters, including newlines. .IP \(bu 2 \fBU\fP: Make \fB\ew\fP, \fB\eW\fP, \fB\eb\fP, \fB\eB\fP, \fB\ed\fP, \fB\eD\fP, \fB\es\fP and \fB\eS\fP dependent on Unicode. .IP \(bu 2 \fBX\fP: Verbose (whitespace is ignored). .UNINDENT .TP .B multi \fBFalse\fP If True, treat the entire file as a single line .UNINDENT .sp Forward slashes and single quotes will be escaped automatically in the \fBbefore\fP and \fBafter\fP patterns. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.psed /etc/httpd/httpd.conf \(aqLogLevel warn\(aq \(aqLogLevel info\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.readdir(path) New in version 2014.1.0: (Hydrogen) .sp Return a list containing the contents of a directory .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.readdir /path/to/dir/ .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.readlink(path) New in version 2014.1.0: (Hydrogen) .sp Return the path that a symlink points to .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.readlink /path/to/link .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.remove(path) Remove the named file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.remove /tmp/foo .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.rename(src, dst) Rename a file or directory .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.rename /path/to/src /path/to/dst .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.replace(path, pattern, repl, count=0, flags=0, bufsize=1, append_if_not_found=False, prepend_if_not_found=False, not_found_content=None, backup=\(aq.bak\(aq, dry_run=False, search_only=False, show_changes=True) New in version 0.17.0. .sp Replace occurrences of a pattern in a file .sp This is a pure Python implementation that wraps Python\(aqs \fI\%sub()\fP. .INDENT 7.0 .TP .B Parameters .INDENT 7.0 .IP \(bu 2 \fBpath\fP \-\- Filesystem path to the file to be edited .IP \(bu 2 \fBpattern\fP \-\- A regular expression, in Python \fBre\fP syntax, to search for .IP \(bu 2 \fBrepl\fP \-\- The replacement text .IP \(bu 2 \fBcount\fP \-\- Maximum number of pattern occurrences to be replaced .IP \(bu 2 \fBflags\fP (\fIlist or int\fP) \-\- A list of flags defined in the \fI\%re module documentation\fP. Each list item should be a string that will correlate to the human\-friendly flag name. E.g., \fB[\(aqIGNORECASE\(aq, \(aqMULTILINE\(aq]\fP. Note: multiline searches must specify \fBfile\fP as the \fBbufsize\fP argument below. .IP \(bu 2 \fBbufsize\fP (\fIint or str\fP) \-\- How much of the file to buffer into memory at once.
The default value \fB1\fP processes one line at a time. The special value \fBfile\fP may be specified which will read the entire file into memory before processing. Note: multiline searches must specify \fBfile\fP buffering. .IP \(bu 2 \fBappend_if_not_found\fP \-\- .sp If set to \fBTrue\fP and the pattern is not found, the content will be appended to the file. .sp New in version Helium. .IP \(bu 2 \fBprepend_if_not_found\fP \-\- .sp If set to \fBTrue\fP and the pattern is not found, the content will be prepended to the file. .sp New in version Helium. .IP \(bu 2 \fBnot_found_content\fP \-\- .sp Content to use for append/prepend if not found. If None (default), uses \fBrepl\fP. Useful when \fBrepl\fP uses references to groups in \fBpattern\fP. .sp New in version Helium. .IP \(bu 2 \fBbackup\fP \-\- The file extension to use for a backup of the file before editing. Set to \fBFalse\fP to skip making a backup. .IP \(bu 2 \fBdry_run\fP \-\- Don\(aqt make any edits to the file .IP \(bu 2 \fBsearch_only\fP \-\- Just search for the pattern; ignore the replacement; stop on the first match .IP \(bu 2 \fBshow_changes\fP \-\- Output a unified diff of the old file and the new file. If \fBFalse\fP, return a boolean indicating whether any changes were made. Note: using this option will store two copies of the file in\-memory (the original version and the edited version) in order to generate the diff. .UNINDENT .TP .B Return type bool or str .UNINDENT .sp If an equal sign (\fB=\fP) appears in an argument to a Salt command it is interpreted as a keyword argument in the format \fBkey=val\fP. That processing can be bypassed in order to pass an equal sign through to the remote shell command by manually specifying the kwarg: .sp .nf .ft C salt \(aq*\(aq file.replace /path/to/file pattern=\(aq=\(aq repl=\(aq:\(aq .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.replace /etc/httpd/httpd.conf \(aqLogLevel warn\(aq \(aqLogLevel info\(aq salt \(aq*\(aq file.replace /some/file \(aqbefore\(aq \(aqafter\(aq flags=\(aq[MULTILINE, IGNORECASE]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.restore_backup(path, backup_id) New in version 0.17.0. .sp Restore a previous version of a file that was backed up using Salt\(aqs \fBfile state backup\fP system. .INDENT 7.0 .TP .B path The path on the minion to check for backups .TP .B backup_id The numeric id for the backup you wish to restore, as found using \fBfile.list_backups\fP .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.restore_backup /foo/bar/baz.txt 0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.restorecon(path, recursive=False) Reset the SELinux context on a given path .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.restorecon /home/user/.ssh/authorized_keys .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.rmdir(path) New in version 2014.1.0: (Hydrogen) .sp Remove the specified directory. Fails if the directory is not empty. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.rmdir /tmp/foo/ .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.search(path, pattern, flags=0, bufsize=1) New in version 0.17.0. .sp Search for occurrences of a pattern in a file .sp Params are identical to \fBreplace()\fP.
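.sp As with \fBreplace()\fP, a multiline search must use \fBfile\fP buffering together with the appropriate \fBre\fP flags; a sketch combining the options described above (the pattern is only illustrative): .sp .nf .ft C salt \(aq*\(aq file.search /etc/crontab \(aqmymaintenance.sh\(aq flags=\(aq[MULTILINE, IGNORECASE]\(aq bufsize=file .ft P .fi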
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.search /etc/crontab \(aqmymaintenance.sh\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.sed(path, before, after, limit=\(aq\(aq, backup=\(aq.bak\(aq, options=\(aq\-r \-e\(aq, flags=\(aqg\(aq, escape_all=False, negate_match=False) Deprecated since version 0.17.0: Use \fBreplace()\fP instead. .sp Make a simple edit to a file .sp Equivalent to: .sp .nf .ft C sed <backup> <options> "/<limit>/ s/<before>/<after>/<flags> <file>" .ft P .fi .INDENT 7.0 .TP .B path The full path to the file to be edited .TP .B before A pattern to find in order to replace with \fBafter\fP .TP .B after Text that will replace \fBbefore\fP .TP .B limit \fB\(aq\(aq\fP An initial pattern to search for before searching for \fBbefore\fP .TP .B backup \fB.bak\fP The file will be backed up before edit with this file extension; \fBWARNING:\fP each time \fBsed\fP/\fBcomment\fP/\fBuncomment\fP is called will overwrite this backup .TP .B options \fB\-r \-e\fP Options to pass to sed .TP .B flags \fBg\fP Flags to modify the sed search; e.g., \fBi\fP for case\-insensitive pattern matching .TP .B negate_match False Negate the search command (\fB!\fP) .sp New in version 0.17.0. .UNINDENT .sp Forward slashes and single quotes will be escaped automatically in the \fBbefore\fP and \fBafter\fP patterns. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.sed /etc/httpd/httpd.conf \(aqLogLevel warn\(aq \(aqLogLevel info\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.sed_contains(path, text, limit=\(aq\(aq, flags=\(aqg\(aq) Deprecated since version 0.17.0: Use \fBsearch()\fP instead. .sp Return True if the file at \fBpath\fP contains \fBtext\fP. Utilizes sed to perform the search (line\-wise search). .sp Note: the \fBp\fP flag will be added to any flags you pass in. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.sed_contains /etc/crontab \(aqmymaintenance.sh\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.seek_read(path, size, offset) New in version 2014.1.0: (Hydrogen) .sp Seek to a position on a file and read from it .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.seek_read /path/to/file 4096 0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.seek_write(path, data, offset) New in version 2014.1.0: (Hydrogen) .sp Seek to a position on a file and write to it .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.seek_write /path/to/file \(aqsome data\(aq 4096 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.set_mode(path, mode) Set the mode of a file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.set_mode /etc/passwd 0644 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.set_selinux_context(path, user=None, role=None, type=None, range=None) Set a specific SELinux label on a given path .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.set_selinux_context path .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.source_list(source, source_hash, saltenv) Check the source list and return the source to use .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.source_list salt://http/httpd.conf \(aq{hash_type: \(aqmd5\(aq, \(aqhsum\(aq: <md5sum>}\(aq base .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.stats(path, hash_type=None, follow_symlinks=True) Return a dict containing the stats for a given file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.stats /etc/passwd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.statvfs(path) New in version 2014.1.0: (Hydrogen) .sp Perform a statvfs call against the filesystem that the file resides on .sp CLI Example: .sp .nf .ft C
salt \(aq*\(aq file.statvfs /path/to/file .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.symlink(src, path) Create a symbolic link to a file .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.symlink /path/to/file /path/to/link .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.touch(name, atime=None, mtime=None) New in version 0.9.5. .sp Just like the \fBtouch\fP command, create a file if it doesn\(aqt exist or simply update the atime and mtime if it already does. .INDENT 7.0 .TP .B atime: Access time in Unix epoch time .TP .B mtime: Last modification in Unix epoch time .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.touch /var/log/emptyfile .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.truncate(path, length) New in version 2014.1.0: (Hydrogen) .sp Seek to a position on a file and delete everything after that point .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.truncate /path/to/file 512 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.uid_to_user(uid) Convert a uid to a user name .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.uid_to_user 0 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.uncomment(path, regex, char=\(aq#\(aq, backup=\(aq.bak\(aq) Deprecated since version 0.17.0: Use \fBreplace()\fP instead. .sp Uncomment specified commented lines in a file .INDENT 7.0 .TP .B path The full path to the file to be edited .TP .B regex A regular expression used to find the lines that are to be uncommented. This regex should not include the comment character. A leading \fB^\fP character will be stripped for convenience (for easily switching between comment() and uncomment()). .TP .B char \fB#\fP The character to remove in order to uncomment a line .TP .B backup \fB.bak\fP The file will be backed up before edit with this file extension; \fBWARNING:\fP each time \fBsed\fP/\fBcomment\fP/\fBuncomment\fP is called will overwrite this backup .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.uncomment /etc/hosts.deny \(aqALL: PARANOID\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.user_to_uid(user) Convert user name to a uid .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.user_to_uid root .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.file.write(path, *args) New in version Helium. .sp Write text to a file, overwriting any existing contents. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq file.write /etc/motd \e "With all thine offerings thou shalt offer salt." 
.ft P .fi .UNINDENT .SS salt.modules.freebsd_sysctl .sp Module for viewing and modifying sysctl parameters .INDENT 0.0 .TP .B salt.modules.freebsd_sysctl.assign(name, value) Assign a single sysctl parameter for this minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq sysctl.assign net.inet.icmp.icmplim 50 .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsd_sysctl.get(name) Return a single sysctl parameter for this minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq sysctl.get hw.physmem .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsd_sysctl.persist(name, value, config=\(aq/etc/sysctl.conf\(aq) Assign and persist a simple sysctl parameter for this minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq sysctl.persist net.inet.icmp.icmplim 50 salt \(aq*\(aq sysctl.persist coretemp_load NO config=/boot/loader.conf .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsd_sysctl.show() Return a list of sysctl parameters for this minion .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq sysctl.show .ft P .fi .UNINDENT .SS salt.modules.freebsdjail .sp The jail module for FreeBSD .INDENT 0.0 .TP .B salt.modules.freebsdjail.fstab(jail) Display contents of a fstab(5) file defined in specified jail\(aqs configuration. If no file is defined, return False. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq jail.fstab .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdjail.get_enabled() Return which jails are set to be run .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq jail.get_enabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdjail.is_enabled() See if jail service is actually enabled on boot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq jail.is_enabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdjail.restart(jail=\(aq\(aq) Restart the specified jail or all, if none specified .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq jail.restart [] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdjail.show_config(jail) Display specified jail\(aqs configuration .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq jail.show_config .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdjail.start(jail=\(aq\(aq) Start the specified jail or all, if none specified .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq jail.start [] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdjail.status(jail) See if specified jail is currently running .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq jail.status .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdjail.stop(jail=\(aq\(aq) Stop the specified jail or all, if none specified .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq jail.stop [] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdjail.sysctl() Dump all jail related kernel states (sysctl) .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq jail.sysctl .ft P .fi .UNINDENT .SS salt.modules.freebsdkmod .sp Module to manage FreeBSD kernel modules .INDENT 0.0 .TP .B salt.modules.freebsdkmod.available() Return a list of all available kernel modules .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq kmod.available .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdkmod.check_available(mod) Check to see if the specified kernel module is available .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq kmod.check_available kvm .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdkmod.load(mod) Load the specified kernel module .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq kmod.load kvm .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdkmod.lsmod() Return a 
dict containing information about currently loaded modules .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq kmod.lsmod .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdkmod.remove(mod) Remove the specified kernel module .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq kmod.remove kvm .ft P .fi .UNINDENT .SS salt.modules.freebsdpkg .sp Remote package support using \fBpkg_add(1)\fP .IP Warning This module has been completely rewritten. Up to and including version 0.17.0, it supported \fBpkg_add(1)\fP, but checked for the existence of a pkgng local database and, if found, would provide some of pkgng\(aqs functionality. The rewrite of this module has removed all pkgng support, and moved it to the \fBpkgng\fP execution module. For versions <= 0.17.0, the documentation here should not be considered accurate. If your Minion is running one of these versions, then the documentation for this module can be viewed using the \fBsys.doc\fP function: .sp .nf .ft C salt bsdminion sys.doc pkg .ft P .fi .RE .sp This module acts as the default package provider for FreeBSD 9 and older. If you need to use pkgng on a FreeBSD 9 system, you will need to override the \fBpkg\fP provider by setting the \fBproviders\fP parameter in your Minion config file, in order to use pkgng. .sp .nf .ft C
providers:
  pkg: pkgng
.ft P .fi .sp More information on pkgng support can be found in the documentation for the \fBpkgng\fP module. .sp This module will respect the \fBPACKAGEROOT\fP and \fBPACKAGESITE\fP environment variables, if set, but these values can also be overridden in several ways: .INDENT 0.0 .IP 1. 3 \fBSalt configuration parameters.\fP The configuration parameters \fBfreebsdpkg.PACKAGEROOT\fP and \fBfreebsdpkg.PACKAGESITE\fP are recognized. These config parameters are looked up using \fBconfig.get\fP and can thus be specified in the Master config file, Grains, Pillar, or in the Minion config file. Example: .sp .nf .ft C freebsdpkg.PACKAGEROOT: ftp://ftp.freebsd.org/ freebsdpkg.PACKAGESITE: ftp://ftp.freebsd.org/pub/FreeBSD/ports/ia64/packages\-9\-stable/Latest/ .ft P .fi .IP 2. 3 \fBCLI arguments.\fP Both the \fBpackageroot\fP (used interchangeably with \fBfromrepo\fP for API compatibility) and \fBpackagesite\fP CLI arguments are recognized, and override their config counterparts from section 1 above. .sp .nf .ft C salt \-G \(aqos:FreeBSD\(aq pkg.install zsh fromrepo=ftp://ftp2.freebsd.org/ salt \-G \(aqos:FreeBSD\(aq pkg.install zsh packageroot=ftp://ftp2.freebsd.org/ salt \-G \(aqos:FreeBSD\(aq pkg.install zsh packagesite=ftp://ftp2.freebsd.org/pub/FreeBSD/ports/ia64/packages\-9\-stable/Latest/ .ft P .fi .sp Note: these arguments can also be passed through in states: .sp .nf .ft C
zsh:
  pkg.installed:
    \- fromrepo: ftp://ftp2.freebsd.org/
.ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdpkg.file_dict(*packages) List the files that belong to a package, grouped by package. Not specifying any packages will return a list of \fIevery\fP file on the system\(aqs package database (not generally recommended). .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq pkg.file_dict httpd salt \(aq*\(aq pkg.file_dict httpd postfix salt \(aq*\(aq pkg.file_dict .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdpkg.file_list(*packages) List the files that belong to a package. Not specifying any packages will return a list of \fIevery\fP file on the system\(aqs package database (not generally recommended).
.sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq pkg.file_list httpd salt \(aq*\(aq pkg.file_list httpd postfix salt \(aq*\(aq pkg.file_list .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdpkg.install(name=None, refresh=False, fromrepo=None, pkgs=None, sources=None, **kwargs) Install package(s) using \fBpkg_add(1)\fP .INDENT 7.0 .TP .B name The name of the package to be installed. .TP .B refresh Whether or not to refresh the package database before installing. .TP .B fromrepo or packageroot Specify a package repository from which to install. Overrides the system default, as well as the PACKAGEROOT environment variable. .TP .B packagesite Specify the exact directory from which to install the remote package. Overrides the PACKAGESITE environment variable, if present. .UNINDENT .sp Multiple Package Installation Options: .INDENT 7.0 .TP .B pkgs A list of packages to install from a software repository. Must be passed as a python list. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install pkgs=\(aq["foo", "bar"]\(aq .ft P .fi .TP .B sources A list of packages to install. Must be passed as a list of dicts, with the keys being package names, and the values being the source URI or local path to the package. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install sources=\(aq[{"foo": "salt://foo.deb"}, {"bar": "salt://bar.deb"}]\(aq .ft P .fi .UNINDENT .sp Return a dict containing the new package names and versions: .sp .nf .ft C {\(aq<package>\(aq: {\(aqold\(aq: \(aq<old\-version>\(aq, \(aqnew\(aq: \(aq<new\-version>\(aq}} .ft P .fi .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.install <package name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdpkg.latest_version(*names, **kwargs) \fBpkg_add(1)\fP is not capable of querying for remote packages, so this function will always return results as if there is no package available for install or upgrade. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.latest_version <package name> salt \(aq*\(aq pkg.latest_version <package1> <package2> <package3> ... .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdpkg.list_pkgs(versions_as_list=False, with_origin=False, **kwargs) List the packages currently installed as a dict: .sp .nf .ft C {\(aq<package_name>\(aq: \(aq<version>\(aq} .ft P .fi .INDENT 7.0 .TP .B with_origin False Return a nested dictionary containing both the origin name and version for each installed package. .sp New in version 2014.1.0: (Hydrogen) .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.list_pkgs .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdpkg.refresh_db() \fBpkg_add(1)\fP does not use a local database of available packages, so this function simply returns \fBTrue\fP. It exists merely for API compatibility. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.refresh_db .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdpkg.remove(name=None, pkgs=None, **kwargs) Remove packages using \fBpkg_delete(1)\fP .INDENT 7.0 .TP .B name The name of the package to be deleted. .UNINDENT .sp Multiple Package Options: .INDENT 7.0 .TP .B pkgs A list of packages to delete. Must be passed as a python list. The \fBname\fP parameter will be ignored if this option is passed. .UNINDENT .sp New in version 0.16.0. .sp Returns a dict containing the changes. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.remove <package name> salt \(aq*\(aq pkg.remove <package1>,<package2>,<package3> salt \(aq*\(aq pkg.remove pkgs=\(aq["foo", "bar"]\(aq .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdpkg.upgrade() Upgrades are not supported with \fBpkg_add(1)\fP. This function is included for API compatibility only and always returns an empty dict.
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.upgrade .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdpkg.version(*names, **kwargs) Returns a string representing the package version or an empty string if not installed. If more than one package name is specified, a dict of name/version pairs is returned. .INDENT 7.0 .TP .B with_origin False Return a nested dictionary containing both the origin name and version for each specified package. .sp New in version 2014.1.0: (Hydrogen) .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq pkg.version <package name> salt \(aq*\(aq pkg.version <package1> <package2> <package3> ... .ft P .fi .UNINDENT .SS salt.modules.freebsdports .sp Install software from the FreeBSD \fBports(7)\fP system .sp New in version 2014.1.0: (Hydrogen) .sp This module allows you to install ports using \fBBATCH=yes\fP to bypass configuration prompts. It is recommended to use the \fBports state\fP to install ports, but it is also possible to use this module exclusively from the command line. .sp .nf .ft C salt minion\-id ports.config security/nmap IPV6=off salt minion\-id ports.install security/nmap .ft P .fi .INDENT 0.0 .TP .B salt.modules.freebsdports.config(name, reset=False, **kwargs) Modify configuration options for a given port. Multiple options can be specified. To see the available options for a port, use \fBports.showconfig\fP. .INDENT 7.0 .TP .B name The port name, in \fBcategory/name\fP format .TP .B reset False If \fBTrue\fP, runs a \fBmake rmconfig\fP for the port, clearing its configuration before setting the desired options .UNINDENT .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq ports.config security/nmap IPV6=off .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdports.deinstall(name) De\-install a port. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ports.deinstall security/nmap .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdports.install(name, clean=True) Install a port from the ports tree. Installs using \fBBATCH=yes\fP for non\-interactive building. To set config options for a given port, use \fBports.config\fP. .INDENT 7.0 .TP .B clean True If \fBTrue\fP, cleans after installation. Equivalent to running \fBmake install clean BATCH=yes\fP. .UNINDENT .IP Note It may be helpful to run this function using the \fB\-t\fP option to set a higher timeout, since compiling a port may cause the Salt command to exceed the default timeout. .RE .sp CLI Example: .sp .nf .ft C salt \-t 1200 \(aq*\(aq ports.install security/nmap .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdports.list_all() Lists all ports available. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ports.list_all .ft P .fi .IP Warning Takes a while to run, and returns a \fBLOT\fP of output .RE .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdports.rmconfig(name) Clear the cached options for the specified port; run a \fBmake rmconfig\fP .INDENT 7.0 .TP .B name The name of the port to clear .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ports.rmconfig security/nmap .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdports.search(name) Search for matches in the ports tree. Globs are supported, and the category is optional .sp CLI Examples: .sp .nf .ft C salt \(aq*\(aq ports.search \(aqsecurity/*\(aq salt \(aq*\(aq ports.search \(aqsecurity/n*\(aq salt \(aq*\(aq ports.search nmap .ft P .fi .IP Warning Takes a while to run .RE .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdports.showconfig(name, default=False, dict_return=False) Show the configuration options for a given port.
.INDENT 7.0 .TP .B default False Show the default options for a port (not necessarily the same as the current configuration) .TP .B dict_return False Instead of returning the output of \fBmake showconfig\fP, return the data in a dictionary .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ports.showconfig security/nmap salt \(aq*\(aq ports.showconfig security/nmap default=True .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdports.update(extract=False) Update the ports tree .INDENT 7.0 .TP .B extract False If \fBTrue\fP, runs a \fBportsnap extract\fP after fetching; should be used for first\-time installation of the ports tree. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq ports.update .ft P .fi .UNINDENT .SS salt.modules.freebsdservice .sp The service module for FreeBSD .INDENT 0.0 .TP .B salt.modules.freebsdservice.available(name) Check that the given service is available. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.available sshd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.disable(name, **kwargs) Disable the named service from starting at boot .sp The arguments are the same as for enable() .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.disable <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.disabled(name) Return True if the named service is disabled, False otherwise .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.disabled <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.enable(name, **kwargs) Enable the named service to start at boot .INDENT 7.0 .TP .B name service name .TP .B config /etc/rc.conf Config file for managing the service. If the config value is an empty string, then /etc/rc.conf.d/<service> is used. See rc.conf(5) for details. .sp The service.config variable can also be used to change the default. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.enable <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.enabled(name) Return True if the named service is enabled, False otherwise .INDENT 7.0 .TP .B name Service name .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.enabled <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.get_all() Return a list of all available services .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.get_all .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.get_disabled() Return what services are available but not enabled to start at boot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.get_disabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.get_enabled() Return what services are set to run on boot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.get_enabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.missing(name) The inverse of service.available. Returns \fBTrue\fP if the specified service is not available, otherwise returns \fBFalse\fP.
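.sp This is handy as a quick check from the CLI before writing states that manage a service; a sketch targeting FreeBSD minions by grain (the \fBnginx\fP service name is only illustrative): .sp .nf .ft C salt \-G \(aqos:FreeBSD\(aq service.missing nginx .ft P .fi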
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.missing sshd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.reload(name) Reload the named service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.reload <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.restart(name) Restart the named service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.restart <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.start(name) Start the specified service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.start <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.status(name, sig=None) Return the status for a service (True or False). .INDENT 7.0 .TP .B name Name of service .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.status <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.freebsdservice.stop(name) Stop the specified service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.stop <service name> .ft P .fi .UNINDENT .SS salt.modules.gem .sp Manage ruby gems. .INDENT 0.0 .TP .B salt.modules.gem.install(gems, ruby=None, runas=None, version=None, rdoc=False, ri=False) Installs one or several gems. .INDENT 7.0 .TP .B gems The gems to install .TP .B ruby None If RVM or rbenv are installed, the ruby version and gemset to use. .TP .B runas None The user to run gem as. .TP .B version None Specify the version to install for the gem. Doesn\(aqt play nice with multiple gems at once .TP .B rdoc False Generate RDoc documentation for the gem(s). .TP .B ri False Generate RI documentation for the gem(s). .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq gem.install vagrant .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gem.list(prefix=\(aq\(aq, ruby=None, runas=None) List locally installed gems. .INDENT 7.0 .TP .B prefix : Only list gems when the name matches this prefix. .TP .B ruby None If RVM or rbenv are installed, the ruby version and gemset to use. .TP .B runas None The user to run gem as. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq gem.list .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gem.sources_add(source_uri, ruby=None, runas=None) Add a gem source. .INDENT 7.0 .TP .B source_uri The source URI to add. .TP .B ruby None If RVM or rbenv are installed, the ruby version and gemset to use. .TP .B runas None The user to run gem as. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq gem.sources_add http://rubygems.org/ .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gem.sources_list(ruby=None, runas=None) List the configured gem sources. .INDENT 7.0 .TP .B ruby None If RVM or rbenv are installed, the ruby version and gemset to use. .TP .B runas None The user to run gem as. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq gem.sources_list .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gem.sources_remove(source_uri, ruby=None, runas=None) Remove a gem source. .INDENT 7.0 .TP .B source_uri The source URI to remove. .TP .B ruby None If RVM or rbenv are installed, the ruby version and gemset to use. .TP .B runas None The user to run gem as. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq gem.sources_remove http://rubygems.org/ .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gem.uninstall(gems, ruby=None, runas=None) Uninstall one or several gems. .INDENT 7.0 .TP .B gems The gems to uninstall. .TP .B ruby None If RVM or rbenv are installed, the ruby version and gemset to use. .TP .B runas None The user to run gem as.
.UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq gem.uninstall vagrant .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gem.update(gems, ruby=None, runas=None) Update one or several gems. .INDENT 7.0 .TP .B gems The gems to update. .TP .B ruby None If RVM or rbenv are installed, the ruby version and gemset to use. .TP .B runas None The user to run gem as. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq gem.update vagrant .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gem.update_system(version=\(aq\(aq, ruby=None, runas=None) Update rubygems. .INDENT 7.0 .TP .B version (newest) The version of rubygems to install. .TP .B ruby None If RVM or rbenv are installed, the ruby version and gemset to use. .TP .B runas None The user to run gem as. .UNINDENT .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq gem.update_system .ft P .fi .UNINDENT .SS salt.modules.genesis .sp Module for managing container and VM images .sp New in version Helium. .INDENT 0.0 .TP .B salt.modules.genesis.avail_platforms() Return which platforms are available .sp CLI Example: .INDENT 7.0 .INDENT 3.5 salt myminion genesis.avail_platforms .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.genesis.bootstrap(platform, root, img_format=\(aqdir\(aq, fs_format=\(aqext2\(aq, arch=None, flavor=None, repo_url=None, static_qemu=None) Create an image for a specific platform. .sp Please note that this function \fIMUST\fP be run as root, as the images that are created contain files owned by root. .INDENT 7.0 .TP .B platform Which platform to use to create the image. Currently supported platforms are rpm, deb and pacman. .TP .B root Local path to create the root of the image filesystem. .TP .B img_format Which format to create the image in. By default, just copies files into a directory on the local filesystem (\fBdir\fP). Future support will exist for \fBsparse\fP. .TP .B fs_format When using a non\-\fBdir\fP img_format, which filesystem to format the image to. By default, \fBext2\fP. .TP .B arch Architecture to install packages for, if supported by the underlying bootstrap tool. Currently only used for deb. .TP .B flavor Which flavor of operating system to install. This correlates to a specific directory on the distribution repositories. For instance, \fBwheezy\fP on Debian. .TP .B repo_url Mainly important for Debian\-based repos. Base URL for the mirror to install from. (e.g. \fI\%http://ftp.debian.org/debian/\fP) .TP .B static_qemu Local path to the static qemu binary required for this arch. (e.g. /usr/bin/qemu\-amd64\-static) .TP .B pkg_confs The location of the conf files to copy into the image, to point the installer to the right repos and configuration.
.UNINDENT .sp CLI Examples: .INDENT 7.0 .INDENT 3.5 salt myminion genesis.bootstrap pacman /root/arch salt myminion genesis.bootstrap rpm /root/redhat salt myminion genesis.bootstrap deb /root/wheezy arch=amd64 flavor=wheezy static_qemu=/usr/bin/qemu\-x86_64\-static .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.genesis.pack(name, root, path=None, pack_format=\(aqtar\(aq, compress=\(aqbzip2\(aq) Pack up a directory structure into a specific format .sp CLI Examples: .INDENT 7.0 .INDENT 3.5 salt myminion genesis.pack centos /root/centos salt myminion genesis.pack centos /root/centos pack_format=\(aqtar\(aq .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .TP .B salt.modules.genesis.unpack(name, dest=None, path=None, pack_format=\(aqtar\(aq, compress=\(aqbz2\(aq) Unpack an image into a directory structure .sp CLI Example: .INDENT 7.0 .INDENT 3.5 salt myminion genesis.unpack centos /root/centos .UNINDENT .UNINDENT .UNINDENT .SS salt.modules.gentoo_service .sp Top level service command wrapper, used to translate the OS detected by grains to the correct service manager .INDENT 0.0 .TP .B salt.modules.gentoo_service.available(name) Returns \fBTrue\fP if the specified service is available, otherwise returns \fBFalse\fP. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.available sshd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.disable(name, **kwargs) Disable the named service from starting at boot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.disable <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.disabled(name) Return True if the named service is disabled, False otherwise .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.disabled <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.enable(name, **kwargs) Enable the named service to start at boot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.enable <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.enabled(name) Return True if the named service is enabled, False otherwise .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.enabled <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.get_all() Return all available boot services .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.get_all .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.get_disabled() Return a set of services that are installed but disabled .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.get_disabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.get_enabled() Return a list of services that are enabled on boot .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.get_enabled .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.missing(name) The inverse of service.available. Returns \fBTrue\fP if the specified service is not available, otherwise returns \fBFalse\fP.
.sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.missing sshd .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.restart(name) Restart the named service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.restart <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.start(name) Start the specified service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.start <service name> .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.status(name, sig=None) Return the status for a service: the PID if the service is running, or an empty string if it is not. A signature can be passed to find the service via ps. .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.status <service name> [service signature] .ft P .fi .UNINDENT .INDENT 0.0 .TP .B salt.modules.gentoo_service.stop(name) Stop the specified service .sp CLI Example: .sp .nf .ft C salt \(aq*\(aq service.stop <service name> .ft P .fi .UNINDENT .SS salt.modules.gentoolkitmod .sp Support for Gentoolkit .INDENT 0.0 .TP .B salt.modules.gentoolkitmod.eclean_dist(destructive=False, package_names=False, size_limit=0, time_limit=0, fetch_restricted=False, exclude_file=\(aq/etc/eclean/distfiles.exclude\(aq) Clean obsolete portage sources .INDENT 7.0 .TP .B destructive Only keep minimum for reinstallation .TP .B package_names Protect all versions of installed packages. Only meaningful if used with destructive=True .TP .B size_limit Don\(aqt delete distfiles bigger than <size_limit>. <size_limit> is a size specification: "10M" is "ten megabytes", "200K" is "two hundred kilobytes", etc. Units are: G, M, K and B. .TP .B time_limit