Merge pull request #31668 from rallytime/testing-docs

Some more testing documentation improvements
This commit is contained in:
Mike Place 2016-03-04 13:48:57 -07:00
commit 97127a8b83
6 changed files with 381 additions and 107 deletions

View File

@ -1,6 +1,79 @@
.. _salt-test-suite:
=================
Salt's Test Suite
=================
Salt comes with a powerful integration and unit test suite allowing for
the fully automated run of integration and/or unit tests from a single
interface.
To learn the basics of how Salt's test suite works, be sure to check
out the :ref:`Salt's Test Suite: An Introduction <tutorial-salt-testing>`
tutorial.
Test Directory Structure
========================
Salt's test suite is located in the ``tests`` directory in the root of
Salt's codebase. The test suite is divided into two main groups:
* :ref:`Integration Tests <integration-tests>`
* :ref:`Unit Tests <unit-tests>`
Within each of these groups, the directory structure roughly mirrors the
structure of Salt's own codebase. Notice that there are directories for
``states``, ``modules``, ``runners``, ``output``, and more in each testing
group.
The files housed in the ``modules`` directory of either the unit or the
integration testing group contain the respective unit or integration
tests for Salt execution modules.
Integration Tests
-----------------
The Integration section of Salt's test suite starts up a number of Salt
daemons to test functionality in a live environment. These daemons
include two Salt Masters, one Syndic, and two Minions. This allows the
Syndic interface to be tested and Master/Minion communication to be
verified. All of the integration tests are executed as live Salt commands
sent through the started daemons.
Integration tests are particularly good at testing modules, states, and
shell commands, among other segments of Salt's ecosystem. By utilizing
the integration test daemons, integration tests are easy to write. They
are also SaltStack's generally preferred method of adding new tests.
The discussion in the :ref:`Integration vs. Unit <integration-vs-unit>`
section of the :ref:`testing tutorial <tutorial-salt-testing>` is
beneficial in learning why you might want to write integration tests
vs. unit tests. Both testing arenas add value to Salt's test suite and
you should consider adding both types of tests if possible and appropriate
when contributing to Salt.
* :ref:`Integration Test Documentation <integration-tests>`
Unit Tests
----------
Unit tests do not spin up any Salt daemons, but instead find their value
in testing singular implementations of individual functions. Instead of
testing against specific interactions, unit tests should be used to test
a function's logic as well as any ``return`` or ``raises`` statements.
Unit tests also rely heavily on mocking external resources.
The discussion in the :ref:`Integration vs. Unit <integration-vs-unit>`
section of the :ref:`testing tutorial <tutorial-salt-testing>` is useful
in determining when you should consider writing unit tests instead of,
or in addition to, integration tests when contributing to Salt.
* :ref:`Unit Test Documentation <unit-tests>`
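As an illustration of this pattern, here is a hedged, self-contained sketch using the standard library's ``unittest`` and ``unittest.mock`` (the suite itself builds on unittest2 and MagicMock); the ``disk_usage`` function and its command runner are hypothetical stand-ins for an execution module function and its external resource:

```python
import unittest
from unittest.mock import MagicMock

# Hypothetical stand-in for an execution module function that depends on
# an external resource (here, a callable that shells out to ``df``).
def disk_usage(run_cmd):
    out = run_cmd('df /')
    if not out:
        raise ValueError('no output from df')
    return out.splitlines()[-1].split()

class DiskUsageTestCase(unittest.TestCase):
    def test_return(self):
        # Mock the external resource rather than running a real command
        run_cmd = MagicMock(return_value='Filesystem Use%\n/dev/sda1 42%')
        self.assertEqual(disk_usage(run_cmd), ['/dev/sda1', '42%'])

    def test_raises(self):
        # The ``raise`` path is testable without any Salt daemons
        run_cmd = MagicMock(return_value='')
        self.assertRaises(ValueError, disk_usage, run_cmd)

# Run both tests directly, without a separate test runner script
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiskUsageTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Both tests exercise only the function's logic: the mocked ``run_cmd`` replaces the external dependency entirely, so no daemons or system state are required.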
Running The Tests
=================
@ -28,9 +101,19 @@ the lines below, depending on the relevant Python version:
ImportError: No module named salttesting
Once all requirements are installed, use ``tests/runtests.py`` to
run all of the tests included in Salt's test suite:
.. code-block:: bash
python tests/runtests.py
For more information about options you can pass the test runner, see the
``--help`` option:
.. code-block:: bash
python tests/runtests.py --help
An alternative way of invoking the test suite is available in ``setup.py``:
@ -38,18 +121,30 @@ An alternative way of invoking the test suite is available in ``setup.py``:
./setup.py test
Running Test Subsections
------------------------
Instead of running the entire test suite all at once, which can take a long time,
there are several ways to run only specific groups of tests or individual tests:
* Run unit tests only: ``./tests/runtests.py --unit-tests``
* Run unit and integration tests for states: ``./tests/runtests.py --state``
* Run integration tests for an individual module: ``./tests/runtests.py -n integration.modules.virt``
* Run unit tests for an individual module: ``./tests/runtests.py -n unit.modules.virt_test``
* Run an individual test by using the class and test name (this example is for the
``test_default_kvm_profile`` test in the ``integration.module.virt``):
``./tests/runtests.py -n integration.module.virt.VirtTest.test_default_kvm_profile``
For more specific examples of how to run various test subsections or individual
tests, please see the :ref:`Test Selection Options <test-selection-options>`
documentation or the :ref:`Running Specific Tests <running-specific-tests>`
section of the :ref:`Salt's Test Suite: An Introduction <tutorial-salt-testing>`
tutorial.
Running Unit Tests Without Integration Test Daemons
---------------------------------------------------
Since the unit tests do not require a master or minion to execute, it is often useful to be able to
run unit tests individually, or as a whole group, without having to start up the integration testing
@ -66,7 +161,7 @@ apply.
Running Destructive Integration Tests
-------------------------------------
Salt is used to change the settings and behavior of systems. In order to
effectively test Salt's functionality, some integration tests are written to
@ -93,7 +188,7 @@ To run tests marked as destructive, set the ``--run-destructive`` flag:
Running Cloud Provider Tests
----------------------------
Salt's testing suite also includes integration tests to assess the successful
creation and deletion of cloud instances using :ref:`Salt-Cloud<salt-cloud>` for
@ -140,7 +235,7 @@ cloud provider tests can be run by setting the ``--cloud-provider-tests`` flag:
Running The Tests In A Docker Container
---------------------------------------
The test suite can be executed under a `docker`_ container using the
``--docked`` option flag. The `docker`_ container must be properly configured
@ -176,7 +271,6 @@ against Salt's `docker Salt test containers`_ repository.
.. _`docker Salt test containers`: https://github.com/saltstack/docker-containers
Automated Test Runs
===================
@ -185,40 +279,49 @@ across supported platforms. The tests executed from Salt's Jenkins server
create fresh virtual machines for each test run, then execute destructive
tests on the new, clean virtual machine.
SaltStack's Jenkins server continuously runs the entire test suite,
including destructive tests, on an array of various supported operating
systems throughout the day. Each actively supported branch of Salt's
repository runs the tests located in the respective branch's code. Each set
of branch tests also includes a pylint run. These branch tests help ensure
the viability of Salt code at any given point in time as pull requests
are merged into branches throughout the day.
In addition to branch tests, SaltStack's Jenkins server also runs tests
on pull requests. These pull request tests run on a smaller set of
virtual machines than the branch tests. The pull request tests, like the
branch tests, also include a pylint run.
When a pull request is submitted to Salt's repository on GitHub, the suite
of pull request tests is started by Jenkins. These tests are used to
gauge the pull request's viability to merge into Salt's codebase. If these
initial tests pass, the pull request can then be merged into the Salt branch
by one of Salt's core developers, at their discretion. If the initial
tests fail, core developers may request changes to the pull request. If the
failure is unrelated to the changes in question, core developers may merge
the pull request despite the initial failure.
As soon as the pull request is merged, the changes will be added to the
next branch test run on Jenkins.
For a full list of currently running test environments, go to
http://jenkins.saltstack.com.
Using Salt-Cloud on Jenkins
---------------------------
For testing Salt on Jenkins, SaltStack uses :ref:`Salt-Cloud<salt-cloud>` to
spin up virtual machines. The script using Salt-Cloud to accomplish this is
open source and can be found here: :blob:`tests/jenkins.py`
Writing Tests
=============
The salt testing infrastructure is divided into two classes of tests,
integration tests and unit tests. These terms may be defined differently in
other contexts, but for Salt they are defined this way:
- Unit Test: Tests which validate isolated code blocks and do not require
external interfaces such as ``salt-call`` or any of the salt daemons.
@ -227,8 +330,12 @@ other contexts, but for salt they are defined this way:
Salt testing uses the unittest2 library (an enhanced backport of Python's
standard ``unittest`` module) along with MagicMock.
* :ref:`Writing integration tests <integration-tests>`
* :ref:`Writing unit tests <unit-tests>`
Naming Conventions
------------------
Any function in either integration test files or unit test files that is doing
the actual testing, such as functions containing assertions, must start with
@ -241,41 +348,64 @@ the actual testing, such as functions containing assertions, must start with
When functions in test files are not prepended with ``test_``, the function
acts as a normal, helper function and is not run as a test by the test suite.
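A minimal sketch of this convention, using the standard library's ``unittest`` loader (the class and method names here are hypothetical):

```python
import unittest

class EchoTestCase(unittest.TestCase):
    def _make_payload(self):
        # No ``test`` prefix: a normal helper, never collected as a test
        return {'text': 'hello'}

    def test_payload_text(self):
        # ``test_`` prefix: collected and run by the test suite
        self.assertEqual(self._make_payload()['text'], 'hello')

# The default loader only collects methods whose names start with ``test``
suite = unittest.defaultTestLoader.loadTestsFromTestCase(EchoTestCase)
print(suite.countTestCases())  # prints 1; the helper is not counted
```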
Submitting New Tests
--------------------
Which branch of the Salt codebase should new tests be written against? The location
of where new tests should be submitted depends largely on the reason you're writing
the tests.
Tests for New Features
~~~~~~~~~~~~~~~~~~~~~~
If you are adding new functionality to Salt, please write the tests for this new
feature in the same pull request as the new feature. New features should always be
submitted to the ``develop`` branch.
If you have already submitted the new feature, but did not write tests in the original
pull request that has already been merged, please feel free to submit a new pull
request containing tests. If the feature was recently added to Salt's ``develop``
branch, then the tests should be added there as well. However, if the feature was
added to ``develop`` some time ago and is already present in one or more release
branches, please refer to the `Tests for Entire Files or Functions`_ section below
for more details about where to submit tests for functions or files that do not
already have tests.
Tests to Accompany a Bugfix
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are writing tests for code that fixes a bug in Salt, please write the test
in the same pull request as the bugfix. If you're unsure of where to submit your
bugfix and accompanying test, please review the
:ref:`Which Salt Branch? <which-salt-branch>` documentation in Salt's
:ref:`Contributing <contributing>` guide.
Tests for Entire Files or Functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes entire files in Salt are completely untested. If you are writing tests for
a file that doesn't have any tests written for it, write your test against the
earliest supported release branch that contains the file or function you're testing.
Once your tests are submitted in a pull request and merged into the branch in
question, the tests you wrote will be merged forward by SaltStack core engineers and
the new tests will propagate to the newer release branches. That way the tests you
wrote will apply to all current and relevant release branches, and not just the ``develop``
branch, for example. This methodology will help protect against regressions on older
files in Salt's codebase.
There may be times when the tests you write against an older branch fail in the
merge-forward process because functionality has changed in newer release branches.
In these cases, a Salt core developer may reach out to you for advice on the tests in
question if the path forward is unclear.
.. note::
If tests are written against a file in an older release branch and then merged forward,
there may be new functionality in the file that is present in the new release branch
that is untested. It would be wise to see if new functionality could use additional
testing once the test file has propagated to newer release branches.

View File

@ -38,79 +38,135 @@ Integration Classes
===================
The integration classes are located in ``tests/integration/__init__.py`` and
can be extended therein. There are four classes available to extend:
* `ModuleCase`_
* `ShellCase`_
* `SSHCase`_
* `SyndicCase`_
ModuleCase
----------
Used to define executions run via the master to minions and to call
single modules and states. The available testing functions are:
run_function
~~~~~~~~~~~~
Run a single salt function and condition the return down to match the
behavior of the raw function call. This will run the command and only
return the results from a single minion to verify.
run_state
~~~~~~~~~
Run the state.single command and return the state return structure.
minion_run
~~~~~~~~~~
Run a single salt function on the 'minion' target and condition the
return down to match the behavior of the raw function call.
ShellCase
---------
Shell out to the scripts which ship with Salt. The testing functions are:
run_cp
~~~~~~
Execute salt-cp. Pass in the argument string as it would be
passed on the command line.
run_call
~~~~~~~~
Execute salt-call, pass in the argument string as it would be
passed on the command line.
run_cloud
~~~~~~~~~
Execute the salt-cloud command. Pass in the argument string as
it would be passed on the command line.
run_key
~~~~~~~
Execute the salt-key command. Pass in the argument string as it
would be passed on the command line.
run_run
~~~~~~~
Execute the salt-run command. Pass in the argument string as it
would be passed on the command line.
run_run_plus
~~~~~~~~~~~~
Execute Salt run and the salt run function and return the data from
each in a dict.
run_salt
~~~~~~~~
Execute the salt command. Pass in the argument string as it would be
passed on the command line.
run_script
~~~~~~~~~~
Execute a salt script with the given argument string.
run_ssh
~~~~~~~
Execute salt-ssh. Pass in the argument string as it would be
passed on the command line.
SSHCase
-------
Used to execute remote commands via salt-ssh. The available methods are
as follows:
run_function
~~~~~~~~~~~~
Run a single salt function via salt-ssh and condition the return down to
match the behavior of the raw function call. This will run the command
and only return the results from a single minion to verify.
SyndicCase
----------
Used to execute remote commands via a syndic and is only used to verify
the capabilities of the Salt Syndic. The available methods are as follows:
run_function
~~~~~~~~~~~~
Run a single salt function and condition the return down to match the
behavior of the raw function call. This will run the command and only
return the results from a single minion to verify.
.. _integration-class-examples:
Examples
========
The following sections define simple integration tests present in Salt's
integration test suite for each type of testing class.
Module Example via ModuleCase Class
-----------------------------------
@ -121,7 +177,6 @@ Now the workhorse method ``run_function`` can be used to test a module:
.. code-block:: python
import os
import integration
@ -142,6 +197,16 @@ Now the workhorse method ``run_function`` can be used to test a module:
'''
self.assertEqual(self.run_function('test.echo', ['text']), 'text')
The first example illustrates the testing master issuing a ``test.ping`` call
to a testing minion. The test asserts that the minion returned with a ``True``
value to the master from the ``test.ping`` call.
The second example similarly verifies that the minion executed the
``test.echo`` command with the ``text`` argument. The ``assertEqual`` call
maintains that the minion ran the function and returned the data as expected
to the master.
Shell Example via ShellCase
---------------------------
@ -179,6 +244,55 @@ This example verifies that the ``salt-key`` command executes and returns as
expected by making use of the ``run_key`` method.
SSH Example via SSHCase
-----------------------
Testing salt-ssh functionality can be done using the SSHCase test class:
.. code-block:: python
import integration
class SSHGrainsTest(integration.SSHCase):
'''
Test salt-ssh grains functionality
Depend on proper environment set by integration.SSHCase class
'''
def test_grains_id(self):
'''
Test salt-ssh grains id work for localhost.
'''
cmd = self.run_function('grains.get', ['id'])
self.assertEqual(cmd, 'localhost')
Syndic Example via SyndicCase
-----------------------------
Testing Salt's Syndic can be done via the SyndicCase test class:
.. code-block:: python
import integration
class TestSyndic(integration.SyndicCase):
'''
Validate the syndic interface by testing the test module
'''
def test_ping(self):
'''
test.ping
'''
self.assertTrue(self.run_function('test.ping'))
This example verifies that a ``test.ping`` command is issued from the testing
master, is passed through to the testing syndic, down to the minion, and back
up again by using the ``run_function`` located within the ``SyndicCase`` test
class.
Integration Test Files
======================

View File

@ -12,6 +12,27 @@ integration testing and unit testing. While integration testing focuses on the
interaction between components in a sandboxed environment, unit testing focuses
on the singular implementation of individual functions.
Unit tests should be used specifically to test a function's logic. Unit tests
rely on mocking external resources.
While unit tests are good for ensuring consistent results, they are most
useful when they do not require more than a few mocks. Effort should be
made to mock as many external resources as possible. This effort is encouraged,
but not required. Sometimes the isolation provided by completely mocking the
external dependencies is not worth the effort of mocking those dependencies.
In these cases, requiring an external library to be installed on the
system before running the test file is a useful way to strike this balance.
For example, the unit tests for the MySQL execution module require the
presence of the MySQL python bindings on the system running the test file
before proceeding to run the tests.
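A hedged sketch of that conditional-dependency pattern, using the standard library's ``skipIf`` (the suite provides an equivalent helper); the test body is a placeholder:

```python
import unittest

# Import the external bindings if available; loading the test file must
# not fail on systems where they are missing.
try:
    import MySQLdb  # noqa: F401
    HAS_MYSQL = True
except ImportError:
    HAS_MYSQL = False

@unittest.skipIf(not HAS_MYSQL, 'The MySQL python bindings are not installed')
class MySQLTestCase(unittest.TestCase):
    def test_bindings_present(self):
        # Placeholder body; only runs when the real bindings are installed
        self.assertTrue(HAS_MYSQL)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MySQLTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
# On systems without the bindings the test is skipped, not failed
```

The try/except at import time is what lets the test runner collect the file everywhere, while the decorator decides at run time whether the tests actually execute.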
Overly detailed mocking can also result in decreased test readability and
brittleness as the tests are more likely to fail when the code or its
dependencies legitimately change. In these cases, it is better to add
dependencies to the test runner dependency state.
Preparing to Write a Unit Test
==============================

View File

@ -59,6 +59,8 @@ files located within ``salt/modules``.
files located in ``tests`` are outside the scope of this tutorial.
.. _integration-vs-unit:
Integration vs. Unit
--------------------
@ -154,6 +156,8 @@ executed by Salt's test suite runner and is asserting that the minion returned
with a ``True`` response.
.. _test-selection-options:
Test Selection Options
~~~~~~~~~~~~~~~~~~~~~~
@ -190,6 +194,8 @@ do and execute very quickly compared to the integration tests.
./tests/runtests.py --unit
.. _running-specific-tests:
Running Specific Tests
----------------------

View File

@ -1142,7 +1142,7 @@ class ModuleCase(TestCase, SaltClientTestCaseMixIn):
@property
def sub_minion_opts(self):
'''
Return the options used for the sub_minion
'''
return self.get_config('sub_minion')
@ -1273,6 +1273,9 @@ class ShellCase(AdaptedConfigurationTestCaseMixIn, ShellTestCase):
return self.run_script('salt-cp', arg_str, with_retcode=with_retcode, catch_stderr=catch_stderr)
def run_call(self, arg_str, with_retcode=False, catch_stderr=False):
'''
Execute salt-call.
'''
arg_str = '--config-dir {0} {1}'.format(self.get_config_dir(), arg_str)
return self.run_script('salt-call', arg_str, with_retcode=with_retcode, catch_stderr=catch_stderr)

View File

@ -34,8 +34,8 @@ class SSHGrainsTest(integration.SSHCase):
'''
Test salt-ssh grains id work for localhost.
'''
cmd = self.run_function('grains.get', ['id'])
self.assertEqual(cmd, 'localhost')
if __name__ == '__main__':