- make sure that the results dir is created
- ruggedize observer listen socket creation
- fix support for non-cuttlefish builds
- observer disk autodetection
The Erlang cover tool tells you which lines of code were executed during
a run. It works by starting a main server that collects coverage data
sent from any number of connected nodes, each running a secondary cover
server. Modules for which coverage is collected are instrumented and
reloaded on all participating nodes.
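Roughly, the underlying OTP cover calls look like the sketch below; the
node names and module are illustrative, not something riak_test
hardcodes.

    {ok, _MainPid} = cover:start().                  %% main cover server
    Nodes = ['dev1@127.0.0.1', 'dev2@127.0.0.1'].
    {ok, _Started} = cover:start(Nodes).             %% secondary cover servers
    %% Instrument a module (it needs debug_info) and reload the
    %% instrumented code on all participating nodes:
    {ok, riak_kv_vnode} = cover:compile_beam(riak_kv_vnode).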
To disable cover analysis, set the 'cover_enabled' application variable
to false; it is true by default. When true, riak_test determines which
modules to instrument by looking first at the 'cover_apps' variable. If
set, it instruments all modules under each application's directory in
the lib folder of the first devrel node of the current version of Riak.
Code coverage cannot be used across multiple versions of the code, so
only nodes running current code will participate.
If 'cover_apps' is not set, the 'cover_modules' variable is tried next;
it should contain a list of module names as atoms (application names in
'cover_apps' should be given as atoms as well).
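In the riak_test config this could look like the snippet below (the app
and module names are only examples):

    %% instrument every module of the listed applications ...
    {cover_enabled, true},
    {cover_apps, [riak_kv, riak_core]},
    %% ... or, if 'cover_apps' is unset, just specific modules:
    %% {cover_modules, [riak_kv_vnode, riak_kv_get_fsm]},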
This change tries to start a secondary cover server on each node started
with current code, and to stop it before the node is killed. If your
code kills the node in an unusual way, you could run into trouble and
may need to stop cover manually in the test.
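In that case the plain cover API can be used from the node hosting the
main cover server, along these lines (the node name is illustrative):

    Node = 'dev1@127.0.0.1'.
    ok = cover:flush(Node).   %% pull any pending coverage data off the node
    ok = cover:stop(Node).    %% stop the secondary cover server there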
At the end of the run, aggregate coverage data for all tests that ran
is written as HTML in the coverage directory (relative to the working
dir). You can point it to a different place using the 'cover_output'
variable. In there you will find an index.html containing total coverage
numbers and coverage per app and per module, with a link for each module
to a file showing coverage per line of code.
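The per-module pages come from cover's own HTML analysis; that step
looks roughly like this (module name and output path are illustrative):

    {ok, {riak_kv_vnode, {Covered, NotCovered}}} =
        cover:analyse(riak_kv_vnode, coverage, module).
    {ok, _Out} = cover:analyse_to_file(riak_kv_vnode,
                                       "coverage/riak_kv_vnode.COVER.html",
                                       [html]).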
riak_test needs to be able to find the source file for each instrumented
beam, and needs permission to write it temporarily into the same
directory as the corresponding beam (so cover can find it).
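The temporary copy amounts to something like the following sketch (both
paths are illustrative):

    Beam = "/path/to/dev1/lib/riak_kv-1.4/ebin/riak_kv_vnode.beam".
    Src  = "/path/to/riak/deps/riak_kv/src/riak_kv_vnode.erl".
    TmpSrc = filename:join(filename:dirname(Beam), filename:basename(Src)).
    {ok, _Bytes} = file:copy(Src, TmpSrc).
    %% ... run the analysis ..., then remove the copy:
    ok = file:delete(TmpSrc).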
Killing a riak_test run with cover enabled unfortunately leaves the
participating nodes in a stuck state, so code was added to kill -9 those
nodes when the next run starts.
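That cleanup amounts to something like this (the process-matching
pattern is illustrative, not the exact one riak_test uses):

    Pids = string:tokens(os:cmd("pgrep -f 'beam.*dev[0-9]+@'"), "\n").
    [os:cmd("kill -9 " ++ Pid) || Pid <- Pids].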