This is useful if you have N workers that you want to run the same test
suite against, but would prefer to stagger the point at which each worker
starts, so that every test is run at least once as soon as possible. The
two required configuration parameters are offset and workers.
Offset is the *relative* offset to start at, not an absolute number. For
example, if you have 6 workers, set the offset to 3, and have 20 tests to
run, that particular worker will start at test #9. The reason absolute
offsets are not used is that nothing needs to be reconfigured when the
number of tests to run changes. You only need to reconfigure when the
number of workers running the tests changes.
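The start point implied by the example above can be sketched as follows. This is an illustrative Python sketch, not the actual implementation; the function name `start_index` and the exact formula (tests div workers, times offset) are assumptions chosen to match the worked example of 6 workers, offset 3, 20 tests starting at test #9:

```python
def start_index(offset, workers, num_tests):
    """Compute a worker's starting test index from its relative offset.

    Assumed formula: (num_tests div workers) * offset, which matches the
    example above: 20 div 6 = 3 tests per slot, 3 * 3 = test #9.
    """
    if not 0 <= offset < workers:
        raise ValueError("offset must be in [0, workers)")
    return (num_tests // workers) * offset

print(start_index(3, 6, 20))  # -> 9
# Changing the number of tests requires no reconfiguration:
print(start_index(3, 6, 24))  # -> 12
```

Because the offset is relative, the same (offset, workers) pair keeps spreading workers across the suite as the test count grows or shrinks.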
There is no way to ensure what portion of the full harvest is returned
by a coverage request during handoffs. A node may start accepting
requests before the data has been handed off, there is no way to
prevent or detect this, and so there is no way to write a deterministic
test condition that checks for the presence of all expected keys. This
change removes those checks during transfers. It also adds a directive
to wait until transfers have completed after a node is added or
removed, before listkeys result verification resumes.
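The "wait until transfers have completed" directive amounts to polling a condition with a bounded number of retries. A minimal Python sketch of that pattern; the helper name `wait_until` and its parameters are illustrative, not riak_test's actual API:

```python
import time

def wait_until(predicate, retries=60, delay=1.0):
    """Poll predicate() until it returns True or retries run out.

    Illustrative stand-in for blocking on "no active transfers" after a
    node is added or removed, before result verification resumes.
    """
    for _ in range(retries):
        if predicate():
            return True
        time.sleep(delay)
    return False

# Example: a fake transfer status that drains after a few polls.
status = {"remaining": 3}
def transfers_complete():
    status["remaining"] -= 1
    return status["remaining"] <= 0

print(wait_until(transfers_complete, retries=10, delay=0))  # -> True
```

The important property is that verification only resumes after the predicate holds, rather than racing the handoff.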
Since we can't use the riak_core node watcher service, we need to block
until the gen_server has initialized, is available, and routes have been
added to the webmachine dispatcher.
Remove the receive clause for port data, which could cause
`wait_for_cmd` to needlessly loop through every stdout message before
reaching the port exit. By doing a selective receive on just the exit
message, Erlang will implicitly use the "save queue" and place the data
messages back in the mailbox in their original order of arrival. In
fact, the previous approach could rearrange the stdout ordering.
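Erlang's selective receive scans the mailbox for the first matching message and leaves the non-matching ones queued in arrival order (the "save queue"). A toy Python model of those semantics, purely illustrative of the behavior described above and not Erlang's actual implementation:

```python
from collections import deque

class Mailbox:
    """Toy model of an Erlang process mailbox with selective receive."""

    def __init__(self):
        self.queue = deque()

    def deliver(self, msg):
        self.queue.append(msg)

    def selective_receive(self, match):
        """Return the first message satisfying match; messages scanned
        before it go back to the mailbox in their arrival order."""
        saved = deque()
        while self.queue:
            msg = self.queue.popleft()
            if match(msg):
                # Put skipped messages back at the front, order intact.
                self.queue.extendleft(reversed(saved))
                return msg
            saved.append(msg)
        self.queue = saved
        raise LookupError("no matching message")

mbox = Mailbox()
mbox.deliver(("data", "line 1"))
mbox.deliver(("data", "line 2"))
mbox.deliver(("exit_status", 0))

# Receive only the exit message; stdout messages stay queued, in order.
print(mbox.selective_receive(lambda m: m[0] == "exit_status"))
# -> ('exit_status', 0)
print(list(mbox.queue))
# -> [('data', 'line 1'), ('data', 'line 2')]
```

This is why matching only on the exit message cannot reorder the stdout data: the skipped messages are never consumed, only set aside and restored.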
Using staged joins to build a 4-node cluster is about a minute faster
than doing 4 sequential joins, due to the elimination of redundant
handoff.