Prevent a situation where the auto-reconnect has not yet triggered,
causing the next operation after reconnecting to return an error
instead of ok. Force a disconnect and reconnect to make sure the test
is deterministic.
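A minimal sketch of the idea, assuming the test drives a
riak-erlang-client connection (host and port are placeholders):

    %% Rather than relying on auto_reconnect having fired before the
    %% next operation, tear the connection down and rebuild it so the
    %% follow-up call deterministically succeeds.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087,
                                           [{auto_reconnect, true}]),
    ok = riakc_pb_socket:stop(Pid),
    {ok, Pid2} = riakc_pb_socket:start_link("127.0.0.1", 8087, []),
    pong = riakc_pb_socket:ping(Pid2).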
Change to fetching the list of peers first, then checking whether the
riak_kv service is up; the peers are only examined if the service is
up. With the opposite ordering it is possible to observe the service
as down and then the peers as up, because the service came up in the
interim.
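In sketch form (the two helpers are hypothetical names for the checks
described above):

    %% Take the peer snapshot first, then confirm the service is up;
    %% if the service is up at this later point, the earlier snapshot
    %% cannot be explained by the service coming up in between.
    Peers = get_ensemble_peers(Node),    %% hypothetical helper
    Services = rpc:call(Node, riak_core_node_watcher, services, [Node]),
    case lists:member(riak_kv, Services) of
        true  -> check_peers(Peers);     %% hypothetical helper
        false -> service_not_up
    end.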
Also, make the KV vnode delay configurable.
Ensemble peers are now prevented from starting up until the riak_kv
service is up, avoiding nasty races that could even lead to node
crashes as the ensembles frantically query for data that isn't ready.
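The guard amounts to a wait of this shape before the peers proceed (a
sketch, not the actual riak_ensemble code):

    %% Block ensemble peer startup until riak_kv is registered as up
    %% on this node, so the peers never query for data that the KV
    %% system has not finished loading.
    wait_for_riak_kv() ->
        case lists:member(riak_kv,
                          riak_core_node_watcher:services(node())) of
            true  -> ok;
            false -> timer:sleep(100),
                     wait_for_riak_kv()
        end.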
You can now choose to save the intercept modules in basho-patches so
that they are loaded on a restart. This should be useful for modifying
Riak's behavior at startup time.
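In the riak_test configuration this might look like the following; the
option name shown is illustrative only, standing in for whatever key
enables the behavior:

    %% ~/.riak_test.config -- hypothetical option name; enables
    %% persisting compiled intercept modules into basho-patches so
    %% they survive a node restart.
    {rtdev, [
        {persist_intercepts_to_basho_patches, true}  %% assumed key
    ]}.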
Re-initiate fullsync after 100 failed checks for completion. The
number of retries of the 'start fullsync and then check for
completion' cycle is configurable via
repl_util:start_and_wait_until_fullsync_complete/4 and defaults to 20
retries. This change avoids spurious test failures due to a rare
condition in which the rpc call to start fullsync fails to actually
initiate the fullsync. A very similar change to the version of
start_and_wait_until_fullsync_complete in the replication module,
introduced in 0a36f9974c, has been effective at avoiding this
condition in v2 replication tests.
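A sketch of the retry shape (the real implementation is
repl_util:start_and_wait_until_fullsync_complete/4; the polling helper
here is hypothetical):

    %% Start fullsync and poll for completion up to 100 checks; if
    %% completion is never observed, assume the start rpc was lost
    %% and re-initiate, up to Retries attempts.
    start_and_wait(_Node, 0) ->
        {error, fullsync_not_started};
    start_and_wait(Node, Retries) ->
        rpc:call(Node, riak_repl_console, fullsync, [["start"]]),
        case wait_for_completion(Node, 100) of   %% hypothetical poll
            ok      -> ok;
            timeout -> start_and_wait(Node, Retries - 1)
        end.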
Part of the condition checking done in the replication_object_reformat
test is to validate the results of a fullsync using
repl_util:validate_completed_fullsync/6. The way the function is
called from the test expects fullsync to complete with zero error_exit
or retry_exit conditions. This requires that the sink cluster be in a
steady state with all partitions available. The test
failed to wait for such conditions to occur and instead relied on
performing a node downgrade asynchronously and waiting for up to 60
seconds for a completion message before continuing with the test. The
test was continually failing after a node was downgraded to `previous`
due to partitions being reported as `down` on that node. To resolve
the issue the node downgrade process is now done in the primary test
process instead of in a separate spawned process. After the version
downgrade is complete, the test now waits for the riak_repl and the
riak_kv services, calls rt:wait_until_nodes_ready/1, calls
rt:wait_until_no_pending_changes/1, and finally waits for the
riak_repl2_fs_node_reserver named process to be registered on the
downgraded node. This process is responsible for handling partition
reservation requests and is key to determining that the new node is
able to handle a fullsync without partition errors.
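In outline, the post-downgrade wait now looks roughly like this (the
final predicate is a sketch of the registration check):

    %% Performed in the main test process after the downgrade.
    rt:wait_for_service(Node, riak_kv),
    rt:wait_for_service(Node, riak_repl),
    rt:wait_until_nodes_ready([Node]),
    rt:wait_until_no_pending_changes([Node]),
    %% Proceed only once the partition-reservation process is
    %% registered on the downgraded node.
    rt:wait_until(Node, fun(N) ->
        is_pid(rpc:call(N, erlang, whereis,
                        [riak_repl2_fs_node_reserver]))
    end).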
AAE fullsync tests can encounter {error, wrong_node} messages
resulting from rpc calls to riak_kv_vnode:hashtree_pid. Add handling
of error responses to rt:index_built_fun.
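The handling amounts to treating an error tuple as "not built yet"
rather than crashing on a badmatch (a sketch; tree_built/1 is a
hypothetical stand-in for the build check):

    %% An error tuple such as {error, wrong_node} (e.g. during
    %% ownership transfer) means "try again later", so return false
    %% and let the surrounding wait retry.
    case rpc:call(Node, riak_kv_vnode, hashtree_pid, [Partition]) of
        {ok, Pid} when is_pid(Pid) ->
            tree_built(Pid);    %% hypothetical build check
        {error, _Reason} ->
            false
    end.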
Update the calls to rt:systest_read in repl_util and
repl_aae_fullsync_util to treat identical siblings resulting from the
use of DVV as a single value. These changes are specifically to
address failures seen in the repl_aae_fullsync_custom_n and
replication_object_reformat tests, but should be generally useful for
replication tests that use the utility modules and have allow_mult set
to true.
Add an optional parameter to rt:systest_read that provides the ability
to compare siblings and treat those with identical values and
metadata, excluding the dot, as a single value for the purposes of
testing.
This is to facilitate testing with DVV enabled since there are several
known cases where use of DVV leads to siblings created internally by
Riak. One example of such a case is a write during handoff that is
forwarded by the vnode, but also executed locally.
The existing behavior of rt:systest_read is to fail with a badmatch
when any siblings are encountered. This behavior remains the default
so that existing tests keep their current semantics without explicit
change.
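A sketch of the comparison over riakc_obj contents; the metadata key
used for the dot is an assumption here:

    -define(DOT, dot).  %% metadata key name is an assumption

    %% Two siblings count as one logical value when the value and the
    %% metadata, minus the dot entry, are identical.
    same_modulo_dot({MD1, V1}, {MD2, V2}) ->
        Strip = fun(MD) ->
                    lists:keysort(1, dict:to_list(dict:erase(?DOT, MD)))
                end,
        V1 =:= V2 andalso Strip(MD1) =:= Strip(MD2).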
Fix a problem with the cacertdir specification in the replication_ssl
test. The code used to load cert files in v2 replication expects the
path specified by the cacertdir key to be a directory only. With v3
replication the code used is flexible enough to allow either a
directory or a file. Also correct a typo in the certfile path for the
SSLConfig1 configuration.
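For reference, the corrected shape of the settings (paths are
placeholders):

    %% v2 replication requires cacertdir to name a directory; v3 also
    %% accepts a file, so a directory is the portable choice.
    {riak_core, [
        {ssl_enabled, true},
        {certfile,  "/etc/riak/certs/site1-cert.pem"},
        {keyfile,   "/etc/riak/certs/site1-key.pem"},
        {cacertdir, "/etc/riak/certs/cacerts"}    %% a directory
    ]}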
* Do not attempt to cancel fullsync if the initial attempt to start
  and wait for completion fails. The observed problem is not fullsync
  starting and failing to complete in time, but rather the initial
  call to start fullsync not taking effect; the cancellation is
  therefore unnecessary.
* Replace the call to repl_util:wait_for_connection/2 in the node
upgrade process with a call to
replication:wait_until_connection/1. This function is geared towards
v2 replication and should speed up test execution.
Replace use of a 40 second sleep in the test_supervision test case
with a wait condition to better handle variances in the time it takes
to progress through 10 retry attempts.
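That is, roughly (the predicate body is hypothetical):

    %% Poll instead of sleeping a fixed 40 seconds; rt:wait_until/1
    %% re-evaluates the predicate until it returns true or times out.
    ok = rt:wait_until(fun() ->
             retry_attempts(Node) >= 10    %% hypothetical predicate
         end).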
AAE is disabled. (defect https://github.com/basho/riak_kv/issues/959)
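Presumably via the standard riak_kv setting in the node config:

    %% Disable active anti-entropy until basho/riak_kv#959 is resolved.
    {riak_kv, [
        {anti_entropy, {off, []}}
    ]}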
- Adds additional console output to reset-current-env explaining the
  configuration and the steps being executed
- Adds the -n option to the reset-current-env script to specify the
  number of nodes to build; by default, 5 are created.
As of commit 3044839456, tests that return anything other than the
prescribed success atom 'pass' are treated as failures. Change the
replication_upgrade and replication2_upgrade tests, which returned the
result of a call to lists:foreach/2, to instead return 'pass' to
indicate success.
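The change is mechanical; in sketch form (setup_cluster/0 and
upgrade_and_verify/1 are hypothetical placeholders for the tests'
actual setup and per-node work):

    %% Before, confirm/0 returned the result of lists:foreach/2
    %% ('ok'), which now registers as a failure; return 'pass'
    %% explicitly instead.
    confirm() ->
        Nodes = setup_cluster(),                 %% hypothetical setup
        lists:foreach(fun upgrade_and_verify/1, Nodes),
        pass.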