Part of the condition checking done in the replication_object_reformat
test is to validate the results of a fullsync using
repl_util:validate_completed_fullsync/6. The way the function is
called from the test expects fullsync to complete with 0 error_exit
or retry_exit conditions occurring. This requires that the sink
cluster be in a steady state with all partitions available.
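
For context, the expectation looks roughly like the following; the
argument names are assumptions based on how similar replication
tests invoke the helper, not a verbatim excerpt:

```erlang
%% Sketch only: SourceLeader, SinkNode, NumKeys, and TestBucket are
%% placeholder names. The helper drives a fullsync from the source
%% cluster's replication leader to the sink cluster ("B" here) and
%% fails the test if the fullsync coordinator reports any error_exit
%% or retry_exit.
repl_util:validate_completed_fullsync(SourceLeader, SinkNode, "B",
                                      1, NumKeys, TestBucket)
```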
The test did not wait for these conditions to hold. Instead, it
performed the node downgrade asynchronously and waited up to 60
seconds for a completion message before continuing. As a result, the
test was continually failing after a node was downgraded to
`previous`, because partitions were still being reported as `down`
on that node.
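
The problematic shape was roughly the following; this is a
hypothetical reconstruction for illustration, not the original code:

```erlang
Test = self(),
spawn(fun() ->
              %% the downgrade ran off the main test process
              rt:upgrade(Node, previous),
              Test ! downgrade_complete
      end),
receive
    downgrade_complete ->
        ok
after 60000 ->
        %% proceed anyway after 60 seconds; nothing guarantees that
        %% the node's partitions are available again at this point
        ok
end
```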
To resolve the issue, the node downgrade is now performed in the
primary test process instead of in a separately spawned process.
After the version downgrade completes, the test waits for the
riak_repl and riak_kv services, calls rt:wait_until_nodes_ready/1,
calls rt:wait_until_no_pending_changes/1, and finally waits for the
riak_repl2_fs_node_reserver named process to be registered on the
downgraded node. That process handles partition reservation
requests, so its presence is key to determining that the new node is
able to handle a fullsync without partition errors.
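
A minimal sketch of the new sequence, assuming `Node` is the
downgraded node, `Nodes` is the full list of cluster members, and
rt:upgrade/2, rt:wait_for_service/2, and rt:wait_until/2 are the
usual riak_test helpers:

```erlang
%% downgrade synchronously in the primary test process
rt:upgrade(Node, previous),
%% wait for the services and for ring membership to settle
rt:wait_for_service(Node, [riak_kv, riak_repl]),
rt:wait_until_nodes_ready([Node]),
rt:wait_until_no_pending_changes(Nodes),
%% wait for the partition reservation process to be registered
rt:wait_until(Node,
              fun(N) ->
                      is_pid(rpc:call(N, erlang, whereis,
                                      [riak_repl2_fs_node_reserver]))
              end)
```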
When testing object reformatting through replication, also assert
that if the object format is downgraded, the keys which have already
been replicated can still be read.
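
A hedged sketch of that assertion, assuming keys 1..NumKeys were
written to TestBucket and replicated before the format downgrade,
and that the test module includes eunit.hrl for ?assertEqual:

```erlang
%% rt:systest_read/5 returns a list of read errors, so an empty
%% list means every replicated key is still readable on the sink
%% after the object format downgrade.
?assertEqual([], rt:systest_read(SinkNode, 1, NumKeys, TestBucket, 2))
```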