Overriding default_bucket_props in advanced_config without explicitly
setting these properties now returns different values: with the
allow_mult fix, allow_mult defaults to true when an app.config file is
present.
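As a hedged illustration (assuming advanced_config here maps onto
Riak's advanced.config file), pinning the affected property explicitly
keeps the default from shifting underneath the test; the value below is
illustrative, not the test's actual setting:

    %% advanced.config sketch: set allow_mult explicitly rather than
    %% relying on the app.config-dependent default.
    [{riak_core,
      [{default_bucket_props,
        [{allow_mult, false}]}]}].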
We found some race conditions in this test where reads to an ensemble
may fail for a brief period of time after recovery. By using the
wait_for function, we ensure that the tests will still pass as long as
the system eventually recovers and starts working properly again.
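A minimal sketch of that retry pattern, assuming a wait_for/1 helper in
the spirit of rt:wait_until/1 (the retry count and delay are
illustrative):

    %% Retry Fun until it returns ok or the retries run out; exceptions
    %% count as failures and trigger another attempt.
    wait_for(Fun) ->
        wait_for(Fun, 100, 500).

    wait_for(_Fun, 0, _Delay) ->
        {error, timeout};
    wait_for(Fun, Retries, Delay) ->
        case catch Fun() of
            ok -> ok;
            _  -> timer:sleep(Delay),
                  wait_for(Fun, Retries - 1, Delay)
        end.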
Use riak_test_runner:metadata/0 to get the configured backend instead of
defaulting to bitcask. Additionally, use rt:clean_data_dir/2 to safely
remove backend directories.
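The lookup can be as small as the following sketch, where the backend
key in the metadata proplist is an assumption about how the harness is
configured:

    %% Ask the test runner for the configured backend, falling back to
    %% bitcask only when none is configured.
    get_backend() ->
        proplists:get_value(backend, riak_test_runner:metadata(), bitcask).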
This is the first iteration of byzantine data-loss tests that exercise
both recoverable errors and unrecoverable, but detectable, errors. The
tests cover the following scenarios (a sketch of the first scenario's
shape follows the list):
* Lose one partition's worth of data, but no synctrees, and recover.
* Lose all but one partition of ensemble data, but no synctrees, and
recover.
* Lose a minority of synctrees. Only the peers with the missing
synctrees are restarted. The system remains available.
* Lose a majority of synctrees. The majority peers are restarted. The
system recovers when they all come back online.
* Lose a majority of synctrees with one node partitioned. All peers are
restarted except the partitioned one. The system does not recover while
that node is partitioned; when the partition is healed, the system
recovers.
* Lose all data and synctrees except on one peer; the system recovers.
* Back up and restore old data, but not synctrees; this results in
detected errors. Restoring newer data fixes this.
* Delete all data on all nodes, but not synctrees. This is detected and
an error is returned to the user.
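A hypothetical outline of the first scenario's shape; rt:stop_and_wait/1,
rt:start_and_wait/1, and rt:clean_data_dir/2 are riak_test helpers, while
PartitionDir, read_partition/1, and wait_for/1 (sketched above) are
illustrative names rather than the test's actual code:

    lose_one_partition_and_recover(Node, PartitionDir) ->
        rt:stop_and_wait(Node),
        %% Deleting one partition's backend directory loses that
        %% partition's data while the ensemble synctrees survive.
        rt:clean_data_dir([Node], PartitionDir),
        rt:start_and_wait(Node),
        %% Reads may fail briefly after restart, so retry until the
        %% ensemble recovers.
        ok = wait_for(fun() -> read_partition(Node) end).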