Use a synchronous command to populate the search buckets rather than an
asynchronous one; otherwise there is a data race between the data being
written and the search queries being performed.
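A minimal sketch of the ordering issue, assuming a shell command drives the
data load (populate_cmd/2 and run_search_queries/2 are hypothetical helpers,
not the actual riak_test API): os:cmd/1 blocks until the load finishes,
whereas a spawned command races with the queries that follow it.

    populate_then_search(Node, Bucket) ->
        %% Synchronous: os:cmd/1 returns only after the load command exits,
        %% so the data is fully in place before any search query runs.
        _Output = os:cmd(populate_cmd(Node, Bucket)),
        run_search_queries(Node, Bucket).

    %% The racy asynchronous variant would look roughly like this instead:
    %%   spawn(fun() -> os:cmd(populate_cmd(Node, Bucket)) end),
    %%   run_search_queries(Node, Bucket).  %% may run before the data exists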
Change the search verify script to use the proper Solr path; otherwise the
script always fails because it queries the wrong index.
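For illustration only, a sketch of a query URL with the index in the Solr
path. The /solr/<index>/select layout follows Riak Search's Solr-compatible
HTTP interface; the host, port, and function here are placeholders rather
than the actual verify script, which need not be Erlang at all.

    %% Build a search query URL with the index in the Solr path.
    solr_select_url(Host, Port, Index, Query) ->
        lists:flatten(io_lib:format("http://~s:~b/solr/~s/select?q=~s&wt=json",
                                    [Host, Port, Index, Query])).

    %% Example: solr_select_url("127.0.0.1", 8098, "my_bucket", "field:value").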
Modify the search tester to allow search failures while the cluster has
mixed versions. There is a known bug in Riak that can cause search queries
to fail against a mixed-version cluster, so the test now only requires the
tester to succeed after all nodes have been upgraded.
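A sketch of the relaxed check, under the assumption that the tester reports
ok or {error, Reason} and that the test tracks whether the cluster is still
mixed-version (the names are illustrative, not the real test code):

    %% Tolerate search failures only while the cluster is mixed-version.
    check_search_result(ok, _MixedVersions) ->
        ok;
    check_search_result({error, _Reason}, true) ->
        %% Known Riak bug: search queries may fail against a mixed cluster.
        ok;
    check_search_result({error, Reason}, false) ->
        %% Every node has been upgraded, so a failure here is a real problem.
        erlang:error({search_failed_after_upgrade, Reason}).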
Add a loaded_upgrade test that spawns k/v and map/reduce testers which run
concurrently as cluster nodes are upgraded. Each tester is spawned before a
node is upgraded and verified after the upgrade. If verification fails, it
is re-run until it passes or times out, in order to distinguish transient
failures from permanent data loss.
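The overall flow, sketched under stated assumptions: start_testers/1,
upgrade_node/2, and verify_testers/1 are hypothetical stand-ins for the real
riak_test helpers, and the ten-minute deadline is an arbitrary placeholder.

    upgrade_and_verify(Nodes, NewVersion) ->
        lists:foreach(
          fun(Node) ->
                  Testers = start_testers(Node),        %% k/v + map/reduce load
                  ok = upgrade_node(Node, NewVersion),  %% stand-in for the real upgrade helper
                  %% Retry verification until it passes or the deadline expires,
                  %% so a transient failure is not mistaken for data loss.
                  Deadline = erlang:monotonic_time(millisecond) + 10 * 60 * 1000,
                  ok = verify_until(Testers, Deadline)
          end,
          Nodes).

    verify_until(Testers, Deadline) ->
        case verify_testers(Testers) of
            ok ->
                ok;
            {error, _Reason} ->
                case erlang:monotonic_time(millisecond) < Deadline of
                    true ->
                        timer:sleep(1000),
                        verify_until(Testers, Deadline);
                    false ->
                        %% Still failing at the deadline: treat as permanent data loss.
                        erlang:error(permanent_failure)
                end
        end.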
The current k/v and map/reduce testers use basho_bench.
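As a rough point of reference, a basho_bench config for a protocol-buffers
k/v load looks something like the sketch below. The specific keys and values
follow the shape of basho_bench's bundled riakc_pb example and are
illustrative only; check the examples shipped with the basho_bench version
in use rather than treating this as the actual tester config.

    %% Illustrative basho_bench config for a riakc_pb k/v load.
    {mode, max}.
    {duration, 10}.
    {concurrent, 5}.
    {driver, basho_bench_driver_riakc_pb}.
    {key_generator, {int_to_bin_bigendian, {uniform_int, 10000}}}.
    {value_generator, {fixed_bin, 100}}.
    {riakc_pb_ips, [{127,0,0,1}]}.
    {operations, [{get, 1}, {update, 1}]}.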