even in the rare and pathological case where the cluster is partitioned before all 3 nodes
have received the update. riakc_flag:disable(F) requires a causal context,
which is not present in the new map that would be created on the side of the
partition that has no data.
Well, that's not true. They break riak_kv's context operations on Maps.
This change works around that breakage by turning the context off for
the operations in this test. It is a temporary workaround; once the
context fix work has been done, we'll change it back.
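
For reference, a minimal sketch of the context-dependent path, assuming the
riakc data-type API (riakc_pb_socket:fetch_type / update_type, riakc_map,
riakc_flag); Pid, the bucket type, and the key below are placeholders, not
values from the test:

```erlang
%% Sketch only: Pid is assumed to be a connected riakc_pb_socket pid;
%% bucket type, bucket and key are placeholders.
%% Disabling a flag inside a map needs the causal context returned by a
%% fetch; a map built with riakc_map:new/0 carries no context, so the
%% disable cannot be expressed on the side of the partition that never
%% saw the data.
{ok, M0} = riakc_pb_socket:fetch_type(Pid, {<<"maps">>, <<"test">>}, <<"key">>),
M1 = riakc_map:update({<<"enabled">>, flag}, fun riakc_flag:disable/1, M0),
ok = riakc_pb_socket:update_type(Pid, {<<"maps">>, <<"test">>}, <<"key">>,
                                 riakc_map:to_op(M1)).
```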
Set handoff concurrency higher and tick time lower so handoff
happens sooner.
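
Concretely, that amounts to something like the following riak_core settings in
the test config; the values are illustrative, and handoff_concurrency /
vnode_management_timer are the knobs assumed here:

```erlang
%% Illustrative values only: more concurrent handoffs and a shorter
%% vnode management tick so handoff kicks in sooner.
-define(CONFIG,
        [{riak_core,
          [{handoff_concurrency, 16},
           {vnode_management_timer, 1000}]}]).
```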
Changed the request parameters on the final check to ensure we get a
converged value as soon as possible.
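
For example, the final read could pass stricter options to fetch_type; the
exact values below are illustrative, not necessarily what this change uses:

```erlang
%% Illustrative only: require responses from all primaries and don't let
%% notfound count toward the quorum, so the value we check is as
%% converged as possible.
{ok, Final} = riakc_pb_socket:fetch_type(Pid, {<<"maps">>, <<"test">>}, <<"key">>,
                                         [{r, 3}, {pr, 3}, {notfound_ok, false}]).
```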
This requires a build of Riak using:
* basho/riak_kv#650
* basho/riak_pb#52
riak_test should be built with:
* basho/riak_pb#52 (same as above)
* basho/riak-erlang-client#114
This test follows the same outline as the verify_counter_converge
test (a rough skeleton follows the list), namely:
1. Write a new datatype into the cluster.
2. Check that the value was correctly stored by connecting to some
other node.
3. Partition the cluster.
4. Perform updates on both sides, verifying that the new values do not
cross the partition.
5. Heal the cluster and show that the data converges according to the
desired behavior.
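
A rough skeleton of that outline (not the actual test code), assuming the
usual riak_test helpers rt:build_cluster/1, rt:partition/2, rt:heal/1 and
rt:wait_until_no_pending_changes/1:

```erlang
%% Skeleton of the outline above, not the actual test code. The update
%% and check steps are left as comments; they wrap riakc data-type calls
%% against clients on each side of the partition.
confirm() ->
    Nodes = [N1, N2, N3, N4] = rt:build_cluster(4),
    %% 1. Write a new datatype into the cluster (via a client on N1).
    %% 2. Read it back through a different node (e.g. N4) to check it
    %%    was stored correctly.
    %% 3. Partition the cluster into two halves.
    PartInfo = rt:partition([N1, N2], [N3, N4]),
    %% 4. Update the value on both sides; verify neither side sees the
    %%    other's update while partitioned.
    %% 5. Heal and wait for the values to converge.
    rt:heal(PartInfo),
    rt:wait_until_no_pending_changes(Nodes),
    pass.
```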
There is still one outstanding problem with this test: maps have a
"reset" behavior when a field is removed; namely, embedded types end
up semantically set to a value that resembles their bottom value but
is actually higher in the partial order/lattice. We have
decided this behavior is too surprising, especially in the case of
counters, where concurrent reset-removes would result in a
double-counting problem. See L255 for an example.
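
To make the scenario concrete, here is a hedged sketch of the kind of
concurrent remove/increment that triggers the surprise; PidA/PidB, bucket,
key and field names are placeholders, not taken from the test:

```erlang
%% Sketch only: PidA/PidB are assumed to be riakc_pb_socket connections
%% to opposite sides of the partition; bucket, key and field names are
%% placeholders. Both clients fetch first so they have a context.
{ok, MapA} = riakc_pb_socket:fetch_type(PidA, {<<"maps">>, <<"test">>}, <<"key">>),
{ok, MapB} = riakc_pb_socket:fetch_type(PidB, {<<"maps">>, <<"test">>}, <<"key">>),
%% Side A removes the counter field (the "reset" remove).
ok = riakc_pb_socket:update_type(PidA, {<<"maps">>, <<"test">>}, <<"key">>,
         riakc_map:to_op(riakc_map:erase({<<"hits">>, counter}, MapA))),
%% Side B concurrently increments the same field.
ok = riakc_pb_socket:update_type(PidB, {<<"maps">>, <<"test">>}, <<"key">>,
         riakc_map:to_op(
             riakc_map:update({<<"hits">>, counter},
                              fun(C) -> riakc_counter:increment(1, C) end,
                              MapB))).
%% After the heal, the surviving counter reflects the reset-remove
%% semantics described above rather than a clean restart, which is
%% where the double-counting surprise comes from.
```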