expand minion reauth scalability documentation

Related to #25447.
Justin Findlay 2015-07-29 23:26:51 -06:00
parent d9ab4bb989
commit 39a82467f1


@@ -67,17 +67,23 @@ subsequent retry until reaching `acceptance_wait_time_max`.
 
 Too many minions re-authing
 ---------------------------
 
-This is most likely to happen in the testing phase, when all minion keys have
-already been accepted, the framework is being tested and parameters change
-frequently in the masters configuration file.
+This is most likely to happen in the testing phase of a salt deployment, when
+all minion keys have already been accepted, but the framework is being tested
+and parameters are frequently changed in the salt master's configuration
+file(s).
 
-In a few cases (master restart, remove minion key, etc.) the salt-master generates
-a new AES-key to encrypt its publications with. The minions aren't notified of
-this but will realize this on the next pub job they receive. When the minion
-receives such a job it will then re-auth with the master. Since Salt does minion-side
-filtering this means that all the minions will re-auth on the next command published
-on the master-- causing another "thundering herd". This can be avoided by
-setting the
+The salt master generates a new AES key to encrypt its publications at certain
+events such as a master restart or the removal of a minion key. If you are
+encountering this problem of too many minions re-authing against the master,
+you will need to recalibrate your setup to reduce the rate of events like a
+master restart or minion key removal (``salt-key -d``).
+
+When the master generates a new AES key, the minions aren't notified of this
+but will discover it on the next pub job they receive. When the minion
+receives such a job it will then re-auth with the master. Since Salt does
+minion-side filtering this means that all the minions will re-auth on the next
+command published on the master-- causing another "thundering herd". This can
+be avoided by setting the
 
 .. code-block:: yaml
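
Note: the hunk boundary cuts the sentence off just before the option being
set. A minimal sketch of the minion-side throttle this passage appears to
describe, assuming the option in question is ``random_reauth_delay`` (a Salt
minion setting that spreads re-auth attempts over a random delay); the value
of 60 is illustrative:

.. code-block:: yaml

    # minion config (e.g. /etc/salt/minion): delay each re-auth by a
    # random interval between 0 and 60 seconds so minions do not
    # stampede the master all at once after an AES key rotation
    random_reauth_delay: 60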
@@ -247,4 +253,4 @@ If the job cache is necessary there are (currently) 2 options:
 
 - ext_job_cache: this will have the minions store their return data directly
   into a returner (not sent through the master)
 - master_job_cache (New in `2014.7.0`): this will make the master store the job
   data using a returner (instead of the local job cache on disk).