The channel factory now has a usage argument, which is used by RAETChannel to determine routing destination addressing.
Fixed the three usages of RAETChannel:
usage == 'local_cmd' # Used by master_call for master to master communications
usage == 'call_cmd' # Used by salt-call for cli to master communications through minion
otherwise default usage == None # Used by minion to master communications
The destination estate is now None for all three usages.
These usages get translated into share destinations in the route dict for RAET messages, as sketched after the list below:
usage 'local_cmd' -> destination share 'local_cmd'
usage 'call_cmd' -> destination share 'call_cmd'
usage None -> destination share 'remote_cmd'
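
For illustration only, a minimal sketch of that translation (the route dict layout and the helper names below are assumptions, not the actual Salt code):

    def usage_to_share(usage):
        '''Map the channel factory usage keyword to a RAET destination share.'''
        shares = {
            'local_cmd': 'local_cmd',   # master_call: master to master
            'call_cmd': 'call_cmd',     # salt-call: cli to master through minion
            None: 'remote_cmd',         # default: minion to master
        }
        return shares[usage]

    def build_route(usage, src_estate, src_yard):
        '''Build a route dict whose destination estate is left as None so the
        lane router can substitute the master estate later.'''
        return {
            'src': (src_estate, src_yard, None),
            'dst': (None, None, usage_to_share(usage)),
        }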
Fixed core.py Lane Router _process_uxd_rxmsg to recognize the share destinations and
do the appropriate routing.
Mainly this allows the destination estate to be None when a RAETChannel is created, and then
the router can substitute the appropriate destination estate for the master.
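
A hedged sketch of that substitution idea (this is not the real _process_uxd_rxmsg; the function and parameter names are made up for illustration):

    def route_msg(msg, master_estate, local_estate):
        '''Fill in the destination estate for an inbound UXD message.
        msg['route']['dst'] is assumed to be an (estate, yard, share) tuple.'''
        estate, yard, share = msg['route']['dst']
        if estate is None:
            if share in ('local_cmd', 'call_cmd', 'remote_cmd'):
                # command shares are destined for the master estate
                estate = master_estate
            else:
                # anything else stays on the local estate
                estate = local_estate
        msg['route']['dst'] = (estate, yard, share)
        return msg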
mount.mounted compared fstab entries only by their fs_spec to determine whether an
entry about to be added was already present and the existing entry should therefore
be modified. It is, however, necessary to compare both the fs_spec and fs_file
fields so that users can mount new filesystems or block devices to the same
mountpoint.
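
An illustrative sketch of the changed matching logic (not the actual mount.mounted source; the helper name is hypothetical):

    def find_existing(fstab_lines, fs_spec, fs_file):
        '''Return the fstab line to modify, or None if a new line is needed.'''
        for line in fstab_lines:
            if line.startswith('#') or not line.strip():
                continue
            fields = line.split()
            # old behavior matched on fields[0] (fs_spec) alone;
            # the fix requires both the device and the mountpoint to match
            if fields[0] == fs_spec and fields[1] == fs_file:
                return line
        return None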
This fixes #15968, which had already been fixed in #9573 but was re-introduced in
c8c6112 due to, I believe, that PR not being rebased against develop.
In the past this feature basically checked returns, pinged minions to see if they were still running the job, then returned once the job was no longer running remotely. This has caused some issues with race conditions if the minion or master is busy (and the time between the minion finishing and the master receiving the return gets large).
This changes the behavior of the "wait" mechanism. The intent of this feature is to wait for the job return from all minions the job went to. To accomplish this we fire the job, then check the returns. If the initial returns are what we wanted/expected, we break. If they aren't, we ping all targets and see if the job is still running. If more minions are found running it, we merge them into the list of minions we are waiting for. We time out the job once all minions have stopped running the job and TIMEOUT seconds have passed (since the slowest minion).
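
A rough sketch of that wait loop (fire_job, get_returns, and ping_running are hypothetical callables standing in for the real salt.client machinery, e.g. a saltutil.find_job ping):

    import time

    def wait_for_returns(fire_job, get_returns, ping_running, targets, timeout):
        '''Sketch of the wait loop.
        fire_job(targets) -> jid
        get_returns(jid) -> {minion_id: return} seen so far
        ping_running(jid) -> iterable of minion ids still running the job'''
        jid = fire_job(targets)
        waiting = set(targets)              # minions we expect a return from
        returned = {}
        last_activity = time.time()
        while waiting:
            returned.update(get_returns(jid))
            waiting -= set(returned)
            if not waiting:
                break                       # got everything we wanted/expected
            still_running = set(ping_running(jid))
            # merge any extra minions found running the job into the wait list
            waiting |= still_running - set(returned)
            if still_running:
                last_activity = time.time() # reset the clock while anyone is running
            elif time.time() - last_activity > timeout:
                break                       # nobody running and TIMEOUT has passed
            time.sleep(1)
        return returned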
Conflicts:
salt/client/__init__.py