The GetDiskFreeSpace API call uses 32-bit unsigned ints, which causes it
to fail for large disks. The GetDiskFreeSpaceEx call uses 64-bit
integers and avoids the need for several manual calculations.
This also ensures the reported capacity is rounded correctly; the
previous implementation converted the percentage directly to an int,
truncating the decimal portion.
Conflicts:
salt/modules/win_disk.py
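As a rough illustration of the approach (a sketch, not the actual
win_disk.py code; the disk_usage name is made up), GetDiskFreeSpaceEx
can be called through ctypes with 64-bit counters, which sidesteps the
sectors/clusters arithmetic that GetDiskFreeSpace requires:

    import ctypes

    def disk_usage(path='C:\\'):
        # GetDiskFreeSpaceExW fills three 64-bit (ULARGE_INTEGER) counters,
        # so no sectors-per-cluster / bytes-per-sector math is needed.
        free_to_caller = ctypes.c_ulonglong(0)
        total_bytes = ctypes.c_ulonglong(0)
        total_free = ctypes.c_ulonglong(0)
        ctypes.windll.kernel32.GetDiskFreeSpaceExW(
            ctypes.c_wchar_p(path),
            ctypes.byref(free_to_caller),
            ctypes.byref(total_bytes),
            ctypes.byref(total_free),
        )
        used = total_bytes.value - total_free.value
        # round() instead of int() so the percentage is not truncated
        percent_used = round(100.0 * used / total_bytes.value)
        return total_bytes.value, total_free.value, percent_used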
Without this change multiple ext_pillar S3 directives would use the same
cache file even when they point at different buckets. Now the cache file
is keyed on the bucket name and prefix, so multiple directives can work
together without accidentally using each other's caches.
https://github.com/saltstack/salt/issues/22472
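A minimal sketch of the idea (the function and argument names here are
illustrative, not the s3fs module's API): the cache location
incorporates both the bucket and the prefix, so two directives never
collide on the same file:

    import os

    def cache_file_path(cachedir, bucket, prefix, key):
        # e.g. /var/cache/salt/master/pillar_s3fs/<bucket>/<prefix>/<key>
        return os.path.join(cachedir, 'pillar_s3fs', bucket, prefix, key)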
Without this, if we download a file from
s3://my-bucket/some/prefix/top.sls
then the cache file is saved at
/var/cache/salt/master/pillar_s3fs/some/prefix/top.sls
but the pillar root is at
/var/cache/salt/master/pillar_s3fs/
which fails to find the top file. Now we add the prefix to the root so
the files are found correctly. One might argue we should instead strip
the prefix on download, but that would cause conflicts when pulling
multiple external pillars from the same S3 bucket with different
prefixes.
https://github.com/saltstack/salt/issues/22472
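Illustrative only, using the paths above: once the prefix is appended to
the pillar root, the cached top file is resolvable again:

    import os

    cachedir = '/var/cache/salt/master/pillar_s3fs'
    prefix = 'some/prefix'
    pillar_root = os.path.join(cachedir, prefix)
    # pillar_root == '/var/cache/salt/master/pillar_s3fs/some/prefix',
    # which matches where top.sls was cached
    top_file = os.path.join(pillar_root, 'top.sls')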
Users can use the prefix parameter to specify that pillar data should
come only from keys that start with the provided prefix. This allows
users to put pillar data in a 'subdirectory' of their bucket and makes
it easier to fully define pillars in a single S3 bucket.
See: https://github.com/saltstack/salt/issues/22472
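A sketch of the intended behavior (not the module's actual code; how the
key listing is obtained is out of scope here): only keys under the
configured prefix contribute pillar data:

    def keys_for_pillar(all_keys, prefix):
        # Treat the prefix as a 'directory': match 'some/prefix/...' only.
        prefix = prefix.rstrip('/') + '/' if prefix else ''
        return [key for key in all_keys if key.startswith(prefix)]

    # keys_for_pillar(['some/prefix/top.sls', 'other/data.sls'], 'some/prefix')
    # -> ['some/prefix/top.sls']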
In the regular CLI we consider a job completed once no minion is running
it anymore. In this batching implementation we ping all minions, wait up
to the timeout, then start the batching. If minions return during the
batching, we add them to the list of targets.
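A minimal sketch of that flow, assuming hypothetical helpers (ping,
poll_late_pings, run_on) standing in for the LocalClient calls: ping
everything, wait up to the timeout, batch the responders, and fold late
ping returns into the remaining targets:

    def batched_run(targets, batch_size, timeout, ping, poll_late_pings, run_on):
        pending = set(ping(targets, timeout))    # minions that answered in time
        results = {}
        while pending:
            batch = sorted(pending)[:batch_size]
            pending -= set(batch)
            results.update(run_on(batch))        # run the job on this batch
            pending |= set(poll_late_pings())    # late ping returns join the queue
        return results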
In the LocalClient the range return is already assumed to be a list of
minion_ids. This implementation assumed it was returning a list of
FQDNs, which makes for unnecessary complications. In addition, we were
expanding the same range N times (N being the number of *total*
minions), which is a LOT.
Conflicts:
salt/utils/minions.py
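A sketch of the idea behind the fix (not the salt/utils/minions.py code;
expand_fn stands in for the seco.range query): expand each range
expression once, and treat the result as minion IDs directly rather than
FQDNs:

    _range_cache = {}

    def expand_range(range_expr, expand_fn):
        # Cache so the same range is expanded once, not once per minion.
        if range_expr not in _range_cache:
            # The expanded hosts are already the minion_ids we match against.
            _range_cache[range_expr] = list(expand_fn(range_expr))
        return _range_cache[range_expr]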