So far _disk_profile uses the pool property only for vmware
hypervisors. Use it for KVM/QEMU as well, so that users can choose
where to create their images.
The pool property can be either the path to a local folder or a storage
pool name.
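As a sketch, assuming the existing ``virt:disk`` profile layout in the minion
configuration (the extra keys shown here are illustrative, not a definitive
reference), such a profile could look like:
```yaml
virt:
  disk:
    kvm-with-pool:
      - system:
          size: 8192
          pool: vm-images           # name of an existing libvirt storage pool
      - data:
          size: 4096
          pool: /srv/local-images   # or the path to a local folder
```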
Rather than returning a list of {name: dict_of_disk_props} entries,
include the name as a property in each disk dictionary and remove one
nesting level. This aligns disks with the structure returned for networks and
allows using list comprehensions to slightly simplify the calling code.
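A minimal sketch of the change to the returned structure (the property set
shown is illustrative):
```yaml
# Before: each entry is keyed by the disk name
- system:
    size: 8192
    format: qcow2

# After: the name is just another property, one nesting level less
- name: system
  size: 8192
  format: qcow2
```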
Disk images are currently created at a path matching this rule:
<virt:images>/<vmname>/<diskname>
In the future we want the user to be able to define in which libvirt
pool to create the image, rather than always in the default virt:images
folder. As libvirt doesn't allow volume names with a / in them, create
the qemu disk images in:
<virt:images>/<vmname>_<diskname>
The disk profile structure didn't contain the image filename and path.
These were computed in two different places: once in _qemu_create_image()
and once in _gen_xml().
This commit moves all disk list computations into _disk_profile to get:
- default values applied to both disk profile and user disk definitions
- one place where all values are computed
This should reduce the risk of errors in future disk-related changes.
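For illustration only (the exact key names are assumptions, not the module's
guaranteed output), a fully computed disk entry could look like this, with the
filename following the <vmname>_<diskname> rule above:
```yaml
# Hypothetical entry as returned by _disk_profile once everything is computed in one place
- name: system
  device: disk
  format: qcow2
  model: virtio
  size: 8192
  filename: my_vm_system.qcow2
  source_file: /var/lib/libvirt/images/my_vm_system.qcow2
```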
enable_qcow is rather badly named since it doesn't tell the user what
it actually does. Thanks to the new disks parameter, this option can
now be set on a per-disk basis in the disks structure using a new
overlay_image property.
enable_qcow is now marked as deprecated.
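A sketch of a per-disk overlay definition in the disks structure (the keys
other than overlay_image are illustrative):
```yaml
disks:
  - name: system
    image: /templates/base.qcow2   # template image the disk is based on
    overlay_image: True            # create a QCOW2 overlay instead of copying the template
```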
virt.init just allows the user to pass in a profile name to get the
list of disk devices to create. This works well for simple
cases, but requires the profile to be stored in the minion
configuration.
From this commit on, the user can use a disks parameter to
customize the disks defined by the profile or add other ones. This could be
handy for the virt.running state, for example.
This new parameter makes the image parameter redundant, so it is now deprecated.
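As a hedged sketch of how the new parameter could be used (the individual disk
keys are illustrative):
```yaml
# Hypothetical disks argument merged on top of the disk profile
disks:
  - name: system        # overrides the profile disk with the same name
    size: 16384
  - name: data          # extra disk not present in the profile
    size: 4096
    format: qcow2
```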
virt.init just allows the user to pass in a profile name to get the
list of network interfaces to create. This works well for simple
cases, but requires the profile to be stored in the minion
configuration.
From this commit on, the user can use an interfaces parameter to
customize the NICs defined by the profile or add other ones. This could be
handy for the virt.running state, for example.
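Similarly, a hypothetical interfaces argument could look like this (the key
names are assumptions loosely modelled on libvirt interface attributes):
```yaml
# Hypothetical interfaces argument merged on top of the NIC profile
interfaces:
  - name: eth0          # overrides the profile NIC with the same name
    type: network
    source: default
  - name: eth1          # extra NIC not present in the profile
    type: bridge
    source: br0
```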
virt.init's enable_vnc is too limiting for the current possibilities of
the libvirt stack. Deprecate it in favor of a graphics dictionary to
allow creating VMs with VNC, Spice, RDP or no graphics. This design
helps keep the structure open for supporting new parameters in the
future.
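A minimal sketch of such a graphics dictionary, assuming keys modelled on the
libvirt <graphics> element (the exact names are an assumption):
```yaml
# Hypothetical graphics definition replacing enable_vnc
graphics:
  type: spice            # vnc, spice, rdp or none
  listen:
    type: address
    address: 0.0.0.0     # listen on all addresses
```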
Storing the hypervisor type in a configuration value doesn't make sense
since the libvirt host tells us what it is capable of. Instead, use
the values from the guest domains provided by virt.capabilities.
This is also the occasion to remove the use of the 'esxi' hypervisor value in
as many places as possible, since it is a synonym of 'vmware' and
'vmware' is the value provided by the libvirt esx driver.
As mentioned in issue #48085, if a disk image has a backing file,
_parse_qemu_img_info() only returns the path to the backing file.
From a user point of view this is rather limited since there is no way
to know more about that file.
virt.get_disks now uses the --backing-chain parameter to get the disk data
for all disks in the backing chain.
As mentioned in issue #48085, the qemu-img info parsing is really fragile.
In order to fix the weaknesses of that parsing, use the --output json
parameter of qemu-img info and parse that rather than the human-readable output.
Note that the following properties in the virt.get_disks output have a
different format:
* disk size, virtual size and snapshot vmsize are now in bytes, rather
than in a human-friendly format
* date is now complete and in ISO format, but that won't bother
anybody since it was broken before (it only had the time part)
Since the image property was a duplicate of the file one, they have been
consolidated into a single file property.
All format-specific values are simply not provided, but as those were
in the snapshots list before, it's rather likely no one will care.
Also write the parsed data into a dictionary rather than writing it as
YAML in a string and parsing it later on.
This commit also adds a unit test for the _parse_qemu_img_info() function.
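For illustration only (the key names and values below are hypothetical, not
the module's documented output), an entry in the new virt.get_disks result
could look roughly like this:
```yaml
# Hypothetical virt.get_disks entry after the JSON-based parsing;
# sizes are plain byte counts and snapshot dates are full ISO timestamps
vda:
  file: /var/lib/libvirt/images/my_vm_system.qcow2
  file format: qcow2
  disk size: 4156997632
  virtual size: 21474836480
  snapshots:
    - id: "1"
      tag: pre-upgrade
      vmsize: 1234567
      date: "2018-07-16T09:44:51"
```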
virt.get_disks does not need to depend on the libvirt:hypervisor value to
decide whether to extract data using qemu-img info on a disk image.
This needs to be run on all qemu images... and those can also be used
by a Xen VM nowadays.
The refactoring removes that dependency on the deprecated
libvirt:hypervisor configuration and uses the value of the <driver> type
attribute in the disk XML configuration instead.
Also enhance the get_disk unit test to make sure that the qemu-img info
output is parsed for a qemu image and not for others.
``netmiko`` is already a dependency of ``napalm``, therefore we are able to
expose all of the ``netmiko`` functionality from the ``netmiko`` execution
module without having to run under a netmiko Proxy Minion: see
https://github.com/saltstack/salt/pull/48260, which is the PR adding the netmiko
execution module.
These new functions, added to the existing napalm module, gate netmiko's features
so they can be reused by the napalm Proxy Minions, by forwarding the authentication
details and options which are already there.
For example, the following function goes through the ``netmiko`` Execution
Module, and napalm users can invoke it straight away to get direct access to
basic SSH primitives, without any further configuration needed:
``salt '*' napalm.netmiko_call send_command 'show version'``.
This is going to ease the CLI usage, so we can reduce everything to simply
specifying the host of the device. For example, having the following
configuration in the opts/pillar:
```yaml
napalm:
  username: salt
  password: test
napalm_inventory:
  1.2.3.4:
    driver: eos
  edge01.bzr01:
    driver: junos
```
With the above available in the opts/pillar for a Minion, say ``server1``, the
user would be able to execute: ``salt 'server1' bgp.neighbors host=1.2.3.4`` or
``salt 'server1' net.arp host=edge01.bzr01``, etc.
Usually the Salt proxies have been designed using a single methodology: for
each device you aim to manage, start one Proxy Minion. The NAPALM modules were
no exception; however, beginning with https://github.com/saltstack/salt/pull/38339
(therefore starting with the Nitrogen release), the functionality has been enhanced
in such a way that we can execute the code when we are able to install the
regular Minion directly on the network hardware.
There is another use case, let's call it "proxyless", where we
don't particularly need to start a Proxy process for each device, but rather
simply invoke arbitrary functions. These changes make this possible, so with one
single Minion (whether Proxy or regular), one is able to execute Salt commands
going through the NAPALM library to connect to the remote network device by
specifying the connection details from the command line and/or opts/pillar, e.g.
``salt server1 bgp.neighbors driver=junos host=1.2.3.4 username=salt``.
If the ``server1`` Minion has the following block in the Pillar (for example):
```yaml
napalm:
  driver: junos
  username: salt
```
The user would only need to provide the rest of the credentials (depending on
each individual case):
``salt server1 bgp.neighbors host=1.2.3.4``.