From e92b110a1c784b13b1ffd975f40c445c6bc920b8 Mon Sep 17 00:00:00 2001
From: rajvidhimar
Date: Mon, 5 Jun 2017 15:49:36 +0530
Subject: [PATCH] Junos specific documentation for topic Network Automation

---
 doc/topics/network_automation/index.rst | 225 +++++++++++++++++++++++-
 1 file changed, 222 insertions(+), 3 deletions(-)

diff --git a/doc/topics/network_automation/index.rst b/doc/topics/network_automation/index.rst
index b2bfded9b4..7c2e60282e 100644
--- a/doc/topics/network_automation/index.rst
+++ b/doc/topics/network_automation/index.rst
@@ -24,10 +24,229 @@ The methodologies for network automation have been introduced in
:ref:`Carbon ` based on proxy minions:

- :mod:`Cisco NXOS `
- :mod:`Cisco NOS `
- :mod:`Junos `
- :mod:`NAPALM proxy `

JUNOS
-----

Juniper has developed a Junos-specific proxy infrastructure which allows
remote execution and configuration management of Junos devices without
having to install SaltStack on the device. The infrastructure includes:

- :mod:`Junos proxy `
- :mod:`Junos execution module `
- :mod:`Junos state module `
- :mod:`Junos syslog engine `

The execution and state modules are implemented using junos-eznc (PyEZ).
Junos PyEZ is a microframework for Python that enables you to remotely manage
and automate devices running the Junos operating system.


Getting started
###############

Install PyEZ on the system which will run the Junos proxy minion.
It is required to run the Junos-specific modules.

.. code-block:: shell

    pip install junos-eznc

Next, set the master of the proxy minions.

``/etc/salt/proxy``

.. code-block:: yaml

    master: <ip or hostname of the salt master>

Add the details of the Junos device. Device details are usually stored in
salt pillars. If you do not wish to store credentials in the pillar, you can
set up passwordless SSH instead.

``/srv/pillar/vmx_details.sls``

.. code-block:: yaml

    proxy:
      proxytype: junos
      host: <ip or hostname of the Junos device>
      username: user
      passwd: secret123

Map the pillar file to the proxy minion. This is done in the top file.

``/srv/pillar/top.sls``

.. code-block:: yaml

    base:
      vmx:
        - vmx_details

.. note::

    Before starting the Junos proxy, make sure that NETCONF is enabled on the
    Junos device. This can be done by adding the following configuration on
    the Junos device.

    .. code-block:: shell

        set system services netconf ssh

Start the salt master.

.. code-block:: bash

    salt-master -l debug


Then start the salt proxy.

.. code-block:: bash

    salt-proxy --proxyid=vmx -l debug
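Before running any Junos-specific functions, it is worth confirming that the
master can reach the proxy minion. The following is a minimal sanity check,
assuming the proxy ID ``vmx`` used above and that the proxy's key has not yet
been accepted on the master; ``salt-key`` and ``test.ping`` work the same way
for proxy minions as for regular minions.

.. code-block:: bash

    # list the keys known to the master and accept the proxy minion's key
    # (not needed if the master is configured to auto-accept keys)
    $ sudo salt-key -L
    $ sudo salt-key -a vmx

    # confirm that the proxy minion responds
    $ sudo salt 'vmx' test.ping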
Once the master and the Junos proxy minion have started, we can run execution
and state modules on the proxy minion. Below are a few examples.

CLI examples
############

For detailed documentation of all the Junos execution module functions, refer to
:mod:`Junos execution module `.

Display device facts.

.. code-block:: bash

    $ sudo salt 'vmx' junos.facts


Refresh the Junos facts. This function will also refresh the facts stored in
the salt grains (the Junos proxy stores the Junos facts in the salt grains).

.. code-block:: bash

    $ sudo salt 'vmx' junos.facts_refresh


Call an RPC.

.. code-block:: bash

    $ sudo salt 'vmx' junos.rpc 'get-interface-information' '/var/log/interface-info.txt' terse=True


Install configuration on the device.

.. code-block:: bash

    $ sudo salt 'vmx' junos.install_config 'salt://my_config.set'


Shut down the Junos device.

.. code-block:: bash

    $ sudo salt 'vmx' junos.shutdown shutdown=True in_min=10


State file examples
###################

For detailed documentation of all the Junos state module functions, refer to
:mod:`Junos state module `.

Execute an RPC on the Junos device and store the output in a file.

``/srv/salt/rpc.sls``

.. code-block:: yaml

    get-interface-information:
      junos:
        - rpc
        - dest: /home/user/rpc.log
        - interface_name: lo0


Lock the Junos device, load the configuration, commit it, and unlock the
device.

``/srv/salt/load.sls``

.. code-block:: yaml

    lock the config:
      junos.lock

    salt://configs/my_config.set:
      junos:
        - install_config
        - timeout: 100
        - diffs_file: '/var/log/diff'

    commit the changes:
      junos:
        - commit

    unlock the config:
      junos.unlock


Install the appropriate image on the device, according to the device
personality.

``/srv/salt/image_install.sls``

.. code-block:: jinja

    {% if grains['junos_facts']['personality'] == 'MX' %}
    salt://images/mx_junos_image.tgz:
      junos:
        - install_os
        - timeout: 100
        - reboot: True
    {% elif grains['junos_facts']['personality'] == 'EX' %}
    salt://images/ex_junos_image.tgz:
      junos:
        - install_os
        - timeout: 150
    {% elif grains['junos_facts']['personality'] == 'SRX' %}
    salt://images/srx_junos_image.tgz:
      junos:
        - install_os
        - timeout: 150
    {% endif %}

Junos Syslog Engine
###################

:mod:`Junos Syslog Engine ` is a Salt engine
which receives data from various Junos devices, extracts event information and
forwards it on the master/minion event bus. To start the engine on the salt
master, add the following configuration to the master config file.
The engine can also run on the salt minion.

``/etc/salt/master``

.. code-block:: yaml

    engines:
      - junos_syslog:
          port: xxx

For the junos_syslog engine to receive events, syslog must be configured on
the Junos device. This can be done with the following configuration:

.. code-block:: shell

    set system syslog host <ip of the host running the engine> port xxx any any

NAPALM
------

@@ -40,7 +259,7 @@ and the interaction with the network device does not rely on a particular
vendor.

.. image:: /_static/napalm_logo.png

Beginning with Nitrogen, the NAPALM modules have been transformed so they can
run in both proxy and regular minions. That means, if the operating system
allows, the salt-minion package can be installed directly on the network gear.
The interface between the network operating system and Salt in that case would