August 4, 2017 · ansible network automation nxos group_vars host_vars handlers

Networking with Ansible 103

Networking with Ansible 103: nxos modules, group_vars, host_vars & handlers.

When playing around with Ansible playbooks, things can easily become a mess. I think the general consensus is that playbooks are not containers for variables or information; playbooks are instructions. If you notice your playbooks starting to fill up with detailed data, you might need to plan ahead. This post will go through some of this.

We will also become friends with the Ansible modules pertaining to Cisco Nexus devices. A list of current modules for these network devices can be found at: http://docs.ansible.com/ansible/latest/list_of_network_modules.html#nxos

We will not be using all of them in these examples, but there are quite a few to work with and many will probably apply to current production needs.

3.0 Initial Setup

A basic (but still sound) strategy when starting to build out playbooks is to always go for fact gathering first and make sure all Ansible targets are working as intended.

In this post we will be working with two Cisco Nexus devices, and we will set up the nxos_facts module with both the CLI (ssh) and API (nxapi) providers. More information about this module can be found at: http://docs.ansible.com/ansible/latest/nxos_facts_module.html

- name: nxos_facts module
  hosts: nxos
  vars:
    ssh:
      host: "{{ ansible_host }}"
      username: "{{ username }}"
      password: "{{ password }}"
      transport: cli
    nxapi:
      host: "{{ ansible_host }}"
      username: "{{ username }}"
      password: "{{ password }}"
      transport: nxapi
      use_ssl: yes
      validate_certs: no
      port: 8443

  tasks:
    - name: nxos_facts SSH
      nxos_facts:
        provider: "{{ ssh }}"

    - name: nxos_facts nxapi
      nxos_facts:
        provider: "{{ nxapi }}"

This is all pretty basic: we are creating two dictionaries, which we later pass as arguments to the required provider key under the two nxos_facts tasks.

Then we run our playbook to verify that the target devices are behaving as intended:

ansible-playbook ./nxos_facts.yml -i ./../ansible-hosts
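
As a side note, the ansible-hosts inventory referenced above is assumed to look something along these lines (the group name must be nxos to match hosts: nxos; the IPs and credentials below are placeholders for your own environment):

[nxos]
nxos1 ansible_host=10.0.0.11
nxos2 ansible_host=10.0.0.12

[nxos:vars]
username=admin
password=admin

This is where the ansible_host, username and password variables used by the provider dictionaries come from.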

Since we are quite happy with the outcome, we move on to play with some other Nexus modules.

3.1 Basic Config: nxos_vlan module (NX-OS VLAN Part1)

In this section we will be focusing on actually starting to make changes to our Ansible targets: we will be utilising the nxos_vlan module to add a vlan to both Nexus devices. More information pertaining to this module can be found at: http://docs.ansible.com/ansible/latest/nxos_vlan_module.html

We are going to add the following configuration to the tasks section in the playbook we set up previously (but copied to another file named nxapi_vlan1.yml):

- name: Configure Nexus VLANs
  nxos_vlan:
    provider: "{{ nxapi }}"
    vlan_id: 550
    admin_state: up
    name: BLACK

What this will do for us is create a vlan named BLACK with a vlan ID of 550 and set it to admin_state up, which enables this vlan on the two target devices.

After adding this we run the playbook and confirm that the task executed successfully.

We can also login to one of the devices and manually confirm that there now is a new vlan up named BLACK with an ID of 550.

nxos1# show vlan

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
550  BLACK                            active

So, this is all fun and stuff, but our playbook is starting to get a bit bloated. What if we wanted to set 20 vlans or more? Things would get messy. Just as we created dictionaries containing the connection information, we can do the same for the vlan creation task.


- name: NXOS Example
  hosts: nxos
  vars:

    nxapi:
      host: "{{ ansible_host }}"
      username: "{{ username }}"
      password: "{{ password }}"
      transport: nxapi
      use_ssl: yes
      validate_certs: no
      port: 8443

    vlans:
      - vlan_id: 550
        admin_state: up
        name: BLACK
      - vlan_id: 551
        admin_state: up
        name: ORANGE
      - vlan_id: 552
        admin_state: up
        name: PINK

We are now going to decouple the variables from the task itself by creating a data structure for the vlan task. In the above example we created a list containing a separate dictionary for every vlan ID.

We will be passing this to the task, just as we have done previously with the connection-specific variables for the nxos_facts module, but in this scenario we will have to use a for-loop (with_items).

  tasks:
    - name: Configure Nexus VLANs
      nxos_vlan:
        provider: "{{ nxapi }}"
        vlan_id: "{{ item.vlan_id }}"
        admin_state: "{{ item.admin_state }}"
        name: "{{ item.name }}"
      with_items: "{{ vlans }}"

For a refresher on the with_items construct please see the previous post, but the short version is: during every iteration over our list, the current dictionary is referred to as item, and since we want specific keys within this dictionary we need to dig deeper into the data structure using dot notation, which gets us the value of said key.
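
To see what the loop actually iterates over, a throwaway debug task (purely illustrative, not part of the playbooks in this post) could print the keys of each item:

  tasks:
    - name: Show each vlan item
      debug:
        msg: "vlan {{ item.vlan_id }} is named {{ item.name }}"
      with_items: "{{ vlans }}"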

In this section we did not really get a smaller playbook per se, but we did clean it up structurally and, as an added bonus, made it much more scalable. In the next section we will separate data and logic even more by starting to look at group variables & host variables.

After running our newly created playbook we should see something similar to the recap below. In this case both switches already had the vlans specified; had they not, we would have seen the changed counter increase.
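
Something along these lines (the exact task counts depend on the playbook; both hosts report changed=0 since the vlans already exist):

PLAY RECAP *********************************************************************
nxos1                      : ok=1    changed=0    unreachable=0    failed=0
nxos2                      : ok=1    changed=0    unreachable=0    failed=0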

3.2 Basic Config (NX-OS VLAN Part2): host_vars & group_vars

The previous scenario gave us a glimpse of a world where all switches are identical and no configuration ever differs. In a world that magical we would hardly need to build configuration management systems, and these blogs would have less value. The real world is not a magical place, and network engineers cry more than most people in IT. If we can save even one soul, this section will have been worth typing. We are going to keep expanding and cleaning up the playbook, and perhaps make the world a better place.

3.2.1 GROUP_VARS
We will start with moving the provider variables to our inventory, since these will be used in all other playbooks pertaining to these two switches; it makes no sense to have them in each and every playbook we create. Ansible has this separation built in via group vars.

First we create a folder called group_vars, then in this folder we create a new file called nxos.yml (nxos corresponds to the host group in our inventory file). This file is where we will enter all data that is equal across the host group.

Your working folder should look something like this:

$ tree
.
├── group_vars
│   └── nxos.yml
├── nxapi_vlan1.yml
├── nxapi_vlan2.yml
└── nxos_facts.yml

And the file nxos.yml should contain the following:

$ cat ./group_vars/nxos.yml
---
provider_ssh:
  host: "{{ ansible_host }}"
  username: "{{ username }}"
  password: "{{ password }}"
  transport: cli
provider_nxapi:
  host: "{{ ansible_host }}"
  username: "{{ username }}"
  password: "{{ password }}"
  transport: nxapi
  use_ssl: yes
  validate_certs: no
  port: 8443

Please note that we changed the names of the ssh and nxapi dictionaries to make it clearer what the variables refer to.

Next we will copy nxapi_vlan2.yml to nxapi_vlan3.yml:

cp nxapi_vlan2.yml nxapi_vlan3.yml

In nxapi_vlan3.yml we are going to delete both the ssh and nxapi variables, and we will refer to the group vars by changing the line provider: "{{ nxapi }}" to provider: "{{ provider_nxapi }}".

Our playbook now contains the following lines:

---
- name: nxapi_vlan3.yml group vars
  hosts: nxos
  vars:
    vlans:
      - vlan_id: 550
        admin_state: up
        name: BLACK
      - vlan_id: 551
        admin_state: up
        name: ORANGE
      - vlan_id: 552
        admin_state: up
        name: PINK
  tasks:
    - name: Configure Nexus VLANs
      nxos_vlan:
        provider: "{{ provider_nxapi }}"
        vlan_id: "{{ item.vlan_id }}"
        admin_state: "{{ item.admin_state }}"
        name: "{{ item.name }}"
      with_items: "{{ vlans }}"

We test our changes by running our playbook again:

$ ansible-playbook ./nxapi_vlan3.yml -i ./../ansible-hosts

All previously configured vlans should still be in place and we should be able to run the playbook without issues.

Now we will move the vlans variables from the playbook to our inventory as well.

$ vim ./group_vars/nxos.yml

We will move all the vlan information from our playbook into this file, renaming the list vlans_common since these vlans are common to the whole host group. The group_vars nxos.yml file should now look like:

---
provider_ssh:
  host: "{{ ansible_host }}"
  username: "{{ username }}"
  password: "{{ password }}"
  transport: cli
provider_nxapi:
  host: "{{ ansible_host }}"
  username: "{{ username }}"
  password: "{{ password }}"
  transport: nxapi
  use_ssl: yes
  validate_certs: no
  port: 8443
vlans_common:
  - vlan_id: 550
    admin_state: up
    name: BLACK
  - vlan_id: 551
    admin_state: up
    name: ORANGE
  - vlan_id: 552
    admin_state: up
    name: PINK

Our playbook nxapi_vlan3.yml now looks as per below:

---
- name: nxapi_vlan3.yml group vars
  hosts: nxos
  tasks:
    - name: Configure Nexus VLANs
      nxos_vlan:
        provider: "{{ provider_nxapi }}"
        vlan_id: "{{ item.vlan_id }}"
        admin_state: "{{ item.admin_state }}"
        name: "{{ item.name }}"
      with_items: "{{ vlans_common }}"

3.2.2 HOST_VARS
Now this is looking quite beautiful; we really did a great job cleaning up our playbook and structuring the information pertaining to the Nexus devices. But there is still that small itch regarding the case of non-common vlans for our devices.

Let's say we want to achieve the following: only our NXOS1 target should have vlan ID 559 with the vlan name BROWN, and only our NXOS2 target should have vlan ID 560 with the name GRAY. In this case we cannot use group_vars, since these vlans are unique on a per-host basis.

This is when we need to get our administrative superpowers on and start working with host_vars instead.

We create a folder called host_vars in our playbook directory (the working dir in this section), then we create two sub-folders, nxos1 & nxos2, and under each directory we create a file called vlans.yml. The structure should look as per below:

$ tree
.
├── group_vars
│   └── nxos.yml
├── host_vars
│   ├── nxos1
│   │   └── vlans.yml    
│   └── nxos2
│       └── vlans.yml
├── nxapi_vlan1.yml
├── nxapi_vlan2.yml
├── nxapi_vlan3.yml
└── nxos_facts.yml

The contents of our newly created files should be as per below:

$ cat ./host_vars/nxos1/vlans.yml
---
vlans_unique:
  - vlan_id: 559
    admin_state: up
    name: BROWN

$ cat ./host_vars/nxos2/vlans.yml
---
vlans_unique:
  - vlan_id: 560
    admin_state: up
    name: GRAY

There is one issue, though: we now have two variables pertaining to our vlans, vlans_common & vlans_unique. Instead of creating two tasks for this, we can combine both lists at runtime via list concatenation, and we can do this in our group_vars/nxos.yml file.

We achieve the list concatenation by declaring a new variable, vlans, and using the plus sign to join the two lists as per below (we also point the with_items line in nxapi_vlan3.yml at this new vlans variable):

vlans: "{{ vlans_common }} + {{ vlans_unique }} "

---
provider_ssh:
  host: "{{ ansible_host }}"
  username: "{{ username }}"
  password: "{{ password }}"
  transport: cli

provider_nxapi:
  host: "{{ ansible_host }}"
  username: "{{ username }}"
  password: "{{ password }}"
  transport: nxapi
  use_ssl: yes
  validate_certs: no
  port: 8443

vlans_common:
  - vlan_id: 550
    admin_state: up
    name: BLACK
  - vlan_id: 551
    admin_state: up
    name: ORANGE
  - vlan_id: 552
    admin_state: up
    name: PINK

vlans: "{{ vlans_common }} + {{ vlans_unique }}"

Now we run our current playbook again to see that it executes against our targets, accomplishing our desired configuration update.

$ ansible-playbook ./nxapi_vlan3.yml -i ./../ansible-hosts

We can see from our run that nxos1 and nxos2 each had one unique vlan created. There is a small caveat to this however: what if a host has no unique vlans?

The problem with our current approach is that the list concatenation would fail: if a host has no unique vlans assigned, there is no variable to concatenate vlans_common with. We can avoid this by, for example, defining blank lists on our targets, or we can (preferably) solve it by using filters.

vlans: "{{ vlans_common }} + {{ vlans_unique }}"

If we take the second variable, vlans_unique, and pipe it to default([]), then whenever vlans_unique does not exist an empty list is used in its place.

Basically, instead of trying to concatenate something that is neither defined nor existing, we fall back to an empty list so that the concatenation can proceed.
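
With the filter applied, the concatenation line in group_vars/nxos.yml would look like this:

vlans: "{{ vlans_common }} + {{ vlans_unique|default([]) }}"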

3.3 Write Mem and Handlers

We have not really looked at saving our configuration changes yet. This is easily solved using Ansible handlers.

We will continue to use the nxos_vlan module, but for saving the configuration we will create a new task that uses the nxos_command module.

Due to the virtual Cisco Nexus devices being a bit slow, I went into the group_vars file nxos.yml and added timeout: 60 to the provider dictionary in order for the task to properly complete.
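
For reference, the provider dictionary with the timeout added would look something like this (shown here for provider_ssh; the same key can be added to provider_nxapi):

provider_ssh:
  host: "{{ ansible_host }}"
  username: "{{ username }}"
  password: "{{ password }}"
  transport: cli
  timeout: 60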

Below is the playbook for this section.

---
- name: nxapi_vlan3.yml group vars
  hosts: nxos
  tasks:
    - name: Configure Nexus VLANs
      nxos_vlan:
        provider: "{{ provider_nxapi }}"
        vlan_id: "{{ item.vlan_id }}"
        admin_state: "{{ item.admin_state }}"
        name: "{{ item.name }}"
      with_items: "{{ vlans }}"

    - name: Saving running conf for changed targets
      nxos_command:
        provider: "{{ provider_ssh }}"
        commands: copy run start
      tags: wr_mem

Ansible does not realise that we are running anything other than show commands here, and therefore we do not get a changed status indicator.

We can make this a bit clearer by utilising `changed_when: True`:

- name: Saving running conf for changed targets
  nxos_command:
    provider: "{{ provider_ssh}}"
    commands: copy run start
  changed_when: True
  tags: wr_mem

Another issue with the current playbook is that we do not know whether any of the tasks actually resulted in a change. Only if a change occurs to our configuration do we want a write mem to occur; in the current playbook we are always making a new copy of the startup configuration.

Let us fix this with the use of the notify directive: we will only call upon the write mem task if an Ansible target was changed by the tasks we ran against it.

As per below, we are adding a notify at the end of our original task; this will call upon the handler named write mem (the notify value must match the name of the handler).

The handler will only run if a task reports a change, and Ansible will go through ALL playbook tasks before it executes the notified handler or handlers.

We also created a new handlers section and moved the write mem task into it.

---
- name: Handler Example
  hosts: nxos
  tasks:
    - name: Configure Nexus VLANs
      nxos_vlan:
        provider: "{{ provider_nxapi }}"
        vlan_id: "{{ item.vlan_id }}"
        admin_state: "{{ item.admin_state }}"
        name: "{{ item.name }}"
      with_items: "{{ vlans }}"
      notify: write mem

  handlers:
    - name: write mem
      nxos_command:
        provider: "{{ provider_ssh }}"
        commands: copy run start
      tags: wr_mem

Below we make an example run of this playbook:

ansible-playbook ./nxapi_vlan_handlers.yml -i ./../ansible-hosts

And in this case we can see that both nxos1 and nxos2 had a configuration change (I manually removed vlan 552 from the configurations beforehand), and the tasks subsequently notified the handlers section that the handler called write mem should run. Which it did.

3.4 NX-OS Config IP Interfaces

Another common configuration task is setting IP addresses. To reach this goal we will be using the nxos_ip_interface module (Manages L3 attributes for IPv4 and IPv6 interfaces); more info for this module can be found at: http://docs.ansible.com/ansible/latest/nxos_ip_interface_module.html

For testing purposes we are putting a lot of the logic in our playbook and hardcoding the interface port, for example. We will however create host_vars files called ip_addresses.yml for our two Nexus devices, containing:

$ cat ./host_vars/nxos1/ip_addresses.yml
---
ip_addr: 10.10.10.1

$ cat ./host_vars/nxos2/ip_addresses.yml
---
ip_addr: 10.10.10.2

Which we then call on in our playbook (this task is added to the playbook with the write mem handler from the previous section, so the notify below has a matching handler):

- name: Configure Nexus IP
  nxos_ip_interface:
    provider: "{{ provider_nxapi }}"
    interface: Ethernet2/1
    version: v4
    addr: "{{ ip_addr }}"
    mask: 24
  notify: write mem

Upon execution we can see that the task ran and made a change only on nxos2, since 10.10.10.1 was already set on nxos1.

Now our playbook can both change the configuration and save it if a change occurred.
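
As with the vlans, we can log in and confirm manually; output along these lines (interface naming and status will vary per device) would show that the address took effect:

nxos2# show ip interface brief

IP Interface Status for VRF "default"(1)
Interface            IP Address      Interface Status
Eth2/1               10.10.10.2      protocol-up/link-up/admin-up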