August 20, 2017 · network automation ansible lookups filters roles

Networking with Ansible 106

1.1 Lookups

The lookup plugin allows us to access data originating from outside sources. Lookups are executed on the Ansible control machine, with paths resolved relative to the playbook or role we are running. Ansible has built-in support for lookups using CSV, INI, Passwordstore, Credstash, DNS (dig), MongoDB and many more. We can then assign the lookup data to variables for use in our roles and playbooks. As with templating, lookup plugins are evaluated on the Ansible control machine, and we can use both local and remote sources for our lookups.
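A minimal sketch of the idea, assuming the dnspython library is available on the control machine for the dig lookup (the variable names here are just placeholders):

---
- hosts: local
  vars:
    # local source: an environment variable on the Ansible control machine
    control_home: "{{ lookup('env', 'HOME') }}"
    # remote source: a DNS query (the dig lookup needs dnspython installed locally)
    ns_records: "{{ lookup('dig', 'example.com', 'qtype=NS') }}"
  tasks:
    - debug:
        var: control_home
    - debug:
        var: ns_records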

See also
Ansible Docs: Playbook Lookups

Ansible Lookup Examples

CSV
example.csv

Symbol,Atomic Number,Atomic Mass
H,1,1.008
He,2,4.0026
Li,3,6.94
Be,4,9.012
B,5,10.81
---
- hosts: local
  vars:
    contents: "{{ lookup('file', './example.csv') }}"
  tasks:
    - debug: msg="The atomic number of Lithium is {{ lookup('csvfile', 'Li file=example.csv delimiter=,') }}"
    - debug: msg="The atomic mass of Lithium is {{ lookup('csvfile', 'Li file=example.csv delimiter=, col=2') }}"

Manipulating the data on lookup:

showipintbrief.txt

Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  10.220.88.1             6   0062.ec29.70fe  ARPA   FastEthernet4
Internet  10.220.88.20            -   c89c.1dea.0eb6  ARPA   FastEthernet4
Internet  10.220.88.21           99   1c6a.7aaf.576c  ARPA   FastEthernet4
Internet  10.220.88.29            7   5254.abbe.5b7b  ARPA   FastEthernet4
Internet  10.220.88.30           88   5254.ab71.e119  ARPA   FastEthernet4
Internet  10.220.88.37          159   0001.00ff.0001  ARPA   FastEthernet4
Internet  10.220.88.38           70   0002.00ff.0001  ARPA   FastEthernet4
Internet  10.220.88.39            1   6464.9be8.08c8  ARPA   FastEthernet4
Internet  10.220.88.40          190   001c.c4bf.826a  ARPA   FastEthernet4
Internet  10.220.88.41           52   001b.7873.5634  ARPA   FastEthernet4
---
- hosts: local
  vars:
    contents: "{{ lookup('file', './showipintbrief.txt') }}"
  tasks:
    - name: Display ARP table (more readable)
      debug:
        var: contents.splitlines()

    - name: Display ARP table with some parsing
      debug:
        var: contents.splitlines()[3].split()
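For reference, the second task should print something along these lines; index 3 of splitlines() is the fourth line of the file:

ok: [localhost] => {
    "contents.splitlines()[3].split()": [
        "Internet",
        "10.220.88.21",
        "99",
        "1c6a.7aaf.576c",
        "ARPA",
        "FastEthernet4"
    ]
}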

Ansible lookups using pipe

We can also use the pipe lookup to pull in data from the Ansible control machine itself as well as from external sources. In the example below we are calling the application netmiko-show, with the first argument specifying our show command and the second argument specifying our Ansible target; the returned data is assigned to the show_arp variable. In the task we then use debug together with the .splitlines() method, which turns the original string into a list so the data is displayed line by line.

---
- hosts: local
  vars:
    show_arp: "{{ lookup('pipe', 'netmiko-show --cmd \"show arp\" pynet_rtr1') }}"

  tasks:
    - name: Piping to Netmiko
      debug: 
        var: show_arp.splitlines()

1.2 Template Lookups with Jinja2

We start by creating our playbook; in it we define a list and a dictionary which we will then use when processing our Jinja2 template lookups.

Jinja2 is a templating system that allows for creating a logical structure within normal text files.

lookup.template.yml

---
- hosts: cisco
  vars:
    my_dict:
        key1: value1
        key2: value2
        key3: value3
        key4: value4
        key5: value5
        key6: value6
    my_list:
        - hello
        - world
        - some
        - thing
        - else

  tasks:
    - name: Process list in a template
      debug:
        msg: "{{ lookup('template', './process_list.j2') }}"
      tags: process_list

    - name: Process dictionary in a template
      debug:
        msg: "{{ lookup('template', './process_dict.j2') }}"
      tags: process_dict

process_list.j2
Here we show how we can use a for loop for flow control. In this case we simply mix normal text and flow control: we take the list my_list that was defined in our playbook and traverse it during playbook execution. We are not yet assigning the result to a variable, nor are we really doing anything other than having it shown in our debug message.

Some straight text up here  
...
...

{% for element in my_list %}
{{ element }}
{% endfor %}


some text down here  
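Rendered through the debug task, the template body comes out roughly like this (exact blank lines depend on Jinja2 whitespace handling):

Some straight text up here
...
...

hello
world
some
thing
else

some text down here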

process_dict.j2
In this case we are processing the dictionary `my_dict` which we declared in our playbook. We are also using an if-statement that says: if the key variable has a value of key3, print out k ----> v, which will result in a line like: key3 ----> value3.

{% for k, v in my_dict.items() %}
{% if k == 'key3' %}
{{ k }} ----> {{ v }}
{% endif %}
{% endfor %}

While this is quite fun let's look at how we could actually use this to do something worthwhile. We are going to be gathering data and converting one data structure to another.

Creating our playbook: network_template.yml
We are using the `napalm_get_facts` module for this and we filter so that we only see the ARP table.

---
- hosts: cisco
  vars:
    creds:
        host: "{{ ansible_host }}"
        username: "{{ username }}"
        password: "{{ password }}"
    creds_napalm:
        host: "{{ ansible_host }}"
        username: "{{ username }}"
        password: "{{ password }}"
        dev_os: ios

  tasks:
    - napalm_get_facts:
        provider: "{{ creds_napalm }}"
        filter: "arp_table"

    - name: Retrieve as a string
      debug:
        msg: "{{ lookup('template', './convert_napalm_arp.j2') }}"

    - name: Convert napalm data structure
      debug:
        msg: "{{ lookup('template', './convert_napalm_arp.j2') | from_yaml }}"

    - set_fact:
        new_arp: "{{ lookup('template', './convert_napalm_arp.j2') | from_yaml }}"

    - debug:
        var: new_arp['10.220.88.1']

Creating our Jinja2 template convert_napalm_arp.j2

---
{% for arp_entry in napalm_arp_table %}
{{ arp_entry['ip'] }}: {{ arp_entry['mac'] }}
{% endfor %}

Tasks:

The first task uses the napalm_get_facts module to gather facts. The module returns a variable called napalm_arp_table, which is a list whose entries are dictionaries corresponding to the ARP table entries.

Next we process the data to simplify the data structure: we build a dictionary where each ip becomes a key and the corresponding mac address becomes its value. In effect we are dynamically constructing a YAML document at runtime.

The convert_napalm_arp.j2 template contains a for loop that walks over all the ARP entries and, for each arp_entry dictionary, emits the value of the ip key as the key and the value of the mac key as the corresponding value.

Running msg: "{{ lookup('template', './convert_napalm_arp.j2') }}" returns a string (in the form of a YAML dictionary), including the --- we had on the first line of our template.

Next we run msg: "{{ lookup('template', './convert_napalm_arp.j2') | from_yaml }}", which takes the returned string and pipes it to from_yaml, a Jinja2 filter available in Ansible. This filter converts the retrieved string into a proper data structure which we can continue to use in the playbook.

The TL;DR: We collected data, cherry-picked what we wanted and transformed this into a data structure more compatible with what we needed.

Once we have the data structure available to us in our playbook, we use it to create a new variable called new_arp with the following line: new_arp: "{{ lookup('template', './convert_napalm_arp.j2') | from_yaml }}".

We can now work with the key/value (ip/mac) pairs in this dictionary when building out our playbooks dynamically, as sketched below.
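As a sketch of what that could look like, here is a hypothetical follow-up task (not part of the original playbook) that loops over the new dictionary:

    - name: Loop over the simplified ARP data
      debug:
        msg: "{{ item.key }} is at {{ item.value }}"
      with_dict: "{{ new_arp }}"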

1.3 Filters

References:
Ansible Filters
Jinja2 Filters available in Ansible

Pay attention to the fact that filters execute on the Ansible controller, not on the target hosts of the tasks being executed.

Create the playbook filters1.yml

--- 
- hosts: local
  vars:
    var1: hello
    var2: world
    my_dict:
        key1: zzzz
        key2: yyyy
        key3: whatever
    my_list:
        - hello
        - world
        - hello
        - world
        - hello
        - world
        - whatever
    my_list2:
        - world
        - 3
        - 77
        - Arista
    device: Cisco
#   var3: does not exist
  tasks:

1.3.1: to_yaml & to_nice_yaml filters

  tasks:
    - debug:
        msg: "{{ my_dict | to_yaml }}" 

    - debug:
        msg: "{{ my_dict | to_nice_yaml }}" 

We are using two different Jinja2 filters: to_yaml and to_nice_yaml.

msg: "{{ my\_dict | to\_yaml }}" gives us:

ok: [localhost] => {  
    "msg": "key1: zzzz\nkey2: yyyy\nkey3: whatever\n"
}

msg: "{{ my\_dict | to\_nice\_yaml }}" gives us:

ok: [localhost] => {  
    "msg": "{key1: zzzz, key2: yyyy, key3: whatever}\n"
}

The only real difference is "nice" yaml is a bit easier to read but the ansible run return does not really show completely since it does not give our line breaks.

1.3.2: default filter (setting default values)

  tasks:
    - name: "Jinja2 filter: default defined variable"
      debug:
        msg: "{{ device | default('device not defined') }}"

    - name: "Jinja2 filter: default undefined variable"
      debug:
        msg: "{{ var3 | default('Var3 not defined') }}"

Example: piping a defined variable to the default filter
Since Ansible already had the variable device defined, it simply prints its value for us. When we test the undefined variable var3, the default filter supplies the fallback string we gave it within that expression; note that var3 itself is not actually defined by this.

TASK [Jinja2 filter: default defined variable] *****************************
ok: [localhost] => {
    "msg": "Cisco"
}

Example: piping an undefined variable to the default filter

TASK [Jinja2 filter: default undefined variable] ***********************
ok: [localhost] => {
    "msg": "Var3 not defined"
}

1.3.3: List & Set Theory Filters

Ansible ships with a number of filters for dealing with and manipulating lists: min, max, unique, union, intersect, difference and symmetric_difference. These can be very powerful when scaling out our automation systems.

List & Set Theory Filters - Ansible Documentation

It is important to be aware that when we are dealing with these set operations, duplicate entries will be removed.

filters3.yml playbook

---
- hosts: local
  vars:
    var1: hello
    var2: world
    my_dict:
        key1: zzzz
        key2: yyyy
        key3: whatever
    my_list:
        - hello
        - world
        - hello
        - world
        - hello
        - world
        - whatever
    my_list2:
        - world
        - 3
        - 77
        - Arista
    device: Cisco
    numbers:
        - 1
        - 2
        - 3
        - 4
        - 5

  tasks:
    - name:  List Filter - minimum value
      debug:
        msg: " {{ numbers | min }}"

    - name:  List Filter - maximum value
      debug:
        msg: " {{ numbers | max }}"

    - name: Set operations (unique)
      debug:
        msg: "{{ my_list | unique }}"
      tags: unique

    - name: Set operations (union)
      debug:
        msg: "{{ my_list | union(my_list2) }}"
      tags: union

    - name: Set operations (intersection)
      debug:
        msg: "{{ my_list | intersect(my_list2) }}"
      tags: intersection

    - name: Set operations, difference (unique items in my_list not in my_list2)
      debug:
        msg: "{{ my_list | difference(my_list2) }}"
      tags: difference

    - name: Set operations, unique items in each list
      debug:
        msg: "{{ my_list | symmetric_difference(my_list2) }}"
      tags: sym_diff

TASK [List Filter - minimum value]
Here we look at the list containing the numbers 1-5 and ask for the smallest value in this list.

ok: [localhost] => {
    "msg": " 1"
}

TASK [List Filter - maximum value]
Here we look at the list containing the numbers 1-5 and ask for the largest value in this list.

ok: [localhost] => {
    "msg": " 5"
}

TASK [Set operations (unique)]
Unique returns a list containing only the unique entries: msg: "{{ my_list | unique }}"

ok: [localhost] => {
    "msg": [
        "hello",
        "world",
        "whatever"
    ]
}

TASK [Set operations (union)]
Union combines the two lists and returns the result with duplicates removed: msg: "{{ my_list | union(my_list2) }}"

ok: [localhost] => {
    "msg": [
        "hello",
        "world",
        "whatever",
        3,
        77,
        "Arista"
    ]
}

TASK [Set operations (intersection)]
msg: "{{ my_list | intersect(my_list2) }}" will return the common(shared) items between the two sets.

ok: [localhost] => {
    "msg": [
        "world"
    ]
}

TASK [Set operations, difference (unique items in my_list not in my_list2)]

msg: "{{ my_list | difference(my_list2) }}" returns the items from my_list that are not present in my_list2 (duplicates removed); my_list itself is not modified.

ok: [localhost] => {
    "msg": [
        "hello",
        "whatever"
    ]
}

TASK [Set operations, unique items in each list]
msg: "{{ my_list | symmetric_difference(my_list2) }}" returns the items that appear in only one of the two lists.

ok: [localhost] => {
    "msg": [
        "hello",
        "whatever",
        3,
        77,
        "Arista"
    ]
}

1.3.4: Processing Hashes and dictionaries with filters

Combining hashes/dictionaries - Ansible Documentation

The combine filter allows hashes/dictionaries to be merged. In the example below, 'b':2 is overridden by the 'b':3 from the hash passed to combine.

{{ {'a':1, 'b':2}|combine({'b':3}) }}

  tasks:
    - name: Combine two dictionaries
      debug:
        msg: "{{ my_dict | combine({'key7': 'foo', 'key8': 'bar'}) }}"        
      tags: combine

    - name: Extract values from a dict
      debug:
        msg: "{{ ['key1', 'key2'] | map('extract', my_dict) | list }}"
      tags: extract

TASK [Combine two dictionaries]
msg: "{{ my_dict | combine({'key7': 'foo', 'key8': 'bar'}) }}" simply combines the two separate hashes/dictionaries and returns the combined dictionary.

ok: [localhost] => {
    "msg": {
        "key1": "zzzz",
        "key2": "yyyy",
        "key3": "whatever",
        "key7": "foo",
        "key8": "bar"
    }
}

TASK [Extract values from a dict]
msg: "{{ ['key1', 'key2'] | map('extract', my_dict) | list }}" extracts the values of key1 and key2 from my_dict and returns these values in the form of a list.

ok: [localhost] => {
    "msg": [
        "zzzz",
        "yyyy"
    ]
}

1.3.5: Ternary, Type casting and regex operations

  tasks:
    - name: Ternary operation
      debug:
        msg: "{{ (device == 'Cisco') | ternary('answer1', 'answer2') }}"

    - name: Cast string as boolean
      debug:
        msg: "{{ 'no' | bool }}"

    - name: Use regex_replace
      debug:
        msg: "{{ 'Some big string to parse' | regex_replace('^Some\\s+(big.*)$', '\\1 test') }}"

The | ternary filter makes it possible to return different values depending on whether an expression is true or false. In the task above, device == 'Cisco' is true, so the result is 'answer1'; had it been false, the operation would have returned 'answer2' instead.

The | bool filter casts a string to a boolean value, so 'no' becomes false.

The | regex_replace filter allows us to use regular expressions for replacing data; here the capture group (big.*) is reused as \1 in the replacement string.
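Running these three tasks should produce output along these lines (the exact rendering of the boolean can vary between Ansible versions):

TASK [Ternary operation] *****************************************************
ok: [localhost] => {
    "msg": "answer1"
}

TASK [Cast string as boolean] ************************************************
ok: [localhost] => {
    "msg": false
}

TASK [Use regex_replace] *****************************************************
ok: [localhost] => {
    "msg": "big string to parse test"
}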

1.3.6: Real-life example with the napalm_get_facts module and Jinja2 filters

First we create our playbook napalm_get_facts_jinja2.yml

---
- hosts: cisco
  vars:
    creds:
        host: "{{ ansible_host }}"
        username: "{{ username }}"
        password: "{{ password }}"
    creds_napalm:
        host: "{{ ansible_host }}"
        username: "{{ username }}"
        password: "{{ password }}"
        dev_os: ios

  tasks:
    - napalm_get_facts:
        provider: "{{ creds_napalm }}"
        filter: "arp_table"
      tags: 
        - napalm_only
        - combine

    - debug:
        msg: "{{ napalm_arp_table | map(attribute='ip') | list }}"
      tags: napalm_only

    - set_fact:
        my_dict: "{{ my_dict|default({}) | combine( {item.ip: item.mac} ) }}"
      with_items: "{{ napalm_arp_table }}"
      tags: combine

    - debug:
        var: my_dict
      tags: combine

Let us do a quick breakdown of this playbook.

In the first task we use the napalm_get_facts module and filter so that we only get the ARP table data.

After that we apply Jinja2 filters to the data in napalm_arp_table: we use the map filter to extract the values of the ip keys, and the result is then passed to the list filter, since otherwise it would have been returned as a generator, as per below.

ok: [pynet-rtr1] => {
    "msg": "<generator object do_map at 0x7fa9c5bec640>"
}
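With the | list cast in place, the same task returns an actual list of IP addresses, something like this (the addresses are device-dependent and purely illustrative):

ok: [pynet-rtr1] => {
    "msg": [
        "10.220.88.1",
        "10.220.88.20",
        "10.220.88.21"
    ]
}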

In set_fact we are doing the real work.

We start from an empty dictionary called my_dict (via default({})) and for every iteration we add a key:value pair to it by using the combine filter with a small temporary dictionary that we build from the current item.

The data source is the data returned from our fact gathering, which we can access through the napalm_arp_table variable.

This gets us every item.ip : item.mac pair, no matter whether we are targeting one device or several.
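Conceptually the dictionary grows by one entry per loop iteration, along these lines (values purely illustrative):

# iteration 1: {} | combine({'10.220.88.1': 'aaaa.bbbb.cccc'})
#   -> {'10.220.88.1': 'aaaa.bbbb.cccc'}
# iteration 2: {'10.220.88.1': 'aaaa.bbbb.cccc'} | combine({'10.220.88.20': 'dddd.eeee.ffff'})
#   -> {'10.220.88.1': 'aaaa.bbbb.cccc', '10.220.88.20': 'dddd.eeee.ffff'}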

The last debug task is simply how we print out the result at runtime.

1.4 Roles

Roles allow us to build more modular playbooks by raising the level of abstraction with regard to functionality and flexibility. You will reach a point where it is just not possible to shove everything into the same playbook. This not only makes our playbooks more readable, it also gives us an easier way of reusing tasks instead of relying solely on includes, since the includes are now built into our directory structure.

Think of it like this: previously we have been very focused on specifying, for every playbook, the exact steps that describe how our Ansible targets should be configured. The design idea behind roles is to go further and define how the target is supposed to function, instead of relying on straight-up configuration steps.

It takes a while to get into this design paradigm, but without understanding the intention of the roles structure and what it is meant to achieve, Ansible would really just be another scripting language, no more and no less than running Fabric or straight Python scripts.

As stated in the Ansible documentation, "When you start to think about it – tasks, handlers, variables, and so on – begin to form larger concepts," and it is these concepts we have to work with in order to automate and work with design instead of just building configuration guides for CIs.

A typical directory structure for a role targeting only ACL configuration and management (MGMT) for our edge devices would look as per below:

ansible
└── roles
    └── EDGE-ACL
        ├── CFGS
        ├── DIFFS
        ├── files
        ├── handlers
        ├── meta
        ├── tasks
        ├── templates
        └── vars

Calling upon this role could look something like:

---
- hosts: edge-devices
  roles:
    - EDGE-ACL
    - EDGE-NTP
    - EDGE-INTERFACE

For all directories except files and templates, if there is a file called main.yml, its contents will be automatically added to our playbook when it calls upon this specific role.

See also
Ansible docs » Playbook Roles and Include Statements
Ansible docs » Ansible Galaxy: Publicly available open-source roles

1.4.1 Breaking down a playbook into roles

We will start with an existing playbook and template file which we will make a bit more modular and reusable.

roles1.yml

---
- hosts: pynet-rtr1
  gather_facts: false
  connection: local

  vars:
    creds:
      host: "{{ ansible_host }}"
      username: "{{ username }}"
      password: "{{ password }}"

  tasks:
    - name: Configure ACL
      ios_config:
        provider: "{{ creds }}"
        lines:
          - 10 permit ip host 1.1.1.1 any log
          - 20 permit ip host 2.2.2.2 any log
        parents: ip access-list extended test1
        before: no ip access-list extended test1
        match: exact
      notify: wr mem

    - name: templating tasks
      template:
        src: test_template.j2
        dest: ./outfile.txt

  handlers:
    - name: wr mem
      ios_command: 
        provider: "{{ creds }}"
        commands: write mem

test_template.j2

hello  
hello

{{ inventory_hostname }}

hello  
hello  

In this case our working directory is called roles and we need to create the following folder structure:

EDGE-DEVICES-DIR/roles  
├── acl_config
│   ├── handlers
│   ├── tasks
│   ├── templates
│   └── vars
├── roles1.yml
└── test_template.j2

Dealing with templates
We start by moving test_template.j2 into roles/acl_config/templates

Dealing with our tasks
Create a file called main.yml under the tasks directory. Ansible always looks for a file called main.yml in a folder called tasks, so it already understands that these are tasks and we do not need to keep the tasks: keyword.

- name: Configure test1 ACL
  ios_config:
    provider: "{{ creds }}"
    lines:
      - 10 permit ip host 1.1.1.1 any log 
      - 20 permit ip host 2.2.2.2 any log 
    parents: ip access-list extended test1
    before: no ip access-list extended test1
    match: exact
  notify: wr mem 

- name: templating tasks
  #### Template ####
  template:
    src: test_template.j2
    dest: ./outfile.txt

Notice our 'notify' call to our 'wr mem' handler and the template paths used.

Dealing with our handlers
Once again we create a file called main.yml, this time under the handlers directory. Since Ansible understands that this is a handler, we can remove that part from our original YAML file and have it look as per below:

- name: wr mem
  ios_command:
    provider: "{{ creds }}"
    commands: write mem

Dealing with our vars
The same approach applies as for the previous work we did on this role: create another file called main.yml, this time in the vars directory, and configure it as per below:

creds:  
  host: "{{ ansible_host }}"
  username: "{{ username }}"
  password: "{{ password }}"
With all the pieces of the role in place, roles1.yml itself is reduced to simply calling upon the role:
---
- hosts: pynet-rtr1
  gather_facts: false
  connection: local

  roles:
    - acl_config
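
Assuming all the pieces above are in place, the role directory ends up looking something like this:

roles/
├── acl_config
│   ├── handlers
│   │   └── main.yml
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   │   └── test_template.j2
│   └── vars
│       └── main.yml
└── roles1.yml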

As we have experienced, the actual hands-on work of creating roles is not really that taxing, but it remains true that we need to have our best thinking hats on when modelling not only how things can be done and abstracted, but also the driver and intention behind our automation design.

What are we really trying to accomplish? Unless that question has a definitive answer, how we accomplish these tasks quickly becomes irrelevant. And that answer should encompass not only the technical motivations and aspirations but also a clear, management-friendly mission statement for the business itself.

We will look at a lot more templating in the next post, as well as more advanced ways of working with Jinja2. Till then!