Using Ascender to Update Rocky LTS and Non-LTS with CIQ Mountain
Something our customers have asked for is a way to do Long Term Support (LTS) with Rocky, and we answered by adding that ability to CIQ Mountain. In fact, we have a video showing exactly how to do that below. While that functionality is awesome, what’s not awesome is configuring and maintaining it manually. That’s where our Ascender Automation product comes in. Ascender is based on the upstream Ansible AWX project, which means it gives you the ability to manage and run Ansible playbooks, all while having CIQ support!
Video demo:
https://www.youtube.com/watch?v=6VJNPA_LUyo
Playbook:
The playbooks for this demo can be found in my Git repository.
The playbook in question is rocky-update-mixed-lts.yml, and I’ll break down a few of its pieces here.
vars:
  # This is the lts version you are locking to
  # lts_version: lts-8.6

  # This is the access key created in Mountain to authenticate the subscription
  # This key should be maintained securely in a vault or securely within Ascender
  mtn_access_key: xzy123

  ciq_cfg_path: /etc/ciq.cfg
The above is my variables section.
First, you will see that I have the lts_version variable commented out. That’s because I actually define it in the inventory and reference it from there in my playbooks. This way it can be customized for each individual host!
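As a rough sketch, the host-level variable in my inventory looks something like this (the value mirrors the commented example above; your Mountain repo name may differ):
# Host variables for the LTS host in the Ascender inventory (illustrative)
lts_version: lts-8.6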
Next, I have a Mountain access key defined. This is obviously not my actual key. The real key is stored in a custom credential inside Ascender. (I’ll show that momentarily.) Just know that handling it this way means I can safely keep all of my playbooks in a public Git repo without having to worry!
Last is the default location where the CIQ Mountain LTS information is to be stored. (I use this to check if this portion has already been configured.)
tasks:
  - name: Collect what version of linux is currently running
    ansible.builtin.shell: cat /etc/os-release | grep PRETTY_NAME
    changed_when: false
    register: os_version_before

  - name: Display the os_version for each server
    ansible.builtin.debug:
      var: os_version_before.stdout
This begins the task section of the playbook. I have the portion shown here duplicated at the end of the playbook as well. This first section looks at the /etc/os-release file to see what version of Linux we have, then simply displays it on screen. Ansible is a good tool for reporting, not just for maintaining hosts.
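I won’t repeat the full duplicate here, but the closing version of that check is essentially the same two tasks registered under a different variable, roughly like this (os_version_after is simply my illustrative name for the second register):
  - name: Collect what version of linux is running after updates
    ansible.builtin.shell: cat /etc/os-release | grep PRETTY_NAME
    changed_when: false
    register: os_version_after

  - name: Display the os_version for each server
    ansible.builtin.debug:
      var: os_version_after.stdout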
  - name: Search to see if LTS is already configured
    when: hostvars[inventory_hostname].lts_version is defined
    ansible.builtin.lineinfile:
      path: "{{ ciq_cfg_path }}"
      regexp: ".*{{ lts_version }}.*"
      state: absent
    check_mode: yes
    register: presence
Here, I’m checking the Mountain configuration file to see if LTS has already been configured on the host. Because the lineinfile task runs in check mode with state: absent, it never actually removes anything; it simply reports a change if a line matching the LTS version already exists. (I’ll use that result next when deciding whether or not to configure the repository.)
  ## Block start
  - name: LTS block runs when an LTS version is defined in inventory for a host and it's not already configured
    when: hostvars[inventory_hostname].lts_version is defined and presence.changed is false
    block:
      - name: Install the CIQ public release RPM
        ansible.builtin.dnf:
          name:
            - "https://repo.ciq.co/public/ciq-public-release.rpm"
          disable_gpg_check: true
          state: present

      - name: Install the CIQ CLI
        ansible.builtin.dnf:
          name:
            - ciq
          state: present

      - name: Enroll in the CIQ CLI using access key
        ansible.builtin.shell: "ciq enroll --access-key {{ mtn_access_key }}"

      - name: Enable the subscription
        ansible.builtin.shell: "ciq enable --key {{ lts_version }}"
  ## Block stop
In the block above, I first check whether each host has the LTS variable defined and whether LTS isn’t already configured on the host. If both conditions are met, it will complete the following four tasks:
- Install the CIQ public release RPM
- Install the CIQ CLI
- Use the CIQ CLI to enroll the host with Mountain
- Use the CIQ CLI to enable the specific LTS version defined for that host in the inventory
  - name: Update all packages on the system
    ansible.builtin.dnf:
      name: "*"
      state: latest
This last task tells all of the defined hosts to run updates.
The beautiful part of all of this is that if a device requires LTS, it gets configured first, and then Ascender tells all of the defined hosts to run updates. This means that I can update mixed LTS and non-LTS devices together in a single playbook if I like!
Ascender configuration
I’ll not bore you with the most basic portions; rather, I’ll show the highlights.
First, I created a custom credential to house my Mountain access key. This is done by going to Administration => Credential Types => Add.
I named mine “Mountain Access Key.”
Input configuration:
fields:
  - id: supp_mtn_access_key
    type: string
    label: Mountain Access Key
    secret: true
required:
  - supp_mtn_access_key
Injector configuration:
extra_vars:
  mtn_access_key: '{{ supp_mtn_access_key }}'
The input is designated as “secret” so that it will be obfuscated. The injector configuration takes that stored value and hands it to the playbook at runtime as an extra variable.
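In effect, attaching that credential to a job means the playbook receives the key at runtime roughly as if it had been passed as an extra variable, something like this (the value shown is only a placeholder):
# Roughly what the playbook sees via extra vars at job launch (placeholder value)
mtn_access_key: "<your real Mountain access key>"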
Now I can create a credential based on the above:
You can see that once I put the access key in Ascender, it doesn’t display it in plain text; rather, it will just inject it into any playbook I attach it to.
My inventory has two hosts:
The LTS host has the lts_version variable defined as such:
Not only does the lts_version instruct my playbook that it should be configured for LTS, but also the “lts-8.6” is actually the name of the Mountain repo this host will be subscribed to!
The job template is where Ascender brings all of the elements together: credentials, playbooks, inventories, etc.
Notice in my job template I have the Mountain access key credential added for injection into my playbook. As you can see in the template, it takes only a few basic entries to put it all together.
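For reference, the entries that matter in my template boil down to roughly the following (the names here are illustrative, not required values):
# Job template outline (illustrative names)
name: Rocky mixed LTS update
inventory: Rocky hosts                  # the two-host inventory shown above
project: my Git repository              # where rocky-update-mixed-lts.yml lives
playbook: rocky-update-mixed-lts.yml
credentials:
  - a machine credential for SSH access
  - Mountain Access Key                 # the custom credential created earlier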
Once I run my job template (my playbook), I can see all of the detailed output:
Here you can see the results of my run. When I started, they were both Rocky 8.6 hosts, and now one is locked to LTS 8.6 while the other upgraded itself to Rocky 8.8.
Conclusion
With just a few simple tasks in a playbook, Ascender can manage an entire fleet of ever-changing hosts, both LTS and non-LTS. Not only can it perform automation, but it can also do it safely with Role-Based Access Control (RBAC), logging for auditing and compliance purposes, and excellent API access. If you have any questions, please feel free to reach out to one of our team members today!