Ascender

Migrating from one Linux major version to another never seems to be a simple task, but through the magic of automation, it can be made much simpler and more reproducible. I’m going to cover the Ansible playbooks I created to do the work, then I’ll execute them using our enterprise automation platform, Ascender.

Our recommended method is to:

  • Back up configuration and data from the old server
  • Provision a brand new server with the required applications
  • Restore configurations and data to the new server
  • Test services on the new server
  • Sunset the old server

Video Demo

Playbooks

First, I’m using resources from the community.general collection found here. I actually have a version of it included in my git repository.

All of my playbooks can be found here in my git repository.

I’ll cover some of the playbooks here, mostly discussing the highlights. The discover-backup.yml playbook is the first one run:

---
- name: Discover/backup hosts to be migrated
  hosts: migration-hosts
  gather_facts: false
  vars:
    # The host to store backup info to
    backup_storage: backup-storage
    # The location on the backup host to store info
    backup_location: /tmp/migration
  tasks:
  - name: Execute rpm to get list of installed packages
    ansible.builtin.command: rpm -qa --qf "%{NAME} %{VERSION}-%{RELEASE}\n"
    register: rpm_query

  - name: Populate service facts - look for running services
    ansible.builtin.service_facts:

  # - name: Print service facts
  #   ansible.builtin.debug:
  #     var: ansible_facts.services

  - name: Create backup directory on backup server - unique for each host
    ansible.builtin.file:
      path: "{{ backup_location }}/{{ inventory_hostname }}"
      state: directory
      mode: '0733'
    delegate_to: "{{ backup_storage }}"

  # - name: Backup groups
  #   ansible.builtin.include_tasks:
  #     file: group-backup.yml

  - name: Backup Apache when httpd is installed and enabled
    when: item is search('httpd ') and ansible_facts.services['httpd.service'].status == 'enabled'
    ansible.builtin.include_tasks:
      file: apache-backup.yml 
    loop: "{{ rpm_query.stdout_lines }}"

In the above, the first task uses the rpm command to gather information on all of the installed packages. Generally, I prefer a purpose-built module when one exists; in this instance, the ansible.builtin.package_facts module is designed for exactly this, but I found it didn’t always report correctly for CentOS 7 servers, so I went with the rpm command, which always works. The resulting package list is used in the final task.
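For reference, the purpose-built approach would look something like the sketch below. This is the alternative I chose not to use, for the CentOS 7 reliability reasons mentioned above:

```yaml
# Alternative to the rpm command: the purpose-built facts module.
# Note: this is the approach I avoided, since it didn't always
# report correctly on CentOS 7 servers.
- name: Gather installed package facts
  ansible.builtin.package_facts:
    manager: auto

- name: Report whether httpd is installed
  ansible.builtin.debug:
    msg: "httpd is installed"
  when: "'httpd' in ansible_facts.packages"
```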

Next, I create a directory for each host on a backup server. This will be the repository for all of my configs and data backed up from the old server.

The last task is where the real work happens. I loop over the list of installed packages and check whether one is the Apache package and whether the httpd service is enabled. If both conditions are met, the playbook pulls in the apache-backup.yml task file, which I created to back up Apache from my environment. If I had FTP services on some of my servers, I would also need an ftp-backup task file and a matching conditional task, just like the Apache pair.
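To make that concrete, a hypothetical FTP version of the same conditional task might look like this. The vsftpd package/service name and the ftp-backup.yml file are assumptions for illustration, not something in my repository:

```yaml
# Hypothetical example: the same pattern applied to an FTP service.
# The vsftpd service name and the ftp-backup.yml task file are
# placeholders for illustration.
- name: Backup FTP when vsftpd is installed and enabled
  when: item is search('vsftpd ') and ansible_facts.services['vsftpd.service'].status == 'enabled'
  ansible.builtin.include_tasks:
    file: ftp-backup.yml
  loop: "{{ rpm_query.stdout_lines }}"
```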

The apache-backup.yml file is actually fairly simple:

# Task file for backing up apache

# Backup apache config files
- name: Create an archive of the config files
  community.general.archive:
    path: /etc/httpd/con*
    dest: "/tmp/{{ inventory_hostname }}-httpd.tgz"

- name: Copy apache config files to ansible server
  ansible.builtin.fetch:
    src: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    flat: true # Changes default fetch so it will save directly in destination

- name: Copy config archive to backup server from local ansible server
  ansible.builtin.copy:
    src: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    dest: "{{ backup_location }}/{{ inventory_hostname }}/{{ inventory_hostname }}-httpd.tgz"
  delegate_to: "{{ backup_storage }}"

# Backup apache data files
- name: Create an archive of the data directories
  community.general.archive:
    path: /var/www
    dest: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"

- name: Copy apache data files to ansible server
  ansible.builtin.fetch:
    src: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    flat: true # Changes default fetch so it will save directly in destination

- name: Copy data archive to backup server from local ansible server
  ansible.builtin.copy:
    src: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    dest: "{{ backup_location }}/{{ inventory_hostname }}/{{ inventory_hostname }}-httpd-data.tgz"
  delegate_to: "{{ backup_storage }}"

Taking a look at the above task file, you can see that it first creates an archive of the Apache configuration files; the result is more or less a compressed tar file (a .tgz).

It pulls the archive off the server, then pushes it over to a backup server.

It then repeats these actions for the data directories.

The next playbook is called provision-new-server.yml. I’ll leave you to look at it if you like, but it:

  • Connects to vCenter and provisions a new server
  • Waits for the server to pull an IP address
  • Adds the new host to the inventory via the Ascender API
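As a rough idea of the first step, cloning a VM from a template and waiting for an IP could look like the sketch below. This is a minimal sketch, not the exact contents of provision-new-server.yml; the vCenter credentials, datacenter, and template names are placeholders:

```yaml
# Minimal sketch of provisioning via vCenter. Connection details,
# datacenter, and template name are placeholders.
- name: Clone a new VM from a template
  community.vmware.vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    datacenter: DC1
    name: "new-{{ inventory_hostname }}"
    template: rocky9-template
    state: poweredon
    wait_for_ip_address: true  # block until the guest reports an IP
  register: new_vm
  delegate_to: localhost
```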

Now that the old server is backed up and the new server has been provisioned, it’s time to restore services on the new one. This is done with the restore.yml playbook:

---
- name: Playbook to restore configs on new servers
  hosts: migration-hosts 
  gather_facts: false
  vars:
    # The host to store backup info to
    backup_storage: backup-storage

    # The location on the backup host to store info
    backup_location: /tmp/migration

  tasks:
  - name: Set the restore server variables
    ansible.builtin.set_fact:
      restore_server: "new-{{ inventory_hostname }}"
  # - name: Debug restore_server
  #   ansible.builtin.debug:
  #     var: restore_server

  # grab a list of the files on the backup server for this host
  - name: Find all files in hosts' backup directories
    ansible.builtin.find:
      paths: "{{ backup_location }}/{{ inventory_hostname }}"
#      recurse: yes
    delegate_to: "{{ backup_storage }}"
    register: config_files

  # - name: Debug config_files
  #   when: item.path is search(inventory_hostname + '-httpd.tgz')
  #   ansible.builtin.debug:
  #     var: config_files
  #   loop: "{{ config_files.files }}"

  # for each task type, loop through backup files and see if they exist - call restore task file
  - name: If apache backup exists, call restore task file
    when: item.path is search(inventory_hostname + '-httpd.tgz')
    ansible.builtin.include_tasks: 
      file: apache-restore.yml
    loop: "{{ config_files.files }}"

The first task in the above sets a restore_server variable to the name of the new server. In my playbooks, I named the new server “new-{{ inventory_hostname }}”, meaning it’s the name of the old server with “new-” on the front… not overly complex, but it does the trick.

The second task searches each host’s backup directory on the backup server and finds all files that were backed up for that host.

Somewhat similar to the backup procedure, the last task in the restore procedure loops over the files found on the backup server and calls task files for the various applications/packages. In this case, I’m looking for the Apache backup archive, and when it’s found, running the apache-restore.yml task file.

Next is to examine the apache-restore.yml file:

# Task file for installing and configuring apache

# - name: Debug restore_server
#   ansible.builtin.debug:
#     var: restore_server

# Install apache
- name: Install apache
  ansible.builtin.dnf:
    name: httpd
    state: latest
  delegate_to: "{{ restore_server }}"

- name: Copy apache config files to ansible server
  ansible.builtin.fetch:
    src: "{{ backup_location }}/{{ inventory_hostname }}/{{ inventory_hostname }}-httpd.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    flat: true # Changes default fetch so it will save directly in destination
  delegate_to: "{{ backup_storage }}"

- name: Copy config archive to new server from local ansible server
  ansible.builtin.copy:
    src: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd.tgz"
  delegate_to: "{{ restore_server }}"

- name: Extract config archive
  ansible.builtin.unarchive:
    src: "/tmp/{{ inventory_hostname }}-httpd.tgz"
    dest: /etc/httpd
    remote_src: true
  delegate_to: "{{ restore_server }}"

- name: Copy apache data files to ansible server
  ansible.builtin.fetch:
    src: "{{ backup_location }}/{{ inventory_hostname }}/{{ inventory_hostname }}-httpd-data.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    flat: true # Changes default fetch so it will save directly in destination
  delegate_to: "{{ backup_storage }}"

- name: Copy data archive to new server from local ansible server
  ansible.builtin.copy:
    src: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    dest: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
  delegate_to: "{{ restore_server }}"

- name: Extract data archive
  ansible.builtin.unarchive:
    src: "/tmp/{{ inventory_hostname }}-httpd-data.tgz"
    dest: /var/www
    remote_src: true
  delegate_to: "{{ restore_server }}"

- name: Start service httpd and enable it on boot
  ansible.builtin.service:
    name: httpd
    state: started
    enabled: yes
  delegate_to: "{{ restore_server }}"

The above is quite simple. First things first, I install Apache. Next, I pull the config archive from the backup server, push it to the new server, and extract it. I then do the same for the data archive. Last, I start and enable the Apache service.

After this, I run the suspend-old.yml playbook to pause the old VM.
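Suspending the old VM can be done with the same vCenter module used for provisioning. This is a minimal sketch of the idea, not the exact contents of suspend-old.yml; the connection details are placeholders:

```yaml
# Minimal sketch of pausing the old VM in vCenter.
# Connection details are placeholders.
- name: Suspend the old VM
  community.vmware.vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    name: "{{ inventory_hostname }}"
    state: suspended
  delegate_to: localhost
```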

Lastly, I’ll run my testing playbooks that are designed for each app.
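For Apache, a basic smoke test could be as simple as checking that the new server answers HTTP requests. A minimal sketch, assuming the restore_server variable from earlier and a plain HTTP site; my actual testing playbooks may differ:

```yaml
# Minimal sketch of an Apache smoke test: fail unless the new
# server returns HTTP 200 on the default page.
- name: Verify the new web server responds
  ansible.builtin.uri:
    url: "http://{{ restore_server }}/"
    status_code: 200
  delegate_to: localhost
```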

Ascender Configuration

I’ve covered adding inventories, projects, and job templates in other blog posts.

I will show the workflow template I created to tie all of the job templates together, though:

A workflow allows me to take playbooks of all sorts and string them together with branching on success or on failure logic. It also allows me to make my playbooks flexible and reusable.

Conclusion

Migrating infrastructure is often complex and time-consuming, and while we can’t get more hours or employees to complete the task, we can employ our secret weapon: automation.

CIQ is ready to help you not only stand up Ascender in your environment, but also migrate your infrastructure with our expert support. We have tools to assist, and at the end you’ll have the automations for your environment ready for continued and future use!

As always, thanks for reading and I appreciate your feedback; happy migrating!

Greg Sowell
Principal Solutions Engineer