
CIQ launches the Academic Research Computing Advantage for R1 universities

April 9, 2026

Table of contents

  • One stack replaces five vendor relationships
  • Compliance evidence generated through normal operations
  • Researchers get direct access. Admins get their time back.
  • The complete stack, under one license
  • The program is open through January 31, 2027

Contributors

Rose Stein


R1 universities manage some of the most complex computing infrastructure in the world. Most do it with teams of two to five people, on budgets that have not kept pace with the complexity they manage. And the commercial vendors those teams have relied on for operating systems, provisioning, and scheduling have spent the last two years redirecting their product investment toward enterprise AI. Research computing inherits the gap.

CIQ did not pivot. The Academic Research Computing Advantage (ARCA) is a new program built specifically for university research computing teams: one integrated stack, one support relationship, and one vendor actively investing in the research computing community. ARCA delivers the complete CIQ infrastructure stack from operating system through workload scheduler under a single annual site license with a customer success manager.

"University HPC teams run some of the most demanding research infrastructure in the world on staffing budgets that do not match the complexity they manage," said Gregory Kurtzer, CEO of CIQ and co-founder of Rocky Linux. "Their point solution vendors pivoted to enterprise AI. CIQ did not. ARCA is what happens when you build a stack specifically for research computing instead of retrofitting enterprise AI tooling and calling it a fit."

Rocky Linux powers more HPC environments than any other community Enterprise Linux distribution. If your cluster already runs Rocky Linux, ARCA is not a migration. It is commercial support, compliance tooling, GPU-aware scheduling, and cluster provisioning layered onto the foundation your team already trusts, with one vendor accountable for how all of it fits together.

Universities that have consolidated from fragmented point solutions to the integrated CIQ stack report reduced staff burden, improved uptime, and faster onboarding for new administrators. Research groups get broader access to cluster resources. HPC teams get their time back.

One stack replaces five vendor relationships

The typical university HPC environment runs one vendor for the operating system, another for provisioning, a third for scheduling, and additional contracts for containers and compliance. Five vendors means five support contracts, five update cycles, and five sets of expertise to maintain. When something breaks at the integration boundary (and it does), lean teams spend hours determining which vendor owns the problem.

According to Hyperion Research, 93% of HPC centers report difficulty hiring qualified staff (1). The operational overhead of a fragmented stack compounds that challenge. Administrators spend cycles on vendor coordination that should go to supporting researchers. CentOS 7's end of life accelerated the problem for universities still on legacy infrastructure, creating active migration demand with no clear commercial path forward from the vendors who built those environments.

ARCA consolidates that stack into one system, tested together and supported by one engineering team. A new administrator can reach full productivity in days because the stack behaves as a single system with one learning curve, not five.

The complete ARCA stack:

  • RLC Pro: Production-grade Enterprise Linux with a 10-year support lifecycle, FIPS 140-3 validated cryptography, and direct bug fixes from CIQ engineers. The foundation for every node in your cluster.
  • RLC Pro Hardened: Security-hardened Linux with pre-applied DISA STIG and CIS profiles, FIPS 140-3 cryptographic modules, and kernel-level runtime protection via LKRG. Built for universities handling Controlled Unclassified Information under DoD-funded contracts.
  • RLC Pro AI: Enterprise Linux purpose-built for GPU and AI workloads, with the CIQ Linux Kernel and a pre-validated NVIDIA stack that ships ready to run at first boot.
  • Warewulf Pro: Cluster provisioning through a web UI. Stateless imaging means nodes boot from a known image every time, configuration drift does not accumulate, and new clusters deploy in days.
  • Fuzzball: GPU-aware workload scheduling with a researcher-facing web interface. Research groups submit jobs, monitor workflows, and access GPU resources directly without filing tickets. Integrates with existing Slurm or PBS as a backend.
  • Ascender Pro: Ansible-based infrastructure automation with compliance reporting built in. Configuration tracking, drift remediation, CVE dashboards, and exportable audit records generated as a byproduct of normal operations.
  • Apptainer: The HPC-native container runtime. Researchers run containerized workflows without admin intervention or security risk. The standard for scientific container portability across HPC environments.

The entire stack is tested together, supported together, and escalated through one engineering team.
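To make the researcher-facing container workflow concrete: with Apptainer installed, a user can pull and run an existing container image entirely as an unprivileged user, with no admin ticket required. This is a generic sketch of Apptainer's standard CLI; the demo image is a placeholder, not part of the ARCA program.

```shell
# Pull an OCI image from a registry and convert it to a local SIF file
# (sylabsio/lolcow is a common demo image; substitute your own workflow image)
apptainer pull lolcow.sif docker://sylabsio/lolcow

# Run the container's default entrypoint, as the submitting user,
# with no elevated privileges
apptainer run lolcow.sif

# Or execute an arbitrary command inside the container
apptainer exec lolcow.sif cat /etc/os-release
```

Because the resulting SIF file is a single portable artifact, the same image moves unchanged between a laptop, the campus cluster, and a national facility.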

Evaluating your HPC infrastructure? Let us show you how the full stack works together.

Compliance evidence generated through normal operations

For universities handling controlled research data under DoD-funded contracts, CMMC requirements now tie contract and grant eligibility to demonstrable NIST 800-171 compliance. Manual evidence gathering consumes months of engineering time that lean teams cannot afford.

RLC Pro Hardened ships with pre-applied STIG and CIS hardening profiles and FIPS 140-3 cryptographic modules that address key NIST 800-171 control families. Ascender Pro provides Ansible-based remediation and exportable compliance reporting. When compliance tooling is part of the same integrated stack as the operating system and automation layer, evidence gathering is a byproduct of normal operations, not a separate project requiring dedicated staff hours.
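As an illustration of what "evidence as a byproduct of operations" can look like in practice, the sketch below is a hypothetical Ansible playbook of the kind Ascender Pro could run on a schedule: it checks FIPS mode, scans each node against the DISA STIG profile with OpenSCAP, and pulls the results back as audit evidence. The inventory group, file paths, and datastream filename are illustrative assumptions, not CIQ-documented defaults.

```yaml
# Hypothetical compliance-evidence job (names and paths are placeholders)
- name: Gather NIST 800-171 evidence from cluster nodes
  hosts: hpc_nodes
  become: true
  tasks:
    - name: Verify FIPS mode is enabled on each node
      ansible.builtin.command: fips-mode-setup --check
      register: fips_status
      changed_when: false

    - name: Scan each node against the DISA STIG profile with OpenSCAP
      ansible.builtin.command: >
        oscap xccdf eval
        --profile xccdf_org.ssgproject.content_profile_stig
        --results /var/log/compliance/{{ inventory_hostname }}-stig.xml
        /usr/share/xml/scap/ssg/content/ssg-rl9-ds.xml
      changed_when: false
      failed_when: false

    - name: Collect the scan results for the exportable audit record
      ansible.builtin.fetch:
        src: /var/log/compliance/{{ inventory_hostname }}-stig.xml
        dest: audit-evidence/
```

Run on a schedule, a job like this leaves a dated trail of per-node results that maps directly onto auditor requests, rather than requiring a dedicated evidence-gathering project.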

Researchers get direct access. Admins get their time back.

The operational benefit of ARCA is not only fewer vendor relationships. It is what those reclaimed engineering hours go toward. Fuzzball gives research groups self-service job submission through a web interface, with GPU-aware scheduling that allocates resources fairly across groups without administrator intervention. Researchers submit containerized workflows, monitor job status, and access cluster resources directly. Admins handle fewer interruptions. Both groups do the work they were hired to do.

The complete stack, under one license

ARCA includes access to the complete CIQ stack (RLC+, RLC Pro, RLC Pro Hardened, RLC Pro AI, Warewulf Pro, Ascender Pro, Fuzzball, and Apptainer) under a single annual site license.

One invoice. One SLA. One vendor that understands how the pieces fit.

Teams running RHEL or starting fresh on Rocky Linux can onboard from their current state. The program is designed for universities supporting researchers, faculty, and graduate students running 100 to 10,000+ compute nodes, with or without federal grant compliance requirements.

The program is open through January 31, 2027

Standard terms and pricing apply after January 31, 2027. Contact us today to lock in pricing.

Schedule a discovery call

1 Hyperion Research (formerly IDC), HPC Workforce Study: Survey Results for the U.S. Department of Energy, report authored by Steve Conway, Earl Joseph, and Robert Sorensen, 2010 (updated 2017).

