HPC Documentation

Our Mission

UA High Performance Computing (HPC) is an interdisciplinary research center focused on facilitating research and discoveries that advance science and technology. We deploy and operate advanced computing and data resources for the research activities of students, faculty, and staff at the University of Arizona. We also provide consulting, technical documentation, and training to support our users.

This site is divided into sections that describe the High Performance Computing (HPC) resources that are available, how to use them, and the rules for use.


Contents
  • User Guide — Introduces the available resources and provides information on account registration, system access, running jobs, and requesting help.
  • Resources — Detailed information on compute, storage, software, grant, data center, and external (XSEDE, CyVerse, etc.) resources.
  • Training — Workshops and training materials.
  • Policies — Policies covering acceptable use, access, acknowledgements, buy-in, instruction, maintenance, and special projects.
  • Results — A list of research publications that used UArizona's HPC system resources.
  • FAQ — A collection of frequently asked questions and their answers.
  • Secure HPC

 

Quick Links
  • User Portal  —  Create and manage groups, request rental storage, manage delegates, delete your account, and submit special project requests.

  • Open OnDemand —  Graphical interface for accessing HPC systems and applications.

  • Job Examples — View and download sample SLURM jobs from our GitHub site.

  • Training Videos — Visit our YouTube channel for instructional videos, researcher stories, and more.

  • Getting Help —  Request help from our team.


Highlighted Research

Faster Speeds Need Faster Computation - Hypersonic Travel

Quick News

Increased Time Allocations

Beginning March 1st, 2024, the monthly allocation for each group increased to 150,000 CPU hours on Puma (previously 100,000) and 100,000 CPU hours on Ocelote (previously 70,000).

Faster Interactive Sessions

Are you frustrated waiting for slow interactive sessions to start? Try the standard queue on ElGato. We have provisioned 44 nodes to accept only standard-queue jobs, so sessions start faster. To start a session, run:

(puma) [netid@junonia ~]$ elgato
(elgato) [netid@junonia ~]$ interactive -a <your_group>

For more information on interactive sessions, see our Running Jobs With SLURM page.
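Batch work can target the same standard queue from a job script. A minimal sketch, assuming the partition is named standard; the job name and resource values are illustrative, and you should substitute your own group for the account:

```shell
#!/bin/bash
#SBATCH --job-name=example       # illustrative job name
#SBATCH --partition=standard     # the standard queue described above
#SBATCH --account=<your_group>   # replace with your group name
#SBATCH --ntasks=1
#SBATCH --time=00:10:00          # ten-minute walltime

# Print the node the job landed on
hostname
```

Submit the script with sbatch and check its status with squeue.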

Singularity is Now Apptainer

Singularity has been renamed Apptainer as the project moves into the Linux Foundation. An alias exists, so you can continue to invoke singularity. Local builds are now possible in many cases, and remote builds with Sylabs are no longer supported.

We keep only a reasonably current version of Apptainer; prior versions are removed because only the latest is considered secure. Apptainer is installed on all of the system's compute nodes and can be used without loading a module.
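As a quick illustration of the points above (run these on a compute node; the image name and source are placeholders):

```shell
# The singularity alias still works and points at Apptainer
singularity --version

# Build an image locally, without a remote build service
apptainer build mycontainer.sif docker://ubuntu:22.04

# Run a command inside the container
apptainer exec mycontainer.sif cat /etc/os-release
```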



Calendars

Maintenance Calendar

Date          Event
October 3     Electrical maintenance from 6AM to 12PM. ElGato will be unavailable during this period.
July 26       Quarterly maintenance from 6AM to 6PM.
April 26      Maintenance downtime from 6AM to 6PM for ALL HPC services.
January 25    Maintenance downtime from 6AM to 6PM for ALL HPC services.