1. About HPC
As an area under University IT's Research IT Services, the High-Performance Computing unit provides high-performance computing resources to UNT researchers, students, and collaborators.
The HPC infrastructure consists of a group of 64-bit high-performance Intel Xeon and AMD Opteron clusters. The clusters are interconnected with InfiniBand, Gigabit Ethernet, 100 Mb/s Ethernet, or a combination of these technologies. All HPC clusters run the Linux operating system and do not allow interactive graphical logins (e.g., X11 forwarding).
If you have a program that takes months to run on your laptop or desktop, you could probably run it within a few hours by taking advantage of the significant computing power of the HPC facility, provided your program is inherently parallelisable. Likewise, if you have significant computing to do, or thousands of test cases to run through, spreading the work across several processors should significantly reduce your run times, allowing you to get more done.
The daily operation and development of the HPC computational facilities are managed by a group of system administrators who make up part of the Research Computing Services area of Research IT Services in University Information Technology. Read more about the team.
For technical support, send an email to email@example.com with the word "Support" in the subject field.
2. Basic Usage
Currently, Linux (CentOS 7) is the only operating system supported on HPC clusters.
The primary scientific programming languages, C, C++, and Fortran, are supported on a limited basis as personnel resources allow. All other languages, e.g., Ada, Java, Pascal, Python, Scala, etc., are not supported. That means if your program is written in a language other than C, C++, or Fortran and you encounter a problem, it is unlikely that you will receive support; if we have spare cycles we may offer limited help, but you should not expect your problem to be solved in any particular time frame.
On most clusters a variety of editors is provided, including vi (vim), emacs, pico, and nano. Please consult their man pages, as covering these programs in any useful detail is beyond the scope of this FAQ.
User home directories ("/home/$USER") are NFS-mounted on compute nodes on an as-needed basis. Most input files and batch submission scripts should be submitted from your home directory. If you have very large input files or require high-speed reading of input, please send an email to firstname.lastname@example.org describing your needs so we can assist.
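Submitting from the home directory might look like the following sketch. This section does not name the batch scheduler, so a Slurm-style system is assumed; the job name, resource values, and program name are placeholders.

```shell
# Hypothetical batch script; the Slurm scheduler, the resource values,
# and ./my_program are assumptions, not confirmed by this document.
cat > job.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
./my_program > output.txt
EOF
# Submit from your home directory with:
#   sbatch job.sbatch
```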
Please do not write runtime output or scratch files to your home directory in "/home"; it is meant only for storing completed outputs. If your job or application creates many or large temporary files, direct that output to our high-performance Lustre-based filesystem, available on Talon 3 and its compute nodes at "/storage/scratch2/$USER". We do not provide backups for any of these filesystems; users are responsible for their own data.
/home/$USER = home directory, NFS-mounted from Talon over Gigabit Ethernet
/storage/scratch2/$USER = high-speed Lustre filesystem over Mellanox FDR InfiniBand
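A typical workflow using the scratch space above might look like this sketch; the "myjob" directory name is a placeholder, and the run/copy steps are shown as comments since they depend on your application.

```shell
# Keep runtime output off /home: point your job at the Lustre scratch area.
# "myjob" is a placeholder name; the path comes from the list above.
SCRATCH="/storage/scratch2/$USER"
echo "writing runtime output under: $SCRATCH/myjob"
# mkdir -p "$SCRATCH/myjob" && cd "$SCRATCH/myjob"
# ... run your application here ...
# cp results.dat "$HOME/"   # copy completed results back to /home when done
```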
See the 'Compiling Code' section for details on building your programs.