Frequently Asked Questions

1. About UNT HPC

  1. What is UNT HPC?
  2. What facilities does UNT HPC have?
  3. What can I do with UNT HPC facilities?
  4. Who operates UNT HPC facilities?
  5. How do I contact UNT HPC?

2. Basic Usage

  1. What operating systems are supported?
  2. What programming languages are supported?
  3. How do I edit files?
  4. Where do I direct input/output?
  5. How do I compile programs?

1. About UNT HPC

1.1 What is UNT HPC?

UNT HPC provides access to high performance computing resources for UNT researchers, students, and collaborators.

1.2 What facilities does UNT HPC have?

The UNT HPC infrastructure consists of a group of 64-bit high performance Intel Xeon and AMD Opteron clusters. The clusters are interconnected with InfiniBand, Gigabit Ethernet, 100 Mb Ethernet, or a combination of these technologies. All HPC clusters run the Linux operating system and do not allow interactive graphical logins (e.g., X11 forwarding).

1.3 What can I do with UNT HPC facilities?

If you have a program that takes months to run on your laptop or desktop, you could probably run it within a few hours using the significant computing power of the UNT HPC facility, provided your program is inherently parallelizable. If you have a large amount of computing to do, or thousands of test cases to run, spreading the work across several processors should significantly reduce your runtimes and let you get more done.

1.4 Who operates UNT HPC facilities?

The daily operation and development of the UNT HPC computational facilities is managed by a group of qualified system administrators from University Information Technology (UIT) at UNT. In addition, one or two systems engineers on the team provide technical support for libraries, programming, and application analysis.

1.5 How do I contact UNT HPC?

For technical support, send e-mail to hpc-admin@unt.edu with the subject "support".

2. Basic Usage

2.1 What operating systems are supported?

Currently, Linux is the only operating system supported on UNT HPC clusters.

2.2 What programming languages are supported?

The primary scientific programming languages, C, C++, and Fortran, are supported on a limited basis as personnel resources allow. All other languages (e.g., Ada, Java, Pascal, Python, Scala) are not supported. This means that if your program is written in any language other than C, C++, or Fortran and you encounter a problem, it is unlikely that you will receive support. Even if we have spare cycles and can offer limited help, you should not expect your problem to be solved in any reasonable amount of time.

2.3 How do I edit files?

On most clusters a variety of editors are provided, including vi (vim), emacs, pico, and nano. Please consult the man pages; covering these programs in any useful detail is beyond the scope of these FAQ pages.
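
For example, to open (or create) a file with nano, or to read the vim manual page, you could run commands like the following; the file name my_script.sh is just a placeholder:

$ nano my_script.sh
$ man vim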

2.4 Where do I direct input/output?

Input

User home directories ("/home/$USER") are NFS-mounted on compute nodes on an as-needed basis. Most input files and batch submission scripts should be submitted from your home directory. If you have very large input or require high-speed reading of input, please contact hpc-admin@unt.edu and discuss your needs with us.

Output

Please DO NOT use your home directory in "/home" to write runtime output or scratch files; your home directory is only meant for storing completed outputs. If your job or application creates large or many temporary files, you should direct that output to our high performance filesystem, a Lustre-based storage system available on Talon 2 and its compute nodes at "/storage/scratch2/$USER". Alternatively, each compute node also has 300 GB of local SAS-based storage at /storage/local for any job I/O. We do not provide backups for any of these filesystems, and users are responsible for their own data. (See the sketch after the summary below for an example of directing output to scratch space.)

In summary:

/home/$USER = home directory NFS-mounted from Talon over Gigabit Ethernet

/storage/local = local compute node storage for any job I/O (6 Gb/s SAS disks)

/storage/scratch2/$USER = high-speed Lustre filesystem over Mellanox FDR InfiniBand
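
As a rough illustration of this layout, a job's shell commands might read input from the home directory, write scratch output under /storage/scratch2/$USER, and copy only the finished results back to the home directory. The program name my_prog and the file names below are placeholders, not actual UNT HPC software:

$ mkdir -p /storage/scratch2/$USER/run1
$ cd /storage/scratch2/$USER/run1
$ $HOME/my_prog $HOME/input.dat > run1.log
$ cp run1.log /home/$USER/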

2.5 How do I compile programs?

In general, all clusters use the same or similar commands for compilation. On AMD systems, these are:

cc, c++, f77, f90, and f95

On Intel systems:

icc, icpc, and ifort

The MPI counterparts to these compilers are:

mpicc, mpic++, mpiCC, and mpif90

On Intel systems, such as Talon 2, you should always use the high performance Intel compilers icc and ifort for C/C++ and Fortran code respectively, if available. They give much better performance than the generic GNU compilers, and they also support OpenMP. Compiling with the Intel compilers is not much different from compiling with their GNU counterparts, for example:

$ icc foo.c -o foo
$ icc -openmp foo.c -o foo
$ ifort *.f90 -o my_f90_prog
$ mpif90 *.f90 -o my_mpi_prog

The first example demonstrates compiling a C program with the Intel C compiler. In the second, we request that the underlying compiler's OpenMP flag (which differs among compilers) be selected. The third example demonstrates a program written in Fortran 90, compiled with Intel Fortran and linked against whatever cluster-specific MPI libraries are required. The fourth example is identical except that the mpi-prefixed command is used.
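
Similarly, a C program that uses MPI can be compiled with the mpicc wrapper, and a C++ program with icpc; the source file names hello_mpi.c and my_prog.cpp below are only placeholders:

$ mpicc hello_mpi.c -o hello_mpi
$ icpc my_prog.cpp -o my_prog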

Now, here are some basic GNU compiler usage examples:

$ cc foo.c -o foo
$ cc -openmp foo.c -o foo
$ f90 *.f90 -o my_f90_prog
$ mpif90 *.f90 -o my_mpi_prog

In the first example, the preferred compiler and optimization flags are selected to compile a C program, and nothing more. In the second, we request that the underlying compiler's OpenMP flag (which differs among compilers) be selected. The third example demonstrates a program written in Fortran 90, compiled and linked against whatever cluster-specific MPI libraries are required. The fourth example is identical except that the mpi-prefixed command is used.
