Computing Resources


ITC computational resources are managed by Research Computing (RC). ITC users have access to the FAS (Faculty of Arts and Sciences) Cannon cluster, which has over 100,000 cores available for use. Information on the general-use queues for Cannon, as well as the software available on it, can be found on the RC website.

In addition to Cannon, the ITC has purchased a cluster for its own dedicated use. The ITC cluster consists of 24 nodes, each with two water-cooled 24-core Intel Xeon Platinum 8268 (Cascade Lake) processors and 4 GB of RAM per core, for a total of 192 GB of RAM per node. This gives a total of 1,152 cores and roughly 4.6 TB of RAM. The nodes are interconnected with HDR InfiniBand and are part of the larger Cannon InfiniBand network.

There is one SLURM queue for the ITC cluster, named itc_cluster. This queue has a run-time limit of 7 days but no limit on the number of cores that can be requested. It is subject to normal fairshare rules.
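A minimal batch script for this queue might look like the following sketch. The job name, node counts, and executable are placeholders; only the partition name (itc_cluster), the 48 cores and 4 GB/core per node, and the 7-day limit come from the description above.

```shell
#!/bin/bash
#SBATCH --job-name=example         # placeholder job name
#SBATCH --partition=itc_cluster    # the dedicated ITC queue
#SBATCH --nodes=2                  # two full nodes (48 cores each)
#SBATCH --ntasks-per-node=48
#SBATCH --mem-per-cpu=4G           # matches the 4 GB-per-core hardware
#SBATCH --time=7-00:00:00          # the queue's 7-day maximum
#SBATCH --output=%x_%j.out         # log named after job name and ID

srun ./my_mpi_program              # hypothetical MPI executable
```

Submit with "sbatch job.sh" and monitor with "squeue -u $USER".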

In addition to the cluster, the ITC has purchased storage beyond that normally allotted to Cannon users: 200 TB of space on holystore01. This space is not backed up. It is organized into three directories: Users, Lab, and Everyone. Data in Users is visible only to that user, data in Lab is visible to anyone in the ITC, and data in Everyone is visible to anyone on Cannon. While there is no user-level quota, we ask that users be judicious in their use of the space. If you wish to have access to this space, please contact FASRC.
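The three visibility tiers correspond to standard Unix permissions, which can be sketched as follows. The real mount point on holystore01 is assigned by FASRC, so a temporary directory stands in for it here.

```shell
#!/bin/sh
# Sketch of the three-tier layout using Unix permissions.
# A temporary directory stands in for the real holystore01 mount,
# whose actual path is provided by FASRC.
STORE=$(mktemp -d)
ME=${USER:-$(id -un)}

mkdir -p "$STORE/Users/$ME" "$STORE/Lab" "$STORE/Everyone"

chmod 700 "$STORE/Users/$ME"   # owner only: visible just to that user
chmod 770 "$STORE/Lab"         # owner and group: anyone in the ITC group
chmod 755 "$STORE/Everyone"    # world-readable: anyone on Cannon

ls "$STORE"                    # Everyone  Lab  Users
```

On the real file system the group ownership of Lab would be the ITC Unix group, so group-level access is what distinguishes it from Everyone.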

Groups within the ITC have also purchased resources beyond those provided by FAS and the ITC. These resources vary from group to group and may include machines not managed by RC. If you would like more information on the resources held by a specific group, please contact that group directly.

To help ITC members utilize the cluster and its computational resources, the ITC has a member of Research Computing, Paul Edmon (pedmon@cfa), on staff. He is a trained astronomer with a background in high-performance computing (HPC) and computational astrophysics, and is available to help with computational astrophysics questions as well as general HPC concerns.

Research Computing staff are also available to help with software installation, debugging, and problems with the cluster. Please contact RC if help is needed; for more information, see the FAS Research Computing website. To gain access to Cannon and the ITC cluster, fill out the access request web form on the RC website. Specify that you are a member of the ITC and include the name of your PI to receive access to ITC queues and storage. Please also include information about any additional resources that your group has access to.

For those who want to run calculations that do not fit on the ITC resources, there is the Extreme Science and Engineering Discovery Environment (XSEDE) program. XSEDE coordinates access to 16 supercomputers and to high-end visualization and data-analysis resources across the country. While full usage proposals can be quite substantial, startup requests, on the order of 50,000-100,000 hours, are easily obtained by following the step-by-step instructions on the XSEDE website.