Computing Resources

ITC computational resources are managed by Research Computing (RC). ITC users have access to the FAS (Faculty of Arts and Sciences) Cannon cluster [https://www.rc.fas.harvard.edu/about/cluster-architecture/], which has over 100,000 cores available for use. Information on the general-use queues for Cannon, as well as the software available on the cluster, can be found on the RC website.
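
For a quick, current view of the queues your account can actually submit to, you can also query SLURM directly from a Cannon login node. The short sketch below is a minimal example in Python and assumes only the standard sinfo client on your PATH; the same information is available by running sinfo by hand.

    #!/usr/bin/env python3
    """Minimal sketch: list the SLURM partitions visible to your account
    on a Cannon login node, with their availability, time limit, and size."""
    import subprocess

    # %P = partition name, %a = availability, %l = time limit, %D = node count
    subprocess.run(["sinfo", "--format", "%P %a %l %D"], check=True)
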
In addition to Cannon, the ITC has purchased additional compute nodes for its own dedicated use. These resources are split across three partitions. The first is the itc_cluster partition, which consists of 24 nodes, each with two water-cooled 24-core Intel Xeon Platinum 8268 (Cascade Lake) processors and 4 GB of RAM per core, for a total of 48 cores and 192 GB of RAM per node. This gives a total of 1,152 cores and about 4.6 TB of RAM available for use. The nodes are interconnected with HDR InfiniBand and are part of the larger Cannon InfiniBand network. This partition has a run time limit of 7 days and is subject to the normal fairshare rules.
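
To illustrate how these limits translate into a job request, here is a minimal sketch that submits a one-node job to itc_cluster. It assumes only the standard SLURM sbatch command available on Cannon; the core, memory, and time settings mirror the numbers above, and the executable name is a placeholder.

    #!/usr/bin/env python3
    """Minimal sketch: build a SLURM batch script for the itc_cluster
    partition and submit it with sbatch."""
    import subprocess

    job_script = "\n".join([
        "#!/bin/bash",
        "#SBATCH --partition=itc_cluster",    # ITC-dedicated CPU partition
        "#SBATCH --nodes=1",
        "#SBATCH --ntasks-per-node=48",       # one full Cascade Lake node
        "#SBATCH --mem-per-cpu=4G",           # matches 4 GB of RAM per core
        "#SBATCH --time=7-00:00:00",          # partition limit is 7 days
        "#SBATCH --job-name=itc_cpu_example",
        "srun ./my_simulation",               # placeholder executable
        "",
    ])

    # sbatch reads the job script from stdin when no file name is given.
    subprocess.run(["sbatch"], input=job_script, text=True, check=True)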

The second is the itc_gpu partition, which consists of 4 nodes, each with two water-cooled 32-core Intel Xeon Platinum 8358 (Ice Lake) processors and 32 GB of RAM per core, for a total of 2 TB of RAM per node, along with four water-cooled NVIDIA A100 GPUs. This gives a total of 256 cores, 8 TB of RAM, and 16 GPUs available for use. This partition has a run time limit of 7 days, is subject to the normal fairshare rules, and requires users to request at least one GPU.
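
A GPU job on this partition looks much the same as a CPU job, except that at least one GPU must be requested explicitly. The sketch below uses SLURM's standard --gres=gpu:N syntax; the CPU, memory, and time values are illustrative choices within the limits above, and the training script is a placeholder.

    #!/usr/bin/env python3
    """Minimal sketch: submit a single-GPU job to the itc_gpu partition."""
    import subprocess

    job_script = "\n".join([
        "#!/bin/bash",
        "#SBATCH --partition=itc_gpu",
        "#SBATCH --nodes=1",
        "#SBATCH --ntasks=1",
        "#SBATCH --cpus-per-task=16",         # illustrative CPU request
        "#SBATCH --mem=128G",                 # illustrative memory request
        "#SBATCH --gres=gpu:1",               # at least one A100 must be requested
        "#SBATCH --time=3-00:00:00",          # stays within the 7-day limit
        "srun python train_model.py",         # placeholder GPU workload
        "",
    ])

    subprocess.run(["sbatch"], input=job_script, text=True, check=True)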

The third is the itc_gpu_requeue partition, which uses the same hardware as itc_gpu. Jobs in this partition can be preempted by jobs in the itc_gpu partition. The partition has a 7-day time limit, but otherwise there are no restrictions on its use, and work in this partition costs half as much as work on itc_gpu.
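
Because jobs here can be preempted at any time, it is worth asking SLURM to put a preempted job back in the queue automatically and designing the workload to checkpoint and resume. The sketch below uses the standard --requeue and --open-mode=append options; the checkpoint/resume flag on the script is a hypothetical illustration, not a real tool.

    #!/usr/bin/env python3
    """Minimal sketch: a preemptible, automatically requeued job on the
    itc_gpu_requeue partition."""
    import subprocess

    job_script = "\n".join([
        "#!/bin/bash",
        "#SBATCH --partition=itc_gpu_requeue",
        "#SBATCH --gres=gpu:1",
        "#SBATCH --time=7-00:00:00",
        "#SBATCH --requeue",                  # resubmit automatically if preempted
        "#SBATCH --open-mode=append",         # keep output from earlier attempts
        # The workload itself should checkpoint periodically so a requeued run
        # can resume where it left off (hypothetical script and flag below).
        "srun python train_model.py --resume-from-checkpoint",
        "",
    ])

    subprocess.run(["sbatch"], input=job_script, text=True, check=True)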

In addition to the cluster, the ITC has purchased storage beyond the normal allotment for Cannon users: 200 TB of space on holystore01. This space is not backed up. It is organized into three directories: Users, Lab, and Everyone. Data in Users is visible only to that user, data in Lab is visible to anyone in the ITC, and data in Everyone is visible to anyone on Cannon. While there is no user-level quota, we ask that users be judicious in their use of the space. If you wish to have access to this space, please contact FASRC.
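
Since there is no user-level quota, it helps to keep an eye on your own footprint in the shared space. The sketch below simply runs du over a Users directory; the path is a placeholder and should be replaced with the actual ITC location on holystore01 that FASRC gives you.

    #!/usr/bin/env python3
    """Minimal sketch: report the total size of your files in the shared
    holystore01 space. The path is a placeholder."""
    import subprocess

    MY_DIR = "/n/holystore01/<itc-lab-directory>/Users/<username>"  # placeholder path

    # du -sh prints one human-readable total for everything under MY_DIR.
    result = subprocess.run(["du", "-sh", MY_DIR],
                            capture_output=True, text=True, check=True)
    print(result.stdout.strip())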

Groups within the ITC have also purchased resources beyond those provided by FAS and the ITC. These resources vary from group to group and may include machines not managed by RC. If you would like more information on the resources held by a specific group, please contact them directly.

To help ITC members make use of the cluster and these computational resources, the ITC has a member of Research Computing, Paul Edmon (pedmon@cfa), on staff. He is a trained astronomer with a background in high-performance computing (HPC) and computational astrophysics, and he is available to help with computational astrophysics questions as well as general HPC concerns.

Research Computing staff are also available to help with software installation, debugging, and problems with the cluster. Please contact RC at rchelp@fas.harvard.edu if help is needed. For more information, see the FAS Research Computing website at http://rc.fas.harvard.edu. To gain access to Cannon and the ITC resources, fill out the web form at http://rc.fas.harvard.edu/request. Specify that you are a member of the ITC and include the name of your PI to receive access to the ITC queues and storage. Please also include information about any additional resources that your group gives you access to.

For those who want to run calculations that do not fit on the ITC resources, there is the Extreme Science and Engineering Discovery Environment (XSEDE) program. XSEDE coordinates access to 16 supercomputers as well as high-end visualization and data analysis resources across the country. While full usage proposals can be quite substantial, startup requests, on the order of 50,000-100,000 hours, are easily obtained using the step-by-step instructions at https://portal.xsede.org/web/guest/new-allocation.