High(er)-Performance Computing
Many researchers find that they can conduct all the computation required for their research on their personal machines. Others need to make use of higher-performance computing facilities to run their computations in a timely manner and to a sufficient degree of accuracy.
The facilities available to INSIGNEO members are summarised below.
Grant applications
Those submitting grant applications may wish to incorporate the following text to provide an overview of the research computing facilities available to INSIGNEO and the Integrated Musculoskeletal Biomechanics (IMSB) Research Group. Note that this text does not mention the INSIGNEO Workstations.
USFD provides a range of research computing infrastructure and services, including two high-performance computing (HPC) clusters, a research storage service and a virtual machine pool.
The ShARC HPC system has (at the time of writing) 108 nodes accessible to all researchers, providing a total of 1756 Intel Xeon Haswell cores, 9088 GB RAM, 16 NVIDIA K80 GPUs for GPU computing and 2 NVIDIA Quadro K4200 GPUs for hardware-accelerated visualisation. In addition, the INSIGNEO Institute has 4 private nodes, providing 112 Xeon cores, 2112 GB RAM (1.5 TB in one node) and 2 NVIDIA V100 GPUs. The related Integrated Musculoskeletal Biomechanics (IMSB) group has 12 private nodes, providing 272 Xeon cores and 4608 GB RAM. Nodes are connected by an Intel Omni-Path network (100 Gb/s point-to-point, in addition to Ethernet), which also connects them to a 667 TB Lustre parallel filesystem.
The older HPC system, Iceberg, has (at the time of writing) 180 nodes accessible to all researchers, providing a total of 2632 Intel Xeon Ivy Bridge and Westmere cores, 9196 GB RAM, 15 GPUs for GPU computing and NVIDIA Tesla M2070-Q GPUs for hardware-accelerated visualisation. In addition, the INSIGNEO Institute has 8 private nodes, providing 128 Xeon cores and 2048 GB RAM. The IMSB group also has 6 private nodes, providing 96 Xeon cores, 1536 GB RAM, 2 NVIDIA K80 GPUs for GPU computing and 3 NVIDIA M2070-Q GPUs for hardware-accelerated visualisation. Nodes are connected by an Intel True Scale InfiniBand network (40 Gb/s point-to-point, in addition to Ethernet). All nodes have access to ShARC's Lustre parallel filesystem via Ethernet.
Both clusters run the Son of Grid Engine scheduler and have access to the research storage service.
As a UK research institution, USFD has access to Tier 1 (national) and Tier 2 (regional) HPC facilities through research council calls and Resource Allocation Panel (RAP) calls. The Tier 2 HPC systems include JADE (Joint Academic Data Science Endeavour), which is partly supported by USFD. JADE is targeted at machine learning workloads and contains 22 NVIDIA DGX-1 nodes, each of which contains 8 NVIDIA P100 GPUs (connected by NVLink interconnects) and 4 TB of SSD storage. The cluster as a whole has over 1 PB of Seagate ClusterStor storage, Mellanox EDR networking and optimised builds of machine learning software.
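Since both clusters run the Son of Grid Engine scheduler, as noted above, the sketch below illustrates how a batch job is typically submitted from a cluster login node: a minimal job script is written to disk and handed to the scheduler with qsub. The specific resource options (h_rt, rmem, the smp parallel environment), the module name and the my_analysis.py workload are placeholder assumptions rather than details taken from this page, and should be checked against the cluster documentation.

```python
#!/usr/bin/env python3
"""Minimal sketch: generate and submit a Son of Grid Engine batch job.

Assumptions (not taken from the text above): qsub is available on the
login node, `-l rmem=` is the memory-request syntax used locally, and the
module name and my_analysis.py are placeholders for the user's own
software environment and workload.
"""
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    # Scheduler directives: job name, 1 hour wall-clock limit, 4 GB memory
    # (assumed local convention) and 4 cores in a shared-memory environment.
    #$ -N example_job
    #$ -l h_rt=01:00:00
    #$ -l rmem=4G
    #$ -pe smp 4
    #$ -cwd

    # Placeholder module and workload -- replace with your own.
    module load apps/python/anaconda3
    python my_analysis.py
""")

# Write the job script to disk ...
with open("example_job.sge", "w") as fh:
    fh.write(job_script)

# ... and submit it; qsub prints the assigned job ID on success.
subprocess.run(["qsub", "example_job.sge"], check=True)
```

On clusters with private nodes such as those described above, reaching the restricted hardware usually also requires a project or queue flag on submission; the exact flag is cluster-specific and not covered here.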