ShARC: University of Sheffield’s newer HPC cluster

Introduction

ShARC is the newer of the University of Sheffield's two High-Performance Computing (HPC) clusters. Most of its computational resources are available to all researchers with ShARC/Iceberg accounts, but INSIGNEO and certain sub-groups have purchased ShARC hardware that only they have access to.

The documentation on getting started with High-Performance Computing at the University of Sheffield covers connecting, usage (including submitting jobs) and filestores. There is a separate section on the software that is readily available on ShARC. All users should consult that documentation when getting started.

INSIGNEO members should then consult the sections below for additional information on making use of INSIGNEO-specific resources in ShARC and for guidance on INSIGNEO-specific workflows.

Gaining access to these nodes

Users need to be explicitly added to particular user groups in order to run jobs on these nodes (in addition to the 'public' nodes that everyone can use).

If a researcher would like access, a relevant PI needs to contact the INSIGNEO tech team.
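To check whether you have already been added to a group, you can list your Unix group memberships (it is assumed here that the group names match the Project names in the table below):

groups

If the relevant group (e.g. insigneo-imsb) is missing from the output, ask your PI to request access as described above.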

Using these nodes

To run jobs on these 'private' nodes, follow the standard instructions for starting interactive sessions and submitting batch jobs, but make sure you specify a Project and a Queue; the values depend on which research group you are in:

Research group     Project            Queue
-----------------  -----------------  -------------------
IMSB (MultiSim)    insigneo-imsb      insigneo-imsb.q
Polaris            insigneo-polaris   insigneo-polaris.q
Other INSIGNEO     insigneo-default   insigneo.q

Here’s how you specify a Project and Queue when starting an interactive session; in this case we start a session on an IMSB-owned node:

qrshx -P insigneo-imsb -q insigneo-imsb.q -l rmem=256G
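Note that -l rmem=256G requests 256 GB of real memory; a request this large is likely only satisfiable by the big memory node mentioned below, so scale it down if you want the session to be eligible for the other nodes.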

And here’s how to specify a Project and Queue in a batch job submission script; in this case we submit a job that should run on a Polaris-owned node:

#!/bin/bash
#$ -l h_rt=24:00:00               # maximum run time of 24 hours
#$ -l rmem=6G                     # 6 GB of real memory (per core for parallel jobs)
#$ -P insigneo-polaris            # Project: Polaris
#$ -q insigneo-polaris.q          # Queue for the Polaris-owned nodes
#$ -pe smp 16                     # 16 cores in the shared-memory parallel environment
#$ -M youremail@sheffield.ac.uk   # email address for notifications (-M, not -m)
#$ -m ea                          # send an email when the job ends or aborts

module load apps/java/jdk1.8.0_102/binary
java -jar MyProgram.jar
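Assuming the script above has been saved as my_job.sge (an illustrative filename), it can be submitted and monitored with the standard Grid Engine commands:

qsub my_job.sge
qstat -u $USER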

To see all jobs that are running in, or waiting for, a particular queue:

qstat -q queuename.q -u \*

e.g.

qstat -q insigneo-imsb.q -u \*
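To get an at-a-glance summary of slot usage and availability across all cluster queues, the standard Grid Engine queue summary can also be helpful:

qstat -g c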

Details of HPC resource sharing

Jobs submitted under the insigneo-imsb project:

  • can run for up to 96h

  • will preferentially run:

    1. on the IMSB-purchased nodes

    2. on the INSIGNEO nodes excluding the GPU node and big memory node

    3. on the INSIGNEO GPU node or big memory node

Jobs submitted under the insigneo-polaris project:

  • can run for up to 96h

  • will preferentially run:

    1. on the Polaris-purchased nodes

    2. on the INSIGNEO nodes excluding the GPU node and big memory node

    3. on the INSIGNEO GPU node or big memory node

Jobs submitted under the insigneo-default project:

  • will preferentially run:

    1. on the INSIGNEO nodes excluding the GPU node and big memory node (for up to 96h)

    2. on the IMSB-purchased nodes (for up to 8h)

    3. on the INSIGNEO GPU node and big memory node (for up to 96h)
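For example, a batch script header like the sketch below (with an illustrative program name) requests the maximum 96-hour run time under the insigneo-default project:

#!/bin/bash
#$ -l h_rt=96:00:00      # 96h is the maximum for insigneo-default jobs
#$ -P insigneo-default   # Project for 'Other INSIGNEO' members
#$ -q insigneo.q         # corresponding queue

./my_analysis            # placeholder for your actual program

Note that a job requesting more than 8 hours of run time cannot run on the IMSB nodes (tier 2 above), so requesting a shorter h_rt where possible may let the job start sooner.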