.. _insigneo_sharc:

ShARC: University of Sheffield's newer HPC cluster
==================================================

Introduction
------------

ShARC is the newer of the University of Sheffield's two computer clusters.
The majority of its computational resources are available to all researchers
with ShARC/Iceberg accounts, but INSIGNEO and certain sub-groups have
purchased hardware for ShARC that only they have access to.

The documentation on :ref:`getting started with High-Performance Computing
at the University of Sheffield ` covers connecting, usage (including
submitting jobs) and filestores.  There is a separate section on the
:ref:`software that is readily available on ShARC `.  All users should
consult that documentation when getting started.  INSIGNEO members will
then want to consult the following for additional information on making use
of INSIGNEO-specific resources in ShARC and for guidance on
INSIGNEO-specific workflows.

INSIGNEO-related hardware
-------------------------

Current hardware
^^^^^^^^^^^^^^^^

INSIGNEO have purchased some nodes for ShARC for the use of all members.
In addition, two of INSIGNEO's sub-groups have purchased nodes for ShARC:

* IMSB (for the `MultiSim `__ project)
* `Polaris `__

These 'private' nodes allow for:

* access restricted to the sub-group/project that purchased the nodes
  (and potentially other related groups/projects if sharing agreements
  are reached);
* shorter job queue times for researchers, as there is less contention
  for resources;
* specific workloads, as the nodes may have e.g. more RAM than is typical
  for the cluster.

Details as of 2018-11-16:

.. csv-table::
   :file: insig_sge_host_info_sharc.csv

Future hardware
^^^^^^^^^^^^^^^

Two nodes with NVIDIA K80 GPUs will be moved from Iceberg to ShARC in late
2018; see the :ref:`Iceberg ` page in this documentation for more
information.

Gaining access to these nodes
-----------------------------

Users need to be explicitly added to particular user groups in order to be
able to run jobs on these nodes (in addition to the 'public' nodes).  If a
researcher would like access then a relevant PI needs to contact the
:ref:`INSIGNEO tech team `.
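If you are unsure whether your account has already been added to the
relevant group, the Grid Engine ``qconf`` utility can usually tell you.
The following is a minimal sketch, assuming the standard Grid Engine client
tools are available on a ShARC login node and that each user access list
shares its project's name (check with the tech team if the output looks
wrong):

.. code-block:: bash

   # List all projects defined on the cluster.
   qconf -sprjl

   # Show the members of the user access list for the IMSB project.
   # Assumption: the access list is named after the project.
   qconf -su insigneo-imsb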
Using these nodes
-----------------

To run jobs on these 'private' nodes, follow the standard instructions for
starting interactive sessions and submitting batch jobs but make sure you
specify a **Project** and **Queue**, the values of which depend on which
research group you are in:

+-----------------+----------------------+------------------------+
| Research group  | Project              | Queue                  |
+=================+======================+========================+
| IMSB (MultiSim) | ``insigneo-imsb``    | ``insigneo-imsb.q``    |
+-----------------+----------------------+------------------------+
| Polaris         | ``insigneo-polaris`` | ``insigneo-polaris.q`` |
+-----------------+----------------------+------------------------+
| Other INSIGNEO  | ``insigneo-default`` | ``insigneo.q``         |
+-----------------+----------------------+------------------------+

Here's how to specify a Project and Queue when starting an interactive
session; in this case we start a session on an IMSB-owned node:

.. code-block:: bash

   qrshx -P insigneo-imsb -q insigneo-imsb.q -l rmem=256G

And here's how to specify a Project and Queue in a batch job submission
script; in this case we submit a job that should run on a Polaris-owned
node:

.. code-block:: bash
   :emphasize-lines: 4,5

   #!/bin/bash
   #$ -l h_rt=24:00:00
   #$ -l rmem=6G
   #$ -P insigneo-polaris
   #$ -q insigneo-polaris.q
   #$ -pe smp 16
   #$ -M youremail@sheffield.ac.uk
   #$ -m bea

   module load apps/java/jdk1.8.0_102/binary

   java -jar MyProgram.jar

(Note that ``-M`` sets the address to email and ``-m bea`` requests email
at job begin, end and abort.)

To see all jobs that are running in a particular queue or are waiting for a
particular queue:

::

   qstat -q queuename.q -u \*

e.g.

::

   qstat -q insigneo-imsb.q -u \*

Details of HPC resource sharing
-------------------------------

Jobs submitted under the ``insigneo-imsb`` project:

- can run for up to 96h
- will preferentially run:

  #. on the IMSB-purchased nodes
  #. on the INSIGNEO nodes excluding the GPU node and big memory node
  #. on the INSIGNEO GPU node or big memory node

Jobs submitted under the ``insigneo-polaris`` project:

- can run for up to 96h
- will preferentially run:

  #. on the Polaris-purchased nodes
  #. on the INSIGNEO nodes excluding the GPU node and big memory node
  #. on the INSIGNEO GPU node or big memory node

Jobs submitted under the ``insigneo-default`` project:

- will preferentially run:

  #. on the INSIGNEO CPU nodes excluding the GPU node and big memory node
     (for up to 96h)
  #. on the INSIGNEO IMSB nodes (for up to 8h)
  #. on the INSIGNEO GPU node and big memory node (for up to 96h)
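When deciding which project and queue to submit under, it can help to check
how busy the queues currently are.  This is a minimal sketch using standard
Grid Engine commands (assuming they behave on ShARC as on a stock Grid
Engine installation):

.. code-block:: bash

   # Summarise used/available slots and load for every cluster queue,
   # including the INSIGNEO queues listed above.
   qstat -g c

   # Show each execution host alongside the queues it serves, which
   # reveals which nodes back e.g. insigneo-imsb.q.
   qhost -q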