Architecture

PLEIADI is a project by USC VIII-Computing of INAF – National Institute for Astrophysics, offering high-performance computing (HPC) and high-throughput computing (HTC) resources.

Individual researchers and teams belonging to research projects (European projects, PRIN, INAF mainstream projects, scientific missions, etc.) that require computing resources can apply for them.

The Pleiadi infrastructure is distributed across the following sites:

Bologna

  1. 1 frontend node for scheduling only
  2. 48 compute nodes without GPUs

The table below summarizes the main features of the Bologna Pleiadi cluster:

Architecture: Linux x86_64 cluster
Node interconnection: Omni-Path HFI Silicon 100 Series, 100 Gbit/s
Service network: Ethernet, 1 Gbit/s
CPU model: Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Number of nodes: 48
Operating system: Debian 11
Workload manager: SLURM 20.11.7
Storage volume: 200 TB, Lustre parallel filesystem (quota of 10 TB per user)
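
Jobs on the Bologna cluster are submitted through SLURM from the frontend node. As a minimal sketch (not taken from the official documentation), the Python snippet below builds and submits a CPU-only batch script; the partition name and the per-node task count are assumptions and should be checked against the user guide or `sinfo`.

  #!/usr/bin/env python3
  """Minimal sketch: build and submit a CPU-only SLURM batch job on Bologna.
  The partition name and tasks-per-node value are assumptions, not values
  confirmed by this page."""
  import subprocess
  from pathlib import Path

  job_script = """\
  #!/bin/bash
  #SBATCH --job-name=cpu_test
  # The partition name below is an assumption; list the real partitions with `sinfo`
  #SBATCH --partition=pleiadi
  #SBATCH --nodes=2
  # 36 tasks per node assumes dual-socket Xeon E5-2697 v4 (2 x 18 cores)
  #SBATCH --ntasks-per-node=36
  #SBATCH --time=01:00:00
  #SBATCH --output=cpu_test_%j.out

  srun ./my_mpi_application   # hypothetical MPI executable
  """

  script = Path("cpu_test.sbatch")
  script.write_text(job_script)

  # On success sbatch prints "Submitted batch job <id>"
  result = subprocess.run(["sbatch", str(script)],
                          capture_output=True, text=True, check=True)
  print(result.stdout.strip())

Since the Bologna nodes have no GPUs, only CPU resources are requested here; output written to the Lustre volume counts toward the 10 TB per-user quota.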

Catania

  1. 1 frontend node
  2. 72 compute nodes without GPUs (12 with 256 GB of RAM and 60 with 128 GB of RAM)
  3. 6 compute nodes with 1 GPU each (4 with a Tesla K40m, 12 GB of GPU memory each, and 2 with a Tesla V100 PCIe, 16 GB of GPU memory each), each with 128 GB of RAM
  4. 1 storage volume of 174 TB with BeeGFS parallel filesystem

The table below summarizes the main features of the Catania Pleiadi cluster:

Architecture: Linux x86_64 cluster
Node interconnection: Omni-Path HFI Silicon 100 Series, 100 Gbit/s
Service network: Ethernet, 1 Gbit/s
CPU model: Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Number of nodes: 78
Operating system: CentOS Linux release 7.9.2009
Workload manager: SLURM 21.08.5
Storage volume: 174 TB, BeeGFS parallel filesystem
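
The Catania GPU nodes are also reached through SLURM. The sketch below requests a single GPU with the generic `--gres` syntax; the partition and GRES names used here are assumptions that depend on the local SLURM configuration, so check the user guide before relying on them.

  #!/usr/bin/env python3
  """Minimal sketch: submit a single-GPU job on Catania by piping a batch
  script to sbatch. Partition and GRES names are assumptions."""
  import subprocess

  job_script = """\
  #!/bin/bash
  #SBATCH --job-name=gpu_test
  # The partition name is an assumption; GPU nodes may live in a dedicated partition
  #SBATCH --partition=gpu
  # Request one GPU; a typed request such as gpu:v100:1 works only if GRES types are defined
  #SBATCH --gres=gpu:1
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --time=00:30:00
  #SBATCH --output=gpu_test_%j.out

  nvidia-smi   # report which GPU (K40m or V100) the job received
  """

  # sbatch reads the batch script from stdin when no file name is given
  result = subprocess.run(["sbatch"], input=job_script,
                          capture_output=True, text=True, check=True)
  print(result.stdout.strip())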

Trieste

  1. 1 frontend node
  2. 60 compute nodes without GPUs (all with 256 GB of RAM)
  3. 6 compute nodes with 1 GPU each (Tesla K80) and 128 GB of RAM
  4. 1 storage volume of 480 TB with BeeGFS parallel filesystem

The table below summarizes the main features of the Trieste Pleiadi cluster:

Architecture: Linux x86_64 cluster
Node interconnection: Omni-Path HFI Silicon 100 Series, 100 Gbit/s
Service network: Ethernet, 1 Gbit/s
CPU model: Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Number of nodes: 66
Operating system: CentOS Linux release 7.9.2009
Workload manager: SLURM 19.05.0
Storage volume: 480 TB, BeeGFS parallel filesystem
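
Once logged in to the Trieste frontend, the standard SLURM command-line tools show the state of the cluster and of your jobs, and BeeGFS ships its own tool for storage usage. The snippet below wraps a few of these calls from Python; the quota query assumes quota tracking is enabled on the BeeGFS volume.

  #!/usr/bin/env python3
  """Minimal sketch: inspect partitions, your job queue, and your BeeGFS usage
  from the Trieste frontend. Assumes BeeGFS quota tracking is enabled."""
  import getpass
  import subprocess

  def run(cmd):
      """Run a command and return its standard output, raising on failure."""
      return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

  user = getpass.getuser()

  print(run(["sinfo", "--summarize"]))                      # partition and node state overview
  print(run(["squeue", "-u", user]))                        # your pending and running jobs
  print(run(["beegfs-ctl", "--getquota", "--uid", user]))   # per-user BeeGFS usage, if enabled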

Call for proposals

Call 1 – https://docs.google.com/spreadsheets/d/1jJKsp1ibDRlpICN0T2sLb-H2bgdlSkC0RiRiF4rwdV0/edit#gid=935398274

Call 2 – https://docs.google.com/spreadsheets/d/1B1P7ATzLBu6wW6dSRofriwUHeVBqu-Cj/edit#gid=391681008

Call 3 – Coming soon…

User Guide

The comprehensive user guide for PLEIADI can be accessed at the following URL:

https://pleiadi.readthedocs.io/en/latest/quickstart/index.html

This online resource offers a detailed overview of the initial steps to make the most of PLEIADI’s powerful capabilities. Through the guide, users can become familiar with the process of requesting computing resources, gain access to quick start instructions, and delve into advanced features provided by the platform.

For more information, contact us at info.pleiadi@inaf.it.

M100

Coming soon…