Architecture

PLEIADI is a project by USC VIII-Computing of INAF – National Institute for Astrophysics, offering high-performance computing (HPC) and high-throughput computing (HTC) resources.

Individual researchers and teams belonging to research projects (European projects, PRIN, INAF mainstream projects, scientific missions, etc.) that require computing can apply for these resources.

The PLEIADI infrastructure is distributed across the following sites:

Bologna

  1. 1 frontend node for scheduling only
  2. 48 compute nodes without GPUs

The table below summarizes the main features of the Bologna PLEIADI cluster:

Architecture:           Cluster Linux x86_64
Nodes interconnection:  Omni-Path HFI Silicon 100 Series, 100 Gbit/s interconnect
Service network:        Ethernet 1 Gbit/s
CPU model:              Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Number of nodes:        48
Operating system:       Debian 11
Workload manager:       SLURM 20.11.7
Storage volume:         200 TB, Lustre parallel filesystem (quota is 10 TB per user)
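
Jobs on the PLEIADI clusters are handled by the SLURM workload manager. The script below is a minimal sketch of a batch job for the Bologna cluster, not a definitive template: the partition name, module name, and task counts are placeholders to be checked against the user guide and the output of sinfo and module avail.

  #!/bin/bash
  #SBATCH --job-name=my_job          # a label for the job
  #SBATCH --nodes=2                  # number of compute nodes
  #SBATCH --ntasks-per-node=8        # MPI ranks per node (adjust to your application)
  #SBATCH --time=02:00:00            # requested walltime
  #SBATCH --partition=pleiadi        # placeholder: list the real partitions with "sinfo"
  #SBATCH --output=job_%j.out        # stdout/stderr file (%j expands to the job ID)

  # Load the required environment; the module name is an assumption, check "module avail"
  module load openmpi

  # Launch the application through SLURM
  srun ./my_application

The script is submitted with sbatch job.sh and monitored with squeue -u $USER. Since the Lustre volume enforces a 10 TB per-user quota, current usage can be checked with lfs quota -h -u $USER followed by the Lustre mount point, whose path is site specific.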

Catania

  1. 1 frontend node
  2. 72 compute nodes without GPUs (12 with 256 GB of RAM and 60 with 128 GB of RAM)
  3. 6 compute nodes with 1 GPU each (4 with a Tesla K40m with 12 GB of memory and 2 with a Tesla V100 PCIe with 16 GB of memory), each with 128 GB of RAM
  4. 1 storage volume of 174 TB with BeeGFS parallel filesystem

The table below summarizes the main features of the Catania PLEIADI cluster:

Architecture:           Cluster Linux x86_64
Nodes interconnection:  Omni-Path HFI Silicon 100 Series, 100 Gbit/s interconnect
Service network:        Ethernet 1 Gbit/s
CPU model:              Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Number of nodes:        78
Operating system:       CentOS Linux release 7.9.2009
Workload manager:       SLURM 21.08.5
Storage volume:         174 TB, BeeGFS parallel filesystem
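
On the Catania GPU nodes, a GPU is requested through SLURM's generic resource (GRES) mechanism. The sketch below assumes the GPUs are exposed as a GRES named "gpu" and that a dedicated partition exists; both names are assumptions to be verified with sinfo and scontrol show node before use.

  #!/bin/bash
  #SBATCH --job-name=gpu_test
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --gres=gpu:1               # request one GPU on the node
  #SBATCH --time=01:00:00            # requested walltime
  #SBATCH --partition=gpu            # placeholder: the actual partition name may differ

  # Print the GPU assigned to the job (a Tesla K40m or a Tesla V100 on these nodes)
  srun nvidia-smi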

Trieste

  1. 1 frontend node
  2. 60 compute nodes without GPUs (all with 256 GB of RAM)
  3. 6 compute nodes with 1 GPU each (Tesla K80 and 128 GB of RAM)
  4. 1 storage volume of 480 TB with BeeGFS parallel filesystem

The table below summarizes the main features of the Trieste PLEIADI cluster:

Architecture:           Cluster Linux x86_64
Nodes interconnection:  Omni-Path HFI Silicon 100 Series, 100 Gbit/s interconnect
Service network:        Ethernet 1 Gbit/s
CPU model:              Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Number of nodes:        66
Operating system:       CentOS Linux release 7.9.2009
Workload manager:       SLURM 19.05.0
Storage volume:         480 TB, BeeGFS parallel filesystem
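
Catania and Trieste both serve their storage through a BeeGFS parallel filesystem. If quota tracking is enabled on these volumes, per-user usage can be inspected with the standard BeeGFS tools; the commands below are a sketch, and the mount points and enforced limits are site specific.

  # Show the quota of the current user (requires quota support on the BeeGFS instance)
  beegfs-ctl --getquota --uid $(id -u)

  # Report the free space on the BeeGFS storage and metadata targets
  beegfs-df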

Call for proposals

Call 1: https://docs.google.com/spreadsheets/d/1jJKsp1ibDRlpICN0T2sLb-H2bgdlSkC0RiRiF4rwdV0/edit#gid=935398274

Call 2: https://docs.google.com/spreadsheets/d/1B1P7ATzLBu6wW6dSRofriwUHeVBqu-Cj/edit#gid=391681008

Call 3 – closing date 30/11/2023. More info here.

Call 4 – closing date 30/06/2024. More info here.

Call 5 – closing date 15/01/2025. More info here.

User Guide

The comprehensive user guide for PLEIADI can be accessed at the following URL:

https://pleiadi.readthedocs.io/en/latest/quickstart/index.html

This online resource offers a detailed overview of the initial steps to make the most of PLEIADI’s powerful capabilities. Through the guide, users can become familiar with the process of requesting computing resources, gain access to quick start instructions, and delve into advanced features provided by the platform.
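
As a first orientation, once an account has been granted, access is typically via SSH to the front-end node of the assigned cluster, from which SLURM can be queried. The hostname below is a placeholder; the actual login addresses are given in the user guide linked above.

  # Log in to the front-end node (replace the placeholder with the hostname provided for your site)
  ssh username@<frontend-hostname>

  # Inspect the available partitions and the state of the compute nodes
  sinfo

  # List your running and pending jobs
  squeue -u $USER
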
The resources assigned for call #5, available from 01/03/2025 to 01/09/2025, can be found here.

For more information you can contact us at: info.pleiadi@inaf.it

INAF’s USC VIII-Computing issues a new call (the fifth) for the use of HPC/HTC computing resources and for the availability of data storage space.

In particular,

  1. the use of the INAF computing systems PLEIADI and PLEIADI-GPU (the latter on an experimental basis starting from March 2025) will be offered; their technical characteristics are described at https://pleiadi.readthedocs.io/en/latest/clusters/index.html,
  2. the use of the Leonardo BOOSTER computing system will also be available; its technical characteristics are described at https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+LEONARDO+UserGuide,
  3. the long-term preservation system for scientific products at IA2 will also be available; its characteristics are described at https://www.ia2.inaf.it/index.php/ia2-services/data-sharing-preservation

A ticketing system has also been set up to deal with technical problems related to the use of these systems: you can send your messages to pleiadi-help@ced.inaf.it

For more details about the application process, please see here.
