Architecture
PLEIADI is a project by USC VIII-Computing of INAF – National Institute for Astrophysics, offering high-performance computing (HPC) and high-throughput computing (HTC) resources.
Individual researchers and teams involved in research projects (European projects, PRIN, INAF mainstream projects, scientific missions, etc.) that require computing resources can apply for them.
The PLEIADI infrastructure is distributed across the following sites:
Bologna
- 1 frontend node for scheduling only
- 48 compute nodes without GPUs
The table below summarizes the main features of the Bologna PLEIADI cluster:
| Architecture | Cluster Linux x86_64 |
| Nodes interconnection | Omni-Path HFI Silicon 100 Series, 100 Gbit/s |
| Service network | Ethernet, 1 Gbit/s |
| CPU model | Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz |
| Number of nodes | 48 |
| Operating system | Debian 11 |
| Workload manager | SLURM 20.11.7 |
| Storage volume | 200 TB, Lustre parallel filesystem (quota: 10 TB per user) |
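As a practical illustration, a CPU-only job on a cluster of this kind is submitted through SLURM with a batch script along the lines of the sketch below; the partition and module names are placeholders, not values taken from the official user guide, and should be checked on the cluster itself.

```bash
#!/bin/bash
#
# Minimal SLURM batch script for a CPU-only MPI job (illustrative sketch).
# The partition and module names are placeholders: check the PLEIADI user
# guide, `sinfo`, and `module avail` on the cluster for the real values.
#
#SBATCH --job-name=mpi_test
#SBATCH --partition=pleiadi        # placeholder partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=36       # assuming dual-socket 18-core E5-2697 v4 nodes
#SBATCH --time=01:00:00
#SBATCH --output=%x_%j.out

module load openmpi                # placeholder module name

srun ./my_mpi_application          # replace with your own executable
```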
Catania
- 1 frontend node
- 72 compute nodes without GPUs (12 with 256 GB of RAM and 60 with 128 GB of RAM)
- 6 compute nodes with 1 GPU each (4 with a Tesla K40m with 12 GB of memory each, and 2 with a Tesla V100 PCIe with 16 GB of memory each), each node with 128 GB of RAM
- 1 storage volume of 174 TB with a BeeGFS parallel filesystem
The table below summarizes the main features of the Catania PLEIADI cluster:
| Architecture | Cluster Linux x86_64 |
| Nodes interconnection | Omni-Path HFI Silicon 100 Series, 100 Gbit/s |
| Service network | Ethernet, 1 Gbit/s |
| CPU model | Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz |
| Number of nodes | 78 |
| Operating system | CentOS Linux release 7.9.2009 |
| Workload manager | SLURM 21.08.5 |
| Storage volume | 174 TB, BeeGFS parallel filesystem |
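On the GPU nodes, a job normally requests an accelerator through SLURM generic resources (GRES). The sketch below is a minimal example assuming a gpu partition and a plain gpu GRES name; both are placeholders to be verified on the cluster.

```bash
#!/bin/bash
#
# Illustrative SLURM script requesting a single GPU on one of the GPU nodes.
# The partition name and the GRES name are assumptions; verify them with
# `sinfo` and `scontrol show node <nodename>` on the cluster.
#
#SBATCH --job-name=gpu_test
#SBATCH --partition=gpu            # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1               # one GPU (K40m or V100 on the Catania nodes)
#SBATCH --mem=64G                  # well within the 128 GB of RAM per GPU node
#SBATCH --time=01:00:00

module load cuda                   # placeholder module name

srun ./my_gpu_application          # replace with your own executable
```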
Trieste
- 1 frontend node
- 60 compute nodes without GPUs (all with 256 GB of RAM)
- 6 compute nodes with 1 GPU each (Tesla K80), each with 128 GB of RAM
- 1 storage volume of 480 TB with a BeeGFS parallel filesystem
The table below summarizes the main features of the Trieste PLEIADI cluster:
| Architecture | Cluster Linux x86_64 |
| Nodes interconnection | Omni-Path HFI Silicon 100 Series, 100 Gbit/s |
| Service network | Ethernet, 1 Gbit/s |
| CPU model | Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz |
| Number of nodes | 66 |
| Operating system | CentOS Linux release 7.9.2009 |
| Workload manager | SLURM 19.05.0 |
| Storage volume | 480 TB, BeeGFS parallel filesystem |
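Once logged in to one of the front ends, the usual SLURM and filesystem client tools can be used to inspect the available resources and the storage usage. The commands below are a sketch: the Lustre mount point is a placeholder, and the BeeGFS quota check assumes the client tools are installed on the front end.

```bash
# List nodes and partitions exposed by SLURM.
sinfo -N -l

# Show your own queued and running jobs.
squeue -u $USER

# Check storage usage on the BeeGFS volumes (Catania and Trieste),
# assuming the BeeGFS client tools are available.
beegfs-ctl --getquota --uid $(id -u)

# Check the per-user quota on the Lustre volume in Bologna
# (the mount point /lustre is a placeholder).
lfs quota -h -u $USER /lustre
```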
Call for proposals
Call 2 – https://docs.google.com/spreadsheets/d/1B1P7ATzLBu6wW6dSRofriwUHeVBqu-Cj/edit#gid=391681008
Call 3 – closing date 30/11/2023. More info here
Call 4 – closing date 30/06/2024. More info here
Call 5 – closing date 15/01/2025. More info here
User Guide
The comprehensive user guide for PLEIADI can be accessed at the following URL:
https://pleiadi.readthedocs.io/en/latest/quickstart/index.html.
This online resource describes the initial steps needed to make use of PLEIADI: it explains how to request computing resources, provides quick-start instructions, and covers the advanced features offered by the platform.
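As a rough outline of the quick-start workflow covered by the guide, a first session typically looks like the sketch below; the front-end address is a placeholder, since the actual hostname is communicated when an account is granted.

```bash
# Connect to the cluster front end; the address below is a placeholder,
# the real hostname is communicated when the account is created.
ssh <username>@<pleiadi-frontend-address>

# Browse the software made available through environment modules.
module avail

# Load what you need (the module name is only an example) and submit a job.
module load gcc
sbatch my_job_script.sh
```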
Please find here the resources assigned for call #5, available from 01/03/2025 to 01/09/2025.
For more information you can contact us at: info.pleiadi@inaf.it
INAF’s USC VIII-Computing has issued a new call (the fifth) for the use of HPC/HTC computing resources and for data storage space.
In particular,
- the INAF computing systems PLEIADI and PLEIADI-GPU (the latter on an experimental basis starting from March 2025) will be offered; their technical characteristics are described at https://pleiadi.readthedocs.io/en/latest/clusters/index.html;
- the Leonardo BOOSTER computing system will also be available; its technical characteristics are described at https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+LEONARDO+UserGuide;
- the long-term preservation system for scientific products at IA2 will also be available; its characteristics are described at https://www.ia2.inaf.it/index.php/ia2-services/data-sharing-preservation
A ticketing system has also been set up to deal with technical problems related to the use of these systems: you can send your messages to pleiadi-help@ced.inaf.it
For more details about the application process, please read here