Architecture
PLEIADI is a project by USC VIII-Computing of INAF – National Institute for Astrophysics, offering high-performance computing (HPC) and high-throughput computing (HTC) resources.
Individual researchers and teams belonging to research projects that require computing (European projects, PRIN, INAF mainstream projects, scientific missions, etc.) can apply for the resources.
The PLEIADI infrastructure is distributed across the following sites:
Bologna
- 1 frontend node for scheduling only
- 48 compute nodes without GPUs
- 1 storage volume of 200 TB with Lustre parallel filesystem (quota of 10 TB per user)
The table below summarizes the main features of the Bologna PLEIADI cluster:
| Architecture | Linux x86_64 cluster |
| Node interconnect | Omni-Path HFI Silicon 100 Series, 100 Gbit/s |
| Service network | 1 Gbit/s Ethernet |
| CPU model | Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz |
| Number of nodes | 48 |
| Operating system | Debian 11 |
| Workload manager | SLURM 20.11.7 |
| Storage volume | 200 TB, Lustre parallel filesystem (quota of 10 TB per user) |
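Jobs on the Bologna cluster are scheduled with SLURM from the frontend node. The following is a minimal sketch of how a batch job could be composed and submitted; the partition name, task count, and executable are placeholders (assumptions, not taken from the PLEIADI documentation) and should be adapted to the actual site configuration:

```python
#!/usr/bin/env python3
"""Minimal sketch: compose and submit a SLURM batch job from the frontend node.

The partition name, core count, and executable below are placeholders,
not values documented for PLEIADI.
"""
import subprocess
from pathlib import Path

job_script = """#!/bin/bash
#SBATCH --job-name=test_job
#SBATCH --partition=pleiadi        # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=36       # assuming dual-socket E5-2697 v4 nodes (18 cores per CPU)
#SBATCH --time=01:00:00
#SBATCH --output=%x_%j.out

srun ./my_simulation               # placeholder executable
"""

script_path = Path("test_job.sbatch")
script_path.write_text(job_script)

# On success, sbatch prints e.g. "Submitted batch job 12345"
result = subprocess.run(["sbatch", str(script_path)],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())
```

The same script can of course be written by hand and submitted directly with `sbatch test_job.sbatch`; `squeue -u $USER` then shows its state in the queue.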
Catania
- 1 frontend node
- 72 compute nodes without GPUs (12 with 256 GB of RAM and 60 with 128 GB of RAM)
- 6 compute nodes with 1 GPU each (4 with a Tesla K40m, 12 GB of GPU memory each, and 2 with a Tesla V100 PCIe, 16 GB of GPU memory each), each with 128 GB of RAM
- 1 storage volume of 174 TB with BeeGFS parallel filesystem
The table below summarizes the main features of the Catania PLEIADI cluster:
| Architecture | Linux x86_64 cluster |
| Node interconnect | Omni-Path HFI Silicon 100 Series, 100 Gbit/s |
| Service network | 1 Gbit/s Ethernet |
| CPU model | Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz |
| Number of nodes | 78 |
| Operating system | CentOS Linux release 7.9.2009 |
| Workload manager | SLURM 21.08.5 |
| Storage volume | 174 TB, BeeGFS parallel filesystem |
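Since six Catania nodes carry a GPU (Tesla K40m or Tesla V100), GPU jobs are typically requested through SLURM generic resources (GRES). The sketch below assumes GRES type strings such as "v100" and a partition named "gpu"; the real names are defined by the site's SLURM configuration and may differ:

```python
#!/usr/bin/env python3
"""Sketch: request one GPU on a Catania GPU node via SLURM generic resources.

The GRES type strings ("k40m", "v100") and the partition name are assumptions;
the actual names are set in the site's SLURM/GRES configuration.
"""
import subprocess

def gpu_job_script(gpu_type: str = "v100") -> str:
    """Return a batch script asking for a single GPU of the given (assumed) type."""
    return f"""#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --partition=gpu            # placeholder partition name
#SBATCH --gres=gpu:{gpu_type}:1    # one GPU; type string is an assumption
#SBATCH --mem=64G                  # GPU nodes have 128 GB of RAM in total
#SBATCH --time=00:30:00
#SBATCH --output=%x_%j.out

nvidia-smi                         # confirm which GPU was allocated
srun ./my_gpu_application          # placeholder executable
"""

if __name__ == "__main__":
    with open("gpu_test.sbatch", "w") as f:
        f.write(gpu_job_script("v100"))
    subprocess.run(["sbatch", "gpu_test.sbatch"], check=True)
```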
Trieste
- 1 frontend node
- 60 compute nodes without GPUs (all with 256 GB of RAM)
- 6 compute nodes with 1 GPU each (Tesla K80, with 128 GB of RAM)
- 1 storage volume of 480 TB with BeeGFS parallel filesystem
The table below summarizes the main features of the Trieste PLEIADI cluster:
| Architecture | Linux x86_64 cluster |
| Node interconnect | Omni-Path HFI Silicon 100 Series, 100 Gbit/s |
| Service network | 1 Gbit/s Ethernet |
| CPU model | Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz |
| Number of nodes | 66 |
| Operating system | CentOS Linux release 7.9.2009 |
| Workload manager | SLURM 19.05.0 |
| Storage volume | 480 TB, BeeGFS parallel filesystem |
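The 100 Gbit/s Omni-Path interconnect makes the Trieste CPU nodes suitable for multi-node MPI runs. As a sketch only (the partition name, the MPI module name, and the executable are placeholders, not documented PLEIADI values), a job spanning several nodes could be submitted as follows:

```python
#!/usr/bin/env python3
"""Sketch: a multi-node MPI run on the Trieste CPU nodes.

Partition name, module name, and executable are placeholders; launching
with srun lets SLURM place one MPI rank per allocated task.
"""
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=mpi_scaling
#SBATCH --partition=pleiadi        # placeholder partition name
#SBATCH --nodes=4                  # spread the job over 4 of the 60 CPU nodes
#SBATCH --ntasks-per-node=36       # assuming dual-socket E5-2697 v4 nodes (18 cores per CPU)
#SBATCH --mem=200G                 # CPU nodes have 256 GB of RAM
#SBATCH --time=02:00:00
#SBATCH --output=%x_%j.out

module load openmpi                # placeholder module name
srun ./my_mpi_application          # one MPI rank per task, placed by SLURM
"""

with open("mpi_scaling.sbatch", "w") as f:
    f.write(job_script)
subprocess.run(["sbatch", "mpi_scaling.sbatch"], check=True)
```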
Call for proposals
Call 2 – https://docs.google.com/spreadsheets/d/1B1P7ATzLBu6wW6dSRofriwUHeVBqu-Cj/edit#gid=391681008
Call 3 – Coming soon…
User Guide
The comprehensive user guide for PLEIADI can be accessed at the following URL:
https://pleiadi.readthedocs.io/en/latest/quickstart/index.html
This online resource gives a detailed overview of the first steps needed to make the most of PLEIADI's capabilities. Through the guide, users can learn how to request computing resources, follow the quick-start instructions, and explore the advanced features provided by the platform.
For more information, you can contact us at info.pleiadi@inaf.it