Data Center

The Green Data Center (GDC) of the University of Pisa is a state-of-the-art facility located in San Piero a Grado (Pisa). With its 104 racks, it is the largest university Data Center in Italy, as well as the only one to have received the “A” classification from AgID (the Italian Agency for Digital Italy).

As the heart of the IT infrastructure supporting the teaching and research activities of the University of Pisa, the Data Center provides the University's researchers with resources capable of competing on an equal footing with those of the most important research institutions in Europe.

The Computing Center is equipped with the latest available technologies for cooling, power supply, and power distribution. This next-generation infrastructure can support around 700 nodes, for a total of approximately 30K computing cores and more than 100 accelerators (GPUs) dedicated to High-Performance Computing and Artificial Intelligence research.

The machine room

The machine room is designed for efficient cooling through hot aisle containment.

Currently, the Data Center offers approximately 700 nodes for scientific computing, totaling 25K computing cores, 200 TB of RAM, and around 100 GPUs of various generations.

The available scientific computing systems include:

  • multiprocessor servers with large amounts of RAM (High Memory Nodes);
  • small-to-medium HPC clusters (16-32 nodes) with high-speed, low-latency interconnects (InfiniBand/Omni-Path);
  • single- and multi-GPU servers (including four NVIDIA DGX A100 and H100 systems);
  • many-core servers based on next-generation ARM processors.

The Data Center also hosts two virtualization infrastructures, one based on VMware and one on Microsoft technology, which make it possible to obtain computational resources quickly and easily (partly in self-service mode).

The computing systems are connected to a state-of-the-art storage infrastructure built on all-flash systems, reached through a high-speed interconnection network (25-400 Gb/s).

The network

The Data Center has high-speed and highly reliable internal and external connectivity.

The internal fabric uses a spine-leaf topology with 100 Gb/s links, both intra- and inter-data center. The connections are redundant both at the physical level (optical fiber) and at higher levels (network devices), so that there is no single point of failure.

The interconnection network is based on next-generation Ethernet switching at 25-400 Gb/s, complemented by InfiniBand/Omni-Path for HPC services. The Data Center is connected to the university network by a 200 Gb/s fiber link and to the GARR research network by a 100 Gb/s link.

Power and Cooling System

The power supply system consists of two independent lines (grid and generator) backed by dual UPS units. Together with the efficient cooling design, this allows for a low PUE: over the course of three years, a PUE of 1.2 has been recorded (for every kW absorbed by useful computing, only 200 W more were needed for cooling), whereas many data centers have a PUE of 2 (for every kW of useful computing, another kW is needed for cooling).
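As a quick check of these figures: PUE (Power Usage Effectiveness) is conventionally defined as total facility power divided by the power drawn by the IT equipment alone, so with 200 W of cooling overhead per kW of IT load,

\[
\mathrm{PUE} \;=\; \frac{P_{\mathrm{IT}} + P_{\mathrm{cooling}}}{P_{\mathrm{IT}}} \;=\; \frac{1\,\mathrm{kW} + 0.2\,\mathrm{kW}}{1\,\mathrm{kW}} \;=\; 1.2 ,
\]

while a PUE of 2 corresponds to a full kW of overhead for every kW of useful computing.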


