Scientists increasingly need extensive computing power to solve complex problems in physics, mechanics and dynamics. The Delft High Performance Computing Centre (DHPC) will deploy the infrastructure (hardware, software and staff) for TU Delft that enables researchers to perform complex analysis and modelling. At the same time, it will provide Bachelor's, Master's and PhD students with hands-on experience using the tools they will need in their careers.
Both high-performance simulations and high-performance data science are evolving rapidly and the combination of these techniques will lead to completely new insights into science and engineering, an increase in innovation, and the training of high-performance computing engineers for the future.
Due to rapidly evolving hardware and tools for numerical simulation, HPC has significantly changed the way fundamental research is conducted at universities. Simulations not only replace experiments, but also contribute valuable fundamental insights of their own. The results are visible across disciplines such as materials science, fluid dynamics, quantum mechanics, design optimization, big data mining and artificial intelligence.
Supercomputer on TU Delft campus
What makes the Delft High Performance Computing Centre special is its flexibility: there are few limitations regarding hardware and software, so the facility can be quickly adjusted in line with research and teaching requirements. When local resources are insufficient, cloud bursting to SURF or AWS is possible.
The Delft supercomputer will be ranked around 250th worldwide, with a speed of 2 petaflops; one petaflop is a million times a billion (10¹⁵) calculations per second.
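To put that number in perspective, a short back-of-the-envelope calculation helps. Only the 2-petaflop figure comes from this text; the matrix-multiply workload below is an illustrative assumption:

```python
# Back-of-the-envelope: what does 2 petaflops buy you?
PEAK_FLOPS = 2e15  # 2 petaflops = 2 * 10^15 floating-point operations per second

# Multiplying two dense n x n matrices costs roughly 2 * n^3 flops.
n = 100_000
flops_needed = 2 * n**3  # 2 * 10^15 flops for n = 100,000

seconds_at_peak = flops_needed / PEAK_FLOPS
print(f"{n} x {n} matrix multiply at peak speed: {seconds_at_peak:.0f} s")  # -> 1 s
```

Real codes rarely reach peak speed, so this is a lower bound on runtime, but it conveys the scale: a computation involving two quadrillion arithmetic operations finishes in about a second.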
In addition to traditional access to the HPC, a user-friendly web-based portal will also become available.
The cluster management software enables queueing of jobs and automatic assignment of appropriate resources to jobs.
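The text does not name the cluster management software; purely as an illustration, assuming a Slurm-style batch scheduler (a common choice on academic clusters), a job request with explicit resource requirements might be assembled like this. The partition name and program names are hypothetical placeholders:

```python
# Sketch of a batch-job request, assuming a Slurm-style scheduler.
# Partition, job and program names below are hypothetical placeholders.
job_script = """\
#!/bin/bash
#SBATCH --job-name=my-simulation
#SBATCH --partition=compute      # hypothetical partition name
#SBATCH --nodes=4                # request 4 compute nodes
#SBATCH --ntasks-per-node=48     # MPI ranks per node
#SBATCH --mem=64G                # memory per node
#SBATCH --time=02:00:00          # wall-clock limit: 2 hours

srun ./my_simulation input.dat
"""

# The scheduler queues the job and starts it once resources matching the
# request become free; submission would typically be:  sbatch job.sh
with open("job.sh", "w") as f:
    f.write(job_script)
```

The key idea is that the user declares *what* resources the job needs (nodes, tasks, memory, time) and the queueing system decides *where and when* it runs.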
The heart of the HPC is formed by 20,000 CPU cores in over 400 compute nodes, together with a high-speed parallel storage subsystem based on BeeGFS. All compute nodes and the storage system are interconnected with HDR100 InfiniBand technology for high-throughput, low-latency inter-node communication.
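The stated totals give a rough feel for the per-node configuration (simple arithmetic on the counts above; "over 400" is taken as exactly 400 for the estimate):

```python
# Rough per-node arithmetic from the stated totals (400 nodes taken as exact).
total_cores = 20_000
num_nodes = 400

cores_per_node = total_cores / num_nodes
print(f"~{cores_per_node:.0f} CPU cores per node")  # -> ~50 CPU cores per node
```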
The hardware layer is built on Fujitsu's standard x86 servers, configured as CPU-intensive compute nodes, high-memory nodes with up to 1.5 TB of memory, and GPU nodes equipped with Nvidia Tesla V100 cards, allowing applications with different requirements to run.
The DHPC also offers all of the advantages of a combined, central facility, which optimally complements the existing ICT facilities for login, authentication, storage and licenses.
An intensive training programme with courses, expert advice and user support is being developed.