High-performance research computer ForHLR I

From the end of September 2014 to April 2020, i.e. for just over five and a half years, the SCC operated the ForHLR (Phase) I high-performance research computer, which was jointly funded by the state of Baden-Württemberg and the federal government. ForHLR I was a parallel computer with more than 10,000 cores, equipped with Intel Xeon (Ivy Bridge EP) processors. The allocation of computing time on the ForHLR I was based on project applications, which were assessed by a steering committee.

ForHLR I was the first of two parallel computers; the second was put into operation in spring 2016. It consisted of more than 500 SMP nodes with 64-bit Intel Xeon processors and provided computing time for research projects whose computing needs required parallel jobs with processor counts in the three-digit range. Jobs with this degree of parallelization are now processed by the ForHLR (Phase) II.

Configuration of the ForHLR I

The ForHLR I high-performance research computer included:

  • 2 login nodes, each with 20 cores with a theoretical peak performance of 400 GFLOPS and 64 GB of main memory per node,
  • 512 "thin" computing nodes, each with 20 cores with a theoretical peak performance of 400 GFLOPS and 64 GB of main memory per node,
  • 16 "fat" computing nodes, each with 32 cores with a theoretical peak performance of 665.6 GFLOPS and 512 GB of main memory per node,
  • and an InfiniBand 4X FDR interconnect as the connection network.

The ForHLR I was a massively parallel computer with a total of 540 nodes, 10 of which were service nodes - not counting the file server nodes. All nodes - except for the "fat" nodes - had a clock frequency of 2.5 GHz; the "fat" nodes ran at 2.6 GHz. All nodes had local memory, local disks and network adapters. A single computing node had a theoretical peak performance of 400 or 665.6 GFLOPS, resulting in a theoretical peak performance of 216 TFLOPS for the entire system. The main memory across all computing nodes amounted to 41.1 TB. All nodes were interconnected by an InfiniBand 4X FDR interconnect.
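
As a plausibility check, the stated totals can be reproduced from the node counts, assuming the two login nodes are counted towards the system totals (an assumption on our part):

  512 x 400 GFLOPS + 16 x 665.6 GFLOPS + 2 x 400 GFLOPS ≈ 216.2 TFLOPS
  512 x 64 GB + 16 x 512 GB + 2 x 64 GB = 41,088 GB ≈ 41.1 TB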

The base operating system on each node was Red Hat Enterprise Linux (RHEL) 6.x. The cluster was managed with KITE, an open environment for the operation of heterogeneous computing clusters.

The scalable, parallel Lustre file system was connected as the global file system via a separate InfiniBand network. By using several Lustre Object Storage Target (OST) servers and metadata servers (MDS), both high scalability and redundancy in the event of individual server failures were achieved. After logging in, users were placed in their HOME directory, which was identical to the HOME directory on the InstitutCluster II, the hc3 and the bwUniCluster. However, only 1 GB of disk space (intended for configuration files) was available in this directory on the ForHLR I. Users were therefore expected to change directly to the directory referenced by the PROJECT environment variable; in this directory, the disk space approved by the steering committee was available to the project. Approximately 224 TB of disk space was available in the WORK directory. In addition, each node of the cluster was equipped with local disks for temporary data.
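
Sketched as shell commands, the intended workflow after login was roughly the following; the PROJECT variable is documented above, while the use of a WORK environment variable is an assumption made here purely for illustration:

  # the HOME directory offered only 1 GB and was meant for configuration files
  # change to the project directory holding the disk space granted by the steering committee
  cd $PROJECT
  # the WORK file system offered approx. 224 TB for large temporary data
  cd $WORK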

Detailed description of the nodes:

  • 2 20-way (login) nodes, each with 2 deca-core Intel Xeon E5-2670 v2 processors with a clock frequency of 2.5 GHz (max. turbo clock frequency 3.3 GHz), 64 GB of main memory and 5x1 TB of local disk space,
  • 512 20-way (computing) nodes, each with 2 deca-core Intel Xeon E5-2670 v2 processors (Ivy Bridge) with a clock frequency of 2.5 GHz (max. turbo clock frequency 3.3 GHz), 64 GB main memory and 2x1 TB local disk space,
  • 16 32-way (computing) nodes, each with 4 octa-core Intel Xeon E5-4620 v2 processors (Ivy Bridge) with a clock frequency of 2.6 GHz, 512 GB main memory and 8x1 TB local disk space and
  • 10 20-way service nodes, each with 2 deca-core Intel Xeon E5-2670 v2 processors with a clock frequency of 2.5 GHz and 64 GB of main memory.

A single deca-core Ivy Bridge processor had 25 MB of L3 cache and a system bus clocked at 1866 MHz; each individual core of the Ivy Bridge processor had 64 KB of L1 cache and 256 KB of L2 cache.


Access to the ForHLR I for project participants

Only secure procedures such as secure shell (ssh) and the associated secure copy (scp) were permitted for logging in and for copying data from and to the ForHLR I. The telnet and rsh mechanisms as well as other r-commands were disabled for security reasons. To log in to the ForHLR I, one of the following commands was used:

ssh [email protected] or ssh [email protected]
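
Data were copied analogously with scp; a brief sketch (the file and directory names below are purely illustrative placeholders, not actual paths on the system):

  # copy an input file from the local machine into the project directory on the ForHLR I
  scp input.dat [email protected]:/path/to/project/
  # copy a result file back to the local machine
  scp [email protected]:/path/to/project/result.dat .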

In order to obtain access authorization for the ForHLR I as a project participant, the corresponding access form, which was accessible via the website https://www.scc.kit.edu/hotline/formulare.php, had to be filled out, signed by the project manager and sent to the ServiceDesk.