Networks
Overview
(in German)
Internet Connection Uni Frankfurt
Network structure of the Department of Computer Science (as of April 2010)
HPC Cluster Computer Science (as of 2008)
RBI net segment (as of February 2007)
RBI net segment (as of February 2000)
RBI net segment (as of July 1997)
RBI net segment (as of June 1994)
FB20 net segment (as of June 1994)
University network structure (as of June 1994)
Network infrastructure
The infrastructure of the Institute for Computer Science is based on Ethernet and InfiniBand technology. In the
Ethernet domain, Gigabit and 10 Gigabit components are used. The links to the professorships are realised with
single-mode and multi-mode optical fibres.
The two HPC clusters of the Computer Science Department are connected through 10 Gigabit Ethernet. The nodes
of the clusters are linked by Gigabit Ethernet as well as InfiniBand. The link to the central computer center of
the university (Hochschulrechenzentrum / HRZ) is realised by a 10 Gigabit Ethernet connection.
Hardware
The present configuration consists of:
- a pool of 39 workstations for students in the Fischer rooms:
- P-Room: 15 systems with Intel i5-9400 processor, 16 GB memory, 512 GB SSD, Nvidia GTX 1650, 27" monitors
- E-Room: 8 systems with Intel i5-9400 processor, 16 GB memory, 512 GB SSD, Nvidia GTX 1650, 27" monitors
- K-Room: 10 systems with Intel Xeon/i5 processor, 8-16 GB memory, 512 GB SSD, 27" monitors
- O-Rooms: 6 systems with Intel Xeon/i7 processor, 8-32 GB memory, 512 GB SSD, 27" monitors
- a pool of 21 workstations for master students and final-year projects in the Computer Lab R026:
- A-Room: 21 systems with Intel i5-9500 processor, 32 GB memory, 1 TB SSD, Nvidia GTX 1660, 31.5" monitors
- HPC-Cluster
- Student Cluster (1 head node + 15 Xeon-based compute nodes)
- a network of central servers to realise central service functions for the department, e.g. NFS services,
SMB services, logon services, database services, name services, mail services, cluster services,
net-installation services and communication services like FTP, WWW and email

The workstations run Fedora Linux. The servers are operated with Windows Server 2016 and the Linux
derivatives CentOS, Rocky Linux and Alma Linux.
Naming
For historical and sentimental reasons, all our workstations and server machines bear names from
Greek mythology (in the beginning only names of Greek gods were used, but unfortunately there are not
enough gods). Workstations belonging to the same Ethernet segment bear names beginning with the
same first letter. Workstations of the RBI staff all begin with the letter h, with one exception: otto (Rehagel).
The main servers are named zeus, kronos, hera, helios, olymp, sphinx, hermes, helix, harpinna,
athene and poseidon.
Internet Domain
All computers belong to the internet domain rbi.informatik.uni-frankfurt.de. The workstations of the clusters
belong to the internet domain rbicl.informatik.uni-frankfurt.de.
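A host's full name is simply its machine name plus the domain suffix. As a minimal sketch, the following Python snippet resolves one of the servers named above; the fully qualified name is an assumption pieced together from the server name zeus and the RBI domain, and resolution will only succeed inside the university network:

    import socket

    # Assumed FQDN: the server name "zeus" combined with the RBI domain.
    fqdn = "zeus.rbi.informatik.uni-frankfurt.de"

    try:
        # Resolve the name to an IPv4 address via the configured resolver.
        address = socket.gethostbyname(fqdn)
        print(f"{fqdn} -> {address}")
    except socket.gaierror as err:
        # Fails outside the university network or if the name has changed.
        print(f"could not resolve {fqdn}: {err}")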
NIS Domain
For security and practical reasons the network is divided into two separate NIS domains, one for users and one
exclusively for the RBI staff. To the latter belong all machines whose names begin with the letter h, except hera,
helios, helix, harpinna and hermes.
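The membership rule can be stated as a simple check on the hostname. Below is a sketch in Python; the domain labels and the example host hektor are purely illustrative, only the first-letter rule and the exception list come from the text above:

    # Central servers whose names start with "h" but which, per the rule
    # above, do not belong to the staff-only NIS domain.
    STAFF_EXCEPTIONS = {"hera", "helios", "helix", "harpinna", "hermes"}

    def nis_domain(hostname: str) -> str:
        """Classify a host by the first-letter rule; labels are placeholders."""
        name = hostname.lower()
        if name.startswith("h") and name not in STAFF_EXCEPTIONS:
            return "staff"   # assumed label for the staff-only domain
        return "users"       # assumed label for the general user domain

    # hektor is a hypothetical staff workstation; hera and zeus are real
    # server names from the Naming section.
    for host in ("hektor", "hera", "zeus"):
        print(host, "->", nis_domain(host))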