The Teika campus data network and Data Center (DC) were upgraded within the framework of the Latvian Academic Network (LAT) implementation project (ERAF project Nr. 2DP/2.1.1.3.2.10/IPIA/VIAA/001, sub-activity "Information technology infrastructure and information systems improvement for scientific work", project "Development of Unified Academic Core Network of National Significance in Latvia"). Nearly all Latvian academic and state scientific institutes are connected to LAT.
The Teika campus unites state scientific institutes compactly located in the Teika district of Riga, among them the Latvian Institute of Organic Synthesis (OSI), the Institute of Electronics and Computer Science (EDI), the Latvian State Institute of Wood Chemistry (IWC), and the Institute of Physical Energetics (IPE).
Each institute is connected to the campus optical loop by a 10 Gbps channel (with the possibility of extension to 20 Gbps), and the Teika loop is in turn connected to the LAT core infrastructure by a 10 Gbps channel. LAT is part of the GEANT2 network and is therefore connected to the national research and education networks (NRENs) of all European countries (connection speed 2.5 Gbps). The connection speed to the main Internet provider is 10 Gbps.
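Channel speeds of this kind can be sanity-checked end to end with any standard throughput tool (e.g. iperf). Purely for illustration, the following is a minimal self-contained sketch in C++ using POSIX sockets; it is not part of the deployed infrastructure, the port number and transfer volume are arbitrary, and a single TCP stream usually needs several parallel connections to saturate a 10 Gbps link.

    // throughput_probe.cpp: minimal single-stream TCP throughput check.
    // Build:    g++ -O2 throughput_probe.cpp -o probe
    // Receiver: ./probe server 5201
    // Sender:   ./probe client <receiver-ip> 5201
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <string>
    #include <vector>

    int main(int argc, char** argv) {
        const size_t kChunk = 1 << 20;      // 1 MiB send/receive buffer
        const long long kTotal = 8LL << 30; // transfer 8 GiB in total
        std::vector<char> buf(kChunk, 0);

        if (argc >= 3 && std::string(argv[1]) == "server") {
            int ls = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in a{};
            a.sin_family = AF_INET;
            a.sin_addr.s_addr = INADDR_ANY;
            a.sin_port = htons(atoi(argv[2]));
            bind(ls, (sockaddr*)&a, sizeof a);
            listen(ls, 1);
            int cs = accept(ls, nullptr, nullptr); // serve one sender, then exit
            long long got = 0;
            ssize_t n;
            while ((n = read(cs, buf.data(), kChunk)) > 0) got += n;
            printf("received %.2f GiB\n", got / double(1LL << 30));
            close(cs);
            close(ls);
        } else if (argc >= 4 && std::string(argv[1]) == "client") {
            int s = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in a{};
            a.sin_family = AF_INET;
            a.sin_port = htons(atoi(argv[3]));
            inet_pton(AF_INET, argv[2], &a.sin_addr);
            if (connect(s, (sockaddr*)&a, sizeof a) != 0) { perror("connect"); return 1; }
            auto t0 = std::chrono::steady_clock::now();
            for (long long sent = 0; sent < kTotal; ) {
                ssize_t n = write(s, buf.data(), kChunk); // handle partial writes
                if (n <= 0) { perror("write"); return 1; }
                sent += n;
            }
            close(s); // signals end of stream to the receiver
            double sec = std::chrono::duration<double>(
                std::chrono::steady_clock::now() - t0).count();
            printf("%.2f Gbit/s over one TCP stream\n", kTotal * 8 / sec / 1e9);
        } else {
            fprintf(stderr, "usage: %s server PORT | client HOST PORT\n", argv[0]);
            return 1;
        }
        return 0;
    }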
The Teika campus DC server room was built in accordance with the TIER II+ standard. Many new computational and network resources were purchased and put into operation: computing resources, data storage, and software for neural network and organic synthesis process simulation and other scientific research. The DC computing resources include several virtual machine servers and an HPC (High Performance Computer) based on 12 IBM System x3650 nodes. Each node includes one NVIDIA Tesla K40c accelerator (2880 CUDA cores and 12 GB memory), so the total HPC core count is 12 × 2880 = 34560 CUDA cores, and the total operational memory is 192 GB. Later EDI additionally purchased an HPE Apollo 6500 Gen10 System (P00392-B21) node with four NVIDIA A100 accelerators (6912 CUDA cores and 40 GB memory each), adding 4 × 6912 = 27648 CUDA cores and 160 GB of GPU memory. The total EDI HPC CUDA core count is now 62208, and the total operational memory is 352 GB. Each HPC node has a network card with two 10 Gbps ports connected to a 10 Gbps switch (HP 5900AF-48XG-4QSFP+), which provides high-speed data exchange between the HPC nodes. The HPC is used in many projects where simulation of real processes is required. The new data storage (more than 100 TB) accommodates video archives and backups of research results and programs. The scientific software includes SolidWorks, ArcGIS, MathWorks products, and others.
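The per-accelerator core counts quoted above follow from the CUDA runtime API: the driver reports the number of streaming multiprocessors (SMs), and multiplying by the FP32 cores per SM of the architecture (192 on the Kepler-based K40c, 64 on the Ampere-based A100) gives 15 × 192 = 2880 and 108 × 64 = 6912 cores per accelerator. The following minimal sketch (not part of the deployed software; the file name and the two-entry cores-per-SM table are ours) prints this inventory for one node:

    // gpu_inventory.cpp: report CUDA cores and memory per GPU and per node.
    // Build: nvcc -O2 gpu_inventory.cpp -o gpu_inventory
    #include <cstdio>
    #include <cuda_runtime.h>

    // FP32 CUDA cores per streaming multiprocessor, covering only the two
    // architectures mentioned in the text (Kepler K40c and Ampere A100).
    static int coresPerSM(int major, int minor) {
        if (major == 3) return 192;              // Kepler: K40c has 15 SMs -> 2880 cores
        if (major == 8 && minor == 0) return 64; // Ampere GA100: A100 has 108 SMs -> 6912 cores
        return 0;                                // other architectures omitted
    }

    int main() {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "no CUDA devices visible\n");
            return 1;
        }
        long totalCores = 0;
        double totalMem = 0.0;
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, d);
            int cores = p.multiProcessorCount * coresPerSM(p.major, p.minor);
            double mem = p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0);
            totalCores += cores;
            totalMem += mem;
            printf("GPU %d: %s, %d SMs, %d CUDA cores, %.0f GiB\n",
                   d, p.name, p.multiProcessorCount, cores, mem);
        }
        printf("node total: %ld CUDA cores, %.0f GiB GPU memory\n", totalCores, totalMem);
        return 0;
    }

On one of the IBM x3650 nodes this would report a single K40c with 2880 cores and 12 GiB of GPU memory; on the Apollo 6500 node, four A100s totalling 27648 cores and 160 GiB, matching the figures above.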
DC resources can be used not only by scientific and academic institutions but also by commercial enterprises, provided they carry out scientific research jointly with the institutes.