| Administrative data | |
| --- | --- |
| System name | PhysOn |
| Brief description of the system | High-performance computing cluster |
| Owner | SU |
| Technical specifications | |
| --- | --- |
| Number of servers | 96 |
| Server specification | SuperMicro CPU-Quad |
| Processor specification | Xeon 5335; Xeon 5420; Xeon E5-2620; Xeon E5-2650v4; Tesla M2090 |
| CPU cores per server | 8, 8, 12, 24, 16 |
| Memory per server | 12 GB, 16 GB, 16 GB, 64 GB, 94 GB |
| Total number of CPU cores | 216 |
| Maximum number of parallel processes | 240 |
| Interconnect type | InfiniBand DDR & FDR |
| Interconnect latency | 1.1–1.5 μs |
| Interconnect bandwidth | 20–56 Gbps |
| Local file system type | NFS & Lustre |
| Total data storage | 48 TB |
| Accelerator type | NVIDIA Tesla M2090 & K80 |
| Number of accelerator cores | 6200 |
| Accelerators per server | 2 |
| Servers equipped with accelerators | 1 |
| Peak performance (Tflops, double precision) | 3.5 |
| Sustained performance (Tflops, double precision) | 3.2 |
| Operating systems | Ubuntu and CentOS |
| OS versions | 16.04 LTS; 7.x |
| Job management system | Sun Grid Engine & Slurm |
| Development tools | Intel VTune, Eclipse (incl. Intel XE support, PyDev, Photran), Nsight Eclipse (CUDA support), NVIDIA Visual Profiler, Intel Inspector, Jupyter Notebook, Jupyter Console, PyCharm, git |
| Compilers and libraries | Compilers: Intel Compilers, PGI Compilers, GNU C/C++/Fortran Compilers, OpenJDK; Interpreters: Python 2.7.x (Intel), Python 2.7.x (GCC), Intel Python 3, Python 3.6.x (GCC), TCL 8.5 (GCC), Perl 5.16 (GCC), Julia 1.1; Libraries/modules/add-ons: Intel MKL, Intel DAAL, OpenMPI (Intel), OpenMPI (GCC), MPICH2 (Intel), MPICH2 (GCC), HDF5 (Intel), HDF5+MPI (Intel), HDF5 (GCC), NetCDF (Intel), NetCDF (GCC), PyDAAL, NumPy, SciPy, H5py, PyTables, mpi4py, pandas, matplotlib, sympy, tensorflow-gpu, MDAnalysis, OpenVINO, CUDA+cuDNN (8.x, 9.x, 10.x) (a minimal MPI usage sketch follows the table) |
| Application software | GROMACS, Quantum Espresso, ABINIT, CP2K, Elk, WRF, Maple, Matlab, Mathematica, Siesta, Kshell, GAUSSIAN, LAMMPS, TensorFlow (with GPU support, on Intel Python 3; see the GPU check sketch after the table), NAMD, CHARMM, PG-Storm |
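
The MPI builds (OpenMPI, MPICH2) together with mpi4py and NumPy from the list above are enough to express a distributed job in a few lines of Python. The following is a minimal sketch only, assuming an MPI-enabled Python 3 environment like the ones listed; the array size and any module or queue names needed to run it on PhysOn are illustrative assumptions, not site documentation.

```python
# Minimal mpi4py + NumPy sketch (assumes one of the listed MPI builds and an
# MPI-enabled Python 3 environment; exact module names are site-specific).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank fills its own chunk of data in local memory ...
local = np.full(1_000_000, rank, dtype=np.float64)

# ... and the partial sums are combined on rank 0 over the interconnect.
total = comm.reduce(local.sum(), op=MPI.SUM, root=0)

if rank == 0:
    print(f"ranks: {size}, global sum: {total}")
```

In practice such a script would be launched through the batch system listed above (Sun Grid Engine or Slurm) with the desired number of ranks, e.g. via `mpirun` or `srun` inside a job script; the exact partitions and environment modules are site-specific.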
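
Because tensorflow-gpu and the CUDA/cuDNN toolkits (8.x–10.x) appear in the software list, a quick device-visibility check is a common way to confirm that a job actually reached the GPU-equipped node. This is a sketch assuming the TensorFlow 1.x API implied by those CUDA versions, not part of the cluster's own documentation.

```python
# Quick visibility check for the Tesla accelerators (assumes the listed
# tensorflow-gpu build with a matching CUDA/cuDNN version available).
import tensorflow as tf

# True only if TensorFlow was built with CUDA support and at least one GPU
# is visible to the current job.
print("GPU available:", tf.test.is_gpu_available())
```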