OpenMind Cluster Specifications

Software

The system runs CentOS 6.5, is managed through Bright Cluster Manager, and uses the SLURM job scheduler.
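
Jobs are submitted to SLURM with sbatch. Below is a minimal sketch of a batch script; it is written as a Python file because SLURM reads the #SBATCH comment lines before handing the script to the interpreter named in the shebang. The job name and resource values are illustrative only, not site defaults.

    #!/usr/bin/env python
    #SBATCH --job-name=hello_openmind   # illustrative job name
    #SBATCH --ntasks=1                  # a single task
    #SBATCH --cpus-per-task=4           # four cores on one worker
    #SBATCH --mem=8G                    # 8 GB of the node's RAM
    #SBATCH --time=00:10:00             # ten-minute wall-clock limit

    # The body is ordinary Python; SLURM only parses the #SBATCH lines above.
    import socket
    print("Running on %s" % socket.gethostname())

Submit the script with "sbatch script.py" and check its status with "squeue -u $USER".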

Hardware


Head node: openmind.mit.edu
•    2x Intel E5-2660 @ 2.20 GHz (8 cores each)
•    128 GB RAM
•    5 TB local space (used for /home)

Worker nodes: node001 to node016

•    2x Intel E5-2660 v2 @ 2.20 GHz (10 cores each)
•    256 GB RAM
•    800 GB local space
•    GPU nodes: node001 to node004
o    3x NVIDIA K20m each
•    hyper-threading enabled
•    virtualization extensions enabled


Large memory node: node017

•    4x Intel E5-4650 @ 2.70 GHz (8 cores each)
•    1 TB RAM
•    3 TB local space
•    hyper-threading enabled
•    virtualization extensions enabled

Worker nodes: node018 to node030

•    2x Intel E5-2660 v2 @ 2.20 GHz (10 cores each)
•    256 GB RAM
•    GPUs:
o    4x NVIDIA K20m in each of 10 nodes
o    2x NVIDIA K40m (from NVIDIA)
o    1x NVIDIA Titan Black
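
To land on one of the GPU-equipped workers, a job can request GPUs through SLURM's generic-resource mechanism. The sketch below assumes GPUs are exposed as a gres named "gpu"; the exact gres names and counts are site configuration, so treat the values as illustrative. When a GPU is granted this way, SLURM typically sets CUDA_VISIBLE_DEVICES for the job, which the script simply prints.

    #!/usr/bin/env python
    #SBATCH --gres=gpu:1      # request one GPU (assumed gres name "gpu")
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00

    # Report which GPU(s) SLURM assigned to this job, if any.
    import os
    print("CUDA_VISIBLE_DEVICES = %s" % os.environ.get("CUDA_VISIBLE_DEVICES", "<not set>"))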

Storage: 483 TB Lustre filesystem from Xyratex (now part of Seagate), mounted at /om/
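
As a quick sanity check before staging large data sets onto /om, free space on the mount can be read with a generic POSIX statvfs call; the snippet below is only a sketch and nothing in it is Lustre-specific.

    # Report total and available space on the Lustre mount at /om
    import os
    st = os.statvfs("/om")
    total_tb = st.f_frsize * st.f_blocks / 1e12
    avail_tb = st.f_frsize * st.f_bavail / 1e12
    print("/om: %.1f TB total, %.1f TB available" % (total_tb, avail_tb))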

Networking:

•    Mellanox FDR InfiniBand interconnect across compute nodes and Lustre storage
•    Gigabit Ethernet for monitoring
•    10 Gigabit uplink from OpenMind to the MIT network