Centre for Information Resource Management

Tejaswi HPC – Server Specifications

Complete Server Specifications

Sl. No | Workload    | Qty | Model          | CPU                      | GPU                     | RAM (GB) | ETH  | IB       | HDD/SSD
1      | CPU Node    | 102 | HPE DL360 G10+ | Intel 8358, 32C, 2.6GHz  | NA                      | 256      | 4×1G | 1×PX100G | 1×960GB
2      | GPU Node    | 8   | HPE DL360 G10+ | Intel 6336Y, 24C, 2.4GHz | 2×NVIDIA A100 80GB      | 256      | 4×1G | 1×PX100G | 1×960GB
3      | Master Node | 2   | HPE DL360 G10+ | Intel 6336Y, 24C, 2.4GHz | NA                      | 256      | 4×1G | 1×PX100G | 4×960GB
4      | Login Node  | 2   | HPE DL360 G10+ | Intel 6336Y, 24C, 2.4GHz | NA                      | 256      | 4×1G | 1×PX100G | 4×960GB
5      | AI/ML Node  | 1   | HPE A6500 G10+ | Intel 7543, 32C, 2.8GHz  | 8×NVIDIA HGX A100 80GB  | 1024     | 4×1G | 2×PX100G | 4×3.84TB
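A quick tally of the table above can be sketched as follows. This counts only what the table states (node quantities, GPUs per node, and RAM per node); it does not assume anything beyond the table, such as a per-node socket count.

```bash
#!/bin/sh
# Node counts taken directly from the specification table
CPU_NODES=102; GPU_NODES=8; MASTER_NODES=2; LOGIN_NODES=2; AIML_NODES=1

TOTAL_NODES=$((CPU_NODES + GPU_NODES + MASTER_NODES + LOGIN_NODES + AIML_NODES))

# A100 count: 2 per GPU node, plus 8 in the HGX AI/ML node
TOTAL_A100=$((GPU_NODES * 2 + AIML_NODES * 8))

# Aggregate RAM: 256 GB per node everywhere except the 1024 GB AI/ML node
TOTAL_RAM_GB=$(( (CPU_NODES + GPU_NODES + MASTER_NODES + LOGIN_NODES) * 256 + AIML_NODES * 1024 ))

echo "nodes=$TOTAL_NODES a100=$TOTAL_A100 ram_gb=$TOTAL_RAM_GB"
```

Run as a shell script, this prints `nodes=115 a100=24 ram_gb=30208`: 115 servers, 24 A100 GPUs, and roughly 30 TB of aggregate RAM across the cluster.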

1. CPU Node

A CPU Node is used for general-purpose computing tasks. It is suited to non-GPU-intensive workloads such as business applications, database management, and web hosting. CPU nodes perform well when tasks are largely sequential and do not need the massive parallelism that GPUs provide.

2. GPU Node

A GPU Node is designed for GPU-accelerated tasks that require heavy parallel processing, such as AI/ML model training, data science, and deep learning. These nodes use powerful GPUs to accelerate the computation of complex tasks, significantly reducing processing time.

3. Master Node

The Master Node is the central management server of the cluster. It schedules and distributes jobs across the compute nodes and keeps the system operating smoothly.

4. Login Node

A Login Node is the entry point through which users access the cluster. It handles authentication, interactive sessions, and job submission, keeping this activity off the compute nodes so their performance is unaffected.
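As an illustration of the login-node workflow, a minimal batch-job script is sketched below. It assumes a Slurm scheduler (the source does not name the scheduler in use); the partition name `gpu` and the script `train.py` are hypothetical placeholders.

```bash
#!/bin/bash
#SBATCH --job-name=demo       # job name shown in the queue
#SBATCH --partition=gpu       # hypothetical partition mapped to the GPU nodes
#SBATCH --nodes=1
#SBATCH --gres=gpu:2          # request both A100s on one GPU node
#SBATCH --mem=128G            # within a GPU node's 256 GB
#SBATCH --time=02:00:00

srun python train.py          # train.py is a placeholder workload
```

The user submits this from a login node (e.g. `sbatch job.sh`), and the scheduler running on the master node places the job on a free GPU node.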

5. AI/ML Node

The AI/ML Node is designed specifically for large-scale AI and ML workloads requiring high GPU performance and very large memory resources.

6. NVIDIA A100 80GB GPU

The NVIDIA A100 80GB GPU is a high-performance accelerator designed for AI, deep learning, and HPC workloads. It includes Tensor Cores and 80GB of HBM2e memory for training very large models.

7. NVIDIA HGX A100 80GB GPU

The NVIDIA HGX A100 80GB is a multi-GPU platform designed for extreme-scale AI model training and HPC computation. It uses NVLink for ultra-fast inter-GPU communication.
