NVIDIA DGX systems, offered in partnership with Netweb, provide a goal-driven portfolio for businesses aiming to harness breakthrough technologies such as AI/ML, high-performance computing (HPC), and deep learning.

NVIDIA DGX Systems for Deep Learning Include:

NVIDIA DGX H100 (640GB):

The system delivers exceptional performance for deep learning workloads. Key features include:

  • 32 petaFLOPS AI Performance
  • Dual x86 CPU & 2TB of System Memory
  • 8x NVIDIA H100 Tensor Core GPU SXM5
  • 640GB HBM3 GPU Memory with NVLink

NVIDIA DGX A100 (640GB):

It is one of the most power-packed machines from NVIDIA, ideal for running your deep learning applications. Highlight features include:

  • 5 petaFLOPS AI Performance
  • Dual AMD Rome 7742 CPU
  • 8x NVIDIA A100 Tensor Core GPU SXM4
  • 640GB HBM2e GPU Memory with NVLink

NVIDIA DGX Station A100 (320GB):

This is another gem in NVIDIA’s range of systems built for deep learning. Although not as powerful as the DGX H100 or DGX A100, it still delivers formidable performance. Highlight features include:

  • 2.5 petaFLOPS AI Performance
  • 4x NVIDIA A100 Tensor Core GPU SXM4
  • Up to 320GB HBM2e Memory with NVLink
  • Whisper Quiet Liquid Cooling

NVIDIA Hopper H100

NVIDIA’s Hopper-generation H100 Tensor Core GPU delivers exceptional security, performance, and scalability for every data center. With it, you can accelerate diverse workloads, from small enterprise jobs to extreme-scale HPC, and its performance lets innovators pursue their ambitions faster.

Augment Your Enterprise Performance with NVIDIA DGX™ Deployment, Powered by Netweb’s HPC and AI/ML Expertise!

High Availability with Impeccable Support from NVIDIA

NVIDIA’s customer programs and holistic software, hardware, and ML support are tailored to get your team organized quickly and your demanding workloads running.

Faster Infrastructure Set Up

Build the infrastructure of your choice by leveraging the power of NVIDIA DGX™ deployment and Netweb’s expertise in AI/ML and HPC.

Efficient Scaling

With input from Netweb’s subject-matter experts and the capability of NVIDIA DGX, you can scale compute, network, and storage at will as your workload requirements grow.

Game-Changing Performance

DLRM Training

Up to 3X Higher Throughput for AI Training on Largest Models

DLRM on HugeCTR framework, precision = FP16 | 1x DGX A100 640GB, batch size = 48 | 2x DGX A100 320GB, batch size = 32 | 1x DGX-2 (16x V100 32GB), batch size = 32. Speedups normalized to number of GPUs.
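The "speedups normalized to number of GPUs" note means each system's raw throughput is divided by its GPU count before comparison, so systems with different GPU counts compare fairly. A minimal sketch of that calculation (the throughput figures below are illustrative placeholders, not the measured DLRM results):

```python
def normalized_speedup(throughput_a, gpus_a, throughput_b, gpus_b):
    """Per-GPU speedup of system A over baseline system B.

    Each system's raw throughput is divided by its GPU count,
    which is the normalization described in the benchmark footnote.
    """
    per_gpu_a = throughput_a / gpus_a
    per_gpu_b = throughput_b / gpus_b
    return per_gpu_a / per_gpu_b

# Illustrative placeholder numbers (NOT measured values): an 8-GPU
# system processing 36M samples/s vs. a 16-GPU baseline at 24M samples/s.
speedup = normalized_speedup(36e6, 8, 24e6, 16)
print(f"{speedup:.1f}x per-GPU speedup")  # prints "3.0x per-GPU speedup"
```

Without normalization, the 8-GPU system would appear only 1.5x faster in aggregate; dividing by GPU count isolates the per-GPU gain.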


RNN-T Inference: Single Stream

Up to 1.25X Higher Throughput for AI Inference

MLPerf 0.7 RNN-T measured with (1/7) MIG slices. Framework: TensorRT 7.2, dataset = LibriSpeech, precision = FP16.

Big Data Analytics Benchmark


Up to 83X Higher Throughput than CPU, 2X Higher Throughput than DGX A100 320GB

Big data analytics benchmark | 30 analytical retail queries, ETL, ML, NLP on a 10TB dataset | CPU: 19x Intel Xeon Gold 6252 2.10 GHz, Hadoop | 16x DGX-1 (8x V100 32GB each), RAPIDS/Dask | 12x DGX A100 320GB and 6x DGX A100 640GB, RAPIDS/Dask/BlazingSQL. Speedups normalized to number of GPUs.