POD Reference Architectures for AI & HPC at scale

 

Scan AI, as a leading NVIDIA Elite Solution Provider, can deliver a variety of enterprise infrastructure architectures with either NVIDIA EGX, HGX or DGX servers at their centre, known as PODs. These reference architectures combine industry-leading NVIDIA GPU compute with AI-optimised flash storage from a variety of leading manufacturers and low-latency NVIDIA Networking solutions, in order to provide a unified underlying infrastructure on which to accelerate AI training whilst eliminating the design challenges, lengthy deployment cycles and management complexity traditionally associated with scaling AI infrastructure.

Although there is a vast variety of ways in which each POD infrastructure solution can be configured, there are four main architecture families - a Scan POD, based on NVIDIA EGX and HGX server configurations; an NVIDIA BasePOD, made up of 2-40 DGX H100 appliances; an NVIDIA SuperPOD, consisting of up to 140 DGX H100 appliances centrally controlled with NVIDIA Unified Fabric Manager; and an NVIDIA DGX GH200, designed exclusively for LLMs and generative AI. All these infrastructures then connect to a choice of enterprise storage options linked together by NVIDIA Networking switches. The sections below explore each solution further.

The Scan POD range of reference architectures is based around a flexible infrastructure kit list in order to deliver cost-effective yet cutting-edge AI training for any organisation. A Scan POD infrastructure consists of NVIDIA EGX or HGX servers - starting at just two nodes - connected via NVIDIA Networking switches to a choice of NVMe storage solutions. This can then be complemented by Run:ai Atlas software and supported by the NVIDIA GPU-optimised software stack available from the NVIDIA GPU Cloud (NGC).

Scan POD Servers

At the heart of a Scan POD architecture is either an NVIDIA-certified EGX or HGX GPU-accelerated server built by our in-house experts at 3XS Systems.

 

3XS EGX Servers


Up to 8x NVIDIA professional Ampere or Ada Lovelace PCIe GPUs
2x Intel 4th gen Xeon or AMD 4th gen EPYC CPUs with PCIe 5.0 support
Up to 2TB of DDR5 system memory
NVIDIA ConnectX Ethernet NICs / Infiniband HCAs
Up to 6x NVMe drives

 

3XS HGX Servers


Up to 8x NVIDIA A100 SXM4 GPUs
2x Intel 3rd gen Xeon or AMD 3rd gen EPYC CPUs with PCIe 4.0 support
Up to 2TB of DDR4 system memory
NVIDIA ConnectX Ethernet NICs / Infiniband HCAs
Up to 6x NVMe drives

Scan POD Management

The EGX and HGX systems are managed using Run:ai Atlas software to enable not only scheduling and orchestration of workloads, but also virtualisation of the POD's GPU resources. Run:ai Atlas automates resource management and consumption so that users can easily access GPU fractions, multiple GPUs or clusters of GPUs for workloads of every size and stage of the AI lifecycle. This ensures that all available compute can be utilised and GPUs never have to sit idle. Whenever extra compute resources are available, data scientists can exceed their assigned quota, speeding time to results and ultimately meeting business goals.


Centralise AI

Pool GPU compute resources so IT gains visibility and control over resource prioritisation and allocation


Maximise Utilisation

Automatic and dynamic provisioning of GPUs breaks the limits of static allocation to get the most out of existing resources


Deploy to Production

An end-to-end solution for the entire AI lifecycle, from developing to training and inferencing, all delivered in a single platform
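The pooling and quota-borrowing model described above can be sketched as a toy scheduler. This is a simplified illustration only - the team names, quotas and pool size are hypothetical, and it is not Run:ai's actual algorithm:

```python
# Toy model of a pooled GPU scheduler with fractional allocation and
# opportunistic quota borrowing. Illustration only -- not the Run:ai
# Atlas implementation; team names, quotas and pool size are made up.

class GPUPool:
    def __init__(self, total_gpus: float):
        self.total = total_gpus   # pooled capacity; fractions are allowed
        self.used = {}            # team -> GPUs currently held
        self.borrowed = {}        # team -> GPUs held above guaranteed quota

    def allocated(self) -> float:
        return sum(self.used.values())

    def request(self, team: str, gpus: float, quota: float) -> bool:
        """Grant a request while idle capacity exists; anything held above
        the team's guaranteed quota is marked as borrowed (a real scheduler
        could reclaim it when the owning team needs its share back)."""
        free = self.total - self.allocated()
        if gpus > free:
            return False          # pool exhausted - job queues
        self.used[team] = self.used.get(team, 0.0) + gpus
        self.borrowed[team] = max(0.0, self.used[team] - quota)
        return True

pool = GPUPool(total_gpus=8.0)                   # e.g. one 8-GPU node
assert pool.request("vision", 0.5, quota=4.0)    # a GPU fraction
assert pool.request("nlp", 2.0, quota=4.0)
assert pool.request("vision", 5.0, quota=4.0)    # borrows idle capacity
assert pool.borrowed["vision"] == 1.5            # holds 5.5 vs 4.0 quota
assert not pool.request("nlp", 1.0, quota=4.0)   # only 0.5 GPUs free
```

The key idea is that admission is limited by pool capacity rather than static per-team quotas, so idle GPUs are always usable.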

Scan POD Networking

Scan POD architectures can be configured with a choice of network switches, each fulfilling a specific function within the design depending on whether InfiniBand, Ethernet or both are being utilised.

 

NVIDIA QM9700 Switch


NVIDIA QM9700 switches with NDR InfiniBand connectivity link to ConnectX-7 adapters. Each server system has dual connections to each QM9700 switch, providing multiple high-bandwidth, low-latency paths between the systems.

 

NVIDIA QM8700 Switch


NVIDIA QM8700 switches with HDR InfiniBand connectivity link to ConnectX-6 adapters. Each server system has dual connections to each QM8700 switch, providing multiple high-bandwidth, low-latency paths between the systems.

 

NVIDIA SN4600 Switch


NVIDIA SN4600 switches offer 64 connections per switch to provide redundant connectivity for in-band management, at speeds of up to 200GbE. These switches are also used for storage appliances connected over Ethernet.

 

NVIDIA SN2201 Switch


NVIDIA SN2201 switches offer 48 ports to provide connectivity for out-of-band management. Out-of-band management provides consolidated management connectivity for all components in the Scan POD.

The Scan POD topology is flexible when it comes to configuration and scalability. Server nodes and storage appliances can be simply added to scale the POD architecture as demand requires.

Scan POD Storage

For the storage element of the Scan POD architecture, we have teamed up with PEAK:AIO to provide AI data servers that deliver the fastest AI-optimised data management around. PEAK:AIO's success stems from understanding the real-life values of AI projects - making ambitious AI goals significantly more achievable within constrained budgets while delivering the performance of a parallel filesystem with the simplicity of a NAS, all within a single 2U server. Furthermore, PEAK:AIO starts as small as your project needs and scales as you grow, removing the traditional requirement to over-invest in storage at the outset. A longstanding complication within high-performance storage has been the need for proprietary drivers, which can cause significant disruption to typical AI projects when OS or GPU tools are updated; PEAK:AIO is fully compatible with modern Linux kernels, requiring no proprietary drivers.


Secure Hosting

Accommodating a Scan POD architecture may not be possible on every organisation's premises, so Scan AI has teamed up with a number of secure hosting partners with UK and European based datacentres. This means you can be safe in the knowledge that the facility housing your infrastructure is ideally suited to running a Scan POD and accelerating your AI projects.

The DGX BasePOD is an NVIDIA reference architecture based around a specific infrastructure kit list in order to deliver cutting-edge AI training for the enterprise. A BasePOD infrastructure consists of NVIDIA DGX H100 appliances - ranging from two to 40 nodes - connected via NVIDIA Networking switches to a choice of enterprise storage solutions. This is then complemented by Base Command management software and the NVIDIA AI Enterprise software stack to form a complete solution.

BasePOD Servers

At the heart of a BasePOD architecture is the NVIDIA DGX H100 GPU-accelerated server appliance, providing exceptional compute performance powered by eight H100 GPU accelerators.

 

NVIDIA DGX H100


8x NVIDIA H100 GPUs
80GB memory per GPU
4x NVIDIA NVSwitch chips
2x Intel 4th gen Xeon 56-core CPUs with PCIe 5.0 support
2TB of DDR5 system memory
4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 NDR InfiniBand HCAs
3x dual-port NVIDIA ConnectX-7 NDR InfiniBand HCAs
2x 1.92TB M.2 NVMe drives for DGX OS
8x 3.84TB U.2 NVMe drives for storage/cache
11.3kW max power
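A quick sanity check of the aggregate capacities implied by the spec list above:

```python
# Aggregate per-appliance capacities from the DGX H100 spec list above
gpus, hbm_per_gpu_gb = 8, 80
cache_drives, drive_tb = 8, 3.84

total_gpu_memory_gb = gpus * hbm_per_gpu_gb   # pooled HBM across the node
total_cache_tb = cache_drives * drive_tb      # U.2 NVMe storage/cache

assert total_gpu_memory_gb == 640             # 640GB of GPU memory
assert round(total_cache_tb, 2) == 30.72      # 30.72TB of NVMe cache
```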

BasePOD Management

The DGX systems are managed and controlled by NVIDIA Base Command software. Using it, every organisation can tap the full potential of its DGX BasePOD investment with a platform that includes enterprise-grade orchestration and cluster management, libraries that accelerate compute, storage and network infrastructure, and system software optimised for running AI workloads.


Trusted by NVIDIA

The same software that supports NVIDIA’s thousands of in-house developers, researchers, and AI practitioners underpins every BasePOD.


Scheduling and Orchestration

Base Command provides Kubernetes, Slurm, and Jupyter Notebook environments for DGX systems, delivering an easy-to-use scheduling and orchestration solution.


Comprehensive Cluster Management

Full-featured cluster management automates the end-to-end administration of DGX systems - whether it’s initial provisioning, operating system and firmware updates, or real-time monitoring.


Optimised for DGX

The Base Command software stack is optimised for DGX BasePOD environments that scale from two-node to 40-node clusters, ensuring maximum performance and productivity.

Enhanced Management with Run:Ai

Run:ai Atlas integrates with NVIDIA Base Command, combining GPU resources into a virtual pool and enabling workloads to be scheduled by user or project across the available resources. By pooling resources and applying an advanced scheduling mechanism to data science workflows, Run:ai greatly increases the ability to fully utilise all available resources. Data scientists can increase the number of experiments they run, speed time to results and ultimately meet the business goals of their AI initiatives.

BasePOD Networking

DGX BasePODs can be configured with four types of network switches, each having a specific function within the design - there is a choice of InfiniBand switches depending on the DGX appliance used, supported by Ethernet switches for management and storage connectivity.

 

NVIDIA QM9700 Switch


NVIDIA QM9700 switches with NDR InfiniBand connectivity link to ConnectX-7 adapters. Each server system has dual connections to each QM9700 switch, providing multiple high-bandwidth, low-latency paths between the systems.

 

NVIDIA QM8700 Switch


NVIDIA QM8700 switches with HDR InfiniBand connectivity link to ConnectX-6 adapters. Each server system has dual connections to each QM8700 switch, providing multiple high-bandwidth, low-latency paths between the systems.

 

NVIDIA SN4600 Switch


NVIDIA SN4600 switches offer 64 connections per switch to provide redundant connectivity for in-band management, at speeds of up to 200GbE. These switches are also used for storage appliances connected over Ethernet.

 

NVIDIA SN2201 Switch


NVIDIA SN2201 switches offer 48 ports to provide connectivity for out-of-band management. Out-of-band management provides consolidated management connectivity for all components in the DGX BasePOD.

The BasePOD topology is made up of three networks - an InfiniBand-based compute network, an Ethernet fabric for system management and storage, and an out-of-band Ethernet network. Also included in the reference architectures are five CPU-only servers for system management. Two of these systems are used as the head nodes for the Base Command management software, while the three additional systems provide the platform for deployment-specific services - these could be login nodes for a Slurm-based deployment or Kubernetes master nodes supporting an MLOps-based partner solution. The below diagram depicts a typical BasePOD using the DGX H100 and QM9700 switches as an example - for all options, topology diagrams and interconnect details please see the full NVIDIA DGX BasePOD Reference Document.

BasePOD Storage

For the storage element of the BasePOD architecture, there are several options that Scan AI can provision. All are based on all-flash NVMe SSD hardware and software defined management platforms tried, tested and approved by NVIDIA.

 

NetApp BasePOD


Configured with NetApp AFF-A series storage appliances and the ONTAP AI platform.

 

DDN BasePOD


Configured with DDN A3I series storage appliances and software platform.

 

Dell-EMC BasePOD


Configured with Dell PowerScale or Isilon storage appliances and the OneFS platform.

Secure Hosting

Accommodating a DGX BasePOD architecture may not be possible on every organisation's premises, so Scan AI has teamed up with a number of secure hosting partners with UK and European based datacentres. This means you can be safe in the knowledge that the facility housing your infrastructure is ideally suited to running a BasePOD and accelerating your AI projects.

The DGX SuperPOD is an NVIDIA reference architecture based around a specific infrastructure kit list, designed to deliver hyperscale AI training environments that solve the world's most challenging computational problems. A SuperPOD infrastructure consists of NVIDIA DGX H100 appliances - ranging from 20 to 140 nodes - connected via NVIDIA Networking switches to a choice of enterprise storage solutions. This is then complemented by Unified Fabric Manager software and the NVIDIA AI Enterprise software stack to form a complete solution.

SuperPOD Servers

 

NVIDIA DGX H100


8x NVIDIA H100 GPUs
80GB memory per GPU
4x NVIDIA NVSwitch chips
2x Intel 4th gen Xeon 56-core CPUs with PCIe 5.0 support
2TB of DDR5 system memory
4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 NDR InfiniBand HCAs
3x dual-port NVIDIA ConnectX-7 NDR InfiniBand HCAs
2x 1.92TB M.2 NVMe drives for DGX OS
8x 3.84TB U.2 NVMe drives for storage/cache
11.3kW max power

SuperPOD Management

The DGX systems are managed and controlled by NVIDIA Base Command software, while the SuperPOD as a whole is overseen by NVIDIA Unified Fabric Manager (UFM). UFM revolutionises datacentre network management by combining enhanced, real-time network telemetry with AI-powered cyber intelligence and analytics to support scale-out InfiniBand clusters, and is available in three versions.


UFM Telemetry - Real-Time Monitoring

The UFM Telemetry platform provides network validation tools to monitor network performance and conditions, capturing and streaming rich real-time network telemetry information, application workload usage and system configuration to an on-premises or cloud-based database for further analysis.

• Switches, adapters and cables telemetry
• System validation
• Network performance tests
• Streaming of telemetry information to an on-premises or cloud-based database


UFM Enterprise - Fabric Visibility and Control

The UFM Enterprise platform performs automated network discovery and provisioning, traffic monitoring and congestion discovery. It also enables job schedule provisioning and integrates with industry-leading job schedulers and cloud and cluster managers, including Slurm and Platform Load Sharing Facility.

• Includes UFM Telemetry features
• Secure cable management
• Congestion tracking
• Problem identification and resolution
• Advanced reporting


UFM Cyber-AI - Cyber Intelligence and Analytics

The UFM Cyber-AI platform enhances the benefits of UFM Telemetry and UFM Enterprise, providing preventive maintenance and cybersecurity for lowering supercomputing OPEX.

• Includes UFM Telemetry and UFM Enterprise features
• Detects performance degradations
• Detects abnormal cluster behaviour
• Uses AI to make correlations
• Alerts when preventive maintenance is required

Enhanced Management with Run:Ai

Run:ai Atlas integrates with NVIDIA Base Command, combining GPU resources into a virtual pool and enabling workloads to be scheduled by user or project across the available resources. By pooling resources and applying an advanced scheduling mechanism to data science workflows, Run:ai greatly increases the ability to fully utilise all available resources. Data scientists can increase the number of experiments they run, speed time to results and ultimately meet the business goals of their AI initiatives.

SuperPOD Networking

DGX SuperPOD architectures can be configured with four types of network switches, each having a specific function within the design - there is a choice of InfiniBand switches depending on the DGX appliance used, supported by Ethernet switches for management and storage connectivity.

 

NVIDIA QM9700 Switch


NVIDIA QM9700 switches with NDR InfiniBand connectivity link to ConnectX-7 adapters. Each server system has dual connections to each QM9700 switch, providing multiple high-bandwidth, low-latency paths between the systems.

 

NVIDIA QM8700 Switch


NVIDIA QM8700 switches with HDR InfiniBand connectivity link to ConnectX-6 adapters. Each server system has dual connections to each QM8700 switch, providing multiple high-bandwidth, low-latency paths between the systems.

 

NVIDIA SN4600 Switch


NVIDIA SN4600 switches offer 64 connections per switch to provide redundant connectivity for in-band management, at speeds of up to 200GbE. These switches are also used for storage appliances connected over Ethernet.

 

NVIDIA SN2201 Switch


NVIDIA SN2201 switches offer 48 ports to provide connectivity for out-of-band management. Out-of-band management provides consolidated management connectivity for all components in the DGX SuperPOD.

The SuperPOD topology is made up of scalable units (SUs), where each SU consists of 20 DGX systems. This size optimises both performance and cost while minimising system bottlenecks so that complex workloads are well supported, with a single SU capable of delivering 48 PFLOPS of performance. The DGX systems have eight HDR or NDR InfiniBand host channel adapters (HCAs) for compute traffic, with each pair of GPUs having a pair of associated HCAs. For the most efficient network, there are eight network planes - one for each HCA position in the DGX system - connecting via eight leaf switches, one per plane. The planes are interconnected at the second level of the network through spine switches. Each SU has full bisection bandwidth to ensure maximum application flexibility. Furthermore, each SU has a dedicated management rack where the leaf switches are centralised. Other equipment for the DGX SuperPOD, such as the second-level spine switches or management servers, can sit in the empty space of an SU management rack or in a separate rack, depending on the datacentre layout. The below diagram depicts a typical SU layout.
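The per-SU figures above can be tallied directly. The only number not quoted in the text is the NDR line rate of 400Gb/s per port, which is NVIDIA's published figure and is assumed here:

```python
# Fabric tally for one SuperPOD scalable unit (SU), from the figures above
dgx_per_su = 20
hcas_per_dgx = 8              # eight compute HCAs per DGX system
planes = 8                    # one network plane per HCA position
leaf_switches = planes        # one leaf switch per plane

compute_links = dgx_per_su * hcas_per_dgx   # compute links per SU
links_per_leaf = dgx_per_su                 # each leaf sees one HCA per DGX
ndr_gbps = 400                              # published NDR line rate (assumed)
injection_tbps = compute_links * ndr_gbps / 1000

assert compute_links == 160
assert links_per_leaf == 20
assert injection_tbps == 64.0               # aggregate injection bandwidth
```

With 20 downlinks on each 64-port leaf, ample ports remain for the spine uplinks that provide full bisection bandwidth.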

The below diagrams depict the compute and storage architectures for a 140-node solution comprised of seven SUs. For all the options, topology diagrams and interconnect details please see the full NVIDIA DGX SuperPOD Reference Document.

SuperPOD Storage

For the storage element of the SuperPOD architecture, there are several options that Scan AI can provision. All are based on all-flash NVMe SSD hardware and software-defined management platforms tried, tested and approved by NVIDIA.

NetApp


Configured with NetApp AFF-A series storage appliances and the ONTAP AI platform.

DDN


Configured with DDN A3I series storage appliances and software platform.

Dell-EMC


Configured with Dell PowerScale or Isilon storage appliances and the OneFS platform.

Secure Hosting

Accommodating a DGX SuperPOD architecture may not be possible on every organisation's premises, so Scan AI has teamed up with a number of secure hosting partners with UK and European based datacentres. This means you can be safe in the knowledge that the facility housing your infrastructure is ideally suited to running a SuperPOD and accelerating your AI projects.

The NVIDIA DGX GH200 is designed to handle terabyte-class models for massive recommender systems, generative AI and graph analytics, offering 144TB of shared memory with linear scalability for giant AI models. Unlike existing AI supercomputers designed to support workloads that fit within the memory of a single system, the NVIDIA DGX GH200 offers a vast shared memory space across 256 Grace Hopper Superchips. This provides developers with nearly 500 times more fast-access memory and 48 times more bandwidth than previous-generation AI supercomputers, enabling trillion-parameter AI models. The DGX GH200 is pre-installed with NVIDIA Base Command, which includes an OS optimised for AI workloads, a cluster manager and libraries that accelerate compute, storage and network infrastructure. It also includes NVIDIA AI Enterprise, providing a suite of software and frameworks optimised to streamline AI development and deployment. This full-stack solution enables customers to focus on innovation and worry less about managing their IT infrastructure.

Grace Hopper Superchip

The Grace Hopper architecture brings together the groundbreaking performance of the NVIDIA Hopper GPU with the versatility of the NVIDIA Grace CPU in a single superchip, connected with NVIDIA NVLink Chip-2-Chip (C2C), a high-bandwidth, low-latency, memory-coherent interconnect. Each NVIDIA Grace Hopper Superchip in the NVIDIA DGX GH200 has 480GB of LPDDR5 CPU memory and 96GB of HBM3 GPU memory. The NVLink Switch System forms a two-level, non-blocking, fat-tree NVLink fabric to fully connect the 256 Grace Hopper Superchips, so each GPU in a DGX GH200 can access the memory of all other GPUs and the extended GPU memory of all NVIDIA Grace CPUs at an astonishing 900GB/s.
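The 144TB shared-memory figure follows directly from the per-superchip numbers above, counted in binary terabytes:

```python
# Shared-memory arithmetic for the DGX GH200, from the figures above
superchips = 256
lpddr5_gb = 480               # Grace CPU memory per superchip
hbm_gb = 96                   # Hopper GPU memory per superchip

total_gb = superchips * (lpddr5_gb + hbm_gb)
total_tb = total_gb // 1024   # binary terabytes

assert total_gb == 147456
assert total_tb == 144        # the quoted 144TB of shared memory
```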

Compute baseboards hosting Grace Hopper Superchips are connected to the NVLink Switch System using a custom cable harness for the first layer of NVLink fabric, while LinkX cables extend the connectivity in the second layer of NVLink fabric.

Every Grace Hopper Superchip in a DGX GH200 system is paired with an NVIDIA ConnectX-7 network card and a BlueField-3 DPU. For scaling beyond 256 GPUs, ConnectX-7 adapters can interconnect multiple DGX GH200 systems to scale into an even larger solution.

DGX GH200 Specification

DGX GH200 Storage

NetApp


Configured with NetApp AFF-A series storage appliances and the ONTAP AI platform.

DDN


Configured with DDN A3I series storage appliances and software platform.

Dell-EMC


Configured with Dell PowerScale or Isilon storage appliances and the OneFS platform.

Secure Hosting

Accommodating a DGX GH200 may not be possible on every organisation's premises, so Scan AI has teamed up with a number of secure hosting partners with UK and European based datacentres. This means you can be safe in the knowledge that the facility housing your infrastructure is ideally suited to running a DGX GH200 and accelerating your AI projects.