PNY NVIDIA DGX H100 P4387 640GB AI Server System
PNY NVIDIA DGX H100 Deep Learning AI System, 8x H100 GPUs, 640GB HBM3, 32 petaFLOPS FP8 Performance
- Delivery by DPD to your specified address | £11.50. Receive an SMS with a one-hour delivery window. Weekend, timed and European delivery options are available at checkout.
- Collect in store from our Bolton store, BL6 6PE | Free
- 48hr replacement: if you need to return this item, your replacement will be dispatched within 2 working days of your product arriving back at Scan.
PNY NVIDIA DGX H100 AI Infrastructure System
The Gold Standard for AI Infrastructure
The fourth-generation DGX AI appliance is built around the new Hopper architecture, providing unprecedented performance in a single system and seamless scalability with DGX SuperPOD enterprise-scale infrastructure. The DGX H100 features eight H100 Tensor Core GPUs with a total of 640GB of GPU memory, delivers up to 6x more performance than the previous generation of DGX appliances, and is supported by a wide range of NVIDIA AI software applications and expert support.
The World’s Proven Choice for Enterprise AI
The Cornerstone of Your AI Centre of Excellence
Artificial intelligence has become the go-to approach for solving difficult business challenges. Whether improving customer service, optimising supply chains, extracting business intelligence, or designing leading-edge products and services with generative AI and other transformer models, AI gives organisations across nearly every industry the mechanism to realise innovation. And as a pioneer in AI infrastructure, NVIDIA DGX™ provides the most powerful and complete AI platform for bringing these essential ideas to fruition.
NVIDIA DGX H100 powers business innovation and optimisation. Part of the DGX platform and the latest iteration of NVIDIA's legendary DGX systems, DGX H100 is the AI powerhouse that's the foundation of NVIDIA DGX SuperPOD™, accelerated by the groundbreaking performance of the NVIDIA H100 Tensor Core GPU. The system is designed to maximise AI throughput, providing enterprises with a highly refined, systemised, and scalable platform to help them achieve breakthroughs in natural language processing, recommender systems, data analytics, and much more. Available on-premises and through a wide variety of access and deployment options, DGX H100 delivers the performance needed for enterprises to solve the biggest challenges with AI.
AI has bridged the gap between science and business. No longer the domain of experimentation, AI is used day in and day out by companies large and small to fuel their innovation and optimise their business. As the fourth generation of the world's first purpose-built AI infrastructure, DGX H100 is designed to be the centrepiece of an enterprise AI centre of excellence. It's a fully optimised hardware and software platform that includes full support for the new range of NVIDIA AI software solutions, a rich ecosystem of third-party support, and access to expert advice from NVIDIA professional services. DGX H100 offers proven reliability, with the DGX platform being used by thousands of customers around the world spanning nearly every industry.
An Order-of-Magnitude Leap for Accelerated Computing
Break Through the Barriers to AI at Scale
As the world's first system with the NVIDIA H100 Tensor Core GPU, NVIDIA DGX H100 breaks the limits of AI scale and performance. It features 9x more performance, 2x faster networking with NVIDIA ConnectX®-7 smart network interface cards (SmartNICs), and high-speed scalability for NVIDIA DGX SuperPOD. The next-generation architecture is supercharged for the largest, most complex AI jobs, such as generative AI, natural language processing and deep learning recommendation models. An illustrative FP8 code sketch follows the headline figures below.
8x NVIDIA H100 Tensor Core GPUs
640GB Total GPU Memory
32 petaFLOPS FP8 Performance
4x NVIDIA NVSwitches
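The 32 petaFLOPS figure refers to FP8 throughput on the Hopper GPUs' fourth-generation Tensor Cores and Transformer Engine. As a purely illustrative sketch, assuming PyTorch plus NVIDIA's Transformer Engine library (transformer_engine.pytorch) and arbitrary layer sizes, the snippet below shows how FP8 execution is typically enabled on an H100-class GPU; it is not part of the DGX product description above.

```python
# Illustrative sketch: running a single layer in FP8 on an H100 GPU with
# NVIDIA Transformer Engine. Layer sizes and recipe settings are arbitrary.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

# HYBRID recipe: E4M3 format for forward activations, E5M2 for gradients
fp8_recipe = DelayedScaling(fp8_format=Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True).cuda()   # Tensor Core friendly dimensions
x = torch.randn(16, 4096, device="cuda", dtype=torch.bfloat16)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)                                   # matmul executes in FP8
y.sum().backward()
```

The HYBRID recipe is the common FP8 trade-off: the wider-range E5M2 format is reserved for gradients, while the higher-precision E4M3 format is used on the forward pass.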
NVIDIA AI Enterprise
AI and Data Science Tools/Frameworks
Ready-to-use, fully supported software that speeds developer success (a brief usage sketch follows the list below).
NVIDIA RAPIDS™ | NVIDIA TAO Toolkit | NVIDIA TensorRT™ | NVIDIA Triton™ Inference Server
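As a hedged illustration of how this layer is consumed in practice, the sketch below queries a model hosted by NVIDIA Triton Inference Server using the tritonclient Python package. The model name ("resnet50") and tensor names ("INPUT0", "OUTPUT0") are hypothetical placeholders that would have to match the deployed model's configuration.

```python
# Hedged sketch: querying a model served by NVIDIA Triton Inference Server.
# Model and tensor names below are placeholders; they must match the
# model's config.pbtxt in an actual deployment.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

response = client.infer(model_name="resnet50", inputs=[infer_input])
print(response.as_numpy("OUTPUT0").shape)
```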
NVIDIA Base Command
AI Workflow Management and MLOps
Get more models from prototype to production.
Job Scheduling & Orchestration
Ensure hassle-free execution of every developer's jobs (a minimal job-submission sketch follows this list).
Kubernetes | Slurm
Cluster Management
Effortlessly scale and manage one node to thousands.
Provisioning | Monitoring | Clustering | Managing
Network/Storage Acceleration Libraries & Management
Accelerate end-to-end infrastructure performance.
Network IO | Storage IO | In-network Compute | IO Management
DGX OS Extensions for Linux Distributions
Maximise system uptime, security, and reliability.
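To make the job-scheduling layer above concrete: the sketch below submits a single containerised 8-GPU training job to a Kubernetes cluster using the official kubernetes Python client. The container image tag, job name, namespace, launch command, and the assumption that the NVIDIA device plugin exposes the nvidia.com/gpu resource are all illustrative; it is not a documented Base Command workflow, and Slurm-based sites would use their own submission tooling instead.

```python
# Illustrative sketch only: submitting one 8-GPU training job via the
# Kubernetes Python client. Image tag, job name, namespace and GPU resource
# name are assumptions, not part of the DGX/Base Command documentation.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the cluster

container = client.V1Container(
    name="train",
    image="nvcr.io/nvidia/pytorch:24.01-py3",            # hypothetical NGC image tag
    command=["torchrun", "--nproc_per_node=8", "train.py"],
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "8"}),
)
job = client.V1Job(
    metadata=client.V1ObjectMeta(name="dgx-train-job"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(containers=[container], restart_policy="Never")
        ),
        backoff_limit=0,
    ),
)
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```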
A New Era of Performance
8x NVIDIA H100 GPUs WITH 640GB OF TOTAL GPU MEMORY
Delivers unparalleled performance for large-scale AI and HPC.
4x NVIDIA NVSWITCHES™
Scalable interconnects with high-speed communication.
8x SINGLE-PORT NVIDIA CONNECTX-7, 2x DUAL-PORT NVIDIA CONNECTX-7
Delivers accelerated networking for modern cloud, artificial intelligence, and enterprise workloads.
2x INTEL XEON PLATINUM 8480C CPUS AND 2 TERABYTES OF SYSTEM MEMORY
Powerful CPUs for the most intensive AI jobs.
30 TERABYTES OF NVMe STORAGE
High-speed storage for maximum performance.
Find out more about: NVIDIA DGX H100