AI Optimised Storage Solutions
Optimised Storage for all AI Workloads
GPU-accelerated computing only works as intended if the GPUs in question can receive data consistently and rapidly enough to keep utilisation at a maximum, significantly shortening the time needed to reach results. AI training places particularly heavy demands on the attached storage, so the Scan AI team has created a portfolio of options to deliver data at high speed to GPU servers.
Optimised for GPU Acceleration
Maximum GPU Utilisation
The majority of servers designed for AI workloads contain multiple GPUs, and the key is to keep them working as hard as possible, for as much of the time as possible. The right all-flash storage enables data transfer at a rate sufficient to keep the combined GPU memory consistently saturated, so that results are achieved in the fastest time frame possible.
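A quick way to sanity-check whether a storage volume can feed data-hungry GPUs is to measure its sequential read throughput. The sketch below is a rough, illustrative Python check using only the standard library, not a substitute for a proper benchmark such as fio; page-cache effects will inflate the result unless the file is much larger than RAM.

```python
import os
import tempfile
import time


def sequential_read_gbps(path: str, size_mb: int = 256, block_kb: int = 1024) -> float:
    """Write a scratch file of size_mb megabytes, then time a sequential
    read of it in block_kb-sized chunks and return the rate in GB/s.
    Rough only: reads served from the page cache will look far faster
    than the underlying storage device."""
    block = os.urandom(block_kb * 1024)
    with open(path, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the device

    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(block_kb * 1024):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e9


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        scratch = tmp.name
    try:
        print(f"Sequential read: {sequential_read_gbps(scratch):.2f} GB/s")
    finally:
        os.remove(scratch)
```

Pointing the scratch file at the mounted AI storage volume (rather than a local temp directory) gives a first-order indication of whether the array can sustain the rates the GPUs need.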
Low Latency
Storage appliances using technologies such as NVMe interfaces ensure that data is not only transferred at an incredible rate, but also with minimal latency from command to action. This is another factor in delivering lightning-fast storage capability, and one that should not be overlooked when weighing budget against the required time to results.
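Latency is best observed with small random reads rather than large sequential ones. The following standard-library sketch is illustrative only: it reports the median time of small random reads from a file, and reads served from the operating system's page cache will appear far faster than the device itself.

```python
import os
import random
import statistics
import time


def random_read_latency_us(path: str, reads: int = 1000, block: int = 4096) -> float:
    """Median latency, in microseconds, of small random reads from path.
    Rough only: cached reads will understate true device latency."""
    size = os.path.getsize(path)
    samples = []
    with open(path, "rb") as f:
        for _ in range(reads):
            offset = random.randrange(0, max(size - block, 1))
            start = time.perf_counter()
            f.seek(offset)
            f.read(block)
            samples.append((time.perf_counter() - start) * 1e6)
    return statistics.median(samples)
```

Run against a large file on the candidate storage mount, this gives a feel for command-to-action latency; NVMe-backed arrays should report figures orders of magnitude below spinning-disk systems.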
Scalable
The nature of AI projects is that expansion of GPU capability is very likely - put simply, you’ll need more servers. When this happens, however, you don’t want to have to replace your storage at the same time. All our options are capable of supporting multiple multi-GPU appliances, so scalability isn’t an issue in the short term, and capacity can be added to the storage in the longer term too.
NetApp AI Storage Solutions
The NetApp ONTAP AI architecture delivers groundbreaking performance. It comprises a NetApp AFF A-Series appliance - an all-flash fabric-attached storage system - linked to one or more NVIDIA DGX or HGX servers by NVIDIA Mellanox switches. The system has been designed, tested and validated to deliver excellent training and inferencing performance.
AFF A-series
The AFF A-series systems support end-to-end NVMe technologies, from NVMe-attached SSDs to front-end NVMe over Fibre Channel (NVMe/FC) host connectivity. These systems deliver the industry’s lowest latency for an enterprise all-flash array, making them a superior choice for driving the most demanding workloads and applications. With a simple software upgrade to the modern NVMe/FC SAN infrastructure, you can drive more workloads with faster response times, without disruption or data migration.
PEAK:AIO AI-Optimised Storage
PEAK:AIO is a cost-effective software platform that unlocks the full potential of GPU compute resources so that less funds are wasted on legacy storage. Developed exclusively for the ultra-low latency, high-bandwidth demands of AI data servers, PEAK:AIO is designed for direct connection to single NVIDIA DGX, HGX and EGX servers, or to small to mid-size AI clusters of up to 10 GPU servers using InfiniBand or Ethernet switches.
PEAK:AIO AI Data Servers
PEAK:AIO has worked with a variety of system integrators and global OEMs including Scan 3XS Systems and Dell to certify its software on a variety of hardware platforms. All platforms use the same ultra-low latency NVMe-oF, delivering blistering transfer rates of over 80GB/s using NFS without the need to install any drivers on the GPU servers. Just purchase the capacity and performance you need today and feel secure that as you scale, so can your capacity and performance.
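Because the data is presented over standard NFS, a GPU server can attach to such a store with an ordinary mount. The fragment below is a hypothetical example of mounting an NFS export over RDMA on Linux; the server name, export path and mount point are illustrative, and 20049 is the conventional NFS/RDMA port.

```shell
# Illustrative only: hostname, export path and mount point are placeholders.
# Requires NFS client support with RDMA (nfs-utils and rdma-core packages).
sudo mount -t nfs -o vers=3,rdma,port=20049 \
    peakaio01:/export/datasets /mnt/ai-data
```

No vendor-specific driver is needed on the GPU server itself, which is the point the paragraph above makes: the client side remains plain NFS.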
DDN AI Storage Solutions
The DDN A³I (Accelerated, Any-Scale AI) architecture breaks new ground for AI and deep learning. Engineered from the ground up for the AI-enabled datacentre, DDN A³I solutions with NVIDIA DGX and HGX servers accelerate end-to-end data pipelines for AI workloads of any scale. They are designed to provide extreme amounts of performance and capacity backed by a jointly engineered, validated architecture.
DDN A³I Series
DDN A³I series appliances are optimised with NVIDIA DGX servers at every layer of hardware and software to ensure data delivery and storage is fast, responsive and reliable. To meet the requirements of a variety of workloads, DDN A³I leverages the DDN AI200 and AI7990 storage appliances. The AI200 is an all-NVMe, fully-integrated parallel file storage appliance that delivers 20GB/s of throughput and over 350K IOPS to applications, while the AI7990 is specifically optimised to keep GPU computing resources fully utilised, ensuring maximum efficiency while easily managing tough data operations.
Dell AI Storage Solutions
Dell offers a number of pre-validated PowerScale architectures designed for enterprises that want to manage their AI productivity, not their storage. The storage systems are powerful yet simple to install, manage, and scale to virtually any size, and have been optimised and tested with NVIDIA DGX and HGX servers at every stage of their development.
PowerScale Storage
The PowerScale family includes platforms configured with the PowerScale OneFS operating system. OneFS provides the intelligence behind the highly scalable, high-performance modular storage solution that can grow with your business. The new PowerScale all-flash platforms co-exist seamlessly in the same cluster with your existing Isilon nodes to drive your traditional and modern applications.
IBM AI Storage Solutions
Data is the fuel that powers AI, but it can become trapped or stored in a way that makes it difficult or cost prohibitive to maintain or expand. Customers need to unleash that data so it can expand from edge to inference in a simple and cost-effective infrastructure. IBM Storage for AI makes data simple and accessible for a hybrid multi-cloud infrastructure with AI storage solutions that fit your business model.
Elastic Storage System
The IBM Elastic Storage System (ESS) is a modern implementation of software-defined storage, making it easier for you to deploy fast, highly scalable storage for AI and big data. With lightning-fast NVMe storage technology and the industry-leading file management capabilities of the IBM Spectrum Scale platform, ESS 3000 nodes can grow to vast scale and are designed to seamlessly support a POWER9 server deployment.