Cray announces new AI offerings
Seattle, WA-based Cray Inc. announced on March 28 that it is adding new options to its line of CS-Storm GPU-accelerated servers, along with improved fast-start AI configurations designed to help organizations move from AI proof-of-concept projects through pilots and into production.
Cray is enhancing its CS-Storm series of GPU-accelerated systems with a new four-GPU version, the CS-Storm 500NX 4-GPU server: a 1U server with two Intel Xeon CPUs and four NVIDIA Volta GPUs, designed for customers whose AI models and HPC applications perform best at lower GPU-to-CPU ratios. With support for NVIDIA Volta GPUs, the new system is well suited to applications ranging from deep learning neural network training and inference to high-performance computing.
For many organizations, implementing machine and deep learning is a journey that data science and IT teams undertake, moving from investigation to proof of concept to production applications. Different AI use cases require distinct combinations of machine intelligence tools, model designs and compute infrastructure, and no single system can address the entire spectrum of uses and models. Factors such as I/O throughput, GPU-to-CPU ratio and GPU memory directly affect performance and, ultimately, the success of an AI application.
“As companies approach AI projects, choices in system size and configuration play a crucial role,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “Our customers look to Cray Accel AI offerings to leverage our supercomputing expertise, technologies and best practices. Whether an organization wants a starter system for model development and testing, or a complete system for data preparation, model development, training, validation and inference, Cray Accel AI configurations provide customers a complete supercomputer system.”
“We are seeing a wide range of customer use cases for GPU accelerated computing but with different configuration requirements, even within a category like deep learning neural network training,” Kohout said. “Adding a smaller form factor allows our customers to choose the right node configuration for their application needs.”