DOE, Cerebras Systems working on supercomputer-scale AI
Cerebras Systems, based in Los Altos, CA, announced on September 17 a partnership with the U.S. Department of Energy (DOE) to apply supercomputer-scale AI to the massive deep learning experiments being pursued at DOE laboratories for basic and applied science and medicine. Argonne National Laboratory and Lawrence Livermore National Laboratory are the first labs announced in Cerebras’ multi-year, multi-laboratory partnership, with more to follow in the coming months. The partnership comes on the heels of Cerebras’ introduction last month of the Wafer Scale Engine (WSE), the largest chip ever built.
“The stand-up of DOE’s new Artificial Intelligence and Technology Office underscores the broad importance of AI to all of our mission, business and operational functions,” said Dr. Dimitri Kusnezov, DOE’s Deputy Under Secretary for Artificial Intelligence & Technology. “We are excited to partner with innovative companies like Cerebras Systems to push the frontiers of AI. The strategic deployment of high-performance AI systems with next-generation innovative technologies like Cerebras’ Wafer Scale Engine to build and defend national competitive advantage is very much at the heart of Secretary of Energy Rick Perry’s vision and in line with President Trump’s executive order on AI dated February 11, 2019.”
“We are honored and proud to partner with the Department of Energy and the talented researchers at Argonne National Laboratory and Lawrence Livermore National Laboratory,” said Andrew Feldman, co-founder and CEO of Cerebras Systems. “Together we aim to push the boundaries of AI technologies by combining DOE’s unmatched computing capabilities with the largest and highest performing AI processor ever built – the Cerebras WSE. In partnership, we aim to gain traction on a diverse set of grand challenges that will touch virtually everything we do.”
In August, Cerebras Systems announced the WSE, a single chip that contains more than 1.2 trillion transistors. With an area of 46,225 square millimeters, the WSE is the largest chip in the world and enables AI at supercomputer scale. The WSE is 56.7 times larger than the largest graphics processing unit, which measures only 815 square millimeters and contains only 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth. The massive size and resources of the WSE make it an ideal instrument to accelerate the Department of Energy’s numerous deep learning experiments across its mission, business and operations, including basic and applied science and medicine.
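As a quick sanity check on the comparison above, the ratios can be computed directly from the figures quoted in this article (a minimal sketch; the "largest graphics processing unit" referenced is assumed to be an 815 mm², 21.1-billion-transistor GPU die, per the numbers given):

```python
# Figures as quoted in the article; the GPU numbers are assumed to refer
# to the largest GPU die available at the time (815 mm^2, 21.1B transistors).
wse_area_mm2 = 46_225
gpu_area_mm2 = 815
wse_transistors = 1.2e12
gpu_transistors = 21.1e9

area_ratio = wse_area_mm2 / gpu_area_mm2
transistor_ratio = wse_transistors / gpu_transistors

print(f"Area ratio: {area_ratio:.1f}x")              # ~56.7x, matching the article
print(f"Transistor ratio: {transistor_ratio:.1f}x")  # ~57x by transistor count
```

The 56.7× figure in the text is the ratio of die areas; the transistor-count ratio works out to roughly the same factor.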
“The opportunity to incorporate the largest and fastest AI chip ever—the Cerebras WSE—into our advanced computing infrastructure will enable us to dramatically accelerate our deep learning research in science, engineering and health,” said Rick Stevens, head of computing at Argonne National Laboratory. “It will allow us to invent and test more algorithms, to more rapidly explore ideas, and to more quickly identify opportunities for scientific progress.”
“Integrating Cerebras technology into the Lawrence Livermore National Laboratory supercomputer infrastructure will enable us to build a truly unique compute pipeline with massive computation, storage, and, thanks to the Wafer Scale Engine, dedicated AI processing,” said Bronis R. de Supinski, CTO of Livermore Computing at LLNL. “This unique opportunity for public-private partnership with a cutting-edge AI partner will help us meet our mission and push the boundaries of managing the increasingly complex and large data sets from which we have to make decisions.”
When it comes to AI compute, bigger is better. Big chips process information more quickly, producing answers in less time than clusters of small chips. By accelerating all the components of AI training, the Cerebras WSE trains models faster than alternative approaches. Unlike graphics processors, which were designed primarily for graphics, the WSE is designed from the ground up for AI work. It contains fundamental innovations that advance the state of the art by solving decades-old technical challenges that limited chip size, such as cross-reticle connectivity, yield, power delivery, and packaging.
“Cerebras’ ability to partner with leading supercomputer sites indicates the performance potential of the Wafer Scale Engine,” said Linley Gwennap, principal analyst at The Linley Group. “The processor’s tight coupling of compute resources and memory on a massive scale, enabled by the startup’s innovative engineering solutions, makes it uniquely suited to solving supercomputer-caliber problems.”