Supermicro Collaborates with Intel to Deliver Large Scale Distributed Training AI Systems
"Supermicro is excited to cooperate with
The Intel Nervana NNP-T addresses memory constraints and is designed to scale out across systems and racks more easily than today's solutions. As part of the validation process, Supermicro integrated 8 NNP-T processors, dual 2nd Generation Intel® Xeon® Scalable processors, and up to 6TB of DDR4 memory per node, supporting both PCIe card and OAM form factors. Supermicro NNP-T systems are expected to be available mid-year 2020.
"Supermicro has validated our Deep Learning (DL) solution and is helping us prove the Nervana NNP-T system architecture, including card and server design, interconnect, and rack," said
With high compute utilization and a high-efficiency memory architecture for complex deep learning models, the Supermicro NNP-T AI System is built to validate two key real-world considerations: accelerating the time to train increasingly complex AI models and doing so within a given power budget. The system enables faster AI model training on images and speech, more efficient oil and gas exploration, more accurate medical image analytics, and faster autonomous driving model generation.
"We are collaborating with
Supermicro is showcasing the NNP-T AI System with PCIe cards at SC19, November 17–22, 2019, in booth #1211, at the Colorado Convention Center in
Supermicro (SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server
All other brands, names and trademarks are the property of their respective owners.
View original content to download multimedia: http://www.prnewswire.com/news-releases/supermicro-collaborates-with-intel-to-deliver-large-scale-distributed-training-ai-systems-300959590.html
Greg Kaufman, Super Micro Computer, Inc., firstname.lastname@example.org
Super Micro Computer, Inc.
980 Rock Avenue
San Jose, CA 95131