Facebook opens up its internal AI training hardware and custom-built chips

Few organizations use artificial intelligence on the scale that Facebook Inc. does. The social network’s deep learning models perform 200 trillion predictions each day, a level of output made possible by purpose-built hardware designed from the ground up to run neural networks. At the Open Compute Summit today in San Jose, California, Facebook open-sourced three of the core building blocks that make up its infrastructure. They include a powerful server engineered for the sole purpose of training deep learning models and a pair of similarly specialized, internally designed chips.

The training phase of AI projects is often the most hardware-intensive aspect of the entire development workflow. Before an algorithm can process live data, engineers have to feed it immense quantities of training information to help it learn what patterns to look for and how. In the case of Facebook, the company’s programmers draw upon a repository of over 3.5 billion public images to hone their models.

The AI training server it open-sourced today helps speed up the process. Dubbed Zion, the machine is powered by eight central processing units that each sport a generous amount of so-called DDR memory and can share this memory with one another to coordinate processing.
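The training process described above can be illustrated in miniature. The sketch below is a toy example of supervised training, not Facebook's actual code: a simple logistic-regression model repeatedly sees labeled examples and nudges its weights to reduce prediction error. Hardware like Zion applies this same basic loop to billions of images at once.

```python
import math
import random

random.seed(0)

# Toy dataset: 2-D points labeled 1 if x0 + x1 > 1, else 0.
# Stands in for the labeled images a real training run consumes.
examples = []
for _ in range(200):
    x = (random.random(), random.random())
    examples.append((x, 1.0 if x[0] + x[1] > 1.0 else 0.0))

w = [0.0, 0.0]   # model weights, adjusted during training
b = 0.0          # bias term
lr = 0.5         # learning rate

def predict(x):
    """Sigmoid of a weighted sum: the model's probability estimate."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# The training loop: show the model every labeled example many times,
# updating the weights a little each time to shrink the error.
for epoch in range(100):
    for x, y in examples:
        err = predict(x) - y          # gradient of log loss w.r.t. z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b    -= lr * err

# After training, the model has "learned what patterns to look for."
accuracy = sum((predict(x) > 0.5) == (y == 1.0)
               for x, y in examples) / len(examples)
```

A production system differs mainly in scale: the model has millions of parameters instead of three, the dataset has billions of examples, and the loop is parallelized across many processors, which is exactly the workload Zion's shared-memory CPU design targets.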
