
New AI server designs to accelerate services of Facebook, Microsoft


At the Open Compute Project U.S. Summit on Wednesday, social media giant Facebook and technology giant Microsoft introduced new open-source server designs that will bring faster responses from their artificial-intelligence services and help them offer more such services.

Facebook serves over 100 million hours of video, 400 million Messenger users, and more than 95 million photos and videos posted to Instagram. To manage this heavy load, Facebook’s servers rely on machine-learning techniques such as image recognition.
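
To give a rough sense of the kind of image-recognition workload involved (not Facebook’s actual pipeline), the sketch below runs a single inference pass with a pretrained classifier; the framework, model choice, and file name are assumptions made for illustration.

    # Illustrative only: one image-recognition inference pass with a pretrained
    # classifier. Model, framework and file name are assumptions, not
    # Facebook's actual stack.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet50(weights="IMAGENET1K_V1").eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    image = Image.open("photo.jpg")          # hypothetical uploaded photo
    batch = preprocess(image).unsqueeze(0)   # add a batch dimension

    with torch.no_grad():
        predicted_class = model(batch).argmax(dim=1).item()
    print(predicted_class)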

Microsoft faces similar demands; it uses machine-learning techniques for Cortana, its AI assistant.

Both companies have therefore released open-source hardware designs for servers that deliver faster responses. Many other companies and financial organizations are expected to follow suit.

Facebook’s Big Basin, termed ‘JBOG’ for Just a Bunch Of GPUs, is a standalone box that, when connected to separate server and storage boxes, delivers high-end computing performance for machine learning. By decoupling storage, processing and networking units in data centers, it helps them cut electricity consumption through shared cooling and power resources.

It houses eight Nvidia Tesla P100 GPU accelerators in a mesh architecture connected through the NVLink interconnect.
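
As a minimal sketch of how software might use all eight GPUs in such a box, the snippet below replicates a model across every visible GPU and splits each batch among them; it assumes a PyTorch environment, and the model and batch sizes are placeholders rather than anything described in the article.

    # Minimal sketch: replicate a model across every visible GPU and split each
    # batch among them, roughly how an eight-GPU box like Big Basin is used.
    import torch
    import torch.nn as nn

    gpu_count = torch.cuda.device_count()           # would report 8 on Big Basin
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

    if gpu_count > 1:
        model = nn.DataParallel(model)              # one replica per GPU
    device = "cuda" if gpu_count > 0 else "cpu"
    model = model.to(device)

    batch = torch.randn(256, 1024, device=device)   # placeholder training batch
    output = model(batch)                           # forward pass split over GPUs
    print(output.shape)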

Microsoft’s Project Olympus offers more room for AI co-processors. Microsoft also announced the HGX-1, a GPU accelerator developed with Ingrasys and Nvidia that can be scaled to link together 32 GPUs.
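
To illustrate how a workload can span that many GPUs, here is a hedged sketch using PyTorch’s DistributedDataParallel with one process per GPU; the launch command, model, and data are all illustrative assumptions, not Microsoft’s software.

    # Illustrative sketch: one training process per GPU, gradients synchronized
    # over NCCL. Launch with e.g.:  torchrun --nproc_per_node=8 train.py
    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")      # one process per GPU
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = DDP(nn.Linear(1024, 10).cuda(local_rank),
                    device_ids=[local_rank])
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        data = torch.randn(64, 1024).cuda(local_rank)       # placeholder batch
        target = torch.randint(0, 10, (64,)).cuda(local_rank)

        loss = nn.functional.cross_entropy(model(data), target)
        loss.backward()
        optimizer.step()
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()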

The new server design has a universal motherboard slot that will accommodate advanced server chips from Intel and AMD. It also extends beyond x86 to ARM, with support for Cavium’s ThunderX2 chips and Qualcomm’s Centriq 2400.

Kushagra Vaid, general manager of Azure infrastructure at Microsoft, said in a blog post that ARM support for Project Olympus will be one of the biggest achievements of the new server design.

Ian Buck, vice-president and general manager of accelerated computing at Nvidia, said, “The new OCP designs from Microsoft and Facebook show that hyperscale data centers need high-performance GPUs to support the enormous demands of AI computing.”
