On Wednesday, Microsoft unveiled two custom-designed computing chips, marking its entry into the group of major technology companies internalizing key technologies to manage the high costs of delivering artificial intelligence (AI) services.
The company stated that these chips, intended for internal use, will enhance its subscription software and Azure cloud computing services. During the Ignite developer conference in Seattle, Microsoft introduced the Maia chip to accelerate AI computing tasks and support its $30-a-month Copilot service for business software users and developers creating custom AI services.
Designed for running large language models, the Maia chip underpins Microsoft’s Azure OpenAI service and stems from the company’s collaboration with ChatGPT’s creator, OpenAI. The development reflects Microsoft’s strategy to address the high costs of delivering AI services, which can be significantly more expensive to run than traditional services such as search engines.
Microsoft plans to unify its AI efforts across products using a standard set of AI models, with the Maia chip central to that initiative. Scott Guthrie, executive vice president of Microsoft’s cloud and AI group, said the approach allows the company to offer better, faster, and more cost-effective solutions to its customers.
In 2024, Microsoft will also offer Azure services running on the latest chips from Nvidia and Advanced Micro Devices, and it is currently testing OpenAI’s GPT-4 on AMD’s chips. According to Ben Bajarin of Creative Strategies, the Maia chip will enable Microsoft to sell cloud-based AI services until PCs and smartphones become capable enough to run them locally.
Microsoft’s second chip, Cobalt, serves as both a cost-saving measure and a competitive response to Amazon Web Services’ Graviton chips. Cobalt, a central processing unit built on Arm Holdings technology, has already been tested running Microsoft Teams. The company plans to sell access to Cobalt to compete directly with Amazon’s offerings.
At the event, Microsoft provided limited technical details about the chips, which are manufactured using 5-nanometer technology from Taiwan Semiconductor Manufacturing Co. Rani Borkar, corporate vice president for Azure hardware systems and infrastructure, noted that the Maia chip would use standard Ethernet network cabling, unlike the expensive custom networking technology previously used with Nvidia in supercomputers for OpenAI. Microsoft is moving towards more standardized solutions, Borkar added.