
British AI company Graphcore is trying to sell its accelerator business, perhaps to OpenAI

The big picture: AI algorithms are spreading rapidly, and the demand for GPUs and other specialized chips designed to accelerate AI workloads is continuously increasing. Graphcore could offer an intriguing alternative to Nvidia’s GPUs, but despite its potential, the company is struggling to attract buyers for its products and is now up for sale.

After its unsuccessful attempt to capitalize on the recent artificial intelligence craze, Graphcore is seemingly seeking a buyer among foreign organizations interested in AI chip applications. The British fabless semiconductor company has developed a line of Intelligence Processing Units (IPUs), featuring a massively parallel design capable of holding an entire machine learning model inside the processor.

Graphcore describes its IPU chips as the most complex processors in the world. Paired with the Poplar SDK software stack, Graphcore’s latest IPU (the Colossus MK2 GC200) packs 59.4 billion transistors, 1,472 processor cores, and an “unprecedented” 900MB of integrated cache RAM, enough to run nearly 9,000 independent program threads in parallel.
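The core and thread counts above line up neatly: dividing the roughly 9,000 parallel threads across 1,472 cores implies six hardware threads per core. That per-core figure is an inference from the numbers quoted here, not a spec taken from Graphcore, as this back-of-the-envelope sketch shows:

```python
# Back-of-the-envelope check on the Colossus MK2 GC200 figures above.
# Assumption: threads are spread evenly across cores; the resulting
# 6-threads-per-core value is inferred, not quoted from Graphcore.
cores = 1472
total_threads = 8832  # the "nearly 9,000" independent parallel threads

threads_per_core = total_threads // cores
print(threads_per_core)            # 6 hardware threads per core
print(cores * threads_per_core)    # 8832, i.e. "nearly 9,000"
```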

Despite their impressive specifications, IPU chips are not selling as well as expected. Graphcore was valued at $2.8 billion in its latest funding round in 2020, and current rumors suggest that the company could now be sold for around $500 million. According to sources from The Telegraph, potential buyers may include Arm, SoftBank, and even OpenAI, the company behind ChatGPT.

The companies involved declined to comment directly, though one source said Arm was not engaged in any discussions about a potential acquisition. Graphcore is in dire need of cash but appears to be struggling to raise the additional funds it needs to avoid going under. Revenue fell by 46 percent over the past year, while losses widened.

As Graphcore CEO Nigel Toon argued some years ago, IPU chips should in theory be quite effective at the massively parallel computations AI algorithms require, potentially outperforming today’s GPUs while consuming far less energy.

Energy usage is starting to become a significant issue for generative AI services, as traditional GPUs require a lot of power, and improvements aren’t expected anytime soon. OpenAI could indeed transform the massively parallel chip design developed by Graphcore into a new-generation hardware platform for its future large language models, although nothing is certain yet.
