IBM Launches Telum, Its New AI Chip

IBM has announced Telum, a new CPU chip that will allow IBM clients to run deep-learning inference at scale. Its centralised design lets clients apply the full power of the on-chip AI processor to AI-specific workloads, making it well suited to financial-services tasks such as fraud detection, loan processing, clearing and settlement of trades, anti-money laundering, and risk analysis.

A Telum-based system is planned for the first half of 2022. “Our goal is to continue improving AI hardware compute efficiency by 2.5 times every year for a decade, achieving 1,000 times better performance by 2029,” said IBM in a press release.
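The stated target compounds year over year. As a rough illustration (this is simple arithmetic on the quoted 2.5x rate, not an IBM projection), a steady 2.5x annual gain crosses the 1,000x mark after about eight years:

```python
# Compound a steady yearly efficiency gain, as quoted in IBM's press release.
# Illustrative arithmetic only; "1,000 times by 2029" is IBM's round number.

def cumulative_gain(yearly_factor: float, years: int) -> float:
    """Total improvement after compounding `yearly_factor` once per year."""
    return yearly_factor ** years

for years in range(1, 11):
    print(f"Year {years:2d}: {cumulative_gain(2.5, years):>8,.1f}x")

# A 2.5x yearly gain reaches ~610x after 7 years and ~1,526x after 8,
# so the quoted 1,000x figure is consistent with roughly eight years
# of compounding from the chip's announcement.
```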

The chip contains eight processor cores running at clock frequencies above 5GHz, optimised for the demands of enterprise-class workloads. A completely redesigned cache and chip-interconnect infrastructure provides 32MB of cache per core. The chip packs 22 billion transistors and 19 miles of wire across 17 metal layers.

According to a recent Federal Trade Commission report, consumers reported losing more than $3.3 billion to fraud in 2020, up from $1.8 billion in 2019. With Telum, financial institutions will be able to move from fraud detection to a fraud prevention posture, catching instances of fraud while the transaction is still ongoing.

In traditional computing systems, calculations are performed by repeatedly transferring instructions and data between memory and the processor. AI workloads, however, have much higher computational requirements and operate on large quantities of data. As AI is infused into application workflows, it therefore becomes critical to have a heterogeneous system, with CPU and AI cores tightly integrated on the same chip, to support very low-latency AI inference. Telum is built around exactly this approach.
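The benefit of that tight integration can be sketched with a toy latency model. The numbers and names below are hypothetical, chosen only to illustrate why eliminating an off-chip round trip matters for scoring a transaction while it is still in flight; they are not Telum measurements:

```python
# Toy latency model (hypothetical figures, not IBM's): contrast inference on
# an off-chip accelerator, which pays a data-transfer cost per request, with
# an on-chip accelerator that shares memory with the CPU cores.

OFF_CHIP_TRANSFER_US = 50.0  # hypothetical off-chip round trip per request
ON_CHIP_TRANSFER_US = 1.0    # hypothetical on-die data movement
COMPUTE_US = 20.0            # hypothetical inference compute time

def per_request_latency(transfer_us: float, compute_us: float) -> float:
    """End-to-end latency for one inference request, in microseconds."""
    return transfer_us + compute_us

off_chip = per_request_latency(OFF_CHIP_TRANSFER_US, COMPUTE_US)
on_chip = per_request_latency(ON_CHIP_TRANSFER_US, COMPUTE_US)
print(f"off-chip accelerator: {off_chip} us per request")
print(f"on-chip accelerator:  {on_chip} us per request")
```

Under these assumed numbers the transfer cost dominates the off-chip path, which is the intuition behind putting the AI cores on the same die as the CPUs.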

Kumar Gandharv, PGD in English Journalism (IIMC, Delhi), is setting out on a journey as a tech journalist at AIM. A keen observer of national and IR-related news.
