Global spending on AI systems is expected to reach $154bn in 2023, according to a new forecast by the International Data Corporation (IDC).

The predicted figure represents a spending increase of nearly 27pc compared to last year and includes spending on AI software, hardware and services for AI-centric systems.

The IDC also predicts that AI spending will surpass $300bn by 2026, as more companies integrate AI into their products and services.

AI has been a growing market for years, but recent landmark products have drawn more attention to the sector, with companies big and small racing to utilise AI.

IDC senior market research analyst Mike Glennon said that – for both large and small companies – those that are slow to adopt AI “will be left behind”.

“AI is best used in these companies to augment human abilities, automate repetitive tasks, provide personalised recommendations and make data-driven decisions with speed and accuracy,” Glennon said.

The IDC predicts the banking and retail sectors will be involved in the largest AI investments this year, followed by the professional services and manufacturing sectors.

Combined, these industries are expected to account for more than half of all AI-centric spending this year, according to the IDC.

“In the future, both government-level urban issues and life issues that are closely related to everyone will enjoy the dividends brought by AI technology and eventually usher in AI for all,” said IDC senior market analyst Xueqing Zhang.

The US is predicted to account for half of all the world’s AI spending, followed by Western Europe at 20pc, with China as the third-largest market.

Google’s latest AI model

Meanwhile, Google has showcased a demo of its new “generalist robotics model”, called PaLM-E, which is designed to integrate AI capabilities into robotic systems.

Last August, Google researchers revealed that they were testing the use of large language models to help robots respond better to complex and abstract requests from humans.

For example, if someone says “I spilled my drink, can you help?”, the robot would make the decision to get a sponge and clean the mess, rather than needing to be told “pick up a sponge”.

This research was originally done using the large language model PaLM, but Google’s latest model combines text with images by using “raw streams of robot sensor data”.

“The resulting model not only enables highly effective robot learning, but is also a state-of-the-art general-purpose visual-language model, while maintaining excellent language-only task capabilities,” Google said in a blog post.

Leigh Mc Gowran
