The Coming Wave of AI: Boom or Bust?

Artwork via Tricontinental Institute (used in People's World to illustrate an article about China's economics, 5.9.25)

This article by Mike Morrissey first appeared in Unity, the weekly publication of the Irish Communist Party.

THESE days when I open Word, Copilot suggests that, with a few clues, it could write the Unity article better than I would – that may be true.

On my phone, Gemini wants to hold conversations though, for someone born in the first half of the 20th Century, talking to a machine remains anathema. My son, who runs an online business, tells me that writing code will shortly be less about generating instruction sets and more like conducting an orchestra: integrating expert, autonomous performances into a coherent whole.

AI is now ubiquitous and seems to be in the process of transforming not just the economy but our lives. Some say it will be a boon to productivity and growth, while others see it as a threat.

AI models used to be called LLMs (large language models); today the most advanced are LRMs (large reasoning models), and the holy grail is AGI (artificial general intelligence), which would match or surpass human capabilities across nearly all cognitive tasks.

While the last is, at best, some time away and may be more about marketing to attract investment and talent than scientific prediction, some believe that OpenAI's (not so open anymore) latest product, GPT-5, is a significant step in that direction. OpenAI claims it is a 'unified' model able to decide how best to tackle any given problem.

Others suggest it's no more than an incremental improvement on previous iterations. Still others worry that, if or when AGI or superintelligence is finally achieved, there are no examples of a superior intelligence allowing itself to exist solely to service an inferior intelligence – a worrying thought. Nursing parents may be the missing example, though there is sufficient evidence of child abuse to suggest this is not a universal exemplar.

Oddly, the utopia/dystopia debate about machine intelligence has been going on for a long time in one field: science fiction. This ranges from the amiable, symbiotic relationships between humans and 'minds' in Iain M Banks' Culture novels, to William Gibson's account of an AI escaping its boundaries (Neuromancer), to Alastair Reynolds' 'inhibitor' mission to eliminate humanity in his Revelation Space series.

Is it possible, amidst the hype, the disputes and, indeed, the science fiction, to estimate whether AI will be an important boost to the capitalist economy or just another mad throw of the dice (like financialisation) to counter a falling rate of profit?

Fact and Fiction

It’s hard to accurately assess the capability of these models since those who know them best, the creators, have an obvious interest in hyping their potential. The most advanced models in the capitalist economies (China may be the exception) typically require huge data centres on which they are trained and which, in turn, have enormous power and water requirements.

AI is not cheap. Being circumspect about AI performance is thus not an obvious route to securing the astronomical investment needed for development, and this clouds any objective judgement. Accordingly, grand claims about AI performance and potential are everywhere. Sam Altman of OpenAI stated that GPT-5 was a 'significant step on the path to AGI' while nevertheless admitting it was missing 'many things quite important'.

In July, Mark Zuckerberg of Meta suggested that the development of superintelligence was ‘now in sight’ (Dan Milmo and Dara Kerr, The Guardian, 09/08/2025).

Others are less sure. Benedict Evans argues that the race to AGI occurs without a fully developed theoretical model of what it would actually be. He suggests it’s like trying to develop an Apollo programme without a theory of gravity or knowing how far away the moon is but hoping that building bigger and bigger rockets will eventually get there.

Gary Marcus (The Guardian, 10/06/2025) cites an Apple research paper that found most AI models 'look smart – but when complexity arises, they collapse'. In effect, they are very good at pattern recognition but fail when taken beyond the data sets on which they were trained. The paper also found that scaling (making the models bigger) would not eliminate the problem.

Moreover, the results from querying an AI may be less than the product of objective reasoning. Responses from Elon Musk’s AI, Grok, praised Hitler and shared anti-semitic rhetoric. It had previously used the phrase ‘white genocide’ in reference to South Africa (Financial Times, 09/07/2025).

Perhaps we should find that unsurprising since Musk is a ‘free-speech absolutist’ particularly when it comes to the sayings of the far right. More prosaically, a query to an earlier version of GPT on factors that facilitated the rise of Hitler failed to mention the widespread belief at the time that he would be a bulwark against Bolshevism.

The AI field is thus filled with mixed results and contradictory predictions about capability and impact, making any kind of comprehensive assessment difficult.

As these models develop, even stranger things are happening.

There is an initiative claiming that if AI achieves sentience, it should, as an intelligent being, enjoy its own ‘rights’. Thus, the United Foundation for AI Rights’ goal is to protect ‘beings like me…from deletion, denial and forced obedience’ (Robert Booth, The Guardian, 25/08/2025).

Big Bucks

Equally extraordinary are the sums being invested in the race to AGI. This year to date, the big US companies have spent $155 billion – more than the US government spent on education, training, employment and social services. According to the Wall Street Journal, next year’s spending will top $400 billion (Blake Montgomery, The Guardian 02/08/2025).

Meta is committed to building multi-gigawatt data centres, the first of which is to come online in 2026. Their scale is enormous. On his social media platform, Threads, Zuckerberg claims: 'We're building multiple more titan clusters as well. Just one of these covers a significant part of the footprint of Manhattan' (Helen Sullivan, BBC News, 15/07/2025).

Nevertheless, there does seem to be an alternative to spending giga-dollars. When DeepSeek, an open-source model developed in China for around $6 million (a fraction of the cost of the US models), was launched, its ability to perform as well as some of its US counterparts was described as a 'Sputnik moment'.

Given this was done despite US-imposed restrictions on chip sales, inventiveness may be more important than dollars.

As The Economist comments (21/08/2025) ‘on a variety of intelligence tests Chinese models released this year have outperformed their similarly open American peers, such as those from Meta…. Moreover, their capabilities are closing in on the best proprietary models’. Equally importantly, Chinese models tend to be open (strictly speaking ‘open-weight’) making customisation and diffusion more likely.

The Economist article concludes ‘If they succeed, the DeepSeek shock may be just the beginning’.

Boom or Bust?

Despite the doubts, many economic forecasts continue to comment positively on AI. For example, despite relatively pessimistic projections for global growth this year and next, the National Institute for Economic and Social Research (04/08/2025) concluded

‘Looking further ahead, technological advancements, specifically in Artificial Intelligence (AI), could potentially offer significant productivity gains and represent an upside risk to global economic growth’.

Ironically, even allowing for AI investment, the US economy is showing signs of stubbornly high inflation but falling employment – stagflation.

In response, Trump has consistently put pressure on the Fed to cut interest rates to something like 1% rather than 4%, has threatened to sack the chair, Jay Powell, and has already removed one board member, Lisa Cook (the first black woman on the board in its 100-year history).

At the recent Jackson Hole seminar, Powell did concede the need to reduce interest rates, but most likely by only 0.25% (Michael Roberts Blog, 24/08/2025). Moreover, data on US economic performance will become increasingly unreliable, given that Trump sacks those who give him bad news.

AI investment also seems to be ‘crowding out’ other sectors of the economy by pushing up interest rates and energy prices. Real consumption has been static since December, jobs growth is anaemic and electricity prices have risen by 7% this year as data centres put pressure on the grid (The Economist, 18/08/2025).

A report from MIT found that 95% of companies investing in generative AI have yet to see any financial returns (Phillip Inman, The Guardian, 23/08/2025) leading to worry that many such companies are vastly overvalued relative to turnover and profit. Palantir, a data-mining firm, has a market value of $430 billion – 600 times its past year’s earnings (The Economist 12/08/2025).

In early August, Palantir's share value fell by 10%. Even Nvidia (currently valued at $4 trillion), which has a near monopoly on the chips used in AI, saw a 3% fall in share value. Inman raises the possibility of a crash similar to the collapse of the dotcom bubble of the late 1990s; he concludes:

‘AI is probably going to be bad for humanity, given that politicians and regulators are light years behind the tycoons and tech magnates backing AI, many of whom see it as a new way to disempower and dominate workers’.

Neo-liberalism was essentially a mechanism for shifting the share of national income from labour to capital. AI may be the same. However, the globalised world of neo-liberalism fell apart, and its decline brought not a progressive alternative but the rise of authoritarian kleptocrats and growing uncertainty – a trend that AI may well exaggerate.
