Faith in God-like Large Language Models Is Waning
The overconfidence in god-like large language models is fading as their limitations surface, creating space for more measured approaches by late movers like Apple.

Faith in AI Giants Starts to Cool Down

Not long ago, large language models (LLMs) such as ChatGPT, Gemini, and Claude were portrayed as almost god-like. They were supposed to think, reason, and even create in ways that rival human intelligence. Venture capitalists rushed to fund AI startups, tech giants poured billions into data centers, and a wave of AI-driven optimism swept through boardrooms and governments.

Today, however, that confidence is visibly waning. Investors and consumers alike are beginning to recognize the limits of LLMs. Instead of ushering in an age of all-knowing digital assistants, the technology is showing cracks, ranging from hallucinations and bias to the spiraling costs of training and maintaining ever-larger models. The sense of awe is giving way to skepticism and a more practical outlook.


Lessons From Past Tech Hype Cycles

This cooling of faith in AI supermodels is not unprecedented. History shows that technological enthusiasm often follows a predictable curve: inflated expectations, disillusionment, and then steady, realistic adoption.

  • Dot-com boom (1990s–2000s): Startups promised to reinvent commerce overnight. Many collapsed, but the survivors—Amazon, eBay—redefined retail.
  • Blockchain and crypto (2010s): Touted as a cure-all for finance, trust, and governance. While much of the hype fizzled, serious applications in supply chain and payments remain.
  • Metaverse hype (2021–2022): Billed as the future of digital life, it quickly faded as consumers resisted bulky headsets and unclear utility.

AI may be entering a similar stage. The disillusionment does not mean irrelevance—it means a shift from mythology to utility.


The Cost Burden of Supermodels

One of the main reasons faith is faltering is cost. Training models with hundreds of billions of parameters consumes enormous amounts of energy and water, straining corporate budgets and the environment alike.

A single training run for a top-tier LLM can cost tens or even hundreds of millions of dollars, alongside massive infrastructure outlays for GPUs, cooling systems, and electricity. This raises the question: how sustainable is the “bigger is better” race?

Even the richest players—Microsoft, Google, Amazon—are starting to explore whether smaller, specialized models might offer better returns. The dream of one all-powerful, general-purpose AI is colliding with economic and ecological reality.


Apple’s Patience May Pay Off

While rivals sprinted to dominate generative AI, Apple seemed to lag. It stayed relatively quiet, integrating AI into its devices in subtle ways but avoiding flashy chatbot launches. Critics saw this as a sign of weakness.

Now, however, that restraint looks strategic. As hype cools, Apple appears ready to introduce smaller, device-centered AI tools that are:

  • Efficient (optimized for iPhones, iPads, and Macs rather than cloud supercomputers).
  • Private (leveraging Apple’s marketing edge on security and on-device processing).
  • Practical (focused on productivity, user experience, and services rather than “AI gods”).

By avoiding the costly arms race of LLM gigantism, Apple might emerge as the long-term beneficiary. It can let others burn billions, then roll out polished features when the market is ready.


Shifting Expectations in Business and Policy

Corporate leaders once imagined AI copilots capable of drafting contracts, diagnosing diseases, or writing flawless code with minimal oversight. What they’ve encountered instead are tools that demand human supervision and careful fine-tuning, and that deliver reliable value only in narrower use cases.

Regulators, too, are catching on. In the EU, US, and Asia, debates about data privacy, copyright, and algorithmic accountability are intensifying. Instead of racing toward a universal AI assistant, governments are signaling they will constrain the most ambitious applications.

This regulatory backdrop makes smaller, focused, and transparent AI systems more attractive than giant “black box” models.


The Human Factor: Trust and Fatigue

Beyond economics and regulation, public trust plays a role. People who initially marveled at LLMs now complain about inaccuracy, hallucinations, and repetition. Some are even experiencing AI fatigue, weary of exaggerated claims that don’t materialize in daily use.

This skepticism doesn’t kill adoption—it reshapes it. AI will likely continue to support customer service, translation, summarization, and creative brainstorming. But the era of unquestioned belief in god-like powers is ending.


What Comes Next

As confidence in massive AI wanes, three clear trends are emerging:

  1. Decentralization of AI – Instead of one giant brain in the cloud, we’ll see thousands of specialized models running locally or in sector-specific contexts.
  2. Integration over spectacle – AI will be embedded invisibly into workflows (like predictive text or photo editing) rather than presented as a “super-assistant.”
  3. Balanced competition – Companies like Apple, Meta, and Samsung may find new openings as the first-movers confront scaling fatigue.

The broader lesson: technological revolutions are rarely about divine leaps. They are about slow, uneven, practical adoption.


Conclusion: A Welcome Reset

The decline in faith in LLM supermodels is not a crisis; it’s a reset. For investors, it curbs irrational exuberance. For companies, it encourages efficiency and focus. For consumers, it tempers unrealistic expectations and prepares them for tools that genuinely improve life.

If anything, the waning god-like aura around AI is a healthy development. It clears the stage for pragmatic innovation, regulatory clarity, and a more balanced playing field. And for late entrants like Apple, it may be the opening they have been waiting for.

