Testing shows that current models hallucinate more than previous ones. OpenAI rebadged GPT-5 as 4.5 because the gains were so meagre that they couldn’t get away with pretending it was a serious leap forward. “Reasoning” sucks; the model leaps to a conclusion as usual, then makes up steps that sound like they lead to that conclusion. In many cases the steps and the conclusion don’t match, and because the effect is achieved by running the model multiple times, the cost is astronomical. So far just about every negative prediction in this article has come true, and every “hope for the future” has fizzled utterly.
Are there minor improvements in some areas? Yeah, sure. But you have to keep in mind the big picture that this article is painting; the economics of LLMs do not work if you’re getting incremental improvements at exponential costs. It was supposed to be the exact opposite; LLMs were pitched to investors as a “hyperscaling” technology that was going to rapidly accelerate in utility and capability until it hit escape velocity and became true AGI. Everything was supposed to get more, not less, efficient.
The current state of AI is not cost effective. Microsoft (just to pick one example) is making somewhere in the region of a few tens of millions a year off of Copilot (revenue, not profit), on an investment of tens of billions a year. That simply does not work. The only way for that to work is not only for the rate of progress to be accelerating, but for the rate of acceleration to be accelerating. We’re nowhere near close to that.
The crash is coming, not because LLMs cannot ever be improved, but because it’s becoming increasingly clear that there is no avenue for LLMs to be efficiently improved.
DeepSeek showed there is potential in abandoning the AGI pathway (which is impossible with LLMs) and instead training lots and lots of different specialized models that can be switched between for different tasks (at least, that’s how I understand it).
So I’m not going to assume LLMs will hit a wall, but it’s going to require something paradigm-shifting that we just aren’t seeing from the current crop of developers.
Yes, but the basic problem doesn’t change; you’re spending billions to make millions. And DeepSeek’s approach only works because they’re able to essentially distill the output of less efficient models like Llama and GPT. So they haven’t actually solved the underlying technical issues; they’ve just found a way to break into the industry as a smaller player.
At the end of the day, the problem is not that you can’t ever make something useful with transformer models; it’s that you cannot make that useful thing in a way that is cost effective. That’s especially a problem if you expect big companies like Microsoft or OpenAI to continue to offer these services at an affordable price. Yes, Copilot can help you code, but that’s worth Jack shit if the only way for Microsoft to recoup their investment is by charging $200 a month for it.
AI has a large initial cost, but older models will continue to exist, and open-source models will continue to take potential profit from the corps.
It does have a large initial cost. It also has a large ongoing cost. GPU time is really, really pricey.
Even putting aside training and infrastructure, OpenAI still loses money on even their most expensive paid subscribers. While outfits like DeepSeek have shown ways of reducing those costs, they’re still not enough to make these models profitable to run at the kind of workloads they’re intended to handle, and attempts to reduce their fallibility make them even more expensive, because they basically just involve running the model multiple times over.
That was pretty much always the only potential path forward for LLM type AIs. It’s an extension of the same machine learning technology we’ve been building up since the 50s.
Everyone trying to approximate an AGI with it has been wasting their time and money.
Amazon did not turn a profit for 14 years. That’s not a sign of a crash.
AI is progressing and different routes are being tried. Some might not work as well as others. We are on a very fast train. I think the crash is unlikely; the prize is too valuable, and it’s strategically impossible to leave it to someone else.
Assuming it cost Microsoft $0 to provide their AI services (an assumption up there with “assuming all of physics stops working”), and every dollar they make from Copilot was pure profit, it would take Microsoft 384 years to recoup one year of investment in AI.
And that’s without even getting into the fact that in reality these services are so expensive to run that every time a customer uses them it’s a net loss to the provider.
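The back-of-envelope math behind that 384-year figure is easy to make explicit. The dollar amounts below are illustrative assumptions, not sourced numbers (the thread only says “tens of millions” of revenue against “tens of billions” of investment); they were chosen to reproduce the stated ratio:

```python
# Illustrative sketch: neither figure is a sourced number; they are
# assumed amounts consistent with "tens of billions" of investment
# and "tens of millions" of revenue, picked to match the 384x ratio.
annual_ai_investment = 19_200_000_000  # assumed: ~$19.2B per year
annual_copilot_revenue = 50_000_000    # assumed: ~$50M per year

# Best case: zero serving cost, so every revenue dollar is pure profit.
years_to_recoup = annual_ai_investment / annual_copilot_revenue
print(years_to_recoup)  # 384.0
```

Even under the impossible zero-cost assumption, the payback period is measured in centuries; with real serving costs, the ratio only gets worse.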
When Amazon started out, no one had heard of them. Everyone has heard of Microsoft. Everyone already uses Microsoft’s products. Everyone has heard about AI. It’s the only thing in tech that anyone is talking about. It’s hard to see how they could be doing more to market this. Same story with OpenAI, Facebook, Google, basically every player in this space.
Even if they can solve the efficiency problems to the point where they can actually make a profit off of these things, there just isn’t enough interest. AI does plenty of things that are useful, but nothing that’s truly vital, and it needs to be vital to have any hope of making back the money that’s gone into it.
At present, there simply is not a path to profitability that doesn’t rely on unicorn farts and pixie dust.
The companies developing AI don’t need to make a profit yet, just as Amazon didn’t. They are in the development phase; profit is not a big concern.
You’re being incredibly dense.
The point is that they need to eventually make a profit that is commensurate to the investments they have put in, and right now there is absolutely no feasible path to them doing so. Any plan for eventual profitability relies entirely on magical thinking.
Amazon isn’t a good comparison. People need to buy things. Having a better way to do that was and is worth billions.
There is no revolutionary product that people need on the horizon for AI. The products released using it are mostly just fun toys, because it can’t be trusted with anything serious. There’s no indication this will change in the near to distant future.
People don’t need to buy anything over Amazon. That’s not a need.
There is no revolutionary product on the horizon!?! I’m not sure how to respond to that.
You think it’s all a scam and everyone is in on it?
Yes. It is all a scam and the people cheerleading it are either in on it or one of the “useful idiots” (to paraphrase Lenin’s purported catchphrase) that all scams need to continue.
And you think these “smart people” can’t be taken in by such an obvious scam? Madoff ran his really obvious scam for seventeen years before he got caught out. It turns out the “smartest people in the room” aren’t quite as smart as they thought.
I wonder how smart one must be to see past everything AI already is and what it promises, and to see it as a scam. Probably way smarter than the hundreds of millions of people using it every day.
Just need a sense of history. The AI sector has scammed over and over and over and over again. This upcoming “AI Winter” will be the sixth.
Oh, and you need to just pay a bit of attention when you use the damned thing:
This was from two days ago (2025-06-07), so not an “old model”. Looking at what AI already is, we’re seeing fundamental problems with counting.
You know, that thing you likely learned so long ago you don’t even remember having had to learn it.