The only people ever obsessed with AI were corporate heads looking to reduce headcount in their companies and to suck up more VC money.
It’s worse than that… They are broken. Like, they are all fucking broken.
You should probably mention that this is an article from 7 months ago.
Three years ago Sam Altman said the current models had hit a wall, and the media blocked it out.
Has anything changed?
Nope. AI has already pretty much peaked in what it can currently do.
No, they already stole everything, so there’s nothing left they can use to train and improve further.
Yes, it kept improving
[Citation needed]
If anything, the LLMs have gotten less useful and now hallucinate even more obviously.
7 months ago: https://web.archive.org/web/20241210232635/https://openlm.ai/chatbot-arena/ Now: https://web.archive.org/web/20250602092229/https://openlm.ai/chatbot-arena/
You can see that o1-mini, a silver (almost gold) model, is now a middle-of-the-road copper model.
Note that Chatbot Arena calculates its scores relatively: it shows two outputs side by side (without the model names), and people select the output they prefer. Those pairwise preferences are then aggregated into a ranking. Not sure what accounts for gold/silver/copper.
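For the curious, an Elo-style update is one common way to turn pairwise votes into a relative score. Here's a minimal sketch of that general idea; it is an illustration, not Chatbot Arena's actual pipeline, and the model names, starting ratings, and K-factor are all made up:

```python
# Minimal Elo-style rating sketch: turning pairwise preference votes
# into relative scores. Model names, ratings, and K are hypothetical.
K = 32  # update step size (made up)

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under an Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str) -> None:
    """Shift both ratings toward the observed preference."""
    e = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e)
    ratings[loser] -= K * (1 - e)

ratings = {"model-a": 1000.0, "model-b": 1000.0}
for vote_winner, vote_loser in [("model-a", "model-b")] * 10:
    update(ratings, vote_winner, vote_loser)
print(ratings)  # model-a drifts above model-b as it keeps winning
```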
LOL
lol right
Did it?
Yes. 7 months ago there weren’t any reasoning models. The video models were far worse. Coding was nothing compared to the capabilities models have now.
AI has come far, fast, since this article was written.
Testing shows that current models hallucinate more than previous ones. OpenAI rebadged ChatGPT 5 to 4.5 because the gains were so meagre that they couldn’t get away with pretending it was a serious leap forward. “Reasoning” sucks; the model just leaps to a conclusion as usual, then makes up steps that sound like they lead to that conclusion; in many cases the steps and the conclusion don’t match, and because the effect is achieved by running the model multiple times, the cost is astronomical. So far just about every negative prediction in this article has come true, and every “hope for the future” has fizzled utterly.
Are there minor improvements in some areas? Yeah, sure. But you have to keep in mind the big picture that this article is painting; the economics of LLMs do not work if you’re getting incremental improvements at exponential costs. It was supposed to be the exact opposite; LLMs were pitched to investors as a “hyperscaling” technology that was going to rapidly accelerate in utility and capability until it hit escape velocity and became true AGI. Everything was supposed to get more, not less, efficient.
The current state of AI is not cost effective. Microsoft (just to pick on one example) is making somewhere in the region of a few tens of millions a year off of Copilot (revenue, not profit), on an investment of tens of billions a year. That simply does not work. The only way for that to work is not only for the rate of progress to be accelerating, but for the rate of acceleration to be accelerating. We’re nowhere near close to that.
The crash is coming, not because LLMs cannot ever be improved, but because it’s becoming increasingly clear that there is no avenue for LLMs to be efficiently improved.
DeepSeek showed there is potential in abandoning the AGI pathway (which is impossible with LLMs) and instead training lots and lots of different specialized models that can be switched between for different tasks (at least, that’s how I understand it).
So I’m not going to assume LLMs will hit a wall, but it’s going to require another paradigm shift, one we just aren’t seeing out of the current crop of developers.
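For what it’s worth, the “switch between specialized models for different tasks” idea is roughly what mixture-of-experts routing does: a cheap gating step dispatches each request to a specialized sub-model instead of one giant model handling everything. A minimal sketch, where the experts and the keyword router are entirely hypothetical (real systems learn the gating function):

```python
# Mixture-of-experts-style routing sketch. Experts and keywords are
# made up for illustration; production routers are learned, not
# keyword-based.
from typing import Callable

def code_expert(prompt: str) -> str:
    return f"[code model] {prompt}"

def math_expert(prompt: str) -> str:
    return f"[math model] {prompt}"

def general_expert(prompt: str) -> str:
    return f"[general model] {prompt}"

EXPERTS: dict[str, Callable[[str], str]] = {
    "code": code_expert,
    "math": math_expert,
    "general": general_expert,
}

def route(prompt: str) -> str:
    """Crude keyword gate: pick which specialized model to call."""
    text = prompt.lower()
    if any(w in text for w in ("def ", "compile", "bug")):
        return "code"
    if any(w in text for w in ("integral", "prove", "solve")):
        return "math"
    return "general"

prompt = "Why does my compile step fail?"
print(EXPERTS[route(prompt)](prompt))  # dispatched to the code expert
```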
Yes, but the basic problem doesn’t change; you’re spending billions to make millions. And DeepSeek’s approach only works because they’re able to essentially distill the output of less efficient models like Llama and GPT. So they haven’t actually solved the underlying technical issues, they’ve just found a way to break into the industry as a smaller player.
At the end of the day, the problem is not that you can’t ever make something useful with transformer models; it’s that you cannot make that useful thing in a way that is cost effective. That’s especially a problem if you expect big companies like Microsoft or OpenAI to continue to offer these services at an affordable price. Yes, Copilot can help you code, but that’s worth jack shit if the only way for Microsoft to recoup their investment is by charging $200 a month for it.
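“Distilling the output” of another model usually means knowledge distillation: training a smaller, cheaper student model to match a bigger teacher’s output distribution. A minimal PyTorch-style sketch; the layer sizes, temperature, and random data are stand-ins for illustration, not anything DeepSeek actually published:

```python
# Knowledge-distillation sketch: a small "student" learns to match a
# large frozen "teacher"'s output distribution. Sizes, temperature,
# and data are made up.
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(32, 10)  # stand-in for a big frozen model
student = torch.nn.Linear(32, 10)  # would be smaller/cheaper in practice
optim = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softening temperature (hypothetical)

for _ in range(100):
    x = torch.randn(64, 32)  # fake input batch
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)
    student_logprobs = F.log_softmax(student(x) / T, dim=-1)
    # KL divergence pulls the student's distribution toward the teacher's
    loss = F.kl_div(student_logprobs, teacher_probs, reduction="batchmean")
    optim.zero_grad()
    loss.backward()
    optim.step()
```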
AI has a large initial cost, but older models will continue to exist, and open-source models will continue to take potential profit from the corps.
That was pretty much always the only potential path forward for LLM type AIs. It’s an extension of the same machine learning technology we’ve been building up since the 50s.
Everyone trying to approximate an AGI with it has been wasting their time and money.
Amazon did not turn a profit for 14 years. That’s not a sign of a crash.
AI is progressing and different routes are being tried. Some might not work as well as others. We are on a very fast train. I think the crash is unlikely. The prize is too valuable, and it’s strategically impossible to leave it to someone else.
Assuming it cost Microsoft $0 to provide their AI services (this is up there with “assuming all of physics stops working”), and every dollar they make from Copilot were pure profit, it would take Microsoft 384 years to recoup one year of investment in AI.
And that’s without even getting into the fact that in reality these services are so expensive to run that every time a customer uses them it’s a net loss to the provider.
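For a sense of where a figure like 384 could come from, here’s the back-of-the-envelope arithmetic. Both dollar amounts below are hypothetical, chosen only to match the “tens of billions in, tens of millions out” framing above:

```python
# Back-of-the-envelope check of the "384 years" claim. Both figures
# are made up, picked only to fit the "tens of billions invested,
# tens of millions earned" framing in the thread.
annual_investment = 19_200_000_000   # ~$19.2B/year AI spend (hypothetical)
annual_copilot_revenue = 50_000_000  # ~$50M/year revenue (hypothetical)

years_to_recoup = annual_investment / annual_copilot_revenue
print(f"{years_to_recoup:.0f} years")  # -> 384 years
```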
When Amazon started out, no one had heard of them. Everyone has heard of Microsoft. Everyone already uses Microsoft’s products. Everyone has heard about AI. It’s the only thing in tech that anyone is talking about. It’s hard to see how they could be doing more to market this. Same story with OpenAI, Facebook, Google, basically every player in this space.
Even if they can solve the efficiency problems to the point where they can actually make a profit off of these things, there just isn’t enough interest. AI does plenty of things that are useful, but nothing that’s truly vital, and it needs to be vital to have any hope of making back the money that’s gone into it.
At present, there simply is not a path to profitability that doesn’t rely on unicorn farts and pixie dust.
The companies developing AI don’t need to make a profit, just as Amazon didn’t. They are in the development phase. Profit is not a big concern.
Amazon isn’t a good comparison. People need to buy things. Having a better way to do that was and is worth billions.
There is no revolutionary product that people need on the horizon for AI. The products released using it are mostly just fun toys, because it can’t be trusted with anything serious. There’s no indication this will change in the near to distant future.
People don’t need to buy anything over Amazon. That’s not a need.
There is no revolutionary product on the horizon!?! I’m not sure how to respond to that.
You think it’s all a scam and everyone is in on it?
Don’t get my hopes up. I want them to lose as much of their dumb tech bro money as possible.
PSA:
Loose is the opposite of tight.
Lose is the opposite of win.
Also lefty-loosey, righty-tighty, or if you want to translate from the Spanish expression “la derecha oprime y la izquierda libera” (“the right oppresses and the left liberates”), that’s sage advice too.
Lowe’s is a store.
The problem is that it will hurt everyone when they fail.
Anyone relying on this shit deserves it. Let these venture capitalists throwing money at AI all burn.
I agree with you about feeling no pity for the tech bros. However, a big appeal of AI for them is the elimination of employees. And that’s going to hurt more regular folks who did not sign up for AI on a much more noticeable level. I don’t think any nation is set up to handle the level of unemployment that’s on the horizon. So ignoring the environmental impacts of LLM/AI servers: let’s get national food, shelter, and healthcare systems in place, and then I’d be all for letting the venture capitalists shove their dicks in blenders.
What I am saying is the investments are at a scale where they could cause a recession when these companies fail. Meaning it’s likely to affect everyone in the economy we all work in. You won’t need to be working on AI to feel the impact.
I have a theory about this.
The people who had both genius and work ethic made magic. That was the GPT, DALL-E, AlphaGo generation of AI. You can’t make magic because you want to have a good career and you did some seminars. You have to do it because you have it burning inside you, and when you’re in a community of people who all have that kind of vision, all pulling in the same direction, you can make things that were impossible before you did them.
(Not that I’m saying AI itself is necessarily a good thing, certainly not in its present trajectory. I’m just saying that getting the tech from recognizing digits to ChatGPT was pretty fucking impressive.)
Now probably about 90% of the people in the field are there because it’s a good career. And the people giving them their marching orders day to day are greedy idiots. You just can’t make magic that way. All you can do is follow the road that’s been laid down. The industry is just throwing orders of magnitude more electricity, money, and engineer time at these still-impossible remaining problems and hoping that’ll get them suddenly solved.
Remember when ChatGPT was programmed by competent and humble people (a lot of whom have since been fired for fighting with Sam Altman), and so it kept emphasizing to people that it was not an AI (meaning an AGI), just a language model? They felt like that was important for people to understand (not that it did any good). Anyway, those days are gone, and with them, the forward progress that people like that can make.
Why? Because we’re trying to make magic with career people. It doesn’t work that way, never has. It’s like trying to start a fire with a bunch of rocks. Rocks are fine. Fire is fine, if a little bit dangerous. But they are not interchangeable.
GPT, DALL-E and AlphaGo are not the type of magic born from passion. They are the type of magic born of years of researchers doing the mostly boring work of science. Those are career people. They are just career researchers.
The current public AI scene is what happens when commercial interests take over. They can push the current state of the art to its limit, but aren’t going to make any fundamental breakthroughs.
Those are career people. They are just career researchers.
People mostly only accept the extremely shitty working conditions of the research industry if they have at least one of a lot of passion, extreme egomania, or independent wealth. Preferably more than one. Most of the people doing a research career do at least start out with a lot of passion.
Yeah. If you talked to any of these people toiling away in obscurity, they would never in a million years describe their work as boring. To an outsider, maybe.
People mostly only accept the extremely shitty working conditions of the research industry if they have at least one of a lot of passion, extreme egomania, or independent wealth.
I can confirm.
Source: I lack passion (for research topics), and I lack independent wealth, and I direct my egomania in healthier directions than research.
I’ll get into research when it pays better (likely never).
Exactly, they are a breakthrough built on top of decades of steadily paced progress. Those decades are conveniently ignored by the commercial interests, who like to act as if those once-in-a-decade breakthroughs are actually the normal pace of research at their company, and the next one is right around the corner.
Ironically, that’s exactly why all the greedy cunt executives think general AI is right around the corner… They haven’t been paying attention and have no clue about the decades of research and development that got it this far.
Nor do they remember the previous AI boom of the ’90s and 2000s, when the likes of Lernout & Hauspie were also promising the world. In that case they went bankrupt and executives were convicted of fraud, because they resorted to “creative” accounting to paper over solvency issues.
Career people can easily make magic. Lots and lots of career people make magic.
The problem is the greedy fucks. When choice X appears to make 10x the money as creative, difficult decision Y, which choice is made every time when it’s the greedy fuck choosing?
yay!
Yes, of course they are at the limit, and because they poisoned the internet with generative bullshit, they can’t scrape it and expect improvement. But they are still scraping it, so they’re poisoning themselves.
The end of the article has classic snake oil trash. The idea that newer AI could be trained to think similar to how humans think. Yes, great, you know scientists have been working on that for decades. Good luck succeeding where nobody else did. There’s a reason that so-called weak AI or so-called expert systems are the ones that we all remember as having lasted for decades.
Shh. Let it happen. Let the poison take hold.
I don’t think it’s just the poison, but an inherent limitation on the technology. An LLM is never going to be able to have critical thinking skills.
Quality over quantity will define the next generation of AI models.
Figuring out more efficient models would be a big boost as well.
Lots of improvements over the years, and I’m sure there’s a lot more that can be done.
Exactly