All respect to JetBrains, I’ve loved several of their IDEs… This was a dumb idea from the start. Way too niche and specialized. Honestly, the allure of a purpose-built, language-specific IDE is losing its luster as well, with modern architectures often blending several languages, configuration frameworks, IaC…
So they stirred the pot, killed ad blockers, killed most non-UI add-ons, made 3 new standards, and now they’re just going to give up and go back to the way things were, but worse?
Yeah, sounds like Google. If it were any other company I’d say this was the plan all along
Born in the nineteen hundreds, as they say.
Yeah, but Zelenskyy wasn’t wearing a suit or ending every sentence with “thank you”, duh. Oh right and he isn’t a malignant fascist.
This is canon in the Doc Ock as Spider-Man arc.
2 GHz doesn’t measure its computing power though, only the clock speed. Two very different things.
An objective measure is a simple benchmark:
Here’s a quad-core 1.5 GHz RISC-V SoC (noted as VisionFive 2) vs a quad-core 1.8 GHz ARM chip (noted as Raspberry Pi 400).
It’s not even remotely close to usable for all but the most basic of tasks: https://www.phoronix.com/review/visionfive2-riscv-benchmarks/6
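If you want a feel for the gap without trusting anyone’s charts, a dirt-simple single-core benchmark works: run the exact same fixed workload on both boards and compare wall-clock seconds. A minimal Python sketch (the workload and iteration count are arbitrary, just keep them identical on both machines):

```python
# Minimal single-core benchmark: same fixed workload, wall-clock time.
# Run it unchanged on both boards and compare seconds, not clock speed.
import time

def workload(n: int = 2_000_000) -> int:
    # Plain integer math so the result reflects the core's real throughput
    # rather than any particular library or accelerator.
    total = 0
    for i in range(1, n):
        total += (i * i) % 97
    return total

start = time.perf_counter()
checksum = workload()
elapsed = time.perf_counter() - start
print(f"checksum={checksum} elapsed={elapsed:.2f}s")
```

If clock speed were the whole story, the times would land almost on top of each other; the Phoronix numbers above show how far apart they actually are.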
Well that’s true if you have a live animal producing your meat. Not sure that applies if the meat is lab grown though?
100% they absolutely were.
Give geneticists 20 years, we’ll have lab grown T-Rex in the grocery store
I have been that cinderblock for my dad many a time.
If the women don’t find you handsome, at least they can find you handy.
Depends on your goals. For raw tokens per second, yeah you want an Nvidia card with enough™ memory for your target model(s).
But if you don’t care so much about speed beyond a certain point, or you’re okay sacrificing some speed for economy, the AMD RX 7900 XT/XTX or 9070 both work pretty well for small to mid-sized local models.
Otherwise you can look at SoC-type solutions like AMD Strix Halo or Nvidia DGX for more model size at the cost of speed, but always look for reputable benchmarks showing ‘enough’ speed for your use case.
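To put a very rough number on “enough™ memory”: weights dominate, so parameter count times bytes per parameter, plus some headroom for KV cache and runtime overhead, gets you in the ballpark. A back-of-the-envelope sketch (the model sizes, quantization widths, and 20% overhead factor are illustrative assumptions, not vendor numbers):

```python
# Rough VRAM estimate for running a local model: weights plus a fudge
# factor for KV cache and runtime overhead. All numbers are illustrative.
def estimate_vram_gb(params_billion: float, bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    weights_gb = params_billion * bytes_per_param  # 1e9 params * N bytes ~ GB
    return weights_gb * overhead

# Hypothetical examples: 4-bit quantization is roughly 0.5 bytes/param.
print(f"{estimate_vram_gb(14, 0.5):.1f} GB")  # ~8.4 GB -> fits a 16-20 GB card
print(f"{estimate_vram_gb(70, 0.5):.1f} GB")  # ~42 GB -> unified-memory territory
```

That’s exactly why the big unified-memory boxes trade speed for capacity: the model fits, it just doesn’t fly.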
So that means the prices that just got hiked will come back down, right? …Right?
Yeah… I mean, I did hedge by saying “depends on your CPU and your risk profile”, but I understand your point and will edit my comment to caution readers before playing with foot finding firearms.
From my understanding it’s a mixed bag. Some of those vulnerabilities were little more than theoretical exploits from within high levels of trust, like this one. That matters if you’re running PaaS/IaaS infrastructure like AWS, GCP, etc. and you need to keep unknown workloads safe from each other, and your hypervisor safe from unknown workloads.
Others were super scary, direct-access-to-in-memory-processes type vulnerabilities. On Linux you can disable certain mitigations while leaving others in place, so in theory you could find your way to better performance at a near-zero threat increase, but yes, better safe than sorry.
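If you want to see what your own kernel thinks before touching anything, sysfs reports each known vulnerability and the mitigation currently in effect. A small read-only sketch (this path is standard on recent kernels; actually turning mitigations off happens via kernel boot parameters and is a deliberate risk decision, not shown here):

```python
# Read-only: print the kernel's report of CPU vulnerabilities and the
# mitigation currently applied to each.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```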
I apologize for being glib.
Agreed, it shouldn’t affect performance. But it also depends on how they see fit to patch the vulnerability. The microcode patch mechanism is the currently understood vector, but it might not be the only way to exploit the actual underlying vulnerability.
I remember the early days of Spectre when the mitigation was “disable branch prediction”, then later they patched a more targeted, performant solution in.
no performance change
You must be new here.
Joking. In reality it depends.
The first iteration of this comment had a cheeky observation about the performance impact of these CPU mitigations on Linux, some of which pose nearly no real-world threat to people who aren’t running cloud providers.
And while that’s true to a degree, tests disabling some or all of the most recent set of mitigations show that most have become highly optimized, and the CPUs themselves have iterated over time to make the mitigations cheaper as well.
And many of these CPU vulnerabilities actually saw in-the-wild use and can still do horrible things with very little surface exposure from your system. Apologies to the people who read the first version of this comment and took the time to rightly push back.
Yup. When you “both sides” insanity, you normalize it.
I’m seeing more companies making it a “tariff surcharge” right before the tax line item.
Doubt it was on purpose. Bastard is a MAGA sycophant.
But hey, we can hope he’s having a stroke.
Oh hey I’ve seen this one! Can’t wait for my back alley eye transplant!
Bethesda was notorious back in the day for using uncompressed textures. Not losslessly compressed textures, just fully uncompressed bitmaps. One of the first mods after every game release simply compressed these and decompressed them on the fly, for massive improvements in load times and memory usage.
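Not the actual mod, and the real ones repacked into GPU-friendly formats like DXT rather than generic compression, but the trade-off is easy to demo: spend a little CPU decompressing so you drag far fewer bytes off disk and through memory. A toy sketch with zlib on a fake texture buffer:

```python
# Toy demo of the compressed-texture trade-off: a fake 1024x1024 RGBA
# "texture" stored raw vs zlib-compressed. Real mods used GPU texture
# formats (e.g. DXT), not zlib; this just shows size vs decode cost.
import time
import zlib

raw = bytes(1024 * 1024 * 4)        # 4 MiB of flat pixels (compresses unrealistically well)
packed = zlib.compress(raw, level=6)

start = time.perf_counter()
unpacked = zlib.decompress(packed)
decode_ms = (time.perf_counter() - start) * 1000

print(f"raw: {len(raw) / 2**20:.1f} MiB  packed: {len(packed) / 2**20:.3f} MiB  "
      f"decompress: {decode_ms:.2f} ms")
assert unpacked == raw
```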