If you want to compare a calculator to an LLM, you could at least reasonably expect the calculator result to be accurate.
Why? Because you put trust in the producers of said calculators not to fuck it up. Or because you trust others to vet those machines, or are you personally validating? Unless you're disassembling those calculators and inspecting their chipsets, you're just putting your trust in someone else and claiming "this magic box is more trustworthy."
A combination of personal vetting via analyzing output and the vetting of others. For instance, the Pentium FDIV calculation error made the news. Otherwise, calculation by computer processor is well understood, and the technology is accepted for use in cases involving human lives.
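For context, that kind of personal vetting can be as simple as a spot check. Here's a rough Python sketch of the classic expression that exposed the 1994 Pentium FDIV bug (the widely reported 4195835 / 3145727 case); the exact wrapper code is mine, not from any of the original reports:

    # Rough sketch of the spot check associated with the 1994 Pentium FDIV bug.
    # In exact arithmetic the residual is 0; flawed Pentiums reportedly returned
    # 256 here because the division result was wrong in its lower digits.
    x, y = 4195835.0, 3145727.0
    residual = x - (x / y) * y
    print(residual)  # ~0.0 (up to tiny float rounding) on correct division hardware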
In contrast, there are enough documented cases in the news of LLMs being incorrect that I don't need personal vetting. No one is anywhere close to stating that LLMs can be used in cases involving human lives.
How exactly do you think those instances got into the news in the first place? I'll give you a hint: people ARE vetting them and reporting when they're fucking up. That's selection bias, plain and simple. People are absolutely using AI in cases involving humans.
https://www.nytimes.com/2025/03/20/well/ai-drug-repurposing.html
https://www.advamed.org/2024/09/20/health-care-ai-is-already-saving-lives/
https://humanprogress.org/doctors-told-him-he-was-going-to-die-then-ai-saved-his-life/
Your opinions are simply biased and ill-informed. This is only going to grow, and the dataset will get larger and larger. Just like the self-driving taxis: everyone likes to shit on them while completely ignoring the actual statistics, all while acting like THIS MOMENT RIGHT NOW is the best they're ever going to get.
I didn’t say AI, I said LLM.
It often is. I’ve got a lot of use out of it.