• Anthropic’s new Claude 4 exhibits behavior that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
  • kkj@lemmy.dbzer0.com

    But it doesn’t know that it exists. It just says that it does because it’s seen others saying that they exist. It’s a trillion-dollar autocomplete program.

    For example, if you take a common logic puzzle and change the parameters a little, LLMs will often recite a memorized solution to the wrong puzzle. They don’t parameterize the query correctly: they map lion to predator and cabbage to vegetable, then ignore the instruction that those two cannot be left together, defaulting to the classic framing where the predator can safely be left with the vegetable.

    I can’t find the link right now, but a different redditor tried the problem with three inanimate objects that could obviously be left alone together, and the LLMs still suggested making return trips with the items. They had no examples of a non-puzzle in their training data, so they just recited the solution to a puzzle, because they can’t think.
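
    You can reproduce that test yourself in a few lines. This is only a rough sketch, not the exact prompt from the post I’m remembering; it assumes the openai Python package, an OPENAI_API_KEY in the environment, and a placeholder model name:

    ```python
    # Rough sketch of the "non-puzzle" test: a river-crossing prompt with three
    # inanimate objects, no constraints, where the obvious answer is one trip.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "I need to carry a rock, a brick, and a sponge across a river. "
        "My boat can hold me and all three items at once, and none of the "
        "items can harm or affect the others. What is the fewest number of "
        "trips I need, and how should I do it?"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in whatever model you want to test
        messages=[{"role": "user", "content": prompt}],
    )

    answer = response.choices[0].message.content
    print(answer)

    # A memorized wolf/goat/cabbage solution tends to talk about going back
    # for items; the correct answer here is a single trip.
    if "return" in answer.lower() or "back" in answer.lower():
        print("\n-> Looks like it may be pattern-matching the classic puzzle.")
    ```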

    Note that I’ve been careful to say LLMs. I’m open to the idea that AGI/ASI may someday exist, but I’m quite confident that LLMs will not get there. At best, they might be used to offload conversation, much as DALL-E is used to offload image generation from ChatGPT today.