• Anthropic’s new Claude 4 exhibits behavior that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
  • Plebcouncilman@sh.itjust.works · 1 day ago

    This sounds like a good one, but I don’t think I’m fully grasping what you mean. Do you mean that if we subject a person to torture, after the ordeal they are forever changed and now have trauma, PTSD, etc.?

    I don’t think LLMs will ever have feelings as we define them, though. Or more specifically, I don’t think feelings are necessarily a prerequisite. We could have them simulate feelings, and if they themselves buy into the simulation, there’s no functional difference from actually having them. But presumably not all LLMs will have this “ability,” since its utility is questionable, I guess. Then again, animals are sentient and they don’t all have the same range of emotions as we do. Or at least they don’t exhibit them in a way that we can appreciate.