• Anthropic’s new Claude 4 features an aspect that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
  • Railcar8095@lemm.ee · 1 day ago

    It has yet to be proven or disproven that if you put the exact same person in the exact same situation (a perfect copy, down to the molecular level), they will behave differently.

    We can only test “more or less close.” So we would not know if humans are sentient based on that reasoning; we are just hard to test.

    • sbv@sh.itjust.works · 1 day ago

      if you put the exact same person in the exact same situation (a perfect copy, down to the molecular level), they will behave differently.

      I don’t consider that relevant to sentience. Structurally, biological systems change in response to their inputs; an LLM’s weights are frozen after training. I consider that plasticity a prerequisite for sentience. Others may not.

      We will undoubtedly see systems that incorporate some form of learning and mutability into LLMs. Re-evaluating after that would make sense.