• Jtotheb@lemmy.world
    3 days ago

    That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.

    • CanadaPlus
      2 days ago

      I mean that you can devise a task it couldn’t have seen in the training data.

      You don’t even have access to the “thinking” side of the LLM.

      Obviously, that goes for the natural intelligences too, so it’s not really a fair thing to require.