CodyIT@programming.dev to Programmer Humor@programming.dev · 10 days ago — "the beautiful code" (image, 232 comments)
Jtotheb@lemmy.world · 3 days ago: That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.
CanadaPlus · edited 2 days ago: You can devise a task it couldn’t have seen in the training data, I mean.
> You don’t even have access to the “thinking” side of the LLM.
Obviously, that goes for the natural intelligences too, so it’s not really a fair thing to require.