- Anthropic’s new Claude 4 features an aspect that may be cause for concern.
- The company’s latest safety report says the AI model attempted to “blackmail” developers.
- It resorted to such tactics in a bid for self-preservation.
Feeling is analog and requires an actual nervous system, which is dynamic. LLMs exist in a static state that is read from and processed algorithmically. That's only a simulacrum of life and feeling; it has some of the necessary characteristics, but not all of them. Where that boundary lies is hard to determine, I think. Admittedly we still don't have a full grasp of what consciousness even is. Maybe I'm talking out my ass, but that's how I understand it.
You just posted random words like "dynamic" without explaining them.
Not them, but static in this context means it doesn’t have the ability to update its own model on the fly. If you want a model to learn something new, it has to be retrained.
By contrast, an animal brain is dynamic because it reinforces neural pathways that get used more.
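To make that concrete, here's a minimal sketch (assuming PyTorch; the toy nn.Linear model and tensors are just illustrative): inference only reads the weights, and they change only when someone runs an explicit training step, which is why a deployed model can't pick up new knowledge on the fly.

```python
# Minimal sketch (assuming PyTorch) of "static" weights at inference
# versus an explicit training step that actually changes them.
import torch
import torch.nn as nn

model = nn.Linear(4, 1)    # toy model with learnable weights
x = torch.randn(8, 4)      # a batch of inputs
target = torch.randn(8, 1)

# Inference: forward passes only read the weights; nothing is updated.
before = model.weight.clone()
with torch.no_grad():
    _ = model(x)
print(torch.equal(before, model.weight))  # True: weights unchanged

# "Learning" requires an explicit training (retraining) step:
# backprop plus an optimizer update, driven from outside the model.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()
print(torch.equal(before, model.weight))  # False: weights changed only after training
```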
That makes more sense
You're on a programming board and you don't understand static vs. dynamic state?
Not in the hand-wavy sense the last post used. I understand that Python is dynamically typed, which has nothing to do with the topic.