The only way to make AI less power efficient: require an instance of Electron for each GPU thread in the matrix multiplication. Suddenly there is not enough RAM on the planet and optimizing Chrome becomes the all-consuming desire of the world’s wealthiest nations.
Each primitive has at least one Electron instance dedicated to it. The advanced version also has a locally hosted chatbot per primitive.
We’ve figured out annoying casts with next-gen type coercion: type hallucination. The LLM works out any necessary conversions at runtime and just kind of guesses any missing details.
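A minimal sketch of what type hallucination might look like in practice, assuming a hypothetical llmComplete() helper standing in for whatever model happens to be running (nothing here is a real API):

```typescript
// Hypothetical stand-in for an LLM call -- not a real API, wire up your own model here.
async function llmComplete(prompt: string): Promise<string> {
  throw new Error("plug in your model of choice");
}

// "Type hallucination": ask the model to coerce `value` to `targetType` at runtime
// and trust whatever comes back. Non-deterministic by design.
async function hallucinateCast<T>(value: unknown, targetType: string): Promise<T> {
  const prompt =
    `Convert the value ${JSON.stringify(value)} to a ${targetType}. ` +
    `If any details are missing, make a plausible guess. Reply with JSON only.`;
  return JSON.parse(await llmComplete(prompt)) as T;
}

// Usage: the same call may yield 42, 42.5, or "forty-two" on different runs.
// const age = await hallucinateCast<number>("forty-two-ish", "number");
```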
The best thing is it doesn’t ever do the same thing twice. So if it causes a bug the first time, it might not the second time.
Or even better, a different bug!
Software As A Surprise.
It’s a dynamic developer experience.