Procedurally generated marble runs! Wow! Just wow!
https://hackaday.com/2025/11/09/mesmerizing-marble-runs-from-procedural-generation/
Very interesting read on the internals of #llms. It appears that arithmetic is in fact memorized and not a part of the "reasoning" pathways in the #neuralnetwork https://arstechnica.com/ai/2025/11/study-finds-ai-models-store-memories-and-logic-in-different-neural-regions/
Digital wet dreams: playing #retrogames with digital microfluidics! #Geeky !
https://hackaday.com/2025/07/12/playing-snake-with-digital-microfluidics/
After a pre-game chat, Gemini swung from being confident to admitting it would ‘struggle immensely’ against the ancient console.
Such predictions remind me a lot of the "golden ages of #AI". Most of the predictions from that era either never came true, or did so only recently. Time will tell.
#LLMs are seemingly getting dumber: 'Large language models (LLMs) are becoming less "intelligent" in each new version as they oversimplify and, in some cases, misrepresent important scientific and medical findings, a new study has found.'