New PhilBlog post, in which I explain why AI Is Not Magic!
https://philliprhodes.name/roller/blog/entry/ai-is-not-magic
"I've long maintained that the threat from AI to workers isn't that AI can do your job – it's that an AI salesman can convince your boss to fire you and replace you with an AI that can't do your job" - @pluralistic
https://pluralistic.net/2025/05/07/rah-rah-rasputin/#credulous-dolts
A really nice explanation of the chain rule.
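For reference, the rule itself in one line (a standard statement, not quoted from the explanation being shared): if h(x) = f(g(x)), then

\[ h'(x) = f'(g(x)) \cdot g'(x) \]

e.g. h(x) = \sin(x^2) gives h'(x) = \cos(x^2) \cdot 2x.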
'Empirical Design in Reinforcement Learning', by Andrew Patterson, Samuel Neumann, Martha White, Adam White.
http://jmlr.org/papers/v25/23-0183.html
#reinforcement #experiments #hyperparameters
I now have a working - albeit incomplete - implementation of a Jena TDB-backed BeliefBase for Jason agent applications. So far it handles storing arity-0 beliefs and loads them on startup. Higher-arity beliefs are also persisted, but I haven't tested initial retrieval of those on startup, and the current implementation doesn't yet guarantee consistent ordering of the embedded terms for higher-arity beliefs. A rough sketch of the general idea is below.
#Apache #Jena #SemanticWeb #RDF #TripleStore #Jason #AgentSpeak #MultiAgentSystems
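For the curious, here's roughly the shape of the thing - a minimal sketch, not the actual code, assuming Jena TDB2 and Jason's DefaultBeliefBase as the base class. The class name TdbBeliefBase, the example.org namespace, and the one-triple-per-belief layout are all invented for illustration, and it only handles the arity-0 case described above:

import jason.asSemantics.Agent;
import jason.asSyntax.Literal;
import jason.bb.DefaultBeliefBase;

import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.RDFNode;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.StmtIterator;
import org.apache.jena.tdb2.TDB2Factory;

public class TdbBeliefBase extends DefaultBeliefBase {

    // Hypothetical namespace for belief triples; purely illustrative.
    private static final String NS = "http://example.org/jason/belief#";

    private Dataset dataset;

    @Override
    public void init(Agent ag, String[] args) {
        super.init(ag, args);
        // The first custom-BB argument is assumed to be the TDB2 directory, e.g. "tdb/agent1".
        String dir = (args != null && args.length > 0) ? args[0] : "tdb";
        dataset = TDB2Factory.connectDataset(dir);
        loadArity0Beliefs();
    }

    // On startup, pull every persisted arity-0 belief back into the in-memory belief base.
    private void loadArity0Beliefs() {
        dataset.begin(ReadWrite.READ);
        try {
            Model m = dataset.getDefaultModel();
            Property functor = m.createProperty(NS, "functor");
            StmtIterator it = m.listStatements(null, functor, (RDFNode) null);
            while (it.hasNext()) {
                // The functor string was stored as a plain literal, so it can be
                // parsed straight back into a Jason Literal.
                super.add(Literal.parseLiteral(it.next().getString()));
            }
        } finally {
            dataset.end();
        }
    }

    @Override
    public boolean add(Literal bel) {
        boolean added = super.add(bel);
        // Persist only arity-0 beliefs for now; higher-arity terms would need a
        // representation that preserves argument order.
        if (added && bel.getArity() == 0) {
            dataset.begin(ReadWrite.WRITE);
            try {
                Model m = dataset.getDefaultModel();
                Resource r = m.createResource(NS + bel.getFunctor());
                m.add(r, m.createProperty(NS, "functor"), bel.getFunctor());
                dataset.commit();
            } finally {
                dataset.end();
            }
        }
        return added;
    }
}

Wiring it in would be the usual custom belief base declaration in the .mas2j project file, and higher-arity beliefs would need an RDF representation that keeps the embedded terms in a consistent order - which is exactly the part that's still open.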
@uexo - Yeah, if you're not already familiar with AgentSpeak and BDI (belief-desire-intention) agents, it would take a lot of explaining.
For me, it's cool because I think the BDI approach can be an important foundational piece of an approach to AGI. And chatting with the agent can be a way to educate it and interact with it.
But building agents that you can send instructions to remotely via XMPP is interesting IMO, even if they're not AGI.