Couldn't stop myself from wiring up the on-device Apple Intelligence model with tool-calling.

It gets confused quickly as context grows, and I don't expect this to land in the App Store version anytime soon, but it does sort of work - with all the usual caveats of LLM unreliability.
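For anyone curious what wiring up tool-calling looks like, here is a minimal sketch using Apple's FoundationModels framework. The `WeatherTool` and its strings are made up for illustration; the shape - a `Tool` with `@Generable` arguments handed to a `LanguageModelSession` - is the framework's pattern as shown at WWDC25, though exact details may differ across OS seeds.

```swift
import FoundationModels

// Hypothetical example tool; the model decides when to invoke it.
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Fetch the current weather for a city"

    @Generable
    struct Arguments {
        @Guide(description: "Name of the city")
        var city: String
    }

    func call(arguments: Arguments) async throws -> String {
        // A real tool would hit an API; this just returns canned text.
        "Sunny, 22 °C in \(arguments.city)"
    }
}

// Hand the tool to the on-device session and let the model use it.
let session = LanguageModelSession(tools: [WeatherTool()])
let response = try await session.respond(to: "What's the weather in Copenhagen?")
print(response.content)
```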

Try it out in the current Working Copy beta:
https://testflight.apple.com/join/VduxyR5a

My trusty iPhone 13 Mini doesn't support the on-device models, forcing me to also add support for server-side models.

Even got ollama to handle tool-calling with gpt-oss after increasing the context size.
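If you want to reproduce the ollama setup, the context window can be raised with a `num_ctx` parameter in a Modelfile - a sketch, assuming you already have the gpt-oss model pulled and a context size that fits your RAM:

```shell
# Build a gpt-oss variant with a larger context window.
# 32768 is an illustrative value; pick what your machine can hold.
cat > Modelfile <<'EOF'
FROM gpt-oss
PARAMETER num_ctx 32768
EOF
ollama create gpt-oss-32k -f Modelfile
```

The default context is small enough that tool definitions plus conversation history get truncated, which silently breaks tool-calling.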