Run your language models directly on your phone, courtesy of the Apache TVM machine learning compiler:
My phone got warm, but it feels as if you were running GPT-4 in the cloud, except it is all local!
I knew upgrading my iPhone on launch day would pay off.
Sorry “Apple doesn’t innovate so I only upgrade my phone every six years” peeps.
@Migueldeicaza The implementation is both incredible and bad. Incredible that I’m running an LLM natively on my phone.
Bad because… it neither knows traffic laws in Barcelona nor MC Hammer’s unique style.
@choong @Migueldeicaza You’re explaining LLMs to me? 😀
My point is that _this_ LLM has worse outcomes than a cloud-based one, perhaps because it has fewer parameters and was trained on less data. Essentially: when you evaluate open-source LLMs, you must consider the possibility that they’re of lower quality than what you’re seeing with GPT-3.5 or GPT-4.
But that’s also okay. There’s an opportunity to fine-tune them the way you want and evaluate them against your own standards.