LLMs are too important to be left to Big Tech. So we built Ensu.
It runs on your device, and doesn’t share your data because it can’t!
https://ente.com/ensu
---
This is an experimental project by Ente Labs: https://ente.com/blog/ensu
---
Come over to https://ente.com/discord to build Ensu with us.
Ensu

Introducing Ensu, our first step toward a private, personal LLM app that runs on your device and grows with you over time.

ente

@ente Woah, this is cool! It runs surprisingly fast on my phone. What model and parameter size are you using? It's a 1.2GB model download, so I assume it's Qwen3.5 0.8B, since that model is 1GB for llama.cpp.

EDIT: Never mind, found it. Tap the build number multiple times to reveal two more buttons on the settings page.

By default it's using LFM 2.5 VL 1.6B (Q4_0), which makes the speed even more impressive. I thought it was using a 0.8B model, but it's bigger.

EDIT 2: It uses ~1.0GB of RAM as reported by the Running services screen in the Android developer settings. It doesn't crash the app or background apps, nor does it thrash my zRAM, despite this phone having shitty RAM + zRAM management on custom ROMs. Neat!
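If anyone wants to cross-check that figure from a PC, `dumpsys meminfo` reports per-process memory over adb. A sketch, assuming the Android package id matches the desktop data directory name (io.ente.ensu, unverified):

```shell
# Hypothetical cross-check of the in-app RAM figure; assumes the Android
# package id is io.ente.ensu (same as the desktop data dir -- unverified).
# "TOTAL PSS"/"TOTAL RSS" are the summary lines dumpsys prints per process.
adb shell dumpsys meminfo io.ente.ensu | grep -E 'TOTAL (PSS|RSS)'
```

PSS is usually the more honest number here, since it splits shared pages across the processes that map them.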

@ente On Debian Trixie, neither the .deb nor the AppImage works. Once the GGUF models get downloaded, it just crashes with SIGILL (Illegal Instruction); no other details in the log.
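For what it's worth, a SIGILL right after model load is often a sign that the bundled inference backend was compiled with SIMD extensions (e.g. AVX2/AVX-512) the CPU doesn't advertise. A quick diagnostic sketch, not a confirmed cause:

```shell
# Diagnostic sketch: list which x86 vector extensions this CPU advertises.
# If e.g. avx2 is absent here but the shipped binary was built with it,
# that would explain an Illegal Instruction crash at inference time.
grep -o -w -E 'avx512[a-z]*|avx2|avx|sse4_2' /proc/cpuinfo | sort -u
```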

I also can't install Ensu's .deb without manually installing its dependencies (libdav1d6, librav1e0, libsvtav1enc1, libicu72, libwebkit2gtk-4.0-37, libavif15, libwebkit-6.0-4) by downloading them from https://packages.debian.org
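In case it helps anyone else on Trixie, a rough workaround sketch (the .deb file-name patterns are illustrative, not verified against the archive; fetch the actual files from packages.debian.org first):

```shell
# Rough workaround sketch: check the manually-downloaded .debs are all
# present, then hand them to apt in one transaction so it can order them.
# File-name patterns are illustrative, not verified against the archive.
debs="libdav1d6 librav1e0 libsvtav1enc1 libicu72 libwebkit2gtk-4.0-37 libavif15 libwebkit-6.0-4"
for p in $debs; do
  ls "${p}"_*.deb >/dev/null 2>&1 || echo "missing: $p"
done
# once all are present:
#   sudo apt install ./*.deb ./ensu_*.deb
```

Passing local paths to `apt install` (rather than `dpkg -i`) lets apt resolve any remaining dependencies from the configured repos in the same transaction.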

Everything is downloaded to ~/.local/share/io.ente.ensu

(the Android version also uses GGUF models, I initially thought it was using llama.cpp, so that's cool)
