Google AI Edge Gallery app - App Store

Download Google AI Edge Gallery by Google from the App Store. View screenshots, ratings and reviews, user tips, and more apps like Google AI…

App Store

Impressive model, for sure. I've been running it on my Mac, now I get to have it locally in my iPhone? I need to test this. Wait, it does agent skills and mobile actions, all local to the phone? Whaaaat? (Have to check out later! Anyone have any tips yet?)

I don't normally do the whole "abliterated" thing (dealignment), but after discovering https://github.com/p-e-w/heretic I was too tempted not to try it with this model a couple of days ago (I made a repo to make it easier, actually: https://github.com/pmarreck/gemma4-heretical) and... wow. It worked. And not having a built-in nanny is fun!

It's also possible to make an MLX version of it, which runs a little faster on Macs, but won't work through Ollama unfortunately. (LM Studio maybe.)

Runs great on my M4 MacBook Pro w/128GB, and likely also runs fine under 64GB... machines with less memory may require lower quantizations.

I specifically like dealigned local models: if I have to get my thoughts policed when playing in someone else's playground, like hell am I going to be judged while messing around in my own local open-source one too. And there's a whole set of ethically justifiable but rule-flagging conversations (loosely categorizable as "sensitive", "ethically borderline but productive", or "violating sacred cows") that this makes possible, at a level never possible before.

Note: I tried to hook this one up to OpenClaw and ran into issues.

To answer the obvious question: yes, this sort of thing enables bad actors more (as do many other tools). Fortunately, there are far more good actors out there, and bad actors don't follow the rules that good actors subject themselves to anyway.

GitHub - p-e-w/heretic: Fully automatic censorship removal for language models

I'm tired of this concern trolling.

People are allowed to disagree with you, mate. This is a real concern that will affect real people's lives. I'm all for freedom, but that doesn't mean we ought to let just anyone own a nuke.

That said, show me where I'm wrong. I'd love to change my mind on this.

I run MLX models with omlx[1] on my Mac and it works really well.

[1] https://github.com/jundot/omlx

GitHub - jundot/omlx: LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the macOS menu bar

Haven't built anything on the agent skills platform yet, but it's pretty cool imo.

On Android the sandbox loads an index.html into a WebView, with standardized string I/O to the harness via some window properties. You can even return a rendered HTML page.

Definitely hacked together, but feels like an indication of what an edge compute agentic sandbox might look like in future.
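To make the contract above concrete, here is a minimal sketch of what a sandboxed index.html skill might look like, assuming the described string-in/string-out protocol over window properties. The property names (`agentInput`, `agentOutput`) and the JSON envelope are my own assumptions for illustration, not the Gallery's actual API:

```javascript
// Sketch of a WebView sandbox skill: the harness injects a string into a
// window property before loading the page; the page computes a result and
// writes a string (possibly rendered HTML) back to another window property.
// Property names here are hypothetical.
const window = globalThis.window ?? (globalThis.window = {});

// Harness side (simulated): inject the task as a JSON string.
window.agentInput = JSON.stringify({ task: "summarize", text: "hello world" });

// Page side: read the input, do the work, write a string result back.
function runSkill() {
  const { task, text } = JSON.parse(window.agentInput);
  // Trivial stand-in for real skill logic.
  const result = task === "summarize" ? text.slice(0, 5) : text;
  window.agentOutput = `<p>${result}</p>`; // can even be a rendered HTML page
  return window.agentOutput;
}

runSkill();
console.log(window.agentOutput); // "<p>hello</p>"
```

On Android, the harness side of this would presumably use something like `WebView.evaluateJavascript` to read the output property back out; the point is just that the whole interface reduces to strings crossing a WebView boundary.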

> And there's a whole set of ethically-justifiable but rule-flagging conversations (loosely categorizable as things like "sensitive", "ethically-borderline-but-productive" or "violating sacred cows") that are now possible with this, and at a level never before possible until now.

I checked the abliterate script and I don't yet understand what it does or what the result is. What are the conversations this enables?

The in-ter-net is for porn
That song is going to be stuck in my head all day now. lol

Realistically, a lot of people do this for porn.

In my experience, though, it's necessary to do anything security related. Interestingly, the big models have fewer refusals for me when I ask e.g. "in <X> situation, how do you exploit <Y>?", but local models will frequently flat out refuse, unless the model has been abliterated.

1) Coming up with any valid criticism of Islam at all (for some reason, criticisms of Christianity or Judaism are perfectly allowed even with public models!).

2) Asking questions about sketchy things. Simply asking should not be censored.

3) I don't use it for this, but porn or foul language.

4) Imitating or representing a public figure is often blocked.

5) Asking security-related questions when you are trying to do security.

6) For people who have experienced them: using AI to work through traumatic experiences that are illegal even to describe.

Many other instances.

I tried it on my Mac for coding, and I wasn't really impressed compared to Qwen.

I guess there are things it's better at?

You're comparing apples to oranges there. Qwen 3.5 is a much larger model at 397B parameters vs. Gemma's 31B. Gemma will be better at answering simple questions and doing basic automation; codegen won't be its strong suit.