🚀 New AI Battle: Gemma 4 on Linux! 🐧

I tested the new Gemma 4 (e4b) running locally via Ollama on Linux. How does it solve the "HORSE-EARTH" poem test?

🎭 Linguistics Grade: B
Gemma 4 nailed the complex acrostic/telestich constraints but had to invent a new word—"gleama"—to make the rhyme work. A "beautiful mess" that shows real creative grit.

All technical details: https://www.lotharschulz.info/2026/04/03/gemma-4-on-linux/

#Gemma4 #Linux #Ollama #OpenSource #AI #MachineLearning #LocalAI #SelfHosted

Gemma 4 on Linux – Lothar Schulz

Google's Gemma 4 open-source AI: a new model smaller than ChatGPT, yet rivaling top-tier large models
On 2 April, Google released its next-generation open model Gemma 4, built on Gemini 3 core technology and, for the first time, adopting […]
#ArtificialIntelligence
https://unwire.hk/2026/04/03/google-gemma-4/ai/

Every few months someone announces a model you can “run locally” and every few months the fine print tells the same story. You need 80GB of VRAM. Or a server.

Gemma 4 is different. Not because Google said so. Because of 3.8 billion active parameters inside a 26 billion parameter model. The short version is that for the first time, running a genuinely capable AI agent on a consumer GPU is not a compromise.
https://firethering.com/gemma-4-local-ai-agents/

#gemma4 #ai #aiagent #google #trending #opensource

Gemma 4 Makes Local AI Agents Actually Practical

Gemma 4 is a family of four models. Two dense models built for phones and laptops, E2B and E4B. One MoE model at 26B A4B for consumer GPUs. One dense 31B for workstations and servers. All four are multimodal. Text and image input across the entire family. The two smaller models, E2B and E4B, also handle audio natively which is unusual at that size. Context window sits at 128K tokens for the small models and 256K for the larger two. Every model in the family supports function calling out of the box, which matters if you are building agents. Every model also has a thinking mode you can toggle, so you get chain of thought reasoning without a separate model.
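The two agent-facing features called out above, function calling and the togglable thinking mode, can be sketched as a local chat request. A minimal sketch, assuming an Ollama-style `/api/chat` JSON body: the model tag `gemma4:e4b` and the `get_weather` tool are hypothetical placeholders, so check `ollama list` and your own tool schema before using this.

```python
import json

def build_chat_request(prompt: str, think: bool = True) -> dict:
    """Assemble a JSON body for POST http://localhost:11434/api/chat,
    exercising function calling ("tools") and the thinking-mode toggle."""
    return {
        "model": "gemma4:e4b",        # assumed model tag
        "think": think,               # toggle chain-of-thought reasoning
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{                   # OpenAI-style function schema
            "type": "function",
            "function": {
                "name": "get_weather",   # hypothetical example tool
                "description": "Current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

# Build one request with reasoning switched off, ready to serialize and send.
body = build_chat_request("What's the weather in Lisbon?", think=False)
payload = json.dumps(body)
```

Because both features live in the request body rather than in a separate model, an agent loop can flip `think` per call, cheap tool dispatch with it off, harder planning steps with it on.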

Firethering

💻 NVIDIA and Google present Gemma 4, the new generation for PCs and edge devices. A leap into the future! #Innovation #Gemma4

🔗 https://www.tomshw.it/hardware/nvidia-e-google-gemma-4-arriva-su-pc-e-dispositivi-edge-2026-04-03

NVIDIA and Google: Gemma 4 arrives on PCs and edge devices

NVIDIA and Google are optimizing the Gemma 4 models for NVIDIA GPUs, from RTX PCs and DGX Spark down to Jetson, enabling local agentic AI on edge devices and workstations.

Tom's Hardware

Gemma 4: open models for agents that run on a smartphone

Gemma 4, released by Google DeepMind, is an open model family that runs autonomous agents on smartphones and Raspberry Pi. Its Apache 2.0 license allows unrestricted commercial use.

https://aisparkup.com/posts/10798

📌 Gemma 4: Google revolutionizes open-source AI with Apache 2.0 | 4 multimodal models, 140 languages, and performance beating models 20x larger
https://gomoot.com/gemma-4-google-porta-lai-a-tutti-con-la-licenza-apache-2-0/

#AI #gemma4 #news

Gemma 4 represents Google's most intelligent open models, designed for advanced reasoning and agentic workflows.

#gemma4 #ArtificialIntelligence
https://youtu.be/jZVBoFOJK-Q

🤖 Gemma 4: Google launches its new open-source AI models, and here's how to test them

👉 https://www.justgeek.fr/gemma-4-google-modeles-ia-open-source-148771/

#Gemma4 #Google #AI #OpenSource

Gemma 4: Google launches its new open-source AI models, and here's how to test them

Gemma 4: Google's new open-source AI models. Performance, VRAM requirements, and how to test them easily.

JustGeek

💎 Discover Gemma 4: a new dimension for open models at this weight. Power and precision like never before. #Gemma4 #OpenModel💪 #socialmedia #artificialintelligence #technology

🔗 https://aibay.it/notizie/gemma-4-i-modelli-open-piu-capaci-per-il-peso-2026-04-03

Gemma 4: the most capable open models for their weight

Google DeepMind launches Gemma 4, a new family of open-source models with an advanced architecture, an Apache 2.0 license, and over 400 million downloads from the p

AiBay
Well, no: not even a local #Gemma4 is for me. The degree of knowledge gaps and hallucination is still considerable (even with the largest model).