0xMarioNawfal (@RoundtableSpace)

An interesting case: a 14-year-old developer built a $13 offline AI device and used it to access a locked game's debug system. It shows the potential of low-cost offline AI hardware.

https://x.com/RoundtableSpace/status/2048730125840331087

#offlineai #aidevice #hardware #opensource #gaming

0xMarioNawfal (@RoundtableSpace) on X

14 YEAR OLD BUILT A $13 OFFLINE AI DEVICE AND ACCESSED LOCKED GAME DEBUG SYSTEMS

X (formerly Twitter)

Ivan Fioravanti ᯅ (@ivanfioravanti)

He says Qwen3.6 27B is still his preferred local model — a hands-on opinion on choosing a large language model for local use.

https://x.com/ivanfioravanti/status/2048674164278542561

#qwen #localmodel #llm #offlineai #aimodels

Ivan Fioravanti ᯅ (@ivanfioravanti) on X

For the time being Qwen3.6 27B is still my preferred local model.

And you, don't you find it curious
to talk about "mastery" when it's handled by a black box,
about sovereignty when it's outside your control?
It drives me crazy.
#SouverainetéNumérique #DigitalSovereignty #OfflineAI #LocalFirst #DataPrivacy

You can use Gemma 4, the newly released #ai model by #google, fully #local on your device. This means that, after the download, you don't need internet to use the AI and conversations are not sent to Google, which is a huge #privacy win.
You can download the model via the edge gallery app without login.

I'm not associated with Google in any way.

Do you use AI locally on your device?

#gemma4 #googleai #localai #offlineai #PrivacyWins #Ai #dataprivacy #DataProtection #privateai

Yes. — 40%
Never. — 10%
No, but maybe I'll give it a try. — 50%
Unsure. — 0%
Poll ended.

Google Gemma 4 Runs Natively on iPhone with Full Offline AI Inference

https://www.gizmoweek.com/gemma-4-runs-iphone/

#HackerNews #GoogleGemma4 #iPhone #OfflineAI #AIInference #MobileTech

Foundry Local is now Generally Available | Microsoft Foundry Blog

Ship local AI to millions of devices - fast, private on-device inference with no per-token costs.

Microsoft Foundry Blog

Rohan Paul (@rohanpaul_ai)

A case was shared of running Google's Gemma 4 E2B model fully offline on a Galaxy S25 Ultra with thinking mode on. It cites an architecture of about 5.1B total parameters with effective performance around the 2B level, showing the potential of on-device mobile inference.

https://x.com/rohanpaul_ai/status/2040830938448609658

#google #gemma #ondeviceai #offlineai #llm

Rohan Paul (@rohanpaul_ai) on X

Somebody is running Google's Gemma 4 E2B model on a Galaxy S25 Ultra with thinking mode on, fully offline. The speed is nuts. The model uses per-layer embeddings, resulting in about 5.1B total parameters but effective performance around 2B.

🤔 Oh wow, a server that lets you hoard Wikipedia like a digital doomsday prepper! 🚀 Because clearly, what the world needs is more offline AI servers for when the internet apocalypse hits. 📚🔌 Let's just hope it's worth all those "free" hours fiddling with GitHub. Who needs the internet, anyway? 🌐🙄
https://www.projectnomad.us #WikipediaHoarding #OfflineAI #DigitalDoomsdayPrepper #GitHubProjects #InternetApocalypse #HackerNews #ngated
Project NOMAD - Knowledge That Never Goes Offline

A self-contained offline server packed with encyclopedic knowledge, local AI, and essential tools — ready when the internet isn't.

Project NOMAD
Just ran Whisper (OpenAI) completely locally on my system (RX 6700 XT / 16 GB RAM).

Whisper is an open source speech recognition model that can transcribe audio, generate subtitles, and even translate between languages.

Test video: The Reason Why Cancer is so Hard to Beat by Kurzgesagt - In a Nutshell
(https://www.youtube.com/watch?v=uoJwt9l-XhQ)

Setup:

- Whisper installed via pip
- Model: small (fast, good enough for English)
- GPU acceleration via ROCm

Result:
~98% accurate transcription with only a few minor errors, already solid for generating subtitles.

Next steps / possibilities:

- Auto-generate subtitles (.srt)
- Correct subtitles with a local LLM
- Translate speech
- Burn subtitles directly into videos
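The .srt step above can be sketched with the standard library alone. This is a minimal sketch, not the author's actual pipeline: the segment dicts mimic the shape of Whisper's result["segments"] output ("start"/"end" in seconds, "text"), and the sample data is hypothetical.

```python
def srt_timestamp(seconds: float) -> str:
    # SRT timestamps use the form HH:MM:SS,mmm
    total_ms = int(round(seconds * 1000))
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def segments_to_srt(segments) -> str:
    # segments: iterable of dicts with "start", "end", "text"
    # (the shape Whisper returns in result["segments"])
    lines = []
    for i, seg in enumerate(segments, start=1):
        lines.append(str(i))
        lines.append(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}")
        lines.append(seg["text"].strip())
        lines.append("")  # blank line terminates each cue
    return "\n".join(lines)

# Hypothetical sample segment for illustration
demo = [{"start": 0.0, "end": 2.5, "text": " Cancer is hard to beat."}]
print(segments_to_srt(demo))
```

Feeding the real result["segments"] from a transcription run into segments_to_srt and writing the string to a .srt file gives subtitles most players load directly.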

Video workflow:

- Recorded with OBS
- Edited in Kdenlive
- Transcoded with VAAPI (H.264)

No cloud, real hardware.
Everything runs on Linux, so anyone can set this up.
No GPU? No problem, you can also run it using PyTorch’s CPU backend, just much slower.

Background music: End of Me - Ashes Remain [Female Rock Cover by Kryx] (https://www.youtube.com/watch?v=E430M8lKim8)


#Whisper #OpenAI #ROCm #AMD #Linux #SpeechToText #Transcription #Subtitles #FOSS #OpenSource #OfflineAI #localai #Fediverse #nocloud
Build Your Own UNCENSORED AI Running Completely Offline
Privacy-conscious users are moving away from the cloud. We show you how to set up a powerful, uncensored LLM running locally on your own hardware. With no internet connection required, this is the ultimate way to maintain total data sovereignty while utilizing the power of modern AI.
#OfflineAI #Privacy #DataSovereignty #LocalLLM #TechTips #CyberSecurity #UncensoredAI
https://www.technology-news-channel.com/build-your-own-uncensored-ai-running-completely-offline/
Build Your Own UNCENSORED AI Running Completely Offline

In this video, I show you how to run a fully unrestricted and private, offline LLM directly on your own[...]

Technology News