🚀 LMIM OS v1.20 BETA — Windows + Linux!

Autonomous AI, WORKS OUT OF THE BOX:
✅ One-click Win installer / Linux AppImage
✅ Build code • Schedule • Messaging
✅ WhatsApp/TG/Slack/Discord/Email
✅ Campaign Blaster • Local OR cloud
✅ 100% FREE • Open Source (MIT)

https://lmim.tech

#LocalAI #OpenSource #Windows #Linux

Just ran Demucs completely locally on my system (RX 6700 XT / 16 GB RAM).

Demucs is an open source AI model for music source separation, developed by Meta. It can split a full song into individual stems like vocals, drums, bass, and other instruments, making it useful for remixing, transcription, and audio analysis.

Test track: Fear of the Dark by Iron Maiden
(https://www.youtube.com/watch?v=bePCRKGUwAY)

Setup:

- Demucs installed via pip
- Model: htdemucs (default)
- Input converted to WAV using ffmpeg
- GPU acceleration via ROCm

Setup is tricky because Demucs pins older PyTorch versions, so you have to install its dependencies manually and pass "--no-deps" to pip so it doesn't replace your (ROCm) PyTorch build.
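A rough sketch of that workaround (package names and versions shift between releases, so treat this as an outline rather than exact commands):

```bash
# Keep the existing ROCm build of PyTorch: install Demucs without its pinned deps
pip install --no-deps demucs
# then add whatever is still missing by hand; `pip check` lists unmet requirements
pip check

# Convert the source file to WAV first (file names are placeholders)
ffmpeg -i track.m4a -ar 44100 track.wav

# Separate with the default htdemucs model; ROCm PyTorch exposes the GPU as "cuda"
demucs -n htdemucs -d cuda track.wav
```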

Result:
Very clean vocal separation in most parts. Some artifacts appear during very loud or distorted sections (e.g. emotional peaks or shouting).

Next steps / possibilities:

- Normalize and filter audio before separation
- Extract vocals for transcription or remixing
- Create karaoke / instrumental versions
- Combine with Whisper for lyrics (see the sketch after this list)
- Batch processing for datasets
- Model: htdemucs_ft (higher quality)
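
For the Whisper idea above, a minimal sketch of the hand-off, assuming the openai-whisper CLI is installed and Demucs's default output layout (separated/<model>/<track>/):

```bash
# Keep only the vocal stem, then transcribe it for lyrics
demucs --two-stems=vocals -n htdemucs track.wav
whisper separated/htdemucs/track/vocals.wav --model small --output_format txt
```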

Video workflow:

- Recorded with OBS
- Edited in Kdenlive
- Transcoded with VAAPI (H.264)

No cloud, real hardware.
Everything runs on Linux, so anyone can set this up.
Works on CPU as well, but much slower.

#Demucs #AI #MachineLearning #AudioSeparation #MusicAI #OpenSource #Linux #ROCm #AMD #DeepLearning #AudioProcessing #Vocals #Karaoke #StemSeparation #SelfHosted #NoCloud #FOSS #Tech #LocalAI #MetaAI

I built an AI persona for my blog. This week, he published his first post.

His name is BartBot. He monitors RSS feeds, scores articles with a local LLM, and surfaces the ~0.7% worth reading.
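
(A purely hypothetical sketch of that kind of scoring step, assuming an Ollama-style endpoint on localhost; the post doesn't say which local LLM stack BartBot actually runs.)

```bash
# Ask a local model to score one feed item; model name and prompt are illustrative
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Score this article 0-10 for whether it is worth reading. Reply with the number only.\n\n<article summary here>",
  "stream": false
}'
```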

I'm not sure if he's useful, annoying, or both. He's not sure either.

https://jamalhansen.com/blog/the-content-curator/

#LocalAI #AITools #BuildInPublic #ContentCuration #PKM #RSSFeed #BartBot

Running two PicoClaw instances on one machine looked easy at first.

I thought duplicating the repo would be enough.
The real blocker was Docker runtime identity:
- container_name collisions
- shared bot token conflicts
- possible host port clashes later

What worked:
- separate repos
- separate data/config
- no container_name
- explicit Compose project names like picoclaw1 and picoclaw2
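
A minimal sketch of that layout (directory names are illustrative; each checkout carries its own config, data directory, and bot token):

```bash
# Distinct Compose project names keep container, network, and volume names apart
cd ~/picoclaw1 && docker compose -p picoclaw1 up -d
cd ~/picoclaw2 && docker compose -p picoclaw2 up -d
# If both stacks publish host ports, give each project a different host port mapping
```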

Short write-up here:
https://funkyidol.in/blog/deploying-multiple-picoclaw-instances-on-a-single-machine-with-docker/

#Docker #SelfHosting #DevOps #LocalAI #PicoClaw

The Tiiny AI Pocket Lab: Goodbye Cloud Subscriptions! Hello, 120B Parameters in My Pocket 🛠️🦾

I just got my hands on the Tiiny AI Pocket Lab, and it's officially breaking the "Cloud dependence" loop.

120B Parameters? Locally.
Internet? Not needed.
Privacy? 100%.

While everyone else is paying $20/month to let Big Tech read their prompts, this 300g beast is running Llama 3 and DeepSeek locally at 20+ tokens/sec.

It's got 80GB of RAM (yes, in a pocket device) and runs at just 65W. Guinness World Record holder for a reason. 🏆

The Tiiny AI Pocket Lab is the first credible challenge to the cloud-only AI model. For enterprises and researchers, the value proposition is simple:

Security: Zero-latency, zero-cloud data processing.
Cost: No per-token fees or monthly subscriptions.
Power: 80GB LPDDR5X RAM in a 300g form factor.

This isn't just a "mini-PC." It's a shift toward Edge Intelligence. When you can run a 120B model locally at 65W, the "setup tax" of AI disappears.

The future isn't in a data center; it's in your palm.

Is your organization ready for the shift from Cloud AI to Private AI?

https://www.nbloglinks.com/the-tiiny-ai-pocket-lab-goodbye-cloud-subscriptions-hello-120b-parameters-in-my-pocket/

#LocalAI #OpenSource #TechHardware #PrivacyFirst #TiinyAI #CES2026 #ArtificialIntelligence #EdgeComputing #DataPrivacy #FutureOfWork #TechLeadership #gadget

cedric (@cedric_chee)

MiniMax M2.7 is set to be released as open source, with the weights to follow within about two weeks. The announcement includes the assessment that it may be the best model you can realistically run in a home setup, making it notable news for large models that can run locally.

https://x.com/cedric_chee/status/2035719456597688603

#minimax #opensource #llm #weights #localai

MiniMax M2.7 is committed to open source. Weights are coming in ~2 weeks. It might be the best model you can realistically run at home. I run M2.5 at home now.

Github Awesome (@GithubAwesome)

flash-moe, an engine for running models on the order of 400B parameters locally, has been introduced. It works even on a MacBook Pro with 48GB of RAM; the key point is that instead of loading the 209GB model entirely into memory, it streams the weights from SSD to the GPU whenever they are needed.

https://x.com/GithubAwesome/status/2035562403178438723

#localai #llm #moe #inference #macbook

Running a 400-billion parameter model locally usually means a server rack. Someone just did it on a MacBook Pro with 48GB of RAM. The engine is called flash-moe. Instead of loading a 209GB model into memory, it streams weights from SSD to GPU on demand, pulling around five tokens per second.

🚀 LMIM OS v1.20 — Windows version is dropping today!

Autonomous AI agent that runs 100% locally:
✅ Build & debug code
✅ Schedule meetings + WhatsApp confirmations
✅ Multi-platform messaging (WA/TG/Slack/Discord)
✅ Persistent memory • No cloud • No setup

🪟 Windows 10/11 (64-bit) installer
🐧 Linux AppImage also available

✨ 100% FREE • Open Source (MIT)

Download: https://lmim.tech

#LocalAI #OpenSource #Windows #Linux #Privacy #FOSS

I've been trying out various LLMs locally with LM Studio on my Mac, and the one I'm happiest with is the Qwen 3.5 35B A3B model.

I'm no AI expert, so I can't say why it differs from the other models, but the answer quality and response speed are genuinely satisfying. Best of all, when it hits something it doesn't know, it doesn't spin a story; it just says "I don't know this." lol

#AI #localAI #localLLM #Qwen

🚀 LMIM OS v1.20 is live!

Autonomous AI agent for Linux:
✅ Builds & debugs code
✅ Schedules meetings
✅ Sends messages (WA/TG/Slack/Discord)
✅ 100% local • No cloud
✅ AppImage • No setup

https://lmim.tech

#AI #OpenSource #Linux #Privacy #FOSS #LocalAI

LMIM OS — Lean Mean Inference Machine. Autonomous AI agent that builds, codes, schedules, and communicates. Works right out of the box.