Final day at #EVS2026! 🚀

Don't miss @MosChip® at @cadence Booth #402. Experience our secure, Voice-Processed AI Chatbot demo featuring an SLM (small language model) and real-time speech recognition.

See you in Santa Clara!📍

#EmbeddedAI #EdgeAI #AI #VoiceAI #EmbeddedVisionSummit #EmbeddedSystems

@mark-carney.bsky.social @[email protected] With #privacybydesign and #edgeAI evolving in #AppleCloud, #Huawei #Mindspore, and various EU projects built on #Linux (ultimately a Finnish-originated OS), some of which #NokiaCanada is already involved in... why the FUCK are you funding obsolete #nVidia AI?

RE: https://bsky.app/profile/did:plc:bs5xd3cr675vdr4mabrs5ktq/post/3lnypcqfrzs2n

Firefly AIBOX-K3 – An Edge AI mini PC powered by SpacemiT K3 RISC-V SoC

Back in July last year, SpacemiT unveiled the SpacemiT K3 SoC, and early system information and benchmarks followed around January this year. The company has just officially launched the K3 Pico-ITX SBC, now available through various distributors, and Firefly has launched its own K3 hardware with the AIBOX-K3, a complete industrial-grade RISC-V edge computing box. The AIBOX-K3 Edge AI mini PC is built around the SpacemiT Key Stone K3 octa-core processor and features an integrated AI engine that delivers up to 60 TOPS of compute performance, making it suitable for local LLM inference and edge AI applications.

Firefly AIBOX-K3 specifications:
- SoC – SpacemiT K3
  - CPU – 8x 64-bit RISC-V X100 "big" cores clocked up to 2.4 GHz, RVA23 compliance; 130 KDMIPS performance (similar to RK3588)
  - AI cores – 8x RISC-V A100 AI cores with support for up to 1024-bit RVV 1.0 parallel computing, optimized for matrix operations
  - GPU – Imagination

CNX Software - Embedded Systems News
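A quick sanity check on the "suitable for local LLM inference" claim above. This is a back-of-envelope sketch, not a vendor benchmark: token generation on edge devices is usually memory-bandwidth bound, and the bandwidth figure, model size, and efficiency factor below are all assumptions, not SpacemiT or Firefly specs.

```python
# Rough decode-speed ceiling for a dense model streamed from DRAM:
# tokens/s ~= usable memory bandwidth / bytes of weights read per token.

def tokens_per_second(params_billions: float, bits_per_weight: int,
                      bandwidth_gb_s: float, efficiency: float = 0.6) -> float:
    """Back-of-envelope token rate for memory-bandwidth-bound decoding."""
    model_gb = params_billions * bits_per_weight / 8  # quantized weight size in GB
    return bandwidth_gb_s * efficiency / model_gb

# Assumed numbers: ~50 GB/s of LPDDR5 bandwidth, a 7B model at 4-bit quantization.
rate = tokens_per_second(7, 4, 50.0)
print(f"~{rate:.1f} tokens/s ceiling")  # single-digit tokens/s, usable for chat
```

Under these assumptions a 7B model lands in the high single digits of tokens per second, which is the range where local chat-style inference starts to feel interactive.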
Geniatech launches Renesas RZ/V2N, RZ/V2H, and RZ/V2L OSM Size-M/L system-on-modules

Geniatech has introduced three OSM system-on-modules powered by Renesas RZ/V2N/V2H/V2L Cortex-A55/M33 microprocessors: the OSM Size-M (45x35mm) SOM-V2N-OSM, plus the OSM Size-L (45x45mm) SOM-V2H-OSM and SOM-V2L-OSM modules, all designed for edge AI and computer vision applications.

Geniatech SOM-V2N-OSM specifications:
- SoC – Renesas RZ/V2N
  - CPU – Quad-core Arm Cortex-A55 @ 1.8 GHz, Arm Cortex-M33 @ 200 MHz
  - GPU – Arm Mali-G31 3D graphics engine (GE3D) with OpenGL ES 3.2 and OpenCL 2.0 FP
  - VPU – Encode & decode: H.264 up to 1920x1080 @ 60 fps (Renesas specs, but SOMDEVICES also mentions up to 4K @ 30 fps); H.265 up to 3840x2160 @ 30 fps
  - AI accelerator – DRP-AI3, up to 4 dense TOPS / 15 sparse TOPS
- System memory – 8GB LPDDR4x RAM
- Storage – 64GB eMMC flash
- 476 LGA contacts with:
  - Display – 4-lane MIPI-DSI
  - Camera – 2x 4-lane MIPI CSI-2
  - Audio – 2x I²S
  - Networking – 2x Gigabit Ethernet


Heading to Embedded Vision Summit 2026? 🚀

Visit @MosChip® at @cadence Booth #402 to experience our Voice-Processed AI Chatbot demo and explore the future of embedded intelligence!

📍 May 11-13 | Santa Clara
#EVS #EdgeAI

Amit Kumar V. Swamy Irrinki Sombabu Gunithi

ICYMI 👉 Faster pipelines, smarter inference, and sharper playback.

How our multimedia engineering team helped shape GStreamer 1.28 with hardware acceleration, zero-copy improvements, HDR and color support, AI integration, and key codec, RTP, and WebRTC fixes: http://www.collabora.com/news-and-blog/news-and-events/16-contributors-cross-stack-improvements-collabora-work-gstreamer-128.html

#GStreamer #AIInference #ComputerVision #EdgeAI

16 contributors, cross-stack improvements: Collabora's work on GStreamer 1.28

Our multimedia engineering team delivered major improvements to GStreamer 1.28 including hardware acceleration and zero-copy pipelines, HDR and color support for Wayland, and more.

Collabora | Open Source Consulting

RVA23-compliant K3 Pico-ITX SBC and K3-CoM260 SoM feature SpacemiT K3 octa-core RISC-V AI SoC, up to 32GB RAM, 256GB UFS

https://fed.brid.gy/r/https://www.cnx-software.com/2026/05/11/rva23-pico-itx-sbc-spacemit-k3-octa-core-risc-v-ai-soc-up-to-32gb-ram-256gb-ufs/


SpacemiT has now officially launched the K3 Pico-ITX SBC and K3-CoM260 system-on-module with the RVA23-compliant SpacemiT K3 octa-core X100 CPU with up to 60 TOPS of AI performance, up to 32GB LPDDR5, 256GB UFS, and PCIe Gen3 x4 NVMe SSD support. The board also features an eDP connector, a 10GbE SFP+ cage, a Gigabit Ethernet RJ45 port, built-in WiFi 6 and Bluetooth 5.2 wireless connectivity, two USB Type-C connectors, four USB 2.0 ports, an M.2 Key-B socket coupled with a NanoSIM card slot for 4G LTE or 5G cellular connectivity, and more.

K3 Pico-ITX SBC specifications:
- System-on-Module – K3-CoM260
  - SoC – SpacemiT K3
    - CPU – 8x 64-bit RISC-V X100 "big" cores clocked up to 2.4 GHz, RVA23 compliance; 130 KDMIPS performance (similar to RK3588)
    - AI cores – 8x RISC-V A100 AI cores with support for up to 1024-bit RVV 1.0 parallel computing, optimized for matrix operations
    - GPU – Imagination Technologies BXM4-64-MC1 GPU with Vulkan 1.3, OpenCL


Maddie D. Reese (@maddiedreese)

Someone got a real transformer language model running locally on a stock Game Boy Color, with no ordinary PC or cloud involved. The model is a converted version of Karpathy's TinyStories-260K, and what makes it an impressive open-source/edge-AI demo is that the model runs entirely from the Game Boy cartridge and its ROM.

https://x.com/maddiedreese/status/2053293884323852636

#llm #edgeai #transformer #opensource #embeddedai

Maddie D. Reese (@maddiedreese) on X

I got a real transformer language model running locally on a stock Game Boy Color (thanks Codex)! No phone, PC, Wi-Fi, link cable, or cloud inference. • The cartridge boots a ROM, and the GBC runs the model itself. • The model is @karpathy’s TinyStories-260K, converted to

X (formerly Twitter)
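Why the Game Boy demo above is plausible at all comes down to the model's tiny size. A minimal sketch of the arithmetic, using assumed numbers (8-bit quantization, standard MBC5 cartridge and GBC work-RAM limits) rather than anything stated in the original post:

```python
# A 260K-parameter transformer quantized to 8 bits needs ~260 KB of
# weights, which fits comfortably in a banked Game Boy cartridge ROM
# (MBC5 cartridges address up to 8 MB), so only the activations need
# to squeeze into the console's 32 KB of work RAM.

PARAMS = 260_000
BITS = 8
weight_bytes = PARAMS * BITS // 8   # bytes of quantized weights
rom_limit = 8 * 1024 * 1024         # max MBC5-banked cartridge ROM
wram = 32 * 1024                    # GBC work RAM, for activations only

print(weight_bytes, weight_bytes <= rom_limit)  # 260000 True
```

In other words, the weights never need to fit in RAM at all; ROM banking lets the inference loop page them in a slice at a time.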

New blog post: Your Brain, But Better (and Local)

Forget sending all your deepest thoughts to the cloud; the real AI revolution is happening right on your own device, and it's set to change everything from managing your blood sugar to automating your life.

https://rhodzy.com/blog/your-brain-but-better-and-local

#localai #type1diabetes #insulin #biohacking #edgeai

rhodzy.com

A $1,999 Mac mini runs a 70B parameter model that a $4,000 Windows workstation physically cannot.
The reason: Apple Silicon's unified memory. No separate VRAM pool. No PCIe bottleneck. Just one shared memory for CPU, GPU, and Neural Engine.
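The arithmetic behind that claim is simple to reproduce. A rough sketch, where the 4-bit quantization and the 24 GB consumer-GPU VRAM figure are illustrative assumptions, not taken from the linked article:

```python
# Weight footprint of a quantized dense model, in GB:
# params (billions) * bits per weight / 8 bits per byte.

def weights_gb(params_billions: float, bits: int) -> float:
    return params_billions * bits / 8

model = weights_gb(70, 4)       # ~35 GB of weights for a 70B model at 4-bit
consumer_vram = 24              # e.g. a typical high-end 24 GB discrete GPU
print(model, model > consumer_vram)  # 35.0 True
```

35 GB of weights overflows any single consumer GPU's VRAM, but fits easily in a Mac mini configured with 48 or 64 GB of unified memory that the CPU, GPU, and Neural Engine all share.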
Full breakdown: https://www.buysellram.com/blog/why-mac-mini-is-the-surprising-frontrunner-for-local-ai-agents/

#ArtificialIntelligence #AI #LocalAI #MacMini #AppleSilicon #LLM #AIAgents #MachineLearning #EdgeAI #TechInfrastructure #DataPrivacy #Automation #AIHardware

Why Mac mini Is the Surprising Frontrunner for Local AI Agents

Why does a $1,999 Mac mini outrun a $4,000 Windows workstation for local AI agents? Apple Silicon's unified memory changes the math. A practical hardware guide for 2026.

BuySellRam