Ivan Fioravanti ᯅ (@ivanfioravanti)
Expects the M5 Ultra to ship with roughly 1200 GB/s of memory bandwidth; combined with the Neural Engines, that should make local inference on Apple Silicon dramatically faster. He hopes it will speed up image-generation workloads, which are slow on the current generation.
Mark Gadala-Maria (@markgadala)
Breaking-style post: developer "maderix" has fully reverse-engineered the Apple Neural Engine and open-sourced the result, making it possible to train neural networks directly on the Neural Engine in iPhones and MacBooks. Getting a chip designed for inference to handle training is a striking result, with big implications for on-device learning, privacy, and mobile AI.

🚨 BREAKING: YOU CAN NOW TRAIN AI MODELS DIRECTLY ON YOUR IPHONE AND MACBOOK Developer "maderix" just open-sourced a full reverse engineering of Apple's Neural Engine, the chip Apple built for inference only, and got it training neural networks: >Apple's Neural Engine exists in
AI-Driven ‘Guitar Wiz’ App Transforms the iPhone and Apple Watch into a World-Class Music Tutor
#TycoonWorld #GuitarWiz #AIMusic #MusicTech #AIInnovation #ArtificialIntelligence #AppleEcosystem #iPhoneApp #AppleWatch #EdTech #MusicEducation #DigitalLearning #StartupIndia #BengaluruStartups #TechInnovation #MobileAppDevelopment #AIStartup #NeuralEngine #FutureOfLearning #CreativeTechnology #InnovationInMusic #MusicIndustryTech #GlobalStartups #AppInnovation
🚀 Mac Vision Tools: a macOS menu bar app that runs CoreML models on the Neural Engine. Features: object detection (YOLO12n), screen lock when two people are detected (Privacy Guard), facial emotion recognition, and a Pomodoro timer that tracks attention. Fully local processing with low battery drain. #MacVisionTools #AI #Swift #CoreML #NeuralEngine #Privacy #Pomodoro #CôngNghệ
https://www.reddit.com/r/SideProject/comments/1qcjnzo/mac_vision_tools_a_menu_bar_app_for_fun_tasks/
Mac Vision Tools: a macOS menu bar app that runs CoreML models on the Neural Engine for on-device computer vision. Features: object detection (YOLO12n), Privacy Guard that auto-locks the screen when two people are present, Emotion Vibes facial emotion recognition, and a Focus Timer (Pomodoro) that tracks attention. Fully local processing; no data leaves the device. #MacVisionTools #ComputerVision #Swift #Apple #AI #NeuralEngine #CôngNghệ #MởNguồn
https://www.reddit.com/r/opensource/comments/1qcj4bv/mac_vision_tools_a_simp
I Wanted Podcast Transcriptions. iOS 26 Delivered (and Nearly Melted My Phone).
Testing iOS 26’s on-device speech recognition: faster than realtime, but your phone might disagree
Apple’s iOS 26 introduced SpeechTranscriber – a promise of on-device, private, offline podcast transcription. No cloud, no subscription, just pure silicon magic. I built it into my RSS reader app. Here’s what actually happened.
The Setup
The Good News: It’s Actually Fast
| Episode | Duration | Transcription Time | Realtime Factor | Words | Words/sec |
|---|---|---|---|---|---|
| Talk Show #436 | 1h 35m | 15m 22s | 6.2x | 17,303 | 18.8 |
| Upgrade #594 | 1h 46m | 20m 4s | 5.3x | 19,975 | 16.6 |
| ATP #668 | 1h 54m | 24m 49s | 4.6x | 23,892 | 16.0 |

4.6x to 6.2x faster than realtime. Nearly 2-hour podcasts transcribed in under 25 minutes. The Neural Engine absolutely crushes this.
The Pipeline Breakdown
The transcription happens in two phases (example from Upgrade #594):
The Bad News: Thermal Throttling Is Real
During my first test, I made a critical mistake: running two transcriptions simultaneously while charging.
The result? My phone got noticeably hot. Battery optimization warnings appeared. And performance dropped dramatically:
| Condition | Realtime Factor | Performance Hit |
|---|---|---|
| Single transcription | 4.6x – 6.2x | Baseline |
| Two parallel transcriptions | 2.7x | 46% slower |

The logs showed alternating progress updates as iOS juggled both workloads:
🎙️ 📝 Progress: 34% - 88 segments // Transcription A
🎙️ 📝 Progress: 44% - 98 segments // Transcription B
🎙️ 📝 Progress: 37% - 98 segments // Transcription A

The Neural Engine throttles hard when thermals get bad. When I ran a single transcription without charging, the ETA stayed consistent and completed on schedule.
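Given the 46% penalty, it's worth serializing jobs so only one transcription ever runs at a time. A minimal sketch using Swift concurrency; `TranscriptionQueue` and the `transcribe(fileURL:)` call are illustrative names, not the app's actual code:

```swift
import Foundation

/// Minimal async "one at a time" gate: runs jobs strictly in sequence.
/// Illustrative sketch only, not from the post's app.
actor TranscriptionQueue {
    private var isRunning = false
    private var waiters: [CheckedContinuation<Void, Never>] = []

    /// Runs `job` once the previous job has finished.
    func run<T>(_ job: () async throws -> T) async rethrows -> T {
        if isRunning {
            // Park until the current job releases the slot.
            await withCheckedContinuation { waiters.append($0) }
        }
        isRunning = true
        defer {
            if waiters.isEmpty {
                isRunning = false
            } else {
                waiters.removeFirst().resume()  // hand the slot to the next waiter
            }
        }
        return try await job()
    }
}

// Usage (hypothetical):
// let queue = TranscriptionQueue()
// let text = try await queue.run { try await transcribe(fileURL: episodeURL) }
```

Serializing also keeps the thermal baseline comparable between runs, which matters for the benchmark numbers later in the post.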
The Ugly: iOS Kills Background Tasks
Even with BGTaskScheduler, iOS terminated my background transcription:
🎙️ Background transcription task triggered by iOS
⏱️ Background transcription task expired (iOS terminated it)

For long podcasts, you need to keep the app in the foreground. iOS's aggressive app suspension doesn't play nice with hour-long ML workloads.
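For reference, the BGTaskScheduler setup looks roughly like this. This is a sketch, not the post's actual code: the task identifier and queued-episode handling are hypothetical, and the identifier must also appear under `BGTaskSchedulerPermittedIdentifiers` in Info.plist:

```swift
import BackgroundTasks

let transcriptionTaskID = "com.example.podcast.transcribe"  // hypothetical ID

func registerBackgroundTranscription() {
    // Must be called before the app finishes launching.
    _ = BGTaskScheduler.shared.register(forTaskWithIdentifier: transcriptionTaskID,
                                        using: nil) { task in
        let work = Task {
            // e.g. try await transcribe(fileURL: nextEpisodeURL)  (assumption)
            task.setTaskCompleted(success: true)
        }
        // iOS calls this just before terminating the task – exactly the
        // "task expired" log above. Cancel and report failure cleanly.
        task.expirationHandler = {
            work.cancel()
            task.setTaskCompleted(success: false)
        }
    }
}

func scheduleBackgroundTranscription() {
    let request = BGProcessingTaskRequest(identifier: transcriptionTaskID)
    request.requiresExternalPower = true        // heavy ML work: only while charging
    request.requiresNetworkConnectivity = false // the file is already downloaded
    try? BGTaskScheduler.shared.submit(request)
}
```

Even with this in place, iOS decides when and for how long the task runs, which is why hour-long workloads still get expired.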
AI Chapter Generation: The Real Win
Here’s where it gets interesting. Once you have a transcript, generating AI chapters is blazingly fast.
Note: ATP, Talk Show, and Upgrade already include chapters via ID3 tags – this is an experiment to see what on-device AI can generate. But Planet Money doesn’t have chapters, making it a real use case where AI generation adds genuine value.
And we’re not alone in this approach. As Mike Hurley and Jason Snell discussed on Upgrade #594, Apple is doing exactly this in iOS 26.2’s Podcasts app:
“One of the most interesting things to me is the changes in the podcast app in 26.2… AI generated chapters for podcasts that do not support them… They are creating their own chapters based on the topics.”
Jason nailed the insight: “The transcripts [are] a feature that unlocks a lot of other features, because now they kind of understand the content of the podcast.”
That’s exactly what we’re doing here – using on-device transcription as a foundation for AI-powered chapter generation:
| Episode | Transcript Size | Chapters Generated | Time |
|---|---|---|---|
| ATP #669 | 143,603 chars (~26,387 words) | 27 chapters | 2m 1s |
| Talk Show #436 | ~17,303 words | 13 chapters | 1m 40s |

The AI identified topic changes, extracted key phrases for timestamps, and generated descriptive chapter titles – all in under 2 minutes for multi-hour podcasts.
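The post doesn't show the chapter-generation code itself. One plausible way to do it on-device in iOS 26 is the FoundationModels framework; this sketch is an assumption about the approach, not the author's implementation, and a multi-hour transcript would need to be split into chunks that fit the model's context window:

```swift
import FoundationModels

// Assumption: chapters are generated by prompting the on-device system
// language model with timestamped transcript text. Not the post's code.
@available(iOS 26.0, *)
func generateChapters(fromTranscriptChunk chunk: String) async throws -> String {
    // On-device model: no transcript data leaves the phone.
    let session = LanguageModelSession(instructions: """
        You segment podcast transcripts into chapters. For each topic change, \
        output one line in the form "start-end: short descriptive title".
        """)
    let response = try await session.respond(to: chunk)
    return response.content
}
```

In practice you would feed timestamped chunks through the session one at a time and merge adjacent chapters with near-identical titles, since a 143,603-character transcript far exceeds a single prompt.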
Sample generated chapters:
📍 0:00-2:18: Snowfall in Richmond
📍 42:43-49:11: Intel-Apple Chip Collaboration Speculations
📍 62:46-65:00: Executive Transitions at Apple
📍 95:56-105:04: Core Values and Apple's Evolution

The Code
Using iOS 26’s SpeechTranscriber is surprisingly clean:
```swift
@available(iOS 26.0, *)
func transcribe(fileURL: URL) async throws -> String {
    let locale = try await findSupportedLocale(preferring: "en")
    let transcriber = SpeechTranscriber(locale: locale, preset: .transcription)
    let analyzer = SpeechAnalyzer(modules: [transcriber])
    let audioFile = try AVAudioFile(forReading: fileURL)
    if let lastSample = try await analyzer.analyzeSequence(from: audioFile) {
        try await analyzer.finalizeAndFinish(through: lastSample)
    }
    var transcription = ""
    for try await result in transcriber.results {
        if result.isFinal {
            transcription += String(result.text.characters) + " "
        }
    }
    return transcription
}
```

Fast vs Accurate Mode: A Surprising Finding
iOS 26 offers two main transcription presets:
- .transcription – Standard accurate mode
- .progressiveTranscription – “Fast” mode with progressive results

I assumed Fast mode would be… faster. The results were mixed.
| Episode | Mode | Condition | Realtime Factor | Words/sec |
|---|---|---|---|---|
| Talk Show #436 | Accurate | Solo, cold | 6.2x | 18.8 |
| Upgrade #594 | Accurate | Solo | 5.3x | 16.6 |
| ATP #668 | Accurate | Solo | 4.6x | 16.0 |
| Planet Money | Fast | Solo | 3.8x | 12.2 |
| Planet Money | Accurate | Solo, warm | 3.5x | 11.4 |

On the same 31-minute episode, Fast mode (3.8x) was only slightly faster than Accurate (3.5x). But both were significantly slower than the longer episode tests – likely due to residual heat from previous runs.
The “progressive” preset appears optimized for live/streaming transcription. For batch processing of pre-recorded files, results are similar when thermals are equivalent.
Lesson: Don’t assume “fast” means faster for your use case. Profile both.
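Profiling the two presets is easy to automate. A sketch of a timing harness, assuming a hypothetical `transcribe(fileURL:preset:)` variant of the earlier listing (the function and preset-type names are illustrative):

```swift
import AVFoundation
import Speech

// Computes a realtime factor for one preset on one audio file.
// Sketch only: transcribe(fileURL:preset:) is an assumed variant of the
// earlier listing, not an API shown in the post.
@available(iOS 26.0, *)
func realtimeFactor(fileURL: URL,
                    preset: SpeechTranscriber.Preset) async throws -> Double {
    // Audio duration in seconds, from frame count and sample rate.
    let audioFile = try AVAudioFile(forReading: fileURL)
    let audioSeconds = Double(audioFile.length) / audioFile.fileFormat.sampleRate

    // Wall-clock time for the transcription itself.
    let clock = ContinuousClock()
    let start = clock.now
    _ = try await transcribe(fileURL: fileURL, preset: preset)
    let elapsed = clock.now - start
    let wallSeconds = Double(elapsed.components.seconds)  // whole-second resolution

    return audioSeconds / wallSeconds  // e.g. 6.2 means 6.2x realtime
}
```

Run each preset on a cooled-down device: the Planet Money numbers above show residual heat can swamp the preset difference entirely.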
Recommendations
- .transcription for downloaded files – It’s actually faster for batch processing

The Verdict
iOS 26’s on-device transcription is genuinely impressive:
The main gotchas are thermal management and iOS’s background task limitations. But for a first-generation on-device transcription API? Apple’s Neural Engine delivers.
Now if you’ll excuse me, I have 26,387 words of ATP to search through.
Tested on iPhone 17 Pro Max running iOS 26.x. Your mileage may vary on older devices.
Raw Test Data
Upgrade #594
ATP #668
ATP #669 Chapter Generation
Talk Show #436
Talk Show #436 Chapter Generation
Planet Money – Chicago Parking Meters (Fast Mode)
.progressiveTranscription (Fast)

Planet Money Chapter Generation (Fast Mode)
Planet Money – Accurate Mode (Parallel Stress Test)
.transcription (Accurate)

Planet Money – Accurate Mode (Solo, Warm Device)
.transcription (Accurate)

Device Observations
Cannot use modules with unallocated locales [en_US (fixed en_US)] – appears in logs but doesn’t block functionality

#AppleIntelligence #iOS26 #NeuralEngine #onDeviceML #podcastTranscription #SpeechRecognition #SpeechTranscriber #Swift
Rivian is rolling out its own AI silicon—a neural‑engine built for autonomous driving that balances power and efficiency while meeting ASIL safety standards. With INT8 performance measured in TOPS, it rivals traditional GPUs and TPUs, promising a more open, car‑centric AI stack. #RivianAI #AISilicon #NeuralEngine #ASIL
🔗 https://aidailypost.com/news/rivian-builds-ai-chips-driving-efficiencyperformance-asil-compliance
Apple M5 Pro and M5 Max: what to expect from these chips
#AppleM5 #AppleSilicon #Chip #CPU #Geekbench #GPU #M5Max #M5Pro #Mac #MacBookPro #NeuralEngine #Prestazioni #Processori #TechNews #Tecnologia
https://www.ceotech.it/apple-m5-pro-e-m5-max-ecco-cosa-aspettarsi-da-questi-chip/
M4 vs. M5: Is Apple’s new processor worth it?
With the new M5 chip, Apple takes its processors to a new level. But is the upgrade from the M4 to the M5 really worth it for you?
Significant performance gains with the M5
Apple has released the M5 chip as the successor to the M4, introduced in May 2024, and promises…
https://www.apfeltalk.de/magazin/news/m4-vs-m5-lohnt-sich-apples-neuer-prozessor/
#Mac #News #Apple #GPU #IPadPro #KI #M4Chip #M5Chip #MacBookPro #NeuralEngine
Apple’s M5 chip unites Vision Pro, MacBook Pro, and iPad Pro in massive AI performance upgrade
https://web.brid.gy/r/https://nerds.xyz/2025/10/apple-m5-vision-pro-macbook-pro-ipad-pro/