Daniel Han (@danielhanchen)

A brief teaser posting "MLX coming soon :)", signaling an upcoming MLX release or reveal. No specifics are given, but it reads as a preview of a new product or project.

https://x.com/danielhanchen/status/2033938752322867249

#mlx #release #mltools #announcement

Daniel Han (@danielhanchen) on X

@ivanfioravanti @UnslothAI MLX coming soon :)

X (formerly Twitter)

Prince Canuma (@Prince_Canuma)

MLX shipped day-0 support for Mistral's new model, Mistral Small 4. The model can be deployed and experimented with in MLX immediately upon MistralAI's release, which should improve accessibility and iteration speed for developers and researchers. The post includes a congratulatory message.

https://x.com/Prince_Canuma/status/2033720455673082141

#mistralai #mistralsmall4 #mlx #llm

Prince Canuma (@Prince_Canuma) on X

Day-0 support on MLX for Mistral Small 4🚀 Congratulations to the @MistralAI team on the release.

X (formerly Twitter)
If you use it with a local backend (@[email protected], #llama.cpp , #mlx, #mistral-rs), every step runs on your device; nothing leaves your machine unless you configure a cloud provider (it supports EU-based ones, e.g. #Nebius @[email protected], or #Mistral).

GitHub - CrispStrobe/CrispSorter: AI-powered document organiser. Extracts text and/or sorts documents: Drop in a bunch of PDFs, DOCX files, or ebooks, and it extracts Document Text, identifies Title, Author, and Year, with a local or remote LLM, and moves them into folders, and/or keeps the extracted text.


GitHub

Apple should’ve continued to ignore the #LLM AI hype

Remained focused on #HomeAutomation

Continued #NeuralAccelerator hardware & #MLX software development, enabling useful LLMs to run locally

Partnered with Steam to make running #Games on macOS & porting to iOS trivially easy

Embraced a “local first, intermittent connections, eventually consistent” view of the future

Been an alternative to the “cloud first, always on, always connected” future everyone else is trying to sell

#UnpopularOpinion

@twostraws I fine-tuned a language model on a MacBook Neo with 8GB of RAM.
Peak memory: 2.3 GB. Training time: 20 minutes. Cost: $0.
None of this works without MLX. Thank you to the MLX team for making local training actually accessible.
Full writeup on what I got wrong and what finally worked:
taylorarndt.substack.com/p/i-trained-an-llm-on-my-macbook-neo
#MLX #AppleSilicon #MacBookNeo #MachineLearning #FineTuning #LocalAI #Swift #Apple
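The low-memory fine-tuning described above is adapter-style (LoRA) training via MLX. The core idea can be sketched in plain Python (this is not the MLX API, and all matrices below are toy values): the base weight W stays frozen, and only a low-rank update A @ B is trained, so far fewer parameters need gradients and optimizer state.

```python
# Minimal sketch of the LoRA idea behind low-memory fine-tuning:
# freeze the base weight W and learn only a low-rank update A @ B,
# so roughly r*(d_in + d_out) numbers are trained instead of d_in*d_out.
# Toy numbers below are illustrative, not from the post.

def matmul(X, Y):
    """Naive matrix multiply over nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_forward(x, W, A, B, scale=1.0):
    """y = x @ W + scale * (x @ A) @ B: frozen base plus low-rank update."""
    base = matmul(x, W)
    delta = matmul(matmul(x, A), B)
    return [[b + scale * d for b, d in zip(br, dr)] for br, dr in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 frozen base weight (identity here)
A = [[1.0], [1.0]]             # 2x1 trainable adapter (rank 1)
B = [[0.5, 0.5]]               # 1x2 trainable adapter
x = [[2.0, 4.0]]

print(lora_forward(x, W, A, B))  # base output plus the rank-1 correction
```

Only A and B would receive gradients during training, which is why peak memory stays far below what full fine-tuning of W would require.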
Big thanks to Prince Canuma for MLX Audio and to Awni Hannun, Angelos Katharopoulos, and David Koski for MLX, MLX Swift, and MLX Swift LM.
This is a preview and we want your help. Find something broken? Drop it in the replies or open an issue.
What would you like to see in Perspective Studio next?
https://github.com/Techopolis/Perspective-Studio
#MLX #OpenSource #AppleSilicon #SwiftUI #LocalAI #macOS #Swift (2/2)
@mikedoise
This is what a local AI model manager should look like on a Mac.
Built with SwiftUI. Runs MLX models natively. Everything on device, nothing leaves your Mac.
Fully open source. Not Electron.
Coming soon.
#BuildInPublic #MacOS #SwiftUI #MLX #AITools #IndieApps #PrivacyFirst #OpenSource

Trevin Peterson (@TrevinPeterson)

Announcement of an Apple Silicon / MLX port of autoresearch that runs natively on Mac, with no PyTorch required. An experimental finding is also reported: on an M4 Max under a 5-minute budget, depth=4 beats depth=8 because it fits in more optimizer steps. The code is published at github.com/trevin-creator/autoresearch-mlx.

https://x.com/TrevinPeterson/status/2030611877198221458

#applesilicon #opensource #autoresearch #mlx

Trevin Peterson (@TrevinPeterson) on X

Built an Apple Silicon / MLX port of your autoresearch — runs natively on Mac, no PyTorch needed. The loop found that depth=4 beats depth=8 on M4 Max because more optimizer steps > more parameters in a 5-min budget. https://t.co/BRvG6kLzuc @karpathy

X (formerly Twitter)
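The depth=4 vs depth=8 finding comes down to simple budget arithmetic: a shallower model spends less time per optimizer step, so it completes more steps in the same wall clock. A hypothetical illustration (the per-step timings below are made up, not from the post; only the arithmetic is the point):

```python
# Fixed wall-clock budget: fewer parameters -> faster steps -> more steps.
# Per-step timings are hypothetical, chosen only to illustrate the tradeoff.

BUDGET_S = 5 * 60  # the 5-minute budget mentioned in the post

def steps_in_budget(seconds_per_step, budget_s=BUDGET_S):
    """How many optimizer steps fit into the wall-clock budget."""
    return budget_s // seconds_per_step

# assume time per step grows roughly linearly with depth (hypothetical)
sec_per_step = {4: 2, 8: 4}   # depth -> seconds per optimizer step (made up)
steps = {d: steps_in_budget(s) for d, s in sec_per_step.items()}
print(steps)  # the depth-4 model gets twice the optimizer steps of depth-8
```

Under such a budget, the extra steps can matter more than the extra parameters, which is the "more optimizer steps > more parameters" observation in the tweet.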

Awni Hannun (@awnihannun)

Observations and small-scale experiments on continual learning as it relates to long-running agents. Based on toy experiments with MLX, the author argues that the status-quo approach of prompt compaction combined with recursive sub-agents is surprisingly effective and may be useful for long-running agent design.

https://x.com/awnihannun/status/2029672507448643706

#continuallearning #agents #mlx #prompting

Awni Hannun (@awnihannun) on X

I've been thinking a bit about continual learning recently, especially as it relates to long-running agents (and running a few toy experiments with MLX). The status quo of prompt compaction coupled with recursive sub-agents is actually remarkably effective. Seems like we can go

X (formerly Twitter)
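Prompt compaction, as mentioned above, can be sketched in a few lines: once the transcript outgrows a budget, older turns are collapsed into a summary and only recent turns are kept verbatim. A minimal sketch with a stubbed-out summarizer (a real agent would call a model here; the function names and thresholds are illustrative, not from the post):

```python
# Sketch of "prompt compaction" for a long-running agent: when the
# transcript exceeds a turn budget, collapse older turns into a single
# summary message and keep only the most recent turns verbatim.

def summarize(turns):
    # stub: a real agent would call an LLM to summarize these turns
    return "summary of %d earlier turns" % len(turns)

def compact(transcript, max_turns=4, keep_recent=2):
    """Return transcript unchanged if short, else summary + recent turns."""
    if len(transcript) <= max_turns:
        return transcript
    old, recent = transcript[:-keep_recent], transcript[-keep_recent:]
    return [summarize(old)] + recent

log = ["t1", "t2", "t3", "t4", "t5", "t6"]
print(compact(log))  # ['summary of 4 earlier turns', 't5', 't6']
```

Recursive sub-agents extend the same idea: a sub-agent's whole transcript is compacted into one result message before it returns to its parent, keeping every context bounded.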

Ivan Fioravanti ᯅ (@ivanfioravanti)

A brief update: a wine classification training experiment is being migrated to mlx-lm-lora, i.e., an existing experiment is being moved onto the LoRA-based framework for training (by @ActuallyIsaak).

https://x.com/ivanfioravanti/status/2029211475680584008

#lora #ml #finetuning #mlx

Ivan Fioravanti ᯅ (@ivanfioravanti) on X

Wine classification training experiment migration to mlx-lm-lora from super @ActuallyIsaak in progress!

X (formerly Twitter)