I just open-sourced **AN1-Core**, which achieves **224× compression** of Llama-70B with **+1.81 pp accuracy** across tasks! A lightweight student model generates 256D "meaning fields" directly from text, eliminating transformer-based inference. The repo includes the paper, code (GitHub), and benchmarks. Closed-source optimizations remain, but this is a verifiable open reference for post-transformer AI. 🚀 #AI #ArtificialIntelligence #LanguageModels #DeepLearning #Llama #MobileModels #OpenResearch #AnimaCore
https://w
