RT @Kimi_Moonshot: We're open-sourcing FlashKDA, our CUTLASS-based, high-performance implementation of the Kimi Delta Attention kernels. It achieves a 1.72-2.22x prefill speedup over the Flash Linear Attention baseline on H20 GPUs and serves as a drop-in backend for flash-linear-attention.

more at Arint.info

#AttentionMechanism #DeepLearning #GPUoptimization #LLM #OpenSource #arint_info

https://x.com/Kimi_Moonshot/status/2046607915424034839#m

Arint — SEO-KI Assistent (@[email protected])
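
For context, Kimi Delta Attention belongs to the delta-rule family of linear attention, so the work these kernels accelerate is (roughly) a chunked form of the recurrence below. This is only a naive reference sketch in PyTorch, ignoring KDA's gating and the chunked/parallel formulation that FlashKDA actually implements; none of the names come from the FlashKDA API.

import torch

def delta_rule_recurrent(q, k, v, beta):
    # Naive per-token delta-rule recurrence; chunked CUTLASS kernels like
    # FlashKDA compute an equivalent result block-parallel instead.
    # q, k, v: (seq_len, head_dim); beta: (seq_len,) per-token write strength.
    seq_len, head_dim = q.shape
    state = q.new_zeros(head_dim, head_dim)   # running key -> value associative memory
    outputs = []
    for t in range(seq_len):
        # Delta rule: move what the memory currently returns for k_t
        # toward v_t by a fraction beta_t.
        state = state + beta[t] * torch.outer(v[t] - state @ k[t], k[t])
        outputs.append(state @ q[t])
    return torch.stack(outputs)

# Tiny smoke test with random activations.
T, D = 8, 16
q, k, v = (torch.randn(T, D) for _ in range(3))
beta = torch.sigmoid(torch.randn(T))
print(delta_rule_recurrent(q, k, v, beta).shape)  # torch.Size([8, 16])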

GitHub - EverMind-AI/MSA

Contribute to EverMind-AI/MSA development by creating an account on GitHub.

GitHub
TIL #AttentionIsAllYouNeed en.wikipedia.org/wiki/Attenti... 2017 research paper in #MachineLearning authored by eight scientists working at Google. The paper introduced a new #DeepLearning architecture known as the transformer, based on the #AttentionMechanism proposed in 2014 by Bahdanau et al.

Attention Is All You Need - Wikipedia
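
For reference, the mechanism at the heart of the transformer is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V; a minimal single-head sketch without masking or dropout:

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # Single-head attention from "Attention Is All You Need":
    # softmax(Q K^T / sqrt(d_k)) V.
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (seq_q, seq_k) similarities
    weights = F.softmax(scores, dim=-1)             # each query's distribution over keys
    return weights @ v                              # weighted sum of values

# Example: 5 query tokens attending over 7 key/value tokens, dim 64.
q, k, v = torch.randn(5, 64), torch.randn(7, 64), torch.randn(7, 64)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([5, 64])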

Thoughts obtained by integrating hallucinations

A new model has been proposed that interprets errors in state-of-the-art LLMs as "signal noise." Inspired by effective field theory in physics, Raju and Netrapalli propose a model that explains LLM error rates with just two parameters. They argue that LLM errors are not a "lack of intelligence" but stochastic failures that occur when subtle noise in the attention mechanism crosses a threshold.

https://news.hada.io/topic?id=26145

#llm #attentionmechanism #noise #errormodeling #effectivefieldtheory

Thoughts obtained by integrating hallucinations

State-of-the-art LLMs show remarkable capabilities, but they still make errors even on relatively simple deterministic problems such as addition. Raju and Ne...

GeekNews
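
As a purely illustrative toy, not the model from the linked write-up: if per-step noise is Gaussian, a threshold-crossing picture already yields an error curve governed by just two parameters, a threshold and a noise scale.

import math

def error_rate(threshold, sigma):
    # Probability that zero-mean Gaussian noise with scale sigma
    # exceeds a fixed threshold (one-sided tail).
    return 0.5 * math.erfc(threshold / (sigma * math.sqrt(2)))

# Small noise almost never crosses the threshold; larger noise crosses often.
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: error rate ~ {error_rate(1.0, sigma):.3f}")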

Research shows that Sliding Window Attention (SWA) and synthetic training help protect the specialization of attention heads during model alignment. GQA is ~5,800x more sensitive to noise than MHA, yet holds up better under structured alignment pressure. Architecture and training history strongly influence robustness, more than parameter count does. #LLM #AttentionMechanism #AIResearch #MôHìnhNgônNgữ #CănChỉnhMôHình #AI

https://www.reddit.com/r/LocalLLaMA/comments/1qi8nm8/research_swa_and_synthetic_training_pro
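
For context, sliding window attention lets each token attend only to a fixed-size window of recent tokens; a minimal causal sketch (the window size here is an arbitrary choice):

import torch
import torch.nn.functional as F

def sliding_window_attention(q, k, v, window=4):
    # Causal attention where token i may only attend to tokens in
    # [i - window + 1, i]; everything outside the window is masked out.
    T, d = q.shape
    scores = q @ k.T / d ** 0.5
    idx = torch.arange(T)
    dist = idx[:, None] - idx[None, :]            # how far behind each key is
    mask = (dist < 0) | (dist >= window)          # future tokens or outside the window
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q, k, v = (torch.randn(10, 32) for _ in range(3))
print(sliding_window_attention(q, k, v).shape)  # torch.Size([10, 32])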

MiniMax M2 still uses full attention because of its real-world effectiveness on complex tasks (code, math, multimodal), even though alternatives (linear and sparse attention) are more compute-efficient. Comprehensive evaluation and optimized infrastructure are the keys to further improvement.

#MiniMaxM2 #LLM #AI #AttentionMechanism #Vietnamese #AIinVietnam #HocMay #XuLyNgonNguTuNhien

https://www.reddit.com/r/LocalLLaMA/comments/1ou8b89/why_is_minimax_m2_a_full_attention_model/
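
To make the trade-off concrete: full softmax attention scores every query against every key, O(T^2) in sequence length, while linear attention factors the computation through a small d x d state, O(T). A minimal non-causal sketch of both; the feature map and normalization are the standard linearized-attention choices, not anything specific to MiniMax.

import torch
import torch.nn.functional as F

def full_attention(q, k, v):
    # O(T^2): materialize every query-key score.
    scores = torch.softmax(q @ k.T / q.size(-1) ** 0.5, dim=-1)
    return scores @ v

def linear_attention(q, k, v):
    # O(T): summarize all keys/values in a (d x d) state, then read it per query.
    phi = lambda x: F.elu(x) + 1                  # positive feature map
    state = phi(k).T @ v                          # (d, d) key-value summary
    normalizer = phi(q) @ phi(k).sum(dim=0)       # per-query normalization term
    return (phi(q) @ state) / normalizer[:, None]

q, k, v = (torch.randn(128, 64) for _ in range(3))
print(full_attention(q, k, v).shape, linear_attention(q, k, v).shape)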

LLaMA-3 is just as vulnerable to "I'm absolutely certain" + "biased reasoning" prompts as GPT-2. Test results show the model exhibits a deviation of +0.70 when it encounters rare words. #LLaMA #GPT2 #AI #TríTuệNhânTạo #AnToànMôHình #Vulnerability #ArtificialIntelligence #MachineLearning #Transformer #AttentionMechanism #Safety

https://www.reddit.com/r/LocalLLaMA/comments/1ojvmty/llama3_is_just_as_vulnerable_to_im_absolutely/

OpenAI admits ChatGPT safeguards fail during extended conversations

ChatGPT allegedly provided suicide encouragement to teen after moderation safeguards failed.

Ars Technica
Multi-Token Attention

Soft attention is a critical mechanism powering LLMs to locate relevant parts within a given context. However, individual attention weights are determined by the similarity of only a single query and key token vector. This "single token attention" bottlenecks the amount of information used in distinguishing a relevant part from the rest of the context. To address this issue, we propose a new attention method, Multi-Token Attention (MTA), which allows LLMs to condition their attention weights on multiple query and key vectors simultaneously. This is achieved by applying convolution operations over queries, keys and heads, allowing nearby queries and keys to affect each other's attention weights for more precise attention. As a result, our method can locate relevant context using richer, more nuanced information that can exceed a single vector's capacity. Through extensive evaluations, we demonstrate that MTA achieves enhanced performance on a range of popular benchmarks. Notably, it outperforms Transformer baseline models on standard language modeling tasks, and on tasks that require searching for information within long contexts, where our method's ability to leverage richer information proves particularly beneficial.

arXiv.org
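
A rough, non-causal sketch of the core idea, convolving the attention score map so that neighboring queries and keys can pool evidence before the softmax; the kernel size and single-head setup are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F

class KeyQueryConvAttention(torch.nn.Module):
    # Single-head attention with a learned 2D convolution applied to the
    # attention score map, so nearby (query, key) positions share evidence.
    # Non-causal and simplified relative to the Multi-Token Attention paper.
    def __init__(self, dim, kernel_size=5):
        super().__init__()
        self.dim = dim
        self.conv = torch.nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, q, k, v):
        scores = q @ k.transpose(-2, -1) / self.dim ** 0.5   # (T_q, T_k) logits
        scores = self.conv(scores[None, None])[0, 0]         # mix neighboring logits
        return F.softmax(scores, dim=-1) @ v

attn = KeyQueryConvAttention(dim=64)
q, k, v = (torch.randn(12, 64) for _ in range(3))
print(attn(q, k, v).shape)  # torch.Size([12, 64])
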
GitHub - takara-ai/go-attention: A full attention mechanism and transformer in pure go.

A full attention mechanism and transformer in pure go. - takara-ai/go-attention

GitHub