SLM for home feed filtering.

ngl feels kinda necessary.

There isn't always a specific keyword I can easily filter on and block.

#slm #smalllanguagemodels #feedfiltering
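A minimal sketch of the idea above: keyword blocklists miss posts that never use the literal keyword, while a small local classifier can catch them by topic. Here `classify_topic()` is a stub standing in for a call to an on-device SLM (the function name and the stub logic are illustrative assumptions, not a real API).

```python
# Sketch: SLM-based feed filtering vs. plain keyword blocking.
# classify_topic() is a STUB standing in for an on-device small
# language model; a real version would prompt a local model.

BLOCKED_TOPICS = {"crypto"}

def keyword_filter(post: str) -> bool:
    """Hide a post only if a blocked topic string appears verbatim."""
    return any(topic in post.lower() for topic in BLOCKED_TOPICS)

def classify_topic(post: str) -> str:
    """Stub topic classifier (assumption); an SLM would label the
    post semantically rather than by substring matching."""
    if "bitcoin" in post.lower() or "crypto" in post.lower():
        return "crypto"
    return "other"

def slm_filter(post: str) -> bool:
    """Hide a post whose classified topic is blocked."""
    return classify_topic(post) in BLOCKED_TOPICS

post = "Bitcoin just hit a new high!"
print(keyword_filter(post))  # False: no literal "crypto" keyword
print(slm_filter(post))      # True: the classifier recognizes the topic
```

The point is the gap between the two filters: the keyword rule passes the post through, while the topic classifier blocks it.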

fly51fly (@fly51fly)

LaCy paper: discusses what Small Language Models can and should learn, beyond simple loss minimization. Authors include S. Ujváry, L. Béthune, P. Ablin, and J. Monteiro, with Apple involvement (2026, arXiv). Includes theoretical and practical proposals on redesigning training objectives, evaluation metrics, and training strategies for small models.

https://x.com/fly51fly/status/2023154264370294790

#smalllanguagemodels #lacy #languagemodels #research

[CL] LaCy: What Small Language Models Can and Should Learn is Not Just a Question of Loss S Ujváry, L Béthune, P Ablin, J Monteiro...[Apple] (2026) https://t.co/CXnkodrf4k

New research shows that massive LLMs benefit from fine‑grained catalog context, delegating queries to specialized small models. This AI orchestration improves semantic understanding and intent routing, reshaping language model architecture. Dive into how the synergy between large and small models could boost efficiency and accuracy. #FineGrainedContext #AIOrchestration #IntentRouting #SmallLanguageModels

🔗 https://aidailypost.com/news/llms-need-fine-grained-catalog-context-large-model-routes-data-slms
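The orchestration pattern described above can be sketched as a simple router: a routing step picks an intent, and each intent maps to a specialized small-model handler. Everything here is a hedged stand-in — `route_intent()` substitutes a keyword heuristic for the large model's routing decision, and the `HANDLERS` entries are placeholders for real SLM endpoints.

```python
# Sketch of intent routing / AI orchestration. route_intent() is a
# STUB for the large model's routing decision (a real system would
# ask the LLM to choose); HANDLERS stand in for specialized SLMs.

def route_intent(query: str) -> str:
    """Pick which specialized model should handle the query."""
    q = query.lower()
    if any(w in q for w in ("price", "cost", "$")):
        return "pricing"
    if "return" in q or "refund" in q:
        return "returns"
    return "general"

# Each handler is a placeholder for a call to a small, specialized model.
HANDLERS = {
    "pricing": lambda q: f"[pricing-SLM] {q}",
    "returns": lambda q: f"[returns-SLM] {q}",
    "general": lambda q: f"[general-SLM] {q}",
}

def answer(query: str) -> str:
    """Route the query, then delegate to the chosen small model."""
    return HANDLERS[route_intent(query)](query)

print(answer("How much does shipping cost?"))
```

The efficiency argument is that only the routing step needs a large model; the per-domain work runs on cheap, fast small models.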

🧠 🆕 Just launched: Deploying Small Language Models (LFWS307)
A live, 3-day, instructor-led workshop with 10+ hours of hands-on labs.

Learn to deploy SLMs across laptops, servers, edge devices, and the browser—skills built for real-world AI infrastructure and ML platform engineering.

Enroll now: https://training.linuxfoundation.org/training/deploying-small-language-models-lfws307/

#SmallLanguageModels #AIEngineering #MLOps #EdgeAI

A fine-tuned 3B model beat our 70B baseline. Here's why data quality and architecture innovations are ending the "bigger is better" era in AI. https://hackernoon.com/small-language-models-are-closing-the-gap-on-large-models #smalllanguagemodels

Wondering where to start using small language models? Find top use cases where small language models would be better than large language models. https://hackernoon.com/when-to-use-small-language-models-over-large-language-models #smalllanguagemodels

Latency is becoming the real differentiator in AI… and Small Language Models are proving it.

Discover how quantization, distillation, and smart inference strategies transform compact language models into lightning-fast, edge-ready AI.

If you care about real-time chatbots, on-device assistants, or cost-efficient AI deployment, this one’s for you.

Want AI that responds instantly, even on offline or low-power hardware?

Read more here: https://datasciencenigeria.medium.com/latency-optimization-in-small-language-models-how-to-make-slms-faster-and-more-efficient-d8f0a54ebb6e

#datasciencenigeria #SmallLanguageModels #AIforAfrica
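Of the techniques the post mentions, quantization is the easiest to show in miniature: store weights as int8 instead of float32 (roughly 4x smaller, faster to load) at the cost of a small reconstruction error. This is a toy per-tensor symmetric scheme, not what production toolchains do, but the core idea is the same.

```python
# Toy post-training int8 weight quantization, one of the latency/
# memory techniques mentioned above. Production deployments use
# dedicated toolchains; this only illustrates the core idea.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor quantization: map floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from int8 codes."""
    return [x * scale for x in q]

w = [0.12, -0.5, 0.33, 0.01]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Storage drops ~4x (int8 vs float32); the round-trip error stays
# below half the quantization step (scale / 2).
print(max(abs(a - b) for a, b in zip(w, w_hat)))
```

Distillation and smarter inference (batching, caching, speculative decoding) attack latency from other angles, but quantization is usually the first lever pulled for edge deployment.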

What are small language models and how do they differ from large ones? | The-14

SLMs are efficient, task-focused AI models, while LLMs offer broad, powerful capabilities; choosing between them depends on speed, cost, and task complexity.
