🚨 A new installment of updates and reflections on the evolution of #AI: the shift from models to agentic systems is increasingly evident.

🔗 Go to the post: https://www.alessiopomaro.it/generative-ai-novita-e-riflessioni-3-2026/

___ 
✉️ If you want to stay up to date on these topics, subscribe to my newsletter: https://bit.ly/newsletter-alessiopomaro

#AI #GenAI #GenerativeAI #IntelligenzaArtificiale #LLM 

What's hard about RAG in real systems?

• Document chunking that actually works
• Preventing hallucinations
• Access control
• Production-grade architectures

I cover all of this on Wed 1 Apr, 16:25, Room 10 at #VoxxedAmsterdam.

#GenAI #RAG #Java
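As a sketch of the first bullet above, here is a minimal sliding-window chunker with overlap. This is illustrative only, not the approach from the talk; `chunk_text` and its parameter defaults are invented for this sketch:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping fixed-size character windows.

    Overlap preserves context across chunk boundaries, which is one of
    the simplest ways to make retrieval less brittle at the edges.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = ("All work and no play makes Jack a dull boy. " * 12).strip()
chunks = chunk_text(doc, chunk_size=200, overlap=50)
```

Real pipelines usually split on semantic boundaries (sentences, headings) rather than raw characters, but the overlap idea carries over unchanged.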

Liquid AI has released LFM2.5-350M, a compact 350M parameter model trained on 28 trillion tokens that outperforms models more than twice its size. The model uses a hybrid LIV architecture supporting a 32k context window while maintaining a lean memory footprint. https://www.marktechpost.com/2026/03/31/liquid-ai-released-lfm2-5-350m-a-compact-350m-parameter-model-trained-on-28t-tokens-with-scaled-reinforcement-learning/ #AIagent #AI #GenAI #AIResearch #LiquidAI
Liquid AI Released LFM2.5-350M: A Compact 350M Parameter Model Trained on 28T Tokens with Scaled Reinforcement Learning

Railway raises 100M USD to challenge AWS with AI-native cloud. The startup offers sub-second deployments, ten times faster than traditional cloud, with customers reporting up to 65 percent cost savings. Railway abandoned Google Cloud in 2024 to build its own data centers. https://venturebeat.com/infrastructure/railway-secures-usd100-million-to-challenge-aws-with-ai-native-cloud #AIagent #AI #GenAI #AIInfrastructure #Railway

TinyLoRA pushes low-rank adaptation almost to zero.

An 8B Qwen2.5 model reportedly hits 91% on GSM8K with just 13 trained bf16 params, or 26 bytes. The core idea: RL-based post-training may improve reasoning through an extremely low-dimensional update. But this seems to work far better for RL than SFT, which reportedly needs 100โ€“1000x more parameters for similar gains.
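The paper's exact parameterization isn't spelled out here, but one way to picture an adapter far below rank 1 is to freeze random rank-1 directions and train only a single scalar gain. The sketch below is illustrative only (all names, shapes, and the seeding are invented), not TinyLoRA itself:

```python
import random

random.seed(0)
d = 8  # toy model dimension

# Frozen pretrained weight and frozen random rank-1 directions.
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
u = [random.gauss(0, 1) for _ in range(d)]
v = [random.gauss(0, 1) for _ in range(d)]

def forward(x, alpha):
    # Effective weight is W + alpha * outer(u, v): a single trainable
    # scalar modulates a fixed rank-1 update, which is how an adapter
    # can shrink all the way down to one parameter.
    return [
        sum((W[i][j] + alpha * u[i] * v[j]) * x[j] for j in range(d))
        for i in range(d)
    ]

x = [random.gauss(0, 1) for _ in range(d)]
base = forward(x, 0.0)     # alpha = 0 recovers the frozen model
adapted = forward(x, 0.5)  # RL post-training would tune only alpha
```

With 13 such scalars instead of one, each could gate a different frozen direction or layer, which gives a feel for how few degrees of freedom the reported update actually has.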

https://arxiv.org/abs/2602.04118

#AI #genAI #reasoning

Learning to Reason in 13 Parameters

Recent research has shown that language models can learn to reason, often via reinforcement learning. Some work even trains low-rank parameterizations for reasoning, but conventional LoRA cannot scale below the model dimension. We question whether even rank-1 LoRA is necessary for learning to reason and propose TinyLoRA, a method for scaling low-rank adapters to sizes as small as one parameter. Within our new parameterization, we are able to train the 8B parameter size of Qwen2.5 to 91% accuracy on GSM8K with only 13 trained parameters in bf16 (26 total bytes). We find this trend holds in general: we are able to recover 90% of performance improvements while training 1000× fewer parameters across a suite of more difficult learning-to-reason benchmarks such as AIME, AMC, and MATH500. Notably, we are only able to achieve such strong performance with RL: models trained using SFT require 100–1000× larger updates to reach the same performance.

arXiv.org
#Microsoft experienced its worst quarter on Wall Street since 2008, with a 23% stock drop, due to concerns about its #artificialintelligence prospects. While the company remains dominant in #productivitysoftware and Windows, it faces challenges in growing its #AIbusiness and building #cloudinfrastructure. https://www.cnbc.com/2026/03/31/microsofts-stock-closes-worst-quarter-since-2008-financial-crisis.html #AIagent #AI #ML #NLP #LLM #GenAI
Slack has unveiled 30 new AI features for Slackbot, its most ambitious overhaul since Salesforce's 27.7B USD acquisition. The update transforms the chatbot into a full enterprise agent capable of taking meeting notes across any video platform, operating beyond Slack's desktop app, and executing tasks via the Model Context Protocol. Slack claims the update is on track to become the fastest-adopted product in Salesforce's 27-year history. https://venturebeat.com/orchestration/slack-adds-30-ai-features-to-slackbot-its-most-ambitious-update-since-the #AIagent #AI #GenAI #AgenticAI #Slack

@alan @Kye
Agreed. Though there must be a clear understanding of what this opposition is aimed at, to wit: GenAI (aka #AISlop on the #fedi), and not the niche #AI that powers things such as medical diagnostic tools and numerous other dedicated, curated and deterministic AI systems.

The overall 'furious opposition' you'll find here is to that copyright-infringing software commonly found in #AIChatBots, which has been proven to 'make things up' and in some cases lead to 'deep depression' and 'radicalisation', with well-known consequences.

Another aspect of this opposition has to do with the #infrastructure required for #GenAI (though recent chatbots developed in China and France have been shown to be far more frugal in their requirements), which is impacting access to #water and #energy, with obvious negative effects on the communities which host those required #Datacentres.

Last is the economic impact of GenAI systems: the flood of venture capital into GenAI and its infrastructure. The cross-ownerships (moving putative money around) among GenAI and ancillary tech corporations are hyping up a grossly over-capitalised market, in effect creating a global financial bubble which is already showing signs of imminent collapse and will very likely drag the global economy into a crash or a prolonged recession.

In essence, this is the opposition you will find on the #fediverse. Some can get pretty vocal about it too, but is it any wonder, given that our politicians, corporate management, corporate #msm, #TechBros and #Billionaires are all hell-bent on feeding the bubble regardless of the potential for very dire consequences? Sometimes it really feels like we're shouting into the void…

The antidote to this is, of course, to introduce fact-based discussions about #AI generally, its pros and cons. I guarantee that the #Fedi is up for that: robust discussion grounded in experiential and fact-based arguments. Maybe this is where you come in, Kye…?

Meta's semi-formal reasoning boosts LLM code review accuracy to 93%. The technique requires AI agents to document premises, trace execution paths, and derive formal conclusions before answering, cutting hallucinations. Tests showed accuracy rising from 78% to 88% on complex examples. https://venturebeat.com/orchestration/metas-new-structured-prompting-technique-makes-llms-significantly-better-at #AIagent #AI #GenAI #AIResearch #Meta
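The article names the ingredients of the technique (document premises, trace execution, derive a conclusion) rather than the exact prompt. The template below is a hypothetical sketch of that structure; `build_review_prompt` is an invented helper, not Meta's code:

```python
def build_review_prompt(code: str, question: str) -> str:
    """Sketch of a semi-formal reasoning prompt: the model must state
    premises, trace execution, and derive a conclusion before answering.
    """
    return "\n".join([
        "You are reviewing the following code:",
        code,
        "",
        f"Question: {question}",
        "",
        "Before giving a final answer, you must:",
        "1. PREMISES: list every assumption the code relies on.",
        "2. TRACE: step through the relevant execution path, line by line.",
        "3. CONCLUSION: derive your answer from the premises and the trace.",
        "Only then state the final answer.",
    ])

prompt = build_review_prompt(
    "def div(a, b): return a / b",
    "Can this function raise an exception?",
)
```

Forcing the model to commit to premises and a trace before answering is what reportedly cuts hallucinated review findings; the answer has to be derivable from what it just wrote down.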

"AI is an incredibly lonely experience", says Dennis Lemm.

"I find myself holding on to a work reality that was shaped by coding together, solving problems together -- and yes, the occasional Nerf gun battle or foosball game was a welcome break for passively mulling over a bug, which back at your desk would often get solved pretty quickly."

"I just discussed the best solution to the problem with my agent."

https://www.lemm.dev/blog/en/dev/26-03-17-ailone/

#solidstatelife #ai #genai #llms #codingai

Ai-lone

A sentence from Salma Alam-Naylor about AI and loneliness hits close to home. On remote work, vanishing office culture, and what we lose when we replace colleagues with agents.