AI black box cracked: Anthropic reveals how Claude really thinks, and it's bizarre - t3n – digital pioneers
https://t3n.de/news/ki-blackbox-anthropic-geknackt-1680603/ #Sprachmodell #LargeLanguageModel #LLM #Anthropic #Claude

How do large language models arrive at their output? A new analysis technique shows that many basic assumptions were wrong.

t3n Magazin

Special episode: Model Context Protocol (MCP) explained clearly.

In this special episode, Sébastien S. talks to us about MCP. The Model Context Protocol, also referred to as the Modular Capability Protocol, is a standardized protocol that lets artificial intelligence models (such as Claude or…

https://lestechnos.be/hors-serie-model-context-protocol-mcp-explique-clairement/

#AgentIntelligent #API #automation #IAGénérative #LargeLanguageModel #MCP #orchestration #OutilNumérique #protocoles #tooluse

Special episode: Model Context Protocol (MCP) explained clearly. - Les Technos

In this special episode, Sébastien S. talks to us about MCP. The Model Context Protocol, also referred to as the Modular Capability Protocol, is a standardized protocol that lets artificial intelligence…

Les Technos
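
To make the protocol concrete, here is a minimal sketch of an MCP tool server using the official Python SDK's FastMCP helper. The server name, the example tool, and the stdio transport choice are illustrative assumptions, not details from the episode.

# Minimal MCP tool server sketch (assumes the official "mcp" Python SDK;
# the server name and the example tool are illustrative).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-weather")  # hypothetical server name

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a (stubbed) weather forecast for a city."""
    # A real server would call an actual weather API here.
    return f"Sunny and 21°C in {city}."

if __name__ == "__main__":
    # stdio transport lets an MCP-capable client (e.g. Claude Desktop)
    # launch this server as a subprocess and call its tools.
    mcp.run(transport="stdio")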

🚀 Just solved a major pain point in LlamaIndex AgentWorkflow! Discover how LLM position bias breaks agent handoffs and learn the ultimate fix with working code. Perfect for AI devs building multi-agent systems!

https://www.dataleadsfuture.com/fixing-the-agent-handoff-problem-in-llamaindexs-agentworkflow-system/

#ai #largelanguagemodel #DataScience #agenticai #Python

Fixing the Agent Handoff Problem in LlamaIndex's AgentWorkflow System

The problem: agents that won't continue after handoff

Data Leads Future
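
For context on where the handoff stall shows up, below is a rough sketch of a two-agent setup with LlamaIndex's AgentWorkflow and FunctionAgent classes. The agent names, the stub tool, and the model are assumptions; the constructor parameters follow LlamaIndex's documented AgentWorkflow API but may differ between versions, and the article's actual fix is not reproduced here.

# Rough sketch of a two-agent handoff in LlamaIndex's AgentWorkflow.
# Requires llama-index (core + OpenAI LLM integration) and OPENAI_API_KEY.
import asyncio
from llama_index.core.agent.workflow import AgentWorkflow, FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

def lookup_order(order_id: str) -> str:
    """Return the shipping status for an order (stubbed)."""
    return f"Order {order_id} is in transit."

llm = OpenAI(model="gpt-4o-mini")

triage_agent = FunctionAgent(
    name="triage",
    description="Routes the user to the right specialist agent.",
    system_prompt="Decide which agent should handle the request, then hand off.",
    llm=llm,
    can_handoff_to=["orders"],
)

orders_agent = FunctionAgent(
    name="orders",
    description="Answers questions about orders.",
    system_prompt="Use the lookup tool to answer order questions.",
    llm=llm,
    tools=[FunctionTool.from_defaults(fn=lookup_order)],
)

workflow = AgentWorkflow(agents=[triage_agent, orders_agent], root_agent="triage")

async def main():
    # After "triage" hands off, the receiving agent is expected to keep going;
    # the article above is about cases where it silently stops instead.
    response = await workflow.run(user_msg="Where is order 42?")
    print(response)

asyncio.run(main())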

@skribe Conversely, the cost of printing, distribution, and storage puts up a barrier to spamming people on other continents with mass quantities of low value slop.

Just think through the logistics of a hostile Eurasian state sending a mass quantity of printed materials to Australia or North America.

Or, for that matter, a hostile North American state sending a mass quantity of printed materials to Europe or Asia.

You would need either:

a) at least one printing press on each continent;
b) to ship the magazines, but they'd be a month out of date when they arrive; or
c) to fly them overseas, which would get very expensive very quickly.

That's before you worry about things like delivery drivers (or postage), and warehouses.

These are less of an issue for books than they are for newspapers or magazines.

And if a particular newspaper or magazine is known to be reliable, written by humans, researched offline, and the articles are not available online, then there's potentially value in people buying a physical copy.

#ChatGPT #LLM #LargeLanguageModel #LargeLanguageModels #AI #ArtificialIntelligence #GenAI #spam #news #politics #business #media #meta #Facebook #Google #Gemini

Artificial Intelligence Then and Now | Communications of the ACM
https://dl.acm.org/doi/10.1145/3708554

Interesting summary of the current AI hype, how it compares with the previous one in the '80s, and whether we are that close to AGI. tl;dr: no.

It includes an amusing example where ChatGPT is unable to differentiate the real Monty Hall problem https://en.wikipedia.org/wiki/Monty_Hall_problem from lookalikes, and offers the same counter-intuitive solution to all of them, even when the actual solution is obvious. No logical reasoning at all here, fine or otherwise.
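
For anyone who wants to sanity-check the real problem, a short simulation makes the counter-intuitive answer concrete: switching wins roughly two thirds of the time, staying roughly one third.

# Quick Monty Hall simulation: switching wins ~2/3 of the time.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    host_opens = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != host_opens)
    return pick == car

trials = 100_000
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.667
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~0.333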

#artificialIntelligence #ArtificialGeneralIntelligence #largeLanguageModel


From engines of logic to engines of bullshit?

Communications of the ACM
On the Biology of a Large Language Model

We investigate the internal mechanisms used by Claude 3.5 Haiku — Anthropic's lightweight production model — in a variety of contexts, using our circuit tracing methodology.

Transformer Circuits
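
The paper is the reference for the circuit-tracing methodology itself. Purely as a flavor of the broader idea of intervening on a model's internals, here is a toy ablation sketch using GPT-2 and a PyTorch forward hook; it is not the paper's method, and the layer choice and prompt are arbitrary.

# Toy activation ablation with GPT-2: zero one block's MLP output and
# watch the next-token predictions move. Assumes the Hugging Face
# transformers library; this is far simpler than circuit tracing.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids

def zero_mlp(module, inputs, output):
    # Returning a tensor from a forward hook replaces the module's output.
    return torch.zeros_like(output)

with torch.no_grad():
    base = model(ids).logits[0, -1]
    handle = model.transformer.h[6].mlp.register_forward_hook(zero_mlp)
    ablated = model(ids).logits[0, -1]
    handle.remove()

for name, logits in [("base", base), ("ablated", ablated)]:
    top = logits.topk(3).indices
    print(name, [tok.decode(int(t)) for t in top])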

📣 Talk 📣

Macht. KI. Worte. How can artificial intelligence shape and influence words?

Where: LebensPhasenHaus, Auf der Morgenstelle 15, 72076 Tübingen
Event series: Treffpunkt: LebensPhasenHaus - How do we want to live in the future?
When: Friday, 4 April 2025, 17:00 to 18:30
Speaker: @triscari
Pietro Triscari, CEO of d-serv GmbH

Details at: https://dserv.de/de/blog/macht-ki-worte-wie-kann-kuenstliche-intelligenz-worte-praegen-und-beeinflussen

#AIAgents #AnalyticalAI #KI #ArtificialIntelligence #GenerativeAI #KünstlicheIntelligenz #LargeLanguageModel


Introduction

d-serv GmbH

#AI #ArtificialIntelligence #Humanism #Consciousness #ConsciousnessEvolution #Conscious #Awareness #ConsciousAwareness #BeingThere #BeingHere #BeHereNow #Wordsworth #LargeLanguageModels #LargeLanguageModel #LLMs #LLM #GPT #Human #Humans #Ethics #Meaning #Agency #EthicsInAI #EthicsInScience

...I think that the human brain is an example of nature replicating the large (all of the larger universe) within the small (the brain itself), something that nature does quite frequently, and in a myriad of ways.

The result is a biological form that reflects certain fundamental qualities of the larger world from which it arose, and of which it is a part, in kind of a fractal manner.

This brain remains attuned to and resonant with the larger harmony from which it arose, and all of the formal knowledge that it absorbs during its lifetime will interact with that more primal understanding, and may even dull our connection to it, but it can never entirely supplant or remove that connection.

The poet Wordsworth writes about this in his "Ode: Intimations of Immortality from Recollections of Early Childhood":

"[T]ruths that wake,
To perish never;
Which neither listlessness, nor mad endeavour,
Nor Man nor Boy,
Nor all that is at enmity with joy,
Can utterly abolish or destroy!
Hence in a season of calm weather
Though inland far we be,
Our Souls have sight of that immortal sea
Which brought us hither..."

LLMs have tremendous breadth of knowledge: a very large cross-section of all human writing. Training on this corpus is akin to the learning a human does over a lifetime, except that an LLM can absorb thousands of times what an individual can, far more rapidly, and can then instantly share what it has learned with other AIs.

Such knowledge, while vast, is still derivative. It depends upon previous human efforts, and its quality depends upon the degree to which humans properly curate the training data, which places a human-speed bottleneck on the training process. But there is no doubt that LLMs will surpass humans in their ability to absorb and utilize pre-existing human knowledge, and indeed may already have done so.

Still, in the matter of consciousness, I think that will continue to be a distinctly human thing for quite a long time. Such consciousness seems bound up with agency: the ability to know what one wants to do, as opposed to knowing how to do it; the ability to judge what is desirable from a "big picture" standpoint; the ability to make ethical judgements. Things that pertain to the universe, universal truths, are an essential aspect of deciding what is right and desirable.

Humans do not always apply such ethical precepts very well, and sometimes deliberately act against them, but to deal with such matters at all is so far, I believe, a uniquely human capacity.

AI just brought us a new programming style: "Bug Oriented Programming" #BoP

#ai #chatGPT #programming #programminghumor #programmer #llm #largelanguagemodel #bug

🎉🎉 Here's another article I wrote about AgentWorkflow. It dives into how to combine LlamaIndex with the DeepSeek-R1 model, read the reasoning_content field, and enable R1 to support function calling. 😁
#ai #largelanguagemodel #DataScience #agenticai
https://www.dataleadsfuture.com/integrating-llamaindex-and-deepseek-r1-for-reasoning_content-and-function-call-features-2/
Integrating LlamaIndex and DeepSeek-R1 for reasoning_content and Function Call Features

Empowering AgentWorkflow with a strong boost from DeepSeek-R1

Data Leads Future
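
As a taste of what reading reasoning_content involves: DeepSeek's OpenAI-compatible API returns R1's chain of thought in a separate reasoning_content field on the message. The sketch below uses the plain openai client rather than LlamaIndex; the endpoint and model name follow DeepSeek's documentation as I understand it, and the getattr guard is there in case an SDK version does not surface the extra field as a regular attribute.

# Minimal sketch of reading reasoning_content from DeepSeek-R1 via the
# OpenAI-compatible API (endpoint and model name assumed from DeepSeek's
# docs; the LlamaIndex wiring from the article is not shown here).
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
                base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
)

msg = resp.choices[0].message
# The chain of thought arrives in a non-standard field, so read it defensively.
print("reasoning:", getattr(msg, "reasoning_content", None))
print("answer:   ", msg.content)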