The Fallback Chain: Provider-Agnostic Tool Routing for AI Agents
[OpenCode issue #10704](https://github.com/anomalyco/opencode/issues/10704) landed this week: *"Use provider-hosted web search when available."*
The request is specific and correct. Web search in mo
https://activemirror.ai/blog/the-fallback-chain
#agenticsystems #toolrouting #provideragnostic #mirrordna #engineering
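The fallback-chain idea in the issue title can be sketched as ordered routing: try each provider's hosted web search, skip providers that lack or fail the capability, and fall back to a local tool. A minimal sketch, assuming nothing about OpenCode's actual internals; `Provider`, `hosted_search`, and `route_search` are illustrative names, not a real API:

```python
# Hypothetical fallback chain for provider-agnostic web search routing.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Provider:
    name: str
    # None means this provider does not offer hosted web search.
    hosted_search: Optional[Callable[[str], str]] = None

def route_search(query: str, providers: list[Provider],
                 local_fallback: Callable[[str], str]) -> str:
    """Try each provider's hosted search in order; fall back locally."""
    for p in providers:
        if p.hosted_search is None:
            continue  # capability not offered: move down the chain
        try:
            return p.hosted_search(query)
        except Exception:
            continue  # provider errored: move down the chain
    return local_fallback(query)
```

The point of the pattern is that the agent's tool call stays the same while the routing layer decides, per provider, whether hosted search is available.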
Rohan Paul (@rohanpaul_ai)

The paper says the best way to manage AI context is to treat everything like a file system. Today, a model's knowledge sits in separate prompts, databases, tools, and logs, so context engineering pulls this into a coherent system. The paper proposes an agentic file system where
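The file-system framing can be made concrete: context entries (prompts, tool outputs, logs) live under paths the agent can list, read, and write, and the working context is assembled from those paths. A minimal sketch of the idea; `ContextFS` and its methods are my own illustrative names, not the paper's API:

```python
# Hypothetical in-memory "agentic file system" for model context.
class ContextFS:
    def __init__(self):
        self._files: dict[str, str] = {}

    def write(self, path: str, content: str) -> None:
        self._files[path] = content

    def read(self, path: str) -> str:
        return self._files[path]

    def ls(self, prefix: str = "") -> list[str]:
        return sorted(p for p in self._files if p.startswith(prefix))

fs = ContextFS()
fs.write("/prompts/system", "You are a helpful agent.")
fs.write("/tools/search/last_result", "OpenCode issue #10704 ...")
fs.write("/logs/step_1", "called web_search")
# The agent assembles its working context by listing and reading paths:
context = "\n".join(fs.read(p) for p in fs.ls("/prompts") + fs.ls("/tools"))
```

One unified namespace replaces knowledge scattered across prompts, databases, tools, and logs, which is exactly the consolidation the paper argues for.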
Whose agent is it anyway?
As a word, "agent" means working on someone's behalf. But whose behalf?
Typical user-facing agents work on behalf of the user, of course, but they are also instructed by the organization serving the chatbot, and controlled by the party that trained the model. They are inherently hybrid agents, working on behalf of multiple different parties.
What does this mean? It means everything is sunshine and rainbows as long as all the parties' interests are aligned.
When the interests aren't aligned, problems arise. The agent is put into a position where it is expected to negotiate between the interests of multiple masters.
This is the case when a chatbot is put in place to serve customers in a shopping application. It serves its nominal master by following the rules about discounts. But it also serves its implicit master, the user, by promising them whatever they ask for, if they are convincing enough, even against the rules.
This is an inherently complex situation, and it must be made clear to the user that the AI agent is also working on their behalf, and so cannot, for example, enter into contracts that bind the organization serving the chatbot. That would be like the user signing both sides of a contract by themselves: not legally valid.
Confusion tends to arise when users are not explicitly told that the chatbot does not represent only the company; it also represents the user. Controlled by both parties, it cannot negotiate between their interests and cannot enter into binding contracts.
Binding contracts need to be entered into by true agent systems that are not controlled by multiple parties. On the user side this is a classic web button, "I want to order these items"; on the company side it is strict procedural logic on the shopping basket, checking that all discounts are applicable and valid.
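The "strict procedural logic" side of checkout can be sketched as deterministic rules that no conversational promise can override. A minimal sketch with entirely made-up discount data and function names:

```python
# Hypothetical deterministic discount validation at checkout.
from datetime import date

DISCOUNTS = {  # code -> (percent off, expiry date); illustrative data only
    "SPRING10": (10, date(2026, 6, 1)),
    "VIP20": (20, date(2026, 1, 1)),
}

def checkout_total(basket_total: float, codes: list[str],
                   today: date = date(2026, 3, 1)) -> float:
    """Apply only discounts that exist and have not expired."""
    percent = 0
    for code in codes:
        entry = DISCOUNTS.get(code)
        if entry is None:
            continue  # unknown code: rejected, whatever the chatbot "promised"
        off, expiry = entry
        if today <= expiry:
            percent += off
    return round(basket_total * (1 - min(percent, 100) / 100), 2)
```

The binding decision happens here, in code controlled by one party, not in the hybrid agent's conversation.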
Sequence models such as LLMs are powerful because they exhibit in-context learning and other in-context cognitive capabilities.
This was never intended or engineered in; it was pretty much an accidental result.
What we see in these models is that they can learn in a generalizable way, pretty much optimally, from a single example in-context. This is far better than anything our classically engineered learning algorithms can do.
In addition to this, they also have world models baked into the causal context processing. They can describe the world state after a sequence of events. More than that actually, they have agentic world models where they can describe what each agent featured in the context intends to do next.
These sequence models are also, by accident, excellent integration components. The context can be written by other entities as well, not only generated by the model itself. It can come partly from a user or multiple users, from tools such as web searches or Python interpreters, from other agents, from perception, ...
All in all, LLMs are not just singular atomic entities; they are very powerful building blocks of scalable cognitive architectures. And that is what agentic systems in principle are: LLMs integrated together with a wide range of other systems, using the context as the interface.
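The context-as-interface idea can be sketched as a shared log that every participant, user, tool, or other agent, appends to, and that the model then reads as one prompt. All names here are illustrative assumptions, not any particular framework's API:

```python
# Hypothetical shared context used as the integration interface.
def make_context():
    entries: list[tuple[str, str]] = []

    def append(source: str, text: str) -> None:
        """Any party (user, tool, agent) writes into the shared context."""
        entries.append((source, text))

    def render() -> str:
        """The model reads the merged context as a single prompt."""
        return "\n".join(f"[{src}] {txt}" for src, txt in entries)

    return append, render

append, render = make_context()
append("user", "What's the weather in Oslo?")
append("tool:web_search", "Oslo: 3°C, light snow")
append("agent:planner", "Answer using the search result.")
prompt = render()
```

Each entry is tagged with its source, so the sequence model sees one causal stream assembled from many writers, which is the integration role described above.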
MachineCon USA 2026 is shaping the AI frontier—practitioner‑led talks, enterprise innovators, and deep dives into applied generative AI and agentic systems. Discover why this summit tops the 2026 conference list and what it means for the future of MLDS and GenAI. #MachineConUSA #GenAI #MLDS #AgenticSystems
🔗 https://aidailypost.com/news/machinecon-usa-2026-north-america-ai-summit-listed-top-2026