Michael Pfister

@pfista
living the endless summer • building alerty.dev, dawnpatrol.llc, sharecaster.xyz, check.supply • past life vp product @nylas • co-founded gest.co

You shouldn’t have to ask an agent to help.

Like any good employee, it should solve the problem before you know there is one in the first place.

A proactive agent sees this message, automatically drafts a ticket, and then sends you a quick message:

"I just drafted a ticket for you based on your conversation. Want me to create it?"

All you have to do is say yes.

Imagine you're in Slack and your boss starts chatting about a bug they noticed.

They say, "Can you create a ticket?"

Proactive agents listen.

They comb through all your operational data, extract intent, draft work, request approval, and then finish the job.

Instead of telling the AI what to do every time, you only need to review and approve drafted work.
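That loop (comb data, extract intent, draft, request approval) can be sketched in a few lines. Everything here is a hypothetical stand-in: `extract_intent` represents an LLM classification call, and the names are illustrative assumptions, not a real API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    summary: str
    action: str        # e.g. "create_ticket"
    approved: bool = False

def extract_intent(message: str) -> Optional[str]:
    # Stand-in for an LLM call that classifies the message.
    if "bug" in message.lower() or "ticket" in message.lower():
        return "create_ticket"
    return None

def proactive_step(message: str) -> Optional[Draft]:
    """Comb data -> extract intent -> draft work -> request approval."""
    intent = extract_intent(message)
    if intent is None:
        return None    # nothing actionable, the agent stays quiet
    draft = Draft(summary=f"Ticket drafted from: {message!r}", action=intent)
    # The agent initiates; all you have to do is say yes.
    print(f"I drafted this for you: {draft.summary}. Want me to create it?")
    return draft

draft = proactive_step("Hey, I noticed a bug in checkout")
```

The key design point is the quiet exit: when no intent is extracted, the agent does nothing, so it only interrupts you with reviewable drafts.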

"Hey, I noticed you have 3 unread emails that look important, can I give you a summary and then review some replies I drafted for you?"

AI becomes the initiator, just like "Her".

With Proactive Agents.

First, give AI access to a firehose of data:

- Slack threads
- GitHub commits
- Jira tickets
- Emails
- Calendar events
- Meeting transcripts
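One way to make that firehose usable is to normalize every source into a single event stream the agent can scan. This is a minimal sketch under that assumption; the field names and source labels are illustrative, not any real connector's schema.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # "slack", "github", "jira", "email", "calendar", "transcript"
    author: str
    text: str

def normalize_slack(msg: dict) -> Event:
    return Event(source="slack", author=msg["user"], text=msg["text"])

def normalize_github(commit: dict) -> Event:
    return Event(source="github", author=commit["author"], text=commit["message"])

# Heterogeneous payloads become one uniform stream.
firehose = [
    normalize_slack({"user": "boss", "text": "There's a bug in checkout"}),
    normalize_github({"author": "dev", "message": "fix: null pointer in cart"}),
]
```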

Now our AI agents can connect to apps and control them, but we're still prompting the AI, repeating ourselves over and over.

So how do we kill prompting once and for all?

We needed to move from agents as information-gatherers to agents as action-takers.

So we created tool calling. MCP captivated us all.
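The general shape of tool calling: the model emits a structured call (a name plus JSON arguments) and the host dispatches it to a registered function. This sketch mirrors that pattern in miniature; the registry and dispatcher are illustrative assumptions, not MCP's or any vendor's actual API.

```python
import json

TOOLS = {}

def tool(fn):
    """Register a function so the model can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def create_ticket(title: str, priority: str = "medium") -> str:
    return f"Created ticket '{title}' with priority {priority}"

def dispatch(model_output: str) -> str:
    """Execute a model-emitted call like {"name": ..., "arguments": ...}."""
    call = json.loads(model_output)
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch('{"name": "create_ticket", "arguments": {"title": "Checkout bug"}}')
```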

So we connected LLMs to external data sources and gave them reasoning capabilities.

This unlocked deeper research, with accurate answers and timely information.
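The mechanics behind that grounding: retrieve a relevant document, then put it in the prompt so the model answers from current data rather than stale world knowledge. A toy sketch, assuming naive keyword-overlap scoring purely for illustration; real systems use embeddings.

```python
# Illustrative corpus; any real system would pull from live data sources.
DOCS = [
    "Q3 roadmap: ship proactive agents by November",
    "Oncall runbook: restart the worker pool if queue lag exceeds 5 minutes",
]

def retrieve(query: str) -> str:
    # Pick the document sharing the most words with the query.
    words = set(query.lower().split())
    return max(DOCS, key=lambda d: len(words & set(d.lower().split())))

def build_prompt(query: str) -> str:
    # Ground the model by placing retrieved context ahead of the question.
    return f"Context: {retrieve(query)}\n\nQuestion: {query}"

prompt = build_prompt("what is the proactive agents roadmap?")
```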

But how is that research useful if you aren't able to *do* anything with it?

AI started isolated and offline. You asked; it answered based on its internal world knowledge.

But this often led to outdated or inaccurate answers due to its knowledge cutoff.