Getting AI to score a phishing email: a few hours.
Getting the output to be useful to someone who almost clicked the link: much longer.
That gap is the whole problem.
| Website | https://wiobyrne.com |
| Newsletter | https://digitallyliterate.net/ |
| License | CC-BY-SA |
The Responsibility Shift: Are systems mirroring our behavior, or structuring it?
In the latest "Digitally Literate," I dive into:
- Why courts are finding platforms liable for "outcomes," not just "hosting."
- How AI autocomplete and "yes-person" models quietly reshape our beliefs.
- Practical ways to reclaim human agency.
Read the full issue here: https://digitallyliterate.net/dl-427/
Presenting tonight at the Bowman Symposium on AI Literacy Day.
The argument: most AI literacy training teaches people to use AI better. That's tool literacy. This session is about something harder.
Resources are live if you want to follow along or share with colleagues:
@Downes agreed. I've been researching how to do this in a VPS. Perhaps I'll head that route as well.
I was exploring that as I was learning about openclaw/clawdbot. Thanks for the reminder.
Can you build a private, local AI tool in a weekend without being a dev?
Yes. I just did it with Ollama + Streamlit + Llama 3.1.
The shift in capability from 2023 to 2026 is staggering. What used to require a team and a hackathon now runs on a standard laptop.
https://wiobyrne.com/how-i-vibe-coded-a-real-tool-in-a-weekend/
Check out the source code: github.com/wiobyrne/trustsense-v2
The things that sustain us rarely begin with us.
They were handed down. A practice. A story. A way of moving through hard times.
And quietly, without always realizing it, we hand them forward.
From "I" to "WE" — that's the shift we're exploring in Chapter 2 of #SignpostSessions.
What thread are you carrying right now?
https://initiativeforliteracy.org/early-echoes-from-the-listening-post/
In 2023, I worked with a team of engineers on an AI tool to detect scams.
We tried Llama 2. It hallucinated constantly. We tried GPT-3.5. Token limits killed us. We ended up hand-building a custom model over the course of weeks.
Last weekend, I rebuilt the whole thing. Alone. On my laptop.
The tools changed. Here's what that looks like in practice.
@stevendbrewer @dajb
I love this as a separate file entirely. Not who you are, but how you want a specific task handled. A "research mode" context file.
My workflow currently has one AI model acting as a "manager" that oversees tasks, then farms them out to a series of local AI models — each guided by their own MD file for that specific job. Still test-driving this, but the MD files are doing a lot of the work.
Wrote about the basics here: https://wiobyrne.com/ian-md/
Great addition. I hadn't thought about the re-read instruction as a verbal shortcut. That's a real gap in what I wrote.
Might steal that for a follow-up.
At OpenAI, one engineer processed 210 billion tokens last week. At Anthropic, a user racked up $150K in a month. This is "tokenmaxxing" — using AI as a status signal.
One engineer: "It's becoming a career risk to not use AI at an accelerated pace, regardless of output quality."
The leaderboards don't measure what actually matters.
This + fake AI soldiers, Google's medical retreat, and AI consciousness:
https://digitallyliterate.net/dl-426/