Kyle Hughes

@kyle@mister.computer
1,056 Followers
645 Following
7.5K Posts

I am a polite software developer and I am always committed to the bit. I will be your favorite app maker’s favorite app maker.

I am enamored with user-facing software, especially mobile, and especially iOS. Swift is my current weapon of choice, but I will use any made-up word to solve any problem. I ship my side projects for free.

I run this single-user Mastodon instance in good faith and I intend to keep it that way.

pronouns: he/him/his
website: https://kylehugh.es
github: https://github.com/kylehughes
threads: https://www.threads.net/@kyle_hughes
I understand the “I am morally opposed to using AI” person much better than the “I won’t use it until it’s perfect” person.
It took me so long to manually recreate the icon selection screen from Shortcuts, along with all of its categories.
My use and discussion of AI systems is not an implicit endorsement of them or their means or their ends or their benefactors. I just don't trust anyone else to judge the field, figure out the trajectory, identify what is useful, articulate what heavy exposure does to one's impulses and thoughts, etc. I think having a good handle on those aspects is important to my own survival and to helping me realize the world I want to see, and there's no other way for me to get there.
With Rite Aid and Bartell Drugs becoming CVS, I think we’re down to just Walgreens and CVS. It’s like Pharmacy 2048 out here. Anyone selling event contracts on who wins?
I think we are barreling toward a future where an agent is the frontend for most software products. The companies that will be able to exist in that world are ones that own customer data, have proprietary data themselves, or provide access to gatekept services (e.g. brokerages). There is no moat for solving problems at runtime. Bespoke UI, if necessary, will be generated just-in-time. These big beautiful screens will mostly be used for video consumption.
GraphQL via MCP is a slept-on giant. I have never felt like it lived up to its promise for frontend technologies, but letting an agent introspect the schema and dynamically generate scoped queries at runtime is the perfect fit. It amounts to emergent behavior. I used apollo-mcp-server to expose my company’s schema over MCP, then paired it with documents covering domain knowledge and schema patterns, and now Claude Code can autonomously use our entire application without a frontend.
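The pattern here is roughly: introspect the schema, validate what the agent wants against it, then generate a query scoped to exactly those fields. A toy sketch of that idea (the schema shape and helper below are invented for illustration; this is not apollo-mcp-server's actual API):

```python
# Toy sketch of "introspect, then generate a scoped query."
# An agent first learns which fields exist, then builds a GraphQL
# query that selects only the valid fields it actually needs.

# Hypothetical flattened result of a schema introspection call.
SCHEMA = {
    "Order": ["id", "status", "total", "lineItems"],
    "Customer": ["id", "name", "email", "orders"],
}

def build_scoped_query(root: str, wanted: list[str]) -> str:
    """Build a GraphQL query selecting only fields the schema confirms."""
    valid = [f for f in wanted if f in SCHEMA[root]]
    if not valid:
        raise ValueError(f"no valid fields requested for {root}")
    fields = "\n    ".join(valid)
    return f"query {{\n  {root.lower()} {{\n    {fields}\n  }}\n}}"

# An unknown field ("shippingLabel") is silently dropped rather than
# producing a query the server would reject.
print(build_scoped_query("Order", ["id", "status", "shippingLabel"]))
```

The interesting property is that no query is hard-coded anywhere: the set of possible queries is bounded only by the schema, which is what makes the behavior feel emergent.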
I feel like I'm having an out-of-body experience wherein my whole community is obsessing over Liquid Glass and I'm saying this is the best software interface I have ever used or could even imagine.
I legitimately think that agentic LLMs are the future of personal computers, the new operating system. Using Claude Code to interact with your own software over MCP, and seeing it autonomously solve problems with it, is transcendent. The rest of the computer feels so antiquated; handmade GUIs feel cumbersome. Our computers will use our computers soon.
Techmeme (@Techmeme@techhub.social)

Anthropic now lets Claude app users build, host, and share AI-powered apps directly in Claude via Artifacts, launching in beta on Free, Pro, and Max tiers (Jay Peters/The Verge) https://www.theverge.com/news/693342/anthropic-claude-ai-apps-artifact http://www.techmeme.com/250625/p34#a250625p34

This is funny and topical but no one on Mastodon has listened to a song produced in the last 15 years. My bad.
https://mister.computer/@kyle/114745413864415216
Kyle Hughes (@kyle@mister.computer)

Thought I'd end up with Sonnet
But it wasn't a match
Wrote some code with Gemini
Now I'm programming less
Even almost paid two hundred
And for Claude, I'm so thankful
Wish I could say thank you to o3
Cause it was an angel

@chris @simon @kentbeck @gergelyorosz If you get paid for that, sure 🙂 I still think AI is a huge issue because of the "stealing" part (it doesn't matter what you call it, but I think we all understand the issue, and that it leads to content no longer being published publicly).
It's essentially the other side of the seniority coin: if everyone just synthesises code, there is no new input for number 5.
@helge @chris @simon @kentbeck @gergelyorosz I'm sure that StackOverflow is a huge input for programming information, and I know I'm not alone in dramatically reducing my time there. After writing over 5700 answers over 15 years, I haven't answered anything in 2025. I do expect this to become an existential problem for the models across a lot of fields.
@helge @chris @simon @kentbeck @gergelyorosz But to Beck's point, I think to a first order, you should think of coding assistants like a higher-level programming language, not a complete reinvention of programming. I find that most of the usual skills still apply, even when they're running at their best. And to "how will junior devs learn the low level skills I know," I'd say the same way most devs learn assembly language. They don't. And it's mostly fine.
@helge @chris @simon @kentbeck @gergelyorosz My much bigger concern, having used coding assistants quite a bit now, is that they have the ability to really trash a code base really fast when they get confused, which is often. And I expect that this will be a major problem. I really like using them at the very start, but they tend to go off the rails pretty often, and I have to take control back. I expect that will give plenty of experience to junior devs.

@cocoaphony @chris @simon @kentbeck @gergelyorosz I actually think it is mostly the same issue, just at a much, much lower speed. Someone who knows assembly usually writes better code (not because she is older, but because she has a solid understanding of how things work internally, though admittedly that may not hold today).
But AI brings that issue (not understanding what happens) from an acceptable level to 100x.

Either way, I think it can be helpful to senior devs because they can rate the output.

@helge @chris @simon @kentbeck @gergelyorosz Of course when higher level languages were first developed, senior engineers did not feel that the issue was an acceptable level :D

I'm seeing some junior devs get in way over their heads following AI advice. (I'm kind of developing a stock lecture about it…) I'm also seeing junior-to-intermediate devs use AI to explore and learn deep things they wouldn't have dared before. I'm seeing them dig into details that before they'd have left as unknowable.

@helge @chris @simon @kentbeck @gergelyorosz (And yes, AI hallucinates. And also, when you research things on the internet, the internet hallucinates. And when you study things in books, they also sometimes are just flat wrong. There are definitely differences, but it is not a fundamental break with the past.)

@cocoaphony @helge @chris @simon @kentbeck @gergelyorosz the difference (to my mind) is that if you have a solid foundation in “how things actually work”, you can design an experiment to figure out what is really happening so you don’t get misled. Absent that foundation … all you can do is hope.

That doesn’t have to mean “writing assembly,” but “knowing that there’s an actual spec and knowing how to read one” is an _invaluable_ skill that’s severely diluted by vibe coding. Even the idea of having a spec is diluted.

@steve @helge @chris @simon @kentbeck @gergelyorosz Also true. Though time and again I discover that how I think things actually work, based on my decades of foundations, is...huh, not actually how things work (which I learn when someone like Steve explains it to me... :D)

Certainly, learning foundations will always be valuable. But I remember my mentor in 1996 fussing that I shouldn't waste so much time digging into details. I think they were wrong. But it's not like *everyone* used to care.

@cocoaphony @helge @chris @simon @kentbeck @gergelyorosz The thing I worry about losing (and which we had already lost to some extent pre-wet-hot-AI-summer) isn't any specific skill or knowledge, but rather the common agreement that computer systems are deterministic things and you can figure out how anything works (or why it doesn't work) by a combination of deliberately reading a specification and conducting experiments to validate it.
@steve @helge @chris @simon @kentbeck @gergelyorosz But I am with you about "vibe coding." I expect that's a fairly short-lived thing. When I see people who are really successful with it, it turns out there was often a *lot* of planning that went into that "vibe." :D
@cocoaphony @steve @chris @simon @kentbeck @gergelyorosz I wouldn't underestimate that. It isn't that different to "regular" users hacking up Excel macros or say Shortcut scripts (which usually s*** from a dev pov, but they do the job). It is an enabler, and as long as the users can properly rate the output, that is kinda ok (seniority again).

@helge @steve @chris @simon @kentbeck @gergelyorosz Agreed that coding assistants are definitely lowering the bar (in a good way) for non-devs to build their own tools. I've seen quite a lot of that, and it's a great thing.

I think some folks are scaling that incorrectly to "AI, build and deploy a replacement for Netflix, I'll come back in an hour." This is very related to how folks confuse prototypes with "almost ready to ship."

@cocoaphony @helge @steve @chris @simon @kentbeck @gergelyorosz

Don’t you think we (collectively) need less, but better quality software?

@tuparev @helge @steve @chris @simon @kentbeck @gergelyorosz not at all. I’m very excited about things that allow semi-technical people to build moderate-quality, bespoke solutions to their personal and esoteric problems.

Excel macros gave people an approachable way to build terrible software to solve real problems that they never could have gotten from “high-quality” accounting software. I think that was a good addition to what VisiCalc had already done.

@tuparev @cocoaphony @helge @steve @chris @kentbeck @gergelyorosz I think we need more, better quality software. I want quality AND quantity!
@cocoaphony @chris @simon @kentbeck @gergelyorosz Of course that always was a thing. The level w/ AI is *way* higher, like 100x (or 1000x according to the article) vs like 2x or maybe 10x. Entirely different scale and problem domain.
Suggesting that this is just the regular "new abstraction" complaint is distracting from the broadness of the issue, IMO.