David Smith

@_Davidsmith
Independent app developer. Independent in general. Maker of Widgetsmith, Pedometer++, Sleep++ and Watchsmith.
Blog: https://david-smith.org
Podcast: https://www.relay.fm/radar
Widgetsmith: https://apps.apple.com/us/app/widgetsmith/id1523682319
Pedometer++: https://apps.apple.com/us/app/pedometer/id712286167

While I don't yet feel like I have fully settled on how I'll end up using LLMs in my day-to-day programming tasks, I have found a handful of prompts which I repeatedly find to be generally useful and applicable regardless of whether I'm programming manually or agentically.

These are for:
- SwiftUI Previews
- Realtime Documentation
- Newly Localizable Strings
- Testing Plans
- Bug Finding
- Draft Release Notes

Detailed here: https://david-smith.org/blog/2026/03/20/generally-useful-prompts/

I was exploring the repairability of a MacBook Neo and came across these absolutely gorgeous exploded views Apple publishes in the Repair Manuals for their products.

I love the vibe of ‘em.

Also, kinda wild that the MacBook Neo is only 22 parts (vs the 35 of a Pro).

Neo: https://support.apple.com/en-us/126172
MBP: https://support.apple.com/en-gb/102712
Air: https://support.apple.com/en-gb/121934

It's a wild feeling when you are debugging a bit of logic and think you've found the bug, so you fire up `git blame` to see where the code in question came from…and it turns out you wrote it nearly a decade ago. Mind-bending that it is still in active use.

For a recent Widgetsmith feature I wanted to know how common the use of Display Zoom was among iOS users, but I couldn't find any published data on it. So I added the necessary analytics to collect this data point, so that I could publish it.

I found that around 1.9% of users have it enabled. Which is much lower than I would have guessed.

It is most common on the SE and Mini models, and least common on the Pro models.

Full write-up: https://david-smith.org/blog/2026/03/12/display-zoom-stats/
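As an aside, I don't know exactly what signal Widgetsmith logs for this; a commonly used heuristic (my assumption, not necessarily David's method) is to compare `UIScreen`'s `nativeScale` to its `scale`, since Display Zoom renders at a smaller logical size and upsamples to the panel. A minimal sketch:

```swift
// Hedged sketch of a Display Zoom heuristic — this is an assumption,
// not the analytics code from the post. When Display Zoom is enabled,
// iOS renders at a smaller logical size and upsamples, so the screen's
// nativeScale exceeds its scale. (Caveat: Plus/Max-class panels
// downsample even when unzoomed, so a production check would need to
// special-case those models.)
func isDisplayZoomLikelyEnabled(scale: Double, nativeScale: Double) -> Bool {
    nativeScale > scale
}

// At runtime you'd feed in UIScreen.main.scale and UIScreen.main.nativeScale.
assert(isDisplayZoomLikelyEnabled(scale: 3.0, nativeScale: 3.516))  // zoomed
assert(!isDisplayZoomLikelyEnabled(scale: 3.0, nativeScale: 3.0))   // standard
```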

As a lover of the old 12″ MacBook I'm a bit sad to see that the MacBook Neo weighs 2.7lbs (the same as the MacBook Air). There was something really magical about the 12″'s 2lb weight. It was light enough that you could carry it around and almost not notice it. I would (literally) carry it around in an inside jacket pocket.

Physically it’s nearly identical to the Air. I was hoping it would be a return to a Mac focused on portability. A nice update, but not quite what I wished it was.

Reading through the System Prompts / Hints that Xcode 26.3 injects into the agents is fascinating…and honestly it's just helpful documentation to read…essentially concise examples of best practices and implementation recommendations.

Reminds me of the old programming "Guides" which Apple used to publish alongside the main documentation, which were more focused on how to use the APIs than on what they were.

They're in: Xcode.app/Contents/PlugIns/IDEIntelligenceChat.framework/Versions/A/Resources

Ran into a weird data consistency bug today, where certain step count badges weren't updating correctly when the user changed their step goal. Took me about 10 minutes of poking around, breakpointing, and on-device testing to find that it was a bad `Equatable` implementation which was tripping up the updater.
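This class of bug tends to look something like the following — a hypothetical sketch (not the actual Widgetsmith code), where a hand-written `==` omits a field, so any "did the state change?" diffing silently skips updates for that field:

```swift
// Hypothetical illustration of the failure mode, not the real code.
struct BadgeState: Equatable {
    let stepCount: Int
    let stepGoal: Int

    // Bug: this comparison ignores `stepGoal`, so changing the goal
    // looks like "no change" to any `newState != oldState` update check.
    static func == (lhs: BadgeState, rhs: BadgeState) -> Bool {
        lhs.stepCount == rhs.stepCount
    }
}

let before = BadgeState(stepCount: 8_000, stepGoal: 10_000)
let after  = BadgeState(stepCount: 8_000, stepGoal: 12_000)
assert(before == after)  // "equal" despite the goal changing — the bug

// Fix: delete the custom `==` and let the compiler synthesize
// memberwise equality covering every stored property.
```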

Reverted my fix and asked Codex in Xcode 26.3 to give it a try. About 1 minute later it came up with the same fix.

Wild stuff.

RE: https://breakpoint.cafe/@brunoph/116013783280847722

Ha, look at that...that feature already exists.

What makes me the most excited to see Xcode get first-party agentic coding support is that it likely means that come June when iOS 27 gets announced it is realistic to expect that we'll be able to use agentic coding models with the new APIs right away.

The first part of my summer is always spent generating dozens of throwaway prototype projects exploring the new capabilities. This is the type of work these models are already incredible for, and it would make me sooo much more productive.

For example, in this project I wanted a mechanism for verifying my automatic bound detection algorithm. So I wondered if I could have the widgets render their bound values within them, then use OCR to read each value back and compare it to the computed value.

I know I could have worked out how the Vision framework does this and gotten it to work, but here I just asked and it built it. That allowed me to catch a few bugs in my detection method which I'd have missed otherwise. 🧵
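The Vision side of that workflow is fairly compact. A hedged sketch of the OCR step — not the generated code from the post, just one plausible shape for it using Vision's text recognition API:

```swift
import Vision

// Hedged sketch: read the strings Vision recognizes in a rendered
// widget image so they can be compared against the computed bound
// values. Requires an Apple platform with the Vision framework.
func recognizedStrings(in image: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: image)
    try handler.perform([request])

    // Each observation can carry several candidate readings;
    // take the top candidate for each detected text region.
    return (request.results ?? []).compactMap {
        $0.topCandidates(1).first?.string
    }
}
```

A verification harness could then render each widget to a `CGImage`, run this, and assert that the recognized numbers match the algorithm's computed bounds.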