Alex Ozun

@alexozun
365 Followers
471 Following
246 Posts
Staff iOS Engineer | 
Writing: https://swiftology.io
Based in 🇬🇧 Born in 🇺🇦
All opinions are my own
@huwr @orj algebraic effects are a more general and flexible mechanism with which you can replicate SwiftUI env semantics or other classic DI patterns.
@huwr @orj there are certainly similarities between the SwiftUI environment and algebraic effects. But the key difference is that the SwiftUI environment is basically a flavour of Service Locator DI, where you inject instances into the environment and retrieve them from it. With algebraic effects, we don't inject anything or retrieve instances at call sites. Instead, control flow is passed from call sites up to enclosing effect handlers (which can compose by passing control further up), enabling dynamic binding of behaviours.
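A minimal sketch of what that dynamic binding can look like in Swift, approximated with `@TaskLocal` (all names here, such as `LogEffect` and `businessLogic`, are hypothetical, and `TaskLocal` is only an approximation of real effect handlers):

```swift
// Effect-handler-style dynamic binding, sketched with TaskLocal.
// The call site performs an effect without retrieving any instance;
// the nearest enclosing scope decides how it's handled.
enum LogEffect {
    // Behaviour bound dynamically by enclosing scopes; the default
    // handler just prints.
    @TaskLocal static var handler: (String) -> Void = { print($0) }
}

func businessLogic() {
    // Call site: perform the effect. No injection, no lookup of a
    // concrete dependency here.
    LogEffect.handler("computing…")
}

// An enclosing scope rebinds the behaviour for everything nested
// beneath it; handlers can compose by rebinding again further down.
LogEffect.$handler.withValue({ message in
    print("[test] \(message)")
}) {
    businessLogic()
}
```

Unlike a Service Locator, the call site never pulls a registered instance out of a container; control flows up to whatever handler is in scope.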
@huwr that was my first thought, but interestingly, many Indian places I tried, especially in food courts, modified their normally vegetarian dishes to include meat, probably to cater to local preferences. Which is fair enough, but pretty annoying nevertheless.
Singapore is one of the most #vegetarian unfriendly metropolises I've been to.
Which is really unfortunate because it's pretty cool otherwise.
@younata this is great work! A shame it wasn't available earlier, when we migrated to Swift Testing at Amex; it would've spared us the need to create custom XCTest/Swift Testing shims for helper methods. But we might still benefit from it when migrating our snapshot tests (those require XCTAttachment support in Swift Testing).
@mattiem I use Dijkstra's definition of abstraction: "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise."
Under this definition, AI prompting is certainly *not* a programming abstraction; it's the opposite. It's a semantic level that allows you to be vague and imprecise, relying on the AI's ability to infer the most statistically probable meaning from context, or to interrogate you until the meaning is clear (hence the Plan mode).
@mattiem you're not alone with this intuition.
I too don't think that (agentic) AI is a tool. And I don't think that prompting is programming at a higher level of abstraction.
I think that in both cases it's something categorically different.
When a PM assigns a task with a detailed spec to a programmer, the PM is not "programming" at a higher abstraction level, nor using the programmer as a tool to "compile" their prompt into a lower-level representation (e.g. Java). They're collaborating.

@fodwyer @mattiem
> Then await the thing that causes the callbacks and see how that set looks?

This would only work if the *thing* itself awaits the callbacks, right? But usually it's fire-and-forget, and you're back to awaiting expectations inside the callbacks themselves.
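To illustrate the pattern under discussion, a hedged sketch (the `Downloader` type and its callback are entirely hypothetical): a fire-and-forget API returns immediately, so a test can only await the result by bridging the callback into async, e.g. via a continuation.

```swift
import Dispatch

// Hypothetical fire-and-forget API: start() returns immediately and
// the callback fires later, so there's nothing to await directly.
final class Downloader {
    var onFinish: ((String) -> Void)?
    func start() {
        DispatchQueue.global().async { self.onFinish?("done") }
    }
}

// Bridge the callback into async so the caller (e.g. a test) can
// genuinely await the result instead of asserting inside the callback.
func awaitResult(of downloader: Downloader) async -> String {
    await withCheckedContinuation { continuation in
        downloader.onFinish = { continuation.resume(returning: $0) }
        downloader.start()
    }
}
```

Without such a bridge, the test has no suspension point tied to the callback, which is why expectations (or Swift Testing's `confirmation`) end up living inside the callbacks themselves.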

@fatbobman @pedro @StewartLynch thank you for featuring my article 🙏

This is my reinterpretation of Kent Beck's old tweet.
But it accurately describes my experience so far of leveraging agents effectively.

https://x.com/KentBeck/status/250733358307500032

Kent Beck 🌻 (@KentBeck) on X

for each desired change, make the change easy (warning: this may be hard), then make the easy change
