please someone disrupt the email space with an app that uses an LLM to give me options like:

“block every email that looks like an ad person reaching out to me”

“suggest the solution i am typing out now every time you see a support email that asks this question”

“automatically unsubscribe from everything that is a newsletter”

“block every pr request i get”

"rewrite every angry email i receive into a neutral tone”

@helvetica every generation reinvents the bash script
@zaratustra @helvetica talking about reinventing bash, I read this Verge article a few months ago that began by explaining how computers used to have hard to use text-based command line interfaces, and now we have amazing advances like chatGPT. then they started talking about how to coax the right answers out of chatGPT by understanding how it works, and the notion of “prompt engineering”. it seemed the author did not catch on to the fact that ChatGPT is just a really opaque/imprecise text CLI

@jakintosh @zaratustra lol. to be fair, i think it's maybe more nuanced than just being opaque/imprecise.

im also not entirely convinced that the raw cli style interface we have for this stuff right now is going to be the end-all interface for most of it

@helvetica @zaratustra I just can’t help but be 100% critical of a technology whose primary feature is “it’s a black box!”. modern post-industrial societies have used their infinite cheap energy to develop unsustainable amounts of technological abstractions that nobody understands, and LLMs/NNs/etc are the ultimate “lean in” to a *shrug idk whatever* approach to engineering. the more knowledge we offload to the black box, the more knowledge we stop reproducing and carrying forward as a society

@jakintosh @zaratustra i don’t think that’s its primary feature. the primary feature of transformer models is that they can identify patterns that we cannot. they're like an electron microscope for patterns.

it also happens that we cannot look through this particular electron microscope

@helvetica @zaratustra I was having a conversation last week where people were excited to use it to write their boilerplate code, or translate crochet patterns from Russian, but they were not thinking about how that friction is what pushes humans towards creative new solutions, and the *process* of working through problems like that is where we learn and synthesize ideas across disciplines. in many cases this is where culture itself comes from and we’re ready to sacrifice that for “productivity”
@helvetica this is to say that we *can* look through the microscope of identifying patterns, and we *do*, it’s just in a distributed manner through cultural evolution of knowledge and process, and not at the individual level. we risk losing the “process” of everything in favor of the “result”

@jakintosh it’s been pretty amazing to knock together prototypes with, i will admit

no, i think we don’t look through it in the specific way that transformer models do.

for example, Google has a way to identify diabetes from a photo of someone’s eye. that is not something we could previously do.

@jakintosh you’re talking about an abstract process by which people learn that you’re nervous we will stop doing. I'm talking about a literal actual process that exposes things that we previously did not have access to.

I am not so worried that people will stop doing the process; you’re worried about losing it just because we have this new tool.

@jakintosh those kinds of fears feel to me a little bit like the people who are worried that we're gonna stop understanding how to program because we no longer program in assembly or C++ and we're gonna stop learning how to use computers because we no longer have to build our own computers.

@jakintosh But what has instead happened is that people are learning and writing a variety of wild and interesting programming languages, and they're building and working with microcontrollers on a scale way beyond anything anyone was doing a long time ago, because we have Raspberry Pis and FPGAs etc.

Tools that make this stuff more accessible may mean a smaller percentage of users in a given field are working with raw technology, but they almost always mean more people overall are doing the hard work

@helvetica I guess my criticism comes from a place where, in reality, the worries of "we'll forget how to program" have actually come true: our technology is more resource-intensive, opaque, and brittle than it used to be, and we have not yet fully "paid for" that cost or learned our lesson, since it takes a long time for the system to break down. I also think there is a fine line between increasing accessibility and handing out dangerous tools with the sharp edges sanded off
@jakintosh when you say "we’ll forget how to program" has come true, what are you referring to?
@helvetica in this case the "consequences". I agree that strictly numbers-wise there are a lot more people in the world who engage with technology at a low level than say 40 years ago, but proportionally it's much, much smaller. the tech industry is filled with people who don't really understand what they are doing, and our tech systems as a whole are worse-off because of it. they're not resilient, secure, efficient, or performant, and the main tool for improvement is "more abstraction".

@jakintosh i think we may disagree fundamentally on the causes and requirements here.

so much of our world is tech-based that we need an enormous number of tech workers; that demand forces abstractions that make room for lower-skill labor, because there just isn't enough high-skill labor out there to support the system.

also, afaik most security and efficiency problems don't come from bad software construction, but from collisions between environments and workflows

@helvetica I guess my main point is just that we have willingly chosen the positive feedback loop path of infinite abstractions, and we have not really begun to pay for that cost yet. LLMs/Transformers are the ultimate realization of that path, saying "we, collectively, give up on trying to make sense of the messes we've created/encountered". and that energy-wise, we will not be able to sustain the massive resource requirements of this vision of computing as we transition off of fossil fuels.

@jakintosh im going to skip responding to the energy use thing because that seems like a late stage addition that isn't really what we were talking about before.

i do think it is a full-scale embrace of messiness. I guess I think this idea that problems can be solved elegantly is just not really true in the real world. I can't think of a single real-world problem that has an elegant solution.

even insurance and permitting, which should be straightforward, are messy all the way down.

@jakintosh i think part of moving programming into a space where it is more accessible, more functional, and more practical for use in the world means making it more world-like, and less abstract-math-like.

and i do think you're right that it is a kind of anti-engineering sentiment.

i think where we maybe have the disagreement is that I think the engineering idea that problems can be solved elegantly just does not scale at all

@jakintosh and all it takes is 5 minutes talking to a security researcher to realize how much it doesn't scale (a major proximity attack vector is ipv6, for example: a system that is beautifully elegant, but just never really used).

or considering that we don't have a single elegant functional secure software system for an entire business

@helvetica I guess the energy concerns that I brought up and the complexity concerns that you raise are the source of the disagreement. There's a relationship between the scale of a system and its inherent stability, and IMO the abundant energy situation gave us the (short term) ability to ignore and control the stability of overly complex systems. it's not about elegantly solving impossibly huge and complex problems, but about breaking up and simplifying the problems to something manageable.
@jakintosh yeah that makes sense. we have a lot of large-scale systems that would have to be dismantled for an elegant engineering solution to be viable, because they are simply too big for that kind of solution (companies of more than 100 people, for example)
@jakintosh i guess ultimately i think big systems probably are a net stabilizer for humanity, despite their many drawbacks (big companies, the educational system, organized democratic governments, etc)
@jakintosh and thanks for the clarity on the energy. I totally agree, we have been creating a lot of great gains for humanity that are significantly wasteful, and this stuff certainly follows the trend; though not quite on the scale of airplanes and coal power and crypto, it is definitely in that style
@helvetica yeah, thanks for the convo, even though microblogging isn't always a great venue for it. it's hard to really get out all of the ideas, especially on something like this that is so complex and far-reaching, and in my case is intertwined with many different aspects of my work; hard to touch on the social, structural, and material implications and assumptions baked into our technological aesthetics and infrastructure in narrow chunks of 500 characters 😅.

@jakintosh yes hahah. thanks too for the conversation. also, just as a note, I am glad that there are smart people like you who are approaching this stuff differently.

I am deeply pragmatic at my core, and that pragmatism has been reinforced fairly strongly by successfully using it to navigate systems in the world. It's really useful, but I do know it has occasionally blinded me to some more radical approaches that are very reasonable and productive