@ceejbot But also: _this is why we design processes and contexts to minimize harms_.
Unfortunately that means now _re_designing a bunch of them.
@aredridel @ceejbot fundamentally this _is_ the difference between a good programmer and a bad programmer.
a good programmer will think "I am not a good programmer. because of this, I will design for safety, because I will make mistakes."
a bad programmer thinks that they can try a little harder and be safe that way
"Always has been", quite literally.
I went to one of those reputed to be wise [...] and when I considered him and conversed with him, men of Athens, I was affected something like this: it seemed to me that this man seemed to be wise, both to many other human beings and most of all to himself, but that he was not. [...]
For my part, as I went away, I reasoned with regard to myself: “I am wiser than this human being. For probably neither of us knows anything noble and good, but he supposes he knows something when he does not know, while I, just as I do not know, do not even suppose that I do. I am likely to be a little bit wiser than he in this very thing: that whatever I do not know, I do not even suppose I know.
An excerpt from Plato's Apology of Socrates, written over 2400 years ago.
@fred @glyph @aredridel wait wait wait Marc Andreessen just assured me that we only invented introspection around 400 years ago; this can't be true…
(easy dunk is easy, yet satisfying)
@semanticist Some, I'm sure, but I really don't think that's the mode or anything near it.
(It actually does make lots of people more capable!)
@glyph @aredridel @ceejbot I think a lot of people are working on the assumption that mistakes aren't as costly anymore.
You won't have to live with the consequences very long and you can just rewrite everything if the technical decisions you make end up being wrong.
This doesn't hold for genuine safety issues, like things affecting the privacy and security of your users, but industry was already caring about those things pretty reluctantly.
@glyph @aredridel @ceejbot given the choice of being out competed by someone using AI and losing all your customers data because you used AI the choice if obvious.
There are provably no consequences for the latter.
You can calculate exactly how much a year of free credit monitoring for all your users will cost.
@glyph @dreid @aredridel At some companies there's huge pressure from fairly ignorant/credulous leadership -- or worse, leadership with a financial incentive to promote use of tech that doesn't really work -- to pump out lines of code with these things. This has predictable outcomes.
Microsoft/GitHub has a history of doing this, but this time the bad tech speeds up the bad code production instead of getting in the way and slowing people down.
I don't know how to express online, with its context-collapse problem, exactly how mixed my opinion is about all this. Writing software is changed forever AND using these tools has a real place in your workflow if you learn how AND it's a horrible mess because capitalism has its usual incentives.
@ceejbot @glyph @aredridel there is the assumption that they won't make bad code forever, so we'll just use them to replace the bad code with better code later.
Also we can put the person who committed the bad code on a PIP while ignoring the organizational defects that prevented putting any safety procedures into place.
On paper there is lots of interesting stuff happening. But in practice I can't figure out a way to use it without enriching the worst fucking people.
Most tradespeople and cooks never really learned how to do it properly, and yet they still get a knife and a chainsaw.
@camertron This has been what I've been thinking. Sadly. Along with how to cope with it.
(Why do my colleagues who write typescript have more trouble with it? Many reasons, I think. In this essay I will…)