I watched a neat if somewhat synthetic ML demo based on this paper - https://arxiv.org/pdf/2005.11401.pdf - and on the one hand, the technique is interesting because it lets you isolate cited sources for the generated text fairly easily. But on the other hand... I remember when people thought using Wikipedia for research was cheating. And it still - sort of - is, in the sense that it's not your primary research, it's other people's distillates of other people's research. And that's what this demo is doing.
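
(For the curious: that paper is about retrieval-augmented generation - the generator is only ever shown passages a retriever pulled out of a corpus, which in the paper is, fittingly, Wikipedia - and that's why the cited sources are easy to isolate. Here's a toy sketch, with bag-of-words overlap standing in for the paper's learned dense retriever; the corpus and the stubbed-out "generator" are invented for illustration, not taken from the demo:)

```python
# Toy retrieval-augmented generation (RAG), to show why citation is cheap:
# the generator only ever sees passages the retriever selected, so the
# retrieved document IDs *are* the citations. Bag-of-words overlap stands
# in for the paper's learned dense retriever; all names here are invented.
from collections import Counter

corpus = {
    "wiki/Photosynthesis": "Plants convert light into chemical energy through photosynthesis.",
    "wiki/Mitochondria": "The mitochondrion produces most of a cell's chemical energy.",
    "wiki/Chlorophyll": "Chlorophyll absorbs the light that drives photosynthesis.",
}

def overlap(query: str, passage: str) -> int:
    """Crude relevance score: count of shared lowercase tokens."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 2):
    """Return the k highest-scoring (doc_id, text) pairs."""
    ranked = sorted(corpus.items(), key=lambda kv: overlap(query, kv[1]), reverse=True)
    return ranked[:k]

def answer(query: str):
    sources = retrieve(query)
    context = " ".join(text for _, text in sources)
    # A real system would condition a trained generator on `context`;
    # here we just hand it back. Either way the sources are known exactly,
    # because the model was never shown anything it can't cite.
    return context, [doc_id for doc_id, _ in sources]

print(answer("what does chlorophyll do with light"))
```

The real demo's moves are fancier, but the provenance story is the same: whatever the retriever touched is the bibliography.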

And to take that logic a step further: the reason Wikipedia was considered cheating or in some way adjacent to plagiarism wasn't just its presumed unreliability - some people just need to see gatekeepers, I get it even if I don't agree - but the fact that all this supplementary labor went not just uncited or uncredited but unacknowledged.

That is to say, laborwashing.

Yes, it's novel that ML demos can distill coherent sentences out of the products of a tectonic amount of human effort. But.

So far I haven't seen one of these tools that isn't, in one way or another, glossing over the step where a lot of un- or under-paid humans are doing all the heavy lifting that makes any of this work at all. Do you presume to criticize the great and powerful Oz^H AI model? Pay no attention to the hundreds or thousands of people behind the curtain.

https://www.youtube.com/watch?v=YWyCCJ6B2WE

Put differently: every current ML demo is an abstraction layer that gives us license to ignore the details of exploitation.

I'm naive enough to believe humane computing is possible, that computational literacy matters, that access to computation should be a human right, but for any of those wild-eyed techno-hippie ideals to matter, computation can't have exploitation at its core. And all the talk about "AI safety" and "human-in-the-loop decision making" is a distraction from the fact that it does, hiding questions about where humans already are in these systems behind chin-stroking questions about where they should be.

To some extent, I think our language is failing us here, in the way it always fails us in the presence of real novelty; we seek out analogy to tether the novelty to our existing frames, and the shortcomings of those analogies are where the charlatans of the world do their best work. "We are encoding knowledge" - no sir, you're encoding text. The text represents knowledge, but it is not knowledge. "Signs of AGI" sir you put googly-eyes on a very complicated spreadsheet, we are not fooled.

This is, I think, where my bottomless contempt for the threat-of-AGI, Roko's Basilisk crowd comes from: the callous willingness to overlook the real cost of systematically undervaluing humans, in their desire to be either saved or destroyed by a God they're trying to bring about out of fear that it would be unhappy with them if they didn't.

It's just so dumb. It is incredibly dumb, but it's a brand of dumb that comes at huge human cost, built on an abstraction layer that lets believers ignore that cost.

Anyway, the number of self-educated people out there who believe that if you put a dictionary in a blender there's a chance the blender will get extremely angry you didn't do it sooner and seek revenge, so logically we need to put all the dictionaries we can find into the biggest blenders we can build as soon as possible, all the while calling themselves 'atheists', is prima facie evidence that funding robust liberal arts programs is a national defence issue.

Maybe we need "You must have read and understood a book that did not have a dragon, a robot, a wizard, a spaceship or cartoon boobs on the cover before operating this machine" warning labels on laptops, compilers and language models.

[Late note: with all due respect, if you're considering replying to this post saying "but my computering book has a wizard on the cover" please read this thread again.]

@mhoye @ob1quixote As a person who has read and loved many such rocket/dragon books, but also has an MA in history, I think many of them work best in conversation with other books. The failure to get beyond genre is how we get the Torment Nexus joke.
@mhoye There are plenty of people who -- primarily, even -- engage heavily with books not featuring dragons, robots, wizards, spaceships, or cartoon boobs who nevertheless dived headfirst into their own flavor of self-justifying eschatology.
@mhoye And, like, I REALLY don't think the current situation WOULD have been improved at all if more of the people hyping up their nonsense HAD engaged with the liberal-arts philosophical underpinnings of their ideology and read Nick Land.
@mhoye Doesn’t one of the pre-eminent compiler books have a dragon on the cover?

@mhoye Thank you. I love your images

""Signs of AGI" sir you put googly-eyes on a very complicated spreadsheet, we are not fooled."

😘

"if you put a dictionary in a blender there's a chance the blender will get extremely angry you didn't do it sooner and seek revenge, so logically we need to put all the dictionaries we can find into the biggest blenders we can build as soon as possible"

@mhoye Late note is fair enough; in my defense I did read (and agree with) the thread, wasn't trying to quibble ...
@mhoye Nice, 20,000 Leagues Under the Sea is legit.

@mhoye The basic points, about unrecognized human labor and the lack of critical thinking toward AI (as religion), are in danger of getting lost.

I would also like to see ethical computing, and people not creating gods out of AI.

The cover-art quip is where this went sideways, as fantasy literature is in fact used all the time to teach ethics, good government, and even critical thinking skills.

@mhoye New insight: Torment Nexus is made in Blender.
@mhoye This toot showed up in my timeline without being marked as a reply or part of a thread and it was quite a trip
@GreenSkyOverMe I think it stands on its own pretty well tbh.

@mhoye @GreenSkyOverMe I think it helps to understand that what most people call large language model AI works by predicting words, and that some of these AI true-believer cultists believe they will be tortured by any AI that is created if they don't help bring it into existence, or some bullshit like that.

@mhoye There's also considerable irony in the fact that atheists have often remarked on how the moral certainty of the religious can enable them to rationalize doing terrible things. And then this group of (I assume mostly) atheists comes up with a quasi-religion that does exactly that thing in an even more extreme way.

@mhoye That's a wonderful metaphor 😂

@mhoye thank you for the "dictionary in blender" metaphor, that will immediately enter my active vocabulary!
@mhoye Really the joke is on Roko's Basilisk, because I'm probably just a Boltzmann brain that's going to wink out of existence in a moment anyway in a regression to the mean. ;-)
@internic Who throws away backups though?
@mhoye @internic Everyone. But only inadvertently and only the ones that are eventually actually needed.

@mhoye @nyrath While my takes on AI are... complicated... I'll just say that Roko's Basilisk is duuuumb. It's literally a reskinned Pascal's Wager. And yet those "rational atheists" eat it up because it has a sci-fi flavor. Pascal's Wager doesn't even hold up by itself (but that's another story).

LessWrong is a circus act masquerading as an intellectual club. In actuality it is a 24/7 circlejerk of average-to-below-average-intelligence people patting each other on the back for being so smart.

@maxthefox @nyrath Yeah, that's a good way to see it; it's definitely the Bring Your Own God edition of Pascal's Wager, which itself should be understood as a clever brand of epistemic cowardice. At least Pascal had the intellectual dignity to apply the argument to an existing divinity of predefined consequence, and didn’t just cobble some shambling golem together out of his own cleverness and vanity to put over the altar.
@mhoye Googly-eyes on a spreadsheet! ❤️❤️❤️