Human beings have built an abstract model of language that *multiplies* their own linguistic powers. But most of my friends are really depressed about it because they’ve seen so much science fiction that they interpret anything called #AI as a *diminishment* of human agency. It’s a testament to the power of fiction, but also a little loopy if I’m being honest.
I seriously think it might be smart to move toward a discussion of “language models” & “image models” and ditch the AI acronym, which seems to contribute nothing but millenarian fantasy + apocalyptic despair + reactive moralism.
@TedUnderwood Could you get rid of that word "train" as well, then? Because "train" makes it seem like they're toddlers or pets, when really they're just machines sifting a lot of data over time, for better or worse: "garbage in, garbage out," you know?
@prokofy Nah, I’m quite committed to the view that learning is happening; I think we have a good mathematical theory of what it means when we say that. The problem in my view is not that it’s wrong to say “learning” “training” or “intelligence,” but that SF has taught people to assume those words imply goals and agency.

@TedUnderwood @prokofy Only if SF is read quite narrowly.

@annaleen 's AUTONOMOUS actually drives that question in the opposite direction by exploring all the ways in which humans are made into machines and instruments.

@trochee @TedUnderwood @annaleen

“The danger of computers becoming like humans is not as great as the danger of humans becoming like computers.”

Konrad Zuse