NYT "AI" explainer misleads. Deep learning techniques date from the 1980s, & "AI" had been hot/cold for decades, not slow until 2012. There was no new "single idea" in 2012. What WAS new, & propelled the AI boom, was concentrated resources (data/compute) controlled by tech cos.
Access to massive data (aka surveillance) and compute made old "AI" techniques do new things. It also showed that "AI" could profitably expand "what could be done" with the surveillance data already amassed by the targeted-ad companies that dominated the industry.
Telling an accurate story leads to much more relevant questions than simply narrating "AI" as resulting from scientific progress -- a singular idea sprung from Zeus's head! Indeed, conflating scientific progress w/ tech co products is one way these cos staved off regulation for so long.
To grapple with "AI" and the concentrated power on which it's predicated, we certainly need to understand what it is. But pieces like this do the opposite, further mystifying & obscuring, & ultimately making it harder to strategize how to shape and resist.
For an antidote, and a precise material analysis, I suggest the latest @AINowInstitute report, which provides a masterful landscaping and diagnosis. https://ainowinstitute.org/
@CadeMetz @kevinroose I think this should be addressed. I recognize that you write for a general audience, but it's possible to be more precise without getting into the weeds. It's imperative that we don't further mystify these technologies, especially via such a prominent platform.
*To be more precise myself, the specific deep learning techniques that animated the early 2010s AI boom (CNNs) date from the 1980s. But not all DL techniques.

@Mer__edith So I guess nobody associated with this piece had ever seen T2: Judgment Day? Because in that, there's a bit where the T-800 says "My CPU is a neural net processor; a learning computer."

21 years before 2012.

It's like they didn't do even rudimentary fact-checking.

@kagan So many computing historians on the job market at this very minute. And yet!

@Mer__edith @kagan Came here to mention T2. Nobody watches the classics any more. 😢

And while computing historians are great, taking 30 seconds to Google "neural network history" would be a good start. 😡

@Mer__edith that's so weird to read from them. In the early 2000s I had a book called "Yes, We Have No Neutrons" about cold fusion and other failed technologies. It included neural nets! Presented as a cool idea from the 80s that didn't work out.

In fact, it was also weird when I saw a talk in 2012 about new ML developments and realized it boiled down to "we figured out that if we used enough computers, neural nets start to do what we were hoping."

Thanks for pushing back on the NYT!

@Mer__edith A lot of what we're seeing now falls under the purview of the Chinese Room argument, formally presented as such by John Searle in 1980 but with a rich history going all the way back to Leibniz and even before. This is hardly new philosophical or even technological ground.

But in its obfuscation and mysticism, the narrative serves a purpose: it attracts those investors who are always after the new, the magical, the wondrous, and unwilling to crack open a history book.

@Mer__edith yeah, apparently the world forgot about Deep Blue and others.

I remember back in 2008-2010 bumping into a bunch of shitty chat bots in Second Life.
Only recently has there been hype again. Why?
- it's matured enough that it might be practical for a non-compsci major to leverage
- it's matured enough to have more obvious practical applications (generating scripts and regular expressions as a programmer)
- it's been opened to the public so they can play with it and leverage it