| website | https://www.14watts.com |
| newsletter | https://newsletter.14watts.com |
| linkedin | https://www.linkedin.com/in/rajeshbilimoria/ |
Statement from the listed authors of Stochastic Parrots on the “AI pause” letter
https://www.dair-institute.org/blog/letter-statement-March2023
"Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices."
With @timnitGebru, @meg, and Angelina McMillan-Major
Big problem: authors often support claim X with a citation to paper Y, even though Y has no bearing on X or even directly refutes X.
Estimates suggest that between 5% and 35% (the latter seems too high to me) of scientific citations do this. It's a grave sin, akin to claiming statistical significance when you clearly don't have it. Yet it's very common.
An extreme case: the first citation in the new FLI letter "Pause Giant AI Experiments".
@timnitGebru explains: https://fediscience.org/@timnitGebru@dair-community.social/110110514822795454
The very first citation in this stupid letter, https://futureoflife.org/open-letter/pause-giant-ai-experiments/, is to our #StochasticParrots paper: "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]" EXCEPT that one of the main points we make in the paper is that one of the biggest harms of large language models is caused by CLAIMING that LLMs have "human-competitive intelligence." They basically say the opposite of what we say and cite our paper?