This is a great summary by @SashaMTL of the environmental and human costs of so-called "AI" technology.

https://arstechnica.com/gadgets/2023/04/generative-ai-is-cool-but-lets-not-forget-its-human-and-environmental-costs/

>>

The mounting human and environmental costs of generative AI

Op-ed: Planetary impacts, escalating financial costs, and labor exploitation all factor.

Ars Technica

@SashaMTL

"For instance, with ChatGPT, which was queried by tens of millions of users at its peak a month ago, thousands of copies of the model are running in parallel, responding to user queries in real time, all while using megawatt hours of electricity and generating metric tons of carbon emissions. It’s hard to estimate the exact quantity of emissions this results in, given the secrecy and lack of transparency around these big LLMs."


>>


@SashaMTL

"it’s difficult to carry out external evaluations and audits of these models since you can’t even be sure that the underlying model is the same every time you query it. It also means that you can’t do scientific research on them, given that studies must be reproducible."


>>

@emilymbender @SashaMTL Few things in the social sciences are truly reproducible (they involve stochastic processes). While version-controlled software, known parameters, and deterministic behaviour given identical starting conditions *can* provide mechanistic reproducibility, that is by no means a precondition for scientific research. And in the negative, "evidence is evidence": credible evidence of even a single occurrence can falsify a theory.
@tg9541 @SashaMTL That's a strange defense of OpenAI et al.'s extremely closed practices.
@emilymbender @SashaMTL It wasn't my intention to defend OpenAI. My statement was a simple observation about scientific research and reproducibility. On the contrary, scientists should always try to falsify what is claimed to be true, and they should lay bare any claim that is not falsifiable. Unfortunately, 500 characters of text is quite limiting 🙂