Okay, so are these 8 pages of motivated reasoning formatted like they've been submitted to Science or to Nature?

https://assets.researchsquare.com/files/rs-2724922/v1_covered.pdf?c=1680083818

Look, you can't count the carbon emissions that people have for (checks notes) existing as the "carbon cost" of the work that they do.

I can't believe this needs to be said, but: LLMs are *optional*. Humans are not.

And here's a new twist on "we used ChatGPT to write our paper". Of course.
@emilymbender "To guarantee the integrity and originality of our work, we ran the text through TurnItIn plagiarism detection software." 🤣🤣🤣
@emilymbender "when managed responsibly"
... this is going to be a sh*tshow, isn't it?
@emilymbender ChatGPT Boilerplate generated by ... ???
@grebmar @emilymbender it'd be ironic if all this text had been generated by ChatGPT and there weren't actually any checks
@emilymbender I hope this gets soundly rejected and slapped down (and ML/LLMs (so-called "AI") are anything but environmental/responsible/…)
@emilymbender "environmentally sound decision"? I'd love to see their model. Then again, maybe it's just the usual transhumanism.

@emilymbender 😭

this is ... so bad it's hard to believe it's not parody.

I remember writing parodies of world-destroying-tech in the form of papers, but it was a simpler time (2005)

https://trochee.livejournal.com/118376.html

@emilymbender Honestly, they could have rephrased that as "we used ChatGPT to make writing the paper both longer and more work-intensive."

@emilymbender

thousands of GPU-hours are good for the environment somehow? wtf?

@rocketdyke @emilymbender
This is where a quote-post would be handy. See higher up the thread @gerrymcgovern
@emilymbender they have understood absolutely nothing.

@emilymbender Oh wow.

That's about the same level of reality bending as AlphaGo's "we only need 20 MW (or so) to train it to beat humans at Go, but after that it's cheaper than the human's 20 W..."

@ftranschel @emilymbender And AlphaGo isn't even better than humans: https://goattack.far.ai/
Adversarial policies in Go

Examples and analysis of superhuman Go AI systems beaten by adversarial attacks

@emilymbender I don’t find this paper particularly meaningful, but the comparison is between the consumption of a unit interval of *human time* as *applied to writing*. I understood the idea behind that comparison to be that the human *could be doing something else* in that time instead (e.g., building renewable technologies…).

that doesn’t strike me as stupid…

@UlrikeHahn
Problem is that using the LLM isn't an "or" scenario, it's an "in addition to" scenario: the humans would still exist & MOST LIKELY still be having the same impact, if not more (since they might be driving, using power tools, etc.).

@emilymbender

@FeralRobots @emilymbender

well, that’s the question: what’s the appropriate baseline comparison for the energy budget.

there are many tasks we simply don’t use humans to do any more…

I’m not at all a fan of letting LLMs loose on the world, but I do think we should try and be factual about what choices entail.

@UlrikeHahn
I don't think I made myself clear.
Let's say humans generate x amount of carbon for task a, while LLMs generate y, where y<x.

You seem to be arguing that if humans don't do task a, they will generate -x carbon.

But that's almost certainly not the case. They will very likely generate approximately as much, whether they do the task or it's done by an LLM.

Whereas if we don't use the LLM, that's always going to be -y.
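A minimal sketch of that accounting, in Python; every number here is a hypothetical placeholder, not a figure from the paper:

```python
# Sketch of the baseline argument above; all numbers are hypothetical.
x = 1.4    # kg CO2 the paper attributes to a human doing task a (per-capita framing)
y = 0.002  # kg CO2 the LLM emits doing task a

# The paper's implied comparison: y < x, so using the LLM "saves" x - y.
paper_saving = x - y

# But the human exists (and emits) either way, so the real comparison is
# between totals, and the marginal effect of adding the LLM is just +y.
human_baseline = x               # emitted whether or not the human does task a
with_llm = human_baseline + y    # "in addition to", not "or"
without_llm = human_baseline

marginal = with_llm - without_llm  # always +y: an increase, not a saving
print(f"claimed saving: {paper_saving:.3f} kg; actual marginal change: +{marginal:.3f} kg")
```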

@emilymbender

@UlrikeHahn
In other words: We haven't yet discussed a case where substituting LLMs for humans clearly reduces the carbon footprint, because there's a flawed assumption at the heart of the calculations.

@emilymbender

@UlrikeHahn
Put another way: The only CLEAR way to reduce the carbon footprint by using LLMs instead of humans is for the humans to no longer exist.

@emilymbender

@FeralRobots @emilymbender I think you were perfectly clear. I was merely pointing out in what way (presumably) the comparison in the paper was intended: hence my example of one human hour ‘writing’, versus say ‘building renewables’

@UlrikeHahn
OK; but when would that ever happen?
It's pretty unlikely, so it's rather odd for them to use that basis for comparison if that's what they're doing.

@emilymbender

@FeralRobots @emilymbender

that’s where I disagree: the value of the unit (such as it is; like I said, I don’t wish to particularly promote this paper) is that we can use it to calculate the circumstances under which LLMs would be energy-beneficial: i.e., if the time freed when they take over tasks from humans is used, by said humans, to do something with a lower carbon footprint than doing that task. The logic of it is fine. Where that will apply in practice is another matter
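For what it's worth, that break-even condition can be written down directly; a sketch, with invented figures standing in for the real unknowns:

```python
# Break-even condition for the substitution being beneficial; all figures invented.
y       = 0.002  # kg CO2: LLM doing the task
e_task  = 0.02   # kg CO2: *marginal* emissions of the human doing the task themselves
e_freed = 0.5    # kg CO2: marginal emissions of whatever the human does instead

# Scenario A: human does the task            -> e_task
# Scenario B: LLM does it, human does other  -> y + e_freed
beneficial = (y + e_freed) < e_task
print(f"LLM substitution beneficial under these assumptions: {beneficial}")  # False here
```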

@UlrikeHahn
The logic's *not* fine, though.

The logic requires assuming something that's really unlikely. That becomes kind of a verbal sleight of hand, which leads us to behave as though what's unlikely (people spending time mitigating carbon that they would otherwise spend on creative work) is what *will happen*.

@emilymbender

@emilymbender Huh. I'm no academic, but isn't it odd to do a paper on carbon emissions with no experts on carbon emissions? I count 2 informatics profs, 1 cs prof, and a law prof.
@emilymbender also, the per-capita emissions of a country already include the LLM's emissions.

@emilymbender I really feel like this, and other stuff like that "AI focus group" thing, are really about trying to reduce a human to "a piece of equipment, perhaps now obsolete, for accomplishing a task" - rather than "a person, with their own experiences and value and dignity, deserving of respect and consideration."

That scares the heck out of me.

@arcanesciences
Reducing humans to the status of perhaps-obsolete equipment is a necessary moral precondition to the #Longtermist program of replacing us with 10**48 (or whatever their made-up quantity is) hypothetical simulations of humans at some hypothetical future time.
#Longtermism #ExistentialRisk
@emilymbender
@emilymbender Extremely weird article. I assume that the authors just didn't think this through at all, but the implication that we can or should *replace human populations with AI* is disturbing at best.
@ouro @emilymbender It seems like playing down the value of humans and playing up the value of models has been a pattern for a while. The implications are a little unsettling.
@emilymbender This paper is proper box-of-frogs mad.
@emilymbender Apparently under review in Scientific Reports (https://www.researchsquare.com/article/rs-2724922/v1)
The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans

As AI systems proliferate, their greenhouse gas emissions are an increasingly important concern for human societies. In this article, we present a comparative analysis of the carbon emissions associated with AI systems (ChatGPT, BLOOM, DALL-E2, Midjourney) and human individuals performing equival...

@emilymbender

It's being reviewed by Scientific Reports, Nature's open access title, 6th most-cited journal in the world, so close enough.

And you're spot on about the ethics of their sums.

A much more sensible comparison is the carbon cost of computing during the writing process, which is 1.6g for the AI servers and 27g for the writer's laptop.
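For anyone who wants to check that kind of sum, a rough sketch; the power draws, durations, and grid intensity below are assumptions chosen to land near those figures, not the paper's actual inputs:

```python
# Rough reconstruction of the kind of sum behind figures like those above.
# All inputs are illustrative assumptions, not taken from the paper.
GRID_KG_PER_KWH = 0.4  # assumed grid carbon intensity

def co2_grams(power_watts: float, hours: float) -> float:
    """Grams of CO2 from running a device at power_watts for hours."""
    kwh = power_watts * hours / 1000
    return kwh * GRID_KG_PER_KWH * 1000

laptop = co2_grams(power_watts=50, hours=1.35)   # writer drafting for ~80 minutes
server = co2_grams(power_watts=400, hours=0.01)  # ~36 s of an inference server's time
print(f"laptop: {laptop:.1f} g, server: {server:.1f} g")  # 27.0 g vs 1.6 g
```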

(It'd be nice if the output was anywhere near as good as 1/16th of the writer's, though...)

Highlights the CO2 cost of shitty jobs, though...

@emilymbender

Also, it's a pre-print -- you can spout any crap you please in a pre-print. Apparently 6-7% of submissions are published in Nature, and I don't think those odds are likely to be in favour of that paper.

@emilymbender
🤨
and some "researchers" have lost it;
completely
- the most absurd start -
"Nevertheless, at present, the use of AI holds the potential to carry out several major activities at much lower emission levels than can humans."