Folks, I encourage you to not work for @OpenAI for free:

Don't do their testing
Don't do their PR
Don't provide them training data

https://dair-community.social/@emilymbender/110029104362666915

Emily M. Bender (she/her) (@[email protected])

Oh look, #openAI wants you to test their "AI" systems for free. (Oh, and to sweeten the deal, they'll have you compete to earn #GPT4 access.) https://techcrunch.com/2023/03/14/with-evals-openai-hopes-to-crowdsource-ai-model-testing/

Distributed AI Research Community

I see people asking: How else will we critically study GPT-4 etc then?

Don't. Opt out. Study something else.

GPT-4 should be assumed to be toxic trash until and unless #OpenAI is *open* about its training data, model architecture, etc.

I rather suspect that if we ever get that info, we will see that it is toxic trash. But in the meantime, without the info, we should just assume that it is.

To do otherwise is to be credulous, to serve corporate interests, and to set terrible precedent.

@emilymbender Not just the algorithm, but also the logistics of it. Who trained it? For how many hours? Was human labelling and reinforcement necessary (of course it was)? Where were they located? How much were they paid? Pretty much all the things we need to be doing with any product that is offered today. Because if it is free or cheap, it is probably being subsidised by exploitation somewhere in the world. Out of sight should not mean out of mind.

@emilymbender It's hard to balance that with the edge it can give you at work. I honestly feel like I can be way more effective at my job by using LLMs. Summarizing meetings, soundboard for ideas, asking for POVs of different stakeholders.

While there are problems with it, in both provenance and usage, it's still a very powerful tool.

It's hard to resist the temptation to use it when it can lead to promotions, less stress and other positive personal impacts.

@mkhoury @emilymbender Those are the appealing factors that lead to other ethical failings, though. This is not new with AI and the outcomes aren't different.

@Linza @mkhoury

Definitely. This is the same argument plantation owners had.

@emilymbender

@mkhoury @emilymbender
No. By design, it will make you the most trite, hacky and *replaceable* person possible. Useless and powerless. Saying nothing useful, smoothly, doesn't help anyone.

Remember the definition: AI produces the most statistically likely language, without regard to truth or analysis. That's all it can do.

Why would that help you? Why would you want to work at a place that rewards that?

@taoish @emilymbender

1) Nothing says you have to take its output and use it to express yourself. That's not how I use it personally. Rather I use it as a system to collapse large amounts of data into digestible pieces. Summarizing meetings, code changes, etc

2) You can enhance the output by framing the prompt to give the "most statistically likely language" of an expert rather than average

Hard to be convinced that it doesn't help me when it does every day.

@taoish @emilymbender Like I've said in another part of the thread, arguing that the system is not good or helpful is the wrong approach in my opinion. Making it more factual and helpful is entirely aligned with the incentives of its owners.

Rather, focusing on the training data (biases, exploitation, climate impact, etc) and the misuses (workforce impact, scamming potential, deepfaking, marginalization, etc) is something we need to force alignment on.

@mkhoury @emilymbender What if one thinks - as I do - that human individuality is the essence of good writing, and that the work AI seeks to replace - including synthesis and synopsis - is essential to understanding and thinking through problems?

@taoish @emilymbender I agree with you. In an ideal world, if I had unlimited time and resources, I would be 100% invested in all my meetings, would take the time to summarize every meeting and take the time needed to really ingest that stuff.

In terms of ROI though, getting a summary of meetings is good enough for my needs. I can concentrate my energy on the deep creative work that I actually want to do. It leads to higher quality output overall.

@mkhoury @emilymbender
I can see that. I guess I come from the point of view of a non-fiction writer, who didn't get very good at it until my 50s (and worked in other fields up to that point).

My worry is that people may never get the experience of writing to fluidly express their weird-ass selves. Not unlike, in a less important example, the way GPS directions have pretty much destroyed the navigation skills of digital natives (imho).

@emilymbender Are there alternatives to study or use?

@emilymbender

I agree with "don't help them", but if we really had a rule that people shouldn't study toxic trash that affects a lot of people, then so much for my decades of studying actual toxic pollutants and greenhouse gases.

@RichPuchalsky @emilymbender They are already being produced though, right? And your studies might stop them? Feels different.
@emilymbender studying GPT-4 without access to the code and learning material is like studying literature by standing across the street from a library with binoculars anyway.
@emilymbender tbh, the minute I saw Peter Thiel was behind this project I knew it was absolutely toxic in ways we can't even imagine.
@emilymbender Using an unvalidated system to take over parts of your own thinking, always a good idea!1!
@emilymbender Microsoft admitted it was using GPT-4 in Bing and you can already tell that it's an insecure, hallucinating, manipulative jerk.
Tane Piper (@[email protected])

Inspired by #StochasticParrotsDay yesterday I ended up building a Mastodon Bot that takes prompts generated from an empty question to #ChatGPT4 and some random numbers and has a parrot repeat it verbatim - @[email protected] I've now thrown up a website too - https://stochasticparrot.lol/ @[email protected] @[email protected] @[email protected]

Tane's Fedeverse
@emilymbender I'm given a strong impression that a lot of this is snake oil being pumped by techbros to get investment capital; having followed AI for 20+ years, I'm sensing peak dotcom huckster hype.

@emilymbender

> GPT-4 should be assumed to be toxic trash until and unless #OpenAI is *open* about its training data, model architecture, etc.

Also treat it as toxic trash until all input data is specifically attributed to its source, and traced through to the generated output.

Too difficult for OpenAI to manage all that? Too bad, they have that obligation and we must hold them to it. Until then, no permission to redistribute has been granted.