Hey y'all, I know you know this, but while you definitely shouldn't use GPTs for legal research, also don't rely on GPTs for RESEARCH, PERIOD.

They are neither giving nor TRYING to give you intersubjectively associated and derived facts; they are not even remixing factual CONCEPTS into new forms.

They are modelling human biases out into digestible bullshit with a statistically-determined high probability of being swallowed.

That is all.

They don't have to be this way, but, at present, the people making them have no incentive to change them. So. Don't lean on them for fact stuff. It's not what they do.

@Wolven I've started thinking of them as procedurally generated choose-your-own-adventure games.

You wouldn't try to find your way in the world using a procedurally generated gameworld map. Don't make the same mistake anywhere with an LLM.

@MarekMcGann @Wolven My favourite model is still https://play.aidungeon.io/

@Wolven it’s just so wrong on so many levels! Who could really think that using an LLM trained on the internet (the universal source of misinformation and BS) could help them write a scientific paper with actual facts?

That, and providing OpenAI with access to all your unpublished results for whatever they're doing with their user input/interaction data, a practice which is totally opaque and which many countries have raised concerns about.

@Wolven so much this. There is so much potential in LLMs as tools for all sorts of things, and even more for machine learning in a broader sense... But geez, right now, using GPT to supercharge your research just isn't one of them, not by a long shot.
@b4ux1t3 Nope, and until some fundamental things change inside them and the companies which develop them, it's just going to get worse.

@Wolven
Someone asked ChatGPT for safety advice:

https://www.makeuseof.com/can-chatgpt-save-your-life-in-the-wilderness/

I am now ready to fight a bear. Thanks, Chat-GPT!


@Wolven

According to ChatGPT, I'm a dead Canadian MP, for either the Liberals or the NDP... So, no, I would not trust it any more than Jack at work.

(Jack is very smart, extremely knowledgeable about just about anything, great to chat with about random stuff, and a great asset on your pub quiz team, but I don't think it would be acceptable to cite "Jack from work" in a paper.)

@Wolven

The worst problem with it is that some clever SEO experts will use it to generate pages and pages of content based on current search trends, flooding search results to artificially inflate traffic to their sites and bumping out many legitimate pages that provide the accurate information users are looking for.

@Lily_and_Frog very much already happening, yes. And with Microsoft integrating OpenAI tools directly into the Windows environment, it's going to get even worse, real fast.

@Wolven

indeed.

It's flooding publishers with shit.
It's flooding music streaming services with shit.
It's flooding Amazon and Kindle with shit.

I experienced it yesterday while doing some HTML (I'm really, really novice at it) and I had a simple question. At least two of the links I visited from Bing were unreadable waffle that might or might not have answered my question, and were very possibly generated by ChatGPT.

Even if one tries to avoid it, it's still imposed on us.

@Wolven

Someone tried to tell me with a straight face that Microsoft putting them into Bing as a search engine wasn't a marketing claim.

The abject total cognitive surrender to #TESCREAL BS is both real and deeply vexing.

@Wolven The best use they have is to pretend they are your drunk friend who sits there babbling entertainingly and occasionally throws you a diamond of an idea.
@Wolven I only somewhat differ from you here. Even ChatGPT 3 is usually pretty accurate. It's just that, all too often, it's so elaborately, convincingly, misleadingly, and even dangerously wrong as to be useless as a research tool. In my own experiments using it as such, I spent more time fact-checking its responses (because I was looking for non-existent info) than I would have spent doing old-fashioned research without the "AI".

I've previously argued that as long as LLMs aren't your primary tool, they could be useful, kinda like a fancy Wikipedia (also unreliable on its own). But at least Wikipedia's references are genuine, and there's a community to flag up errors. What kind of tool not only creates more work by sending you off up blind alleys, but has no mechanisms of accountability to its users for mistakes?

Nah, I'm done with it.
@Wolven ChatGPT does language statistics. It has no capability to assess if its output is factually correct at all.
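(A toy illustration of that point. This is a bigram model, nothing like GPT's actual transformer architecture, and the corpus is made up, but it shows the core issue: the model ranks continuations purely by how often they appeared in its training text. There is no variable anywhere that represents whether a sentence is true.)

```python
from collections import Counter, defaultdict

# Made-up training corpus. The model only ever sees word
# co-occurrence counts; truth values never enter the picture.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

# Count bigram transitions: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def prob(prev, nxt):
    """P(next word | previous word), estimated from raw counts."""
    total = sum(bigrams[prev].values())
    return bigrams[prev][nxt] / total

# The model prefers "cheese" over "rock" simply because it
# appeared more often in training, not because it is true.
print(prob("of", "cheese"))  # 2/3
print(prob("of", "rock"))    # 1/3
```

Scale that idea up by many orders of magnitude and you get something far more fluent, but the objective is still "plausible next token", not "correct statement".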

@Wolven

Indeed!

They do *sort of* have to be this way, in the sense that we don't have a solution to the factuality problem in large language models. What they do, architecturally, is spout stuff that sounds sort of like their training set, but without any notion of true or false.

So as far as this particular specific technology is concerned, they do have to be this way; it's just what they do! The problem isn't as much incentive as it is basic knowledge. People are working hard on altering or adding to the basic technology to make them emit truths where that's important, but it's definitely an unsolved problem.

@Wolven Can you give an example of GPT providing biased and/or misleading information? I see a lot of these kinds of anti-GPT posts, but not a lot of concrete examples where it's actually problematic.

I'm not claiming it's not a problem, just curious whether you are describing actual experiences you've had with the tech or if it's more about theoretical concerns and/or second-hand stories.

@mypalmike If you click through to my profile and look at the media, you'll find several examples.