Very easy
@Some_Emo_Chick My initial reaction to them was something along the lines of "Ok, sure, so, what's the point?" And, as it turned out, the point was money laundering, tax evasion, and scamming people. I don't think making climate change worse was entirely intentional, people making them just didn't give a shit.

@StarkRG @Some_Emo_Chick my guess is the last thing people who launder money, evade taxes and scam other people care about is the climate.

But glad that grift is over, can't wait to see what the next obvious one is

@erikcats @StarkRG @Some_Emo_Chick the next obvious one is "AI" (put in quotes because it's not intelligent, it's just a marketing scam)

@Aradiel @erikcats @Some_Emo_Chick LLMs are a solution looking for a problem. You can usually tell by the way it's marketed as being useful for anything and everything while not actually being better than anything that already exists.

Other types of generative AI aren't as bad, though that isn't saying much since LLMs are the literal worst. There are, at least, a handful of cases where they have advantages over existing solutions, but they still need a lot of handholding.

@StarkRG @Aradiel @Some_Emo_Chick explain to someone who's not a techie what LLMs are, without resorting to LMGTFY or similar things

@erikcats
An LLM is trained by "looking" at text and finding patterns and rules. The original text itself is not stored in the trained model, only the patterns that have been found. The LLM creates text word by word, always calculating the most probable word based on all the words preceding it.

Summary: the text created by an LLM is a patchwork of guesses, not a copy of information.
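A minimal sketch of that "most probable next word" idea, using a made-up toy corpus and a bigram model (real LLMs use neural networks over far more context, but the principle is the same: only transition patterns are stored, not the text):

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows which. The corpus itself is
# not kept, only these pattern counts.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=5):
    word, out = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        # Always pick the most probable next word given the current one.
        word = transitions[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output is a plausible-looking word sequence assembled from patterns, which may or may not match any sentence in the corpus.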

@StarkRG @Aradiel @Some_Emo_Chick

@seismographix @erikcats @StarkRG @Some_Emo_Chick what is the training data if not a collection of patterns of words?

@Aradiel @seismographix @erikcats @Some_Emo_Chick Among other things, you're unlikely to get the original back as an output, just something that's vaguely similar to the original. It's still close enough to plagiarism that I think it counts.

@StarkRG

We should start to differentiate. Create an example, please: take a news article and recreate it with ChatGPT. One rule, though: you are not allowed to instruct ChatGPT how to fix the output afterwards. In that case you, as a human being, would be the driver of the plagiarism.

@Aradiel @erikcats @Some_Emo_Chick

@seismographix @StarkRG @erikcats @Some_Emo_Chick For such an example I would want the training data to be restricted to only that article.

@Aradiel

Why would someone train an LLM on only one news article? And the question would be whether that is enough training data for the LLM to create meaningful sentences afterwards.

@StarkRG @erikcats @Some_Emo_Chick

@seismographix @StarkRG @erikcats @Some_Emo_Chick because it would prove my point that it is copying the data. It's transforming it first, but it is storing a copy of it

@Aradiel

Nice thought. 😀 But often relations are not linearly dependent on each other. Your example could lead to overfitting (point proved) or underfitting (point missed).

I added a screenshot explaining overfitting and underfitting.
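A tiny numeric sketch of that over/underfitting point, with invented data (an "overfit" model that memorizes the training set exactly vs. an "underfit" one that keeps almost no pattern, alongside a plain least-squares line):

```python
# Made-up noisy samples of roughly y = x, plus one unseen test point.
train = {0: 0.1, 1: 0.9, 2: 2.2, 3: 2.8}
test_x, test_y = 4, 4.0

# Overfit: memorize the training data (a stored copy).
# Perfect on the training points, fails on anything new (KeyError).
def overfit(x):
    return train[x]

# Underfit: one number for everything; too little pattern kept.
mean = sum(train.values()) / len(train)
def underfit(x):
    return mean

# A reasonable fit: keep only the pattern "y grows roughly like x"
# (least-squares line through the training points).
n = len(train)
sx, sy = sum(train), sum(train.values())
sxx = sum(x * x for x in train)
sxy = sum(x * y for x, y in train.items())
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
def fitted(x):
    return slope * x + intercept

print(abs(fitted(test_x) - test_y))    # small error on unseen data
print(abs(underfit(test_x) - test_y))  # large error on unseen data
```

The memorizing model is the one-article training scenario: it "proves" copying only because nothing was generalized.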

@StarkRG @erikcats @Some_Emo_Chick

@seismographix @StarkRG @erikcats @Some_Emo_Chick getting into a grey area here, but in my view, copied data that is corrupted in copying is still copied (in this case it's the transformation corrupting it)

E.g. download two files, which are just 1s and 0s, and shuffle them together. You can't get either file back out, but you still copied them in the first place.

@Aradiel

Of course, the training input must come from free sources. And it would be correct to let people decide whether they want to contribute to the training data.

@StarkRG @erikcats @Some_Emo_Chick

@seismographix @StarkRG @erikcats @Some_Emo_Chick if only that were actually what's happening

@Aradiel

It is not text only, but here is the image-and-text database LAION-5B: https://laion.ai/blog/laion-5b/

@StarkRG @erikcats @Some_Emo_Chick
