I’m officially done with takes on AI beginning “Ethical concerns aside…”.

No! Stop right there.

Ethical concerns front and center. First thing. Let’s get this out of the way and then see if there is anything left worth talking about.

Ethics is the formalisation of how we treat one another as human beings and how we relate to the world around us.

It is *impossible* to put ethics aside.

What you mean is “I don’t want to apologise for my greed and selfishness.”

Say that first.

@janl the unfortunate reality is that society decided that ethics is a luxury, something you can take care of when all is well.

But nothing is well; it's all about growth. So nothing ever will be well.

I don't have a non-depressing thing to say to this

@vitloksbjorn

From my understanding, it's the other way around. People started to believe that a certain ethics is no longer a luxury, and started manifesting it in things like universal human rights.

Yet, from a historical point of view, there seems to be a large spectrum of different beliefs here, and also a spectrum of different ethics. Like utilitarianism ("the greatest good for the greatest number"): people following _this_ kind of ethics would surely be happy if their work was used to train AI models.

Or not? That's a big question for me. Leaving aside the problems of intellectual property: do we as a society want tools like huge, powerful AI systems? In the long term, will they make us better or worse?

@janl

@fxnn @janl

I don't believe in utilitarianism, or "longtermism" as Musk tends to call it.

Who in their right mind would be happy to become fuel for a machine of growth that doesn't even guarantee that the returns will go to the right people?

I find it very disconcerting that a huge number of people don't see this glaring ethical contradiction.

@vitloksbjorn

Which doesn't answer my question, whether these tools improve us as a society.

About the returns, I think there's the problem of the huge number of documents used for training. These things are being fed the knowledge of the world. If you left out a single work, would they change? No. So the reward for a single work would be marginal. That's my hypothesis at least; I'd be curious to see some calculations, but I believe the Spotify model doesn't really work here.

@janl

@fxnn @janl
I think I answered your question: the power goes to the wrong people, so no, it doesn't make society better. And even if that weren't the case, it actually tends to make us worse:

https://futurism.com/experts-ai-stupider

As for the other argument... "does stealing a single dollar make any difference? No? So I'll steal all of them". That's the same logic.


@vitloksbjorn

No, it is not, because we're talking about knowledge, not money. Knowledge has value, but for LLMs the value doesn't lie in any individual contribution; it lies in the massive aggregate of contributions. That's unlike a book, which carries its value in itself.

So if even an author of many documents would only get $0.004 per year, why bother? As I said, I'd love to see an exact calculation here, but as long as we, as a society, decide that the technology brings us value, I see it as a valid decision not to reward individual contributors.

If, however, the calculation revealed that rewarding authors with a notable amount of money is realistic, I think we can and must establish something like the collecting societies in the music industry.
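For what such a calculation might look like: here is a back-of-envelope sketch, where every figure is an assumed placeholder, not a real number, just to show the order of magnitude at stake:

```python
# Back-of-envelope payout per work if a hypothetical licensing pool
# were split evenly across a training corpus.
# Both numbers below are ASSUMED for illustration, not real figures.
licensing_pool_usd = 1_000_000_000   # assumed annual pool: $1 billion
works_in_corpus = 250_000_000_000    # assumed corpus size: 250 billion documents

per_work_payout = licensing_pool_usd / works_in_corpus
print(f"${per_work_payout:.3f} per work per year")  # → $0.004 per work per year
```

Under these made-up assumptions, the per-work payout lands in the fractions-of-a-cent range, which is exactly the dilution problem a Spotify-style per-contribution model would face.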

@janl

@fxnn @janl

Wait, are you implying that a calculation of value is what determines whether an action is theft or not?

Besides, I thought we were talking about the enrichment of society, not about extracting economic value?

@vitloksbjorn

Precisely the two topics we're talking about: enrichment of society on the one hand, and on the other, the valuable work of contributors who want to be rewarded, i.e. economic value.

Don't you agree?

@janl

@fxnn @janl

I don't agree, no. But let's look at it like this: do you know of any artists who would willingly sell their work, knowing that this will lead to their replacement?

@vitloksbjorn

We're talking about another kind of industrial revolution. Weavers lost their jobs because of machines, carpenters lost their jobs because of industrial manufacturing, and photographers who earn their money from stock photo sites will also lose their jobs. It will have a severe impact on our society, and it's part of why I'm saying that society needs to decide whether it wants that technology or not.

Nevertheless, I don't agree with calling this theft. When a photographer learns photography by looking at other photographs, they don't steal. Every artist learns their craft by copying prior art. Is that stealing? AI does the same, just on a larger scale.

Neither is the student stealing who goes to the library and learns from books. AI does the same, just on a larger scale.

We're automating, as we did before with steam engines and mass production; it's just that this time, what we're automating are capabilities at the heart of human culture: learning, reasoning, formulating.

You cannot steal what no one owns, and ideas cannot be owned. Books can be owned and stolen, but AI doesn't sell books. AI has the power to relieve us of recurring tasks, allowing us to focus more on what's important. But at the same time, it comes with the risk of manipulating us and increasing our stress levels even further. And yes, at the moment, a few people get too rich from this.

I find this decision, good or bad, very difficult, because the more you look into it, without bias, the more you see both the risks and the opportunities.

@fxnn I disagree with saying that an AI learning is equivalent to a human learning; that's like saying a database learns by copying copyrighted data.

But anyway, you said it "allows us to relieve ourselves of recurring tasks, focusing on what's important". But for many, AI is taking away exactly what is important to them: creating art, being creative as a writer or engineer. What is so important to you that automating this away is worth it?

@vitloksbjorn

First of all, I don't think that AI will be able to replace original art: novels, paintings, really creative photography. I think this will always need the inspiration, creativity, and deep thought of the human mind.

But anyway, that's merely a side note. We're in the midst of an industrial revolution (https://en.m.wikipedia.org/wiki/Fourth_Industrial_Revolution), and it will be tough. Can we stop it? I don't think so.

There are huge fears and concerns, but also high hopes. What effects will it have? What will be good and bad during the transition, and especially afterwards: will human societies be better, worse, or will it be all the same? We can't know.

That's what I meant with my first post. We, as a society and as individuals, can try to resist it or try to embrace it. We can talk and argue against this revolution, or look forward and find our place in it.


@fxnn Oh, and as for this: I don't think what we have is a fourth industrial revolution, nor do I think the AI "takeover" is inevitable. In fact, I'm almost certain that the hype will end either this year or the next, due to the diminishing returns of hyperscaling and no ROI on the products.

@vitloksbjorn

I wouldn't write it off so quickly. Many people are fascinated by it and have already made good use of it. Not only hyperscalers and big VC-funded companies.

It is backed by a very capable open source community. With Ollama, you can run LLMs on your own machine (and in the future, you will probably even be able to run them on your smartphone). With Aider, you can use AI for software development. There's a huge set of GenAI-related tools, and new ones are added all the time.

And there's scientific research, of course. So no, I don't think the trajectory ends with OpenAI's business model. (And btw, I also don't believe their business model will fail, but I don't really have a clue about that.)