I’m officially done with takes on AI beginning “Ethical concerns aside…”.

No! Stop right there.

Ethical concerns front and center. First thing. Let’s get this out of the way and then see if there is anything left worth talking about.

Ethics is the formalisation of how we are treating one another as human beings and how we relate to the world around us.

It is *impossible* to put ethics aside.

What you mean is “I don’t want to apologise for my greed and selfishness.”

Say that first.

@janl ethical and environmental concerns
@seb321 “the world around us”: ethics has nature covered.
@janl Ethics or Morals? In this modern world there are three competing ethical systems: Corporate Ethics, Political Ethics, and Individual Human Ethics. Putting Human Ethics aside is exactly what the Corporate Ethical system does all the time in the pursuit of quarterly growth. The Political Ethics system is supposed to keep a check on this for longer-term social benefits. But in practice that's been captured by the Corporate system.
@jbond You’re missing the point by a bunch of miles.
@janl @jbond so corporations and politicians are allowed to "put ethics aside"? what even is your point then?
@mitsunee @jbond I don’t think that’s what I said.
@mitsunee Be careful with replies and who you are replying to.
@jbond oh sorry am I not allowed to have opinions on Sundays anymore?

@mitsunee Gah!

You criticised me and that's fine. But you also replied to the OP and they took it as criticism of them.

@janl Yes, I'm probably muddying the pitch by unnecessarily widening the point. In normal speech we use Moral and Ethical interchangeably. We implicitly judge corporations by our own personal moral code. On that basis we can criticise corporates for deliberately refusing to address moral concerns. They should not be allowed to get away with this. But from their POV, they are still following their own system of Business Ethics.

https://en.wikipedia.org/wiki/Ethics
https://en.wikipedia.org/wiki/Business_ethics

@janl The corporate is explicitly saying "the end justifies the means" because in their view it will lead to a greater good. The problem is that it's a greater good for them and not necessarily for us.

As individuals and politicians we should not allow corporates to get away with this.

@jbond ah yes, business ethics. the thing that allowed ford to make the exploding pinto, that let coca cola use death squads to kill union leaders in colombia, that had nestlé killing babies to sell formula in developing nations, union carbide in bhopal, citi in haiti, pacific island company in nauru, and so forth. thanks for that contribution to the discussion. it is important to be reminded that the profit motive is the making of monsters. great chat. let's move on.

@janl the way i understood it is by simile w/ what's been described as the American Civil Religion: no matter your political alignment nor deity, yanks perform certain rituals and hold certain concepts sacred in their civil existence

and this “corp ethics uber alles” [including the individual's] fits 100% w/ that yank worldview

@jbond You misspelled "Corporate Greed". Calling rapaciousness "ethics" doesn't make it ethical.

@kagan @jbond @janl Legal ethics is also a thing. Certain things are proscribed and may result in disbarment, depending on circumstances and who you know, but mostly whether you're caught.
@janl When it comes to ethics, AI is worth as much as a toaster.
@dirk @janl idk. If you want sandwiches, a toaster can be pretty valuable 😁
@janl When I start with "setting ethical concerns aside for the moment", it's in order to discuss why it's terrible even if you're an ethically bankrupt sociopath, but I'll always end with, "and then let's not set aside the ethical concerns."
@janl That's why the billionaires had to invent hodgepodge like #tescrealism and are basically worshipping Nick Bostrom's racist ass as a saint!
@janl Ethical concerns aside, chattel slavery gives *me* so much more time to enjoy life. /s

@janl
The argument that nobody may consider or work on any of a multitude of topics until they have solved a single problem (one they are not expert in, and the arguer presumably is) is common, flawed, uneconomic of effort, and usually directed by someone who is not currently god-emperor of the universe at people who regard themselves as autonomous.

A more explicit phrasing might be "ethical problems exist but for the moment I shall leave them for others and focus on one other point I understand particularly well", but that is uneconomic of limited space, and most of us understand that's what the phrase used implies.

And one ends up double-tooting.

@midgephoto ethics isn’t some esoteric subject that only people who’ve devoted their lives to studying it can possibly speak to. Thinking about how our actions affect others is a core requirement of every person living in a society.

@janl Mostly at this point it is users who are trying to say this to feel better about their inability to say no and take a stand.

The companies keep saying that they cannot profit if they're ethical, or even if they merely follow the law. They know, and all but admit, "ethics be damned".

@janl You're coming at this from an angle that presupposes ethics are a necessary component, they're coming at it from an angle that presupposes ethics are an unnecessary burden to be abstracted away and carried by others.
@nini no that’s exactly my point.
@janl Oh, I got it, it's just that your position and mine stand against theirs, and they'll never want to see it as we do.
@janl @nini Some people will invariably remove the burden from their own actions. If there's an argument that also shows those people why AI is not the right choice, then that's a win for all of us.
@janl odd how ethics is always disposable and optional but economics never is.

@janl holy shit I've been saying this for months and literally just said it to one of the more senior people in the place I work. Stop hand waving and making excuses!

There can be some thoughtful deliberation about using specific types of AIs for specific purposes (I saw an article about a use case needing dramatically fewer resources to run the tech).

But no one is doing any critical thinking at all about the now well-documented harms, let alone harm reduction.

@janl This is a fantastic point, and it goes so much deeper, culturally, than AI. For about 20 years or so, some writers have openly prized amoral business behaviour. The cringey days of idiots going on about 'wow, sociopath CEOs are so cool' and quoting Walter White and Tywin Lannister are hopefully gone, but the attitude is still there.

'Ethical concerns aside' has been code for 'shove that ethics shit, check out this cool new grift!' ever since the CharismaonCommand types grew up.

@janl
Ethical concerns are what keeps us from jumping over the table and strangling middle managers with their own neckties when they suggest using AI, and I don't know how we could make that any more obvious.
@petealexharris @janl wait, are we *not* strangling them now?

@CartyBoston @janl
Of course not, that would be unethical, what did I just say.

Unrelated, I have a rolled up carpet at the office I need to dispose of. It's surprisingly heavy so I'll need a couple of uncurious people to help.

@janl the unfortunate reality is that society decided that ethics is a luxury, something that you can take care of if all is well.

but nothing is well, it's all about growth. So nothing ever will be well.

I don't have a non-depressing thing to say to this

@vitloksbjorn

From my understanding, it's the other way around: people started to believe that a certain kind of ethics is not a luxury anymore, and started manifesting it in things like universal human rights.

Yet, from a historical point of view, there seems to be a large spectrum of different beliefs here, and also a spectrum of different ethics. Take utilitarianism ("the greatest good for the greatest number"): people following _this_ kind of ethics would surely be happy if their work were used to train AI models.

Or not? That's a big question for me. Let aside the problems of intellectual property: do we as a society want tools like huge powerful AI systems? On the long term, will it make us better or worse?

@fxnn @janl

I don't believe in utilitarianism, or "longtermism" as Musk tends to call it.

Who in their right mind would be happy to become fuel for a machine of growth that doesn't even guarantee that the returns will go to the right people?

I find it very disconcerting that a huge number of people don't see this glaring ethical contradiction

@vitloksbjorn

Which doesn't answer my question: whether these tools improve us as a society.

About the returns, I think there's the problem of the huge number of documents used for training. These things are being fed with the knowledge of the world. If you left out a single work, would they change? No. So the reward for a single work would be marginal. That's at least my hypothesis; I'd be curious to see some calculations, but I believe the Spotify model doesn't really work here.

@fxnn @janl
I think I answered your question: the power goes to the wrong people, so no, it doesn't make society better. And even if this weren't the case, it tends to actually make us lesser:

https://futurism.com/experts-ai-stupider

As for the other argument... "does stealing a single dollar make any difference? No? So I'll steal all of them". That's the same logic.

@vitloksbjorn

No, it is not, because we're talking about knowledge and not money. Knowledge has a value, but for LLMs the value doesn't lie in an individual contribution, but in the massive number of contributions. That's unlike a book, which brings much value in itself.

So if even an author of many documents would get just $0.004 per year, why even bother? As I said, I'd love to see an exact calculation here, but as long as we, as a society, decide that the technology brings us value, I see it as a valid decision not to reward individual contributors.
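
A minimal sketch of the kind of calculation I mean, with every number a hypothetical placeholder rather than a real figure:

```python
# Back-of-envelope sketch: per-author payout if a share of LLM revenue
# were split across everyone whose work is in the training data.
# Every number below is a hypothetical placeholder, not a real figure.

annual_revenue = 10e9           # assumed: $10 billion/year in LLM revenue
author_pool_share = 0.10        # assumed: 10% of revenue reserved for authors
works_in_training_data = 250e9  # assumed: 250 billion documents in the corpus
works_by_one_author = 100       # assumed: a fairly prolific author

pool = annual_revenue * author_pool_share     # $1 billion to distribute
per_work = pool / works_in_training_data      # $0.004 per work per year
per_author = per_work * works_by_one_author   # $0.40 per author per year

print(f"per work:   ${per_work:.4f}/year")
print(f"per author: ${per_author:.2f}/year")
```

Under these assumptions, even a prolific author would see well under a dollar a year, which is the intuition behind saying the Spotify model doesn't transfer.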

If, however, the calculation revealed that it's realistic to reward authors with a notable amount of money, I think we can and must establish concepts like the collecting societies in the music industry.

@fxnn @janl

Wait, are you implying that a calculation of value is what determines whether an action is theft or not?

Besides, I thought we were talking about the enrichment of society, not about extracting economic value?

@vitloksbjorn

Those are precisely the two topics we're talking about: enrichment of society on the one hand, and on the other, valuable work from contributors who want to be rewarded, i.e. economic value.

Don't you agree?

@fxnn @janl

I don't agree, no. But let's look at it like this: do you know of any artists who would willingly sell their work, knowing that it will lead to their own replacement?

@vitloksbjorn @fxnn I can be dropped from this conversation now ✌️
@janl
oops, I'm new to masto so I didn't realise. no probs.

@vitloksbjorn

We're talking about another kind of industrial revolution. Weavers lost their jobs because of machines, carpenters lost their jobs because of industrial manufacturing, and photographers who earn their money from stock photo sites will also lose their jobs. It will have a severe impact on our society, and it's part of why I'm saying that society needs to decide whether it wants that technology or not.

Nevertheless, I don't agree with calling this theft. When a photographer learns photography by looking at other photographs, they don't steal. Every artist learns their craft by copying prior art. Is that stealing? AI does the same, just on a larger scale.

Neither is the student stealing who goes to the library and learns from books. AI does the same, just on a larger scale.

We're automating, as we did before with steam engines and mass production, except that this time it's capabilities at the heart of our human culture that we're automating: learning, reasoning, formulating.

You cannot steal what nobody owns, and ideas cannot be owned. Books can be owned and stolen, but AI doesn't sell books. AI has the power to relieve us of recurring tasks, allowing us to focus more on what's important. But at the same time, it comes with the risk of manipulating us, increasing our stress levels even further. And yes, at the moment, a few people get too rich from this.

I find this decision, good or bad, very difficult, because the more you look into it (without bias!), the more you see both the risks and the opportunities.

@fxnn I disagree with saying that an AI learning is equivalent to a human learning; that's like saying that a database is learning by copying copyrighted data.

But anyway, you said it "allows us to relieve us from recurring tasks, focusing on what's important". But for many, AI is taking away exactly what is important to them: creating art, being creative as a writer or an engineer. What is more important to you than that, that you'd want to automate it?

@vitloksbjorn

First of all, I don't think that AI will be able to replace original art: novels, paintings, really creative photography. I think that this will always need the inspiration, creativity, and deep thought of the human mind.

But anyway, that's merely a side note. We're in the midst of an industrial revolution (https://en.m.wikipedia.org/wiki/Fourth_Industrial_Revolution), and it will be tough. Can we stop it? I don't think so.

There are huge fears and concerns, but also high hopes. What effects will it have? What will be good and bad during the transition, and especially afterwards: will human societies be better, worse, or will it be all the same? We can't know.

That's what I meant with my first post. We, as a society, as individuals, can try to resist it or try to embrace it. We can talk and argue against this revolution, or try to look forward and find our place in it.

@fxnn Oh, and as for this: I don't think what we have is a fourth industrial revolution, nor do I think the AI "takeover" is inevitable. In fact, I'm almost certain that the hype will end either this year or the next, due to the diminishing returns of hyperscaling and no ROI on the products.

@vitloksbjorn

I wouldn't write it off so quickly. Many people are fascinated by it and have already made good use of it, not only hyperscalers and big VC-funded companies.

It is backed by a very capable open source community. With Ollama, you can run LLMs on your own machine (and in the future, you will probably even be able to run them on your smartphone). With Aider, you can use AI for software development. There's a huge set of GenAI-related tools, and new ones are added all the time.

And there's scientific research, of course. So no, I don't think the trajectory will end with OpenAI's business model. (And by the way, I also don't believe that their business model will fail, but I don't really have a clue about that.)
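
To make "run them on your own machine" concrete, here's a minimal sketch that queries a local Ollama server over its REST API. It assumes `ollama serve` is running and that a model has already been pulled; the model name is only an example, not a recommendation:

```python
# Minimal sketch: ask a locally running Ollama server for a completion.
# Assumes `ollama serve` is up and a model has been pulled, e.g. via
# `ollama pull llama3.2` (the model name here is only an example).
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.2",
    "prompt": "Summarise the ethics-first argument in one sentence.",
    "stream": False,  # request one JSON response instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Everything here runs locally; no data leaves the machine.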

@vitloksbjorn

Two more things. A database doesn't learn, it stores. An LLM, on the other hand, is really bad at storing exact information, just like our own brain (except that we often know when we don't know something). The learning metaphor simply fits.

The other thing: one outcome of AI and this industrial revolution that I hope for is a universal basic income. At some degree of automation, why can't we all just work a bit less and spend more time on the fun things in our lives? But maybe that simply won't work for us humans, who knows...

@fxnn Alright, the "storing" argument is incorrect; let me cite two pieces of evidence:

https://www.researchgate.net/figure/Actual-screenshot-from-Dune-2021-versus-its-Midjourney-generated-counterpart-evolving_fig1_379693193

https://news.ycombinator.com/item?id=33226515

it learns in the sense that it nearly memorises. I don't think the difference matters here.

And second, it is very naive to assume that the current structure of power will do something as magnanimous as UBI. I do hope that if it indeed comes to full replacement, something like this happens.

@vitloksbjorn

I do know that AI is capable of reproducing particular popular scenes, settings and texts. On the other hand, it fails miserably on lots of things it should (or could) know: some factuality benchmarks I found show error rates around 30%, depending on the domain, and of course there's lots of hallucination.

And that matches quite well what we know about learning, right? Popular stuff, which it has seen often, can be reproduced more accurately than the long tail. With a database, you could just retrieve whatever you put inside.

Likewise, a database can't generate new things based on what it stored.

@fxnn
I don't see how the hallucination rate is relevant here; it usually hallucinates when you're asking it to do something new, rather than recite something that exists.

As for learning: I'm not entirely sure that AI can create "new" stuff. It merely reconfigures what others have already made or said. That's what it is, isn't it? A very complex search-and-combine engine.

@vitloksbjorn

Knowing and hallucinating are all the same for an LLM. They are made to just predict the most probable next token for the given input, based on what they learned. They knew the fact? Then their prediction was good. They hallucinated? Then their prediction was bad, maybe because their knowledge is weak at that point.
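
A toy illustration of that "most probable next token" idea. Real LLMs use neural networks over subword tokens rather than word-count tables, but the key property carries over: there is no separate "knowing" mode and "hallucinating" mode, only more or less well-supported predictions:

```python
# Toy next-token predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent continuation. A drastic
# simplification of an LLM, but it shows the shared principle: the
# model never "knows" or "hallucinates", it only predicts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(token: str) -> str:
    candidates = follows.get(token)
    if not candidates:
        return "<no idea>"  # sparse data: all the model can do is guess
    return candidates.most_common(1)[0][0]

print(predict("the"))   # "cat" -- well supported, seen twice in the corpus
print(predict("cat"))   # "sat" -- a tie with "ate"; the model just picks one
print(predict("fish"))  # "<no idea>" -- nothing follows "fish" in the data
```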

About creating something new: that's one of the really fascinating aspects of GenAI. They were just tasked with predicting tokens, but along the way, they learned to abstract! This can be shown:

- on visual tasks, see https://arxiv.org/abs/2305.18354
- by detecting abstract linguistic concepts, see https://arxiv.org/abs/2404.15848

Now, abstraction is the key to recombining existing knowledge and applying it to new domains. To me, that explains how LLMs can solve new coding tasks, explain strange topics to 5-year-olds, compose poems about the most absurd subjects, or do other things that can't just be copied from somewhere.

@janl
It's a classic sign of a Business Idiot...

https://www.wheresyoured.at/the-era-of-the-business-idiot/
