How many people do the same?
I think we should be talking about who they will be doing all of our jobs for. Who is going to benefit from offloading all of our work onto them. As things stand now, it is not going to be us.
They could, yes. But it will take more than just supporting open source and open standards. We need to re-think our whole economic model. The focus needs to be changed over to benefiting all of us.
It probably sounds like I'm suggesting communism, but I honestly don't think that is the answer. But neither is unfettered capitalism. Undoubtedly we will use elements of both, but something new is needed.
I'm not even concerned with methods to achieve it yet. I'm still trying to get a good idea of where we need to get to.
The only thought I know of that's been given to a future where labor is no longer required to meet basic needs is the Star Trek universe. But even that is not at all fleshed out.
@drewharwell I think this is completely true. I think the writing needs to be clearer on that.
But I also think we will see these word patterns have the potential to persuade humans to do things they shouldn't and wouldn't otherwise do, so it's also reasonable to write about the risks of wide-open use.
@drewharwell That's true. But, in a vaguely horrifying way, they do reflect ourselves - and our motives.
What, after all, separates a language model from, e.g. a PR executive who sits in an office all day, crafting upbeat corporate rebuttals for the abstract reward of maximising a bank balance or, at best, their dopamine levels, according to a human-resources appraisal matrix?
Or, for that matter, a journalist who's judged by the word-patterns they make out of text on the internet...
@GuerillaGrue @drewharwell You make good points about ability and intention (though ability is not, even among humans, necessarily a given).
But what's unsettling me is that humans (and songbirds) behave in the ways they do because it lifts their dopamine levels via a mechanism they're unconscious of.
While the language models will, I assume, be behaving as they do to maximise a score held in memory through a mechanism that they, too, are unconscious of.
I may be alone, but I find that spooky.
@drewharwell 100%. Something else that I feel is getting missed is: the robots are being used to generate money for their owners by grazing on other people’s hard work.
By saying “AI is making art”, journalists are giving the actual jerks that are fucking other people over a pass.
Bad AI copies, genius AI steals.
@drewharwell I'm not worried about computers outsmarting humans (okay, they've pretty much ruined the game of chess by learning how to routinely beat the grandmasters), but it is disturbing that they can generate paragraphs or pages of text, and the average human can't tell the difference.
@drewharwell Speaking to some people is like listening to a malfunctioning autocorrect, so it’s no wonder people anthropomorphise ChatGPT.
It’s the point of the Turing Test.
@drewharwell Wait til you see the next generation of LLMs. You might change your mind.
Starts to bring into question just how much humans are automatons when it comes to verbal communication. We follow patterns of thought and speech that we are barely aware of.
@drewharwell Eh. On twitter I've seen more than a few posts by seemingly smart researchers exclaiming they've cracked code execution on ChatGPT and can run it as a terminal etc. They're shocked when they discover they've been duped and the system just knows what results they want to see.
Google will never knowingly lie to you.
Why seemingly intelligent researchers never test their findings on interact.sh is a different question though.
Less worried about chat bots when you realize that a corporation is a robot, an AI bot, programmed by CEO and Board to make money at any cost.