Just in case anyone forgot, Altman of OpenAI is reminding us that they believe they are actually building "AGI" ("AI systems that are generally smarter than humans") and that ChatGPT et al. are steps towards that:

https://openai.com/blog/planning-for-agi-and-beyond/

>>

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

That is, the very people in charge of building #ChatGPT want to believe SO BADLY that they are gods, creating thinking entities, that they have lost all perspective about what a text synthesis machine actually is.

I wish I could just laugh at this, but it's problematic because these people living in a fantasy world are also influencing policy decisions while stirring up the current #AIhype frenzy, which makes it even more difficult to design and pass effective policy.

@emilymbender Though in fairness, the research papers, e.g. about InstructGPT, are a lot more reserved and balanced. This feels like the common misalignment between management/communication and research.
@jmbuhr And you're carrying water for them why?

@emilymbender

I wonder if OpenAI still view their role in this through the lens of Effective Altruism

https://masto.ai/@bornach/109922557750696778

Bornach (@[email protected])

[Philosophy Tube] on #EffectiveAltruism and #Longtermism https://youtu.be/Lm0vHQYKI-Y Incidentally the founders of #OpenAI are among the wealthy EA proponents. And Musk has famously promoted his Longtermist views. https://techpolicy.press/chatgpt-safety-in-artificial-intelligence-and-elon-musk/

@emilymbender could there be a thing where people who spend years training their minds to emulate computers are more inclined to look into the matrices and see themselves?
@milesmcbain @emilymbender shouldn’t be, because once you know what’s going on it’s pretty obvious it isn’t general AI, unless you really want to believe it is.
@milesmcbain @emilymbender but that was true of expert systems and they got the same sales pitch and the same religious fervour, so 🤷🏻‍♂️.
@milesmcbain @emilymbender in fact it reminds me of a phenomenon I noticed when working on a PhD many decades ago: newly developed formal methods tended to be ready for industrial application at just about the time the lead researcher started needing to settle down, buy a house, and have some kids.
@Colman @emilymbender I’m suggesting they may have a more reductive view of what general intelligence actually is, influenced by the training of their brains to work in mathematical approximations. And that this may be an unconscious bias.
@milesmcbain @Colman @emilymbender Strong disagree. Folks who actually train their brains to do intense math see no magic here. It's the ones who see coding as "writing 100000 lines of boilerplate class interface definitions" who think LLMs are going to replace humans.

@dalias I tend to view that kind of coding as a combination of bad framework design & bad language design.

A few macros can typically reduce such boilerplate to a minimum.

Yes this is again my #Lisp is superior argument.

No need for AI, just tools that aren't completely useless.

@lispi314 It's absolutely bad framework, language, and programming idiom design. If any part of the programming task is so idiotically repetitive a LLM could do it, it shouldn't have been there to begin with. That code should be (rigorously, not AI junk) generated automatically as part of the build process (not by an IDE then checked in and editable, which is just awful).
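
As a minimal sketch of that kind of rigorous, build-time generation (the spec, class name, and fields below are all invented for illustration): a tiny deterministic generator emits the repetitive Java-style boilerplate from a small spec, with exactly reproducible output and nothing statistical about it.

```python
#!/usr/bin/env python3
"""Toy build-step generator: deterministic boilerplate, no LLM involved."""

# Hypothetical field spec; in practice this would come from a schema file.
FIELDS = {"name": "String", "age": "int", "email": "String"}

def generate_class(class_name: str, fields: dict[str, str]) -> str:
    """Emit a Java-style class with private fields plus getters/setters."""
    lines = [f"public class {class_name} {{"]
    for field, ftype in fields.items():
        lines.append(f"    private {ftype} {field};")
    for field, ftype in fields.items():
        cap = field[0].upper() + field[1:]
        lines.append("")
        lines.append(f"    public {ftype} get{cap}() {{ return this.{field}; }}")
        lines.append(f"    public void set{cap}({ftype} {field}) {{ this.{field} = {field}; }}")
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    # A real build would write this into a generated-sources directory,
    # never check it in, and never hand-edit it.
    print(generate_class("Customer", FIELDS))
```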
@dalias Oh dear, the IDE-generated code gives me a lot of #Java flashbacks.
@dalias @milesmcbain @Colman @emilymbender The ones who haven't written their own template files, anyway.
@dalias @milesmcbain @Colman @emilymbender Seeing no magic in LLMs does not imply thinking of human thought as somehow magically above that
@anoreon @milesmcbain @Colman @emilymbender It's not "magically above that". It's different from that. I hesitate to even say "above" because they're not levels on a ladder, they're different things. Just like an ALU and human brain activity are different things.
@dalias @milesmcbain @Colman @emilymbender I mean I agree to a certain extent: an ALU and a human brain have important differences, and "above" maybe wasn't the right word. I was just pointing out that viewing the prospect of AGI downstream of LLMs as possible can also come from viewing the human brain as fundamentally unmagical and thus comparable, which I think is also the point miles was trying to make.
@Colman @milesmcbain @emilymbender almost no one working on this thinks it’s general AI right now, but it’s clearly a stepping stone on the path to general AI
@Techronic9876 @Colman @milesmcbain @emilymbender I see no evidence that this is clear. In the 70s/80s folks claimed lisp was a clear stepping stone to "AI". 🤪
@dalias @Colman @milesmcbain @emilymbender what kind of evidence would you find convincing?
@Techronic9876 @Colman @milesmcbain @emilymbender Short of actual demonstration of it working, I guess maybe research showing correspondence to human brain processes. However I think that's missing the point. X isn't really worth calling a stepping stone to Y if X is the ridiculously easy part.
@dalias @Techronic9876 @Colman @milesmcbain @emilymbender I'd say maybe people committing to serious attempts to make systems that reason *about* their data instead of mostly detecting patterns and blindly extrapolating new data points. And no, I have no idea how exactly that would work or what it would even look like. (and yes, I've done my share of amateur Prolog, it's not the same)
@Techronic9876 @Colman @milesmcbain @emilymbender Nothing LLMs are doing is revolutionary. Most of the concepts were known decades ago. What's new is the scale of scraping of data and the scale of resource burning to use that data for training. This is particularly unlike humans. We don't need to have read the entire indexed internet to bullshit. Our corpora of textual learning are much smaller but with much better results.
@Techronic9876 @Colman @milesmcbain @emilymbender As such this makes me doubt that the current direction has much value as a step towards intelligence. It is what it is: a statistical model for making text you can convince humans to believe in a selected context. I.e. for making bullshit.
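
For anyone who hasn't seen how old and simple the underlying idea is, here's a toy sketch of "a statistical model for making text": a bigram model of the kind known for decades. The training text is made up; the point is only that counting which word follows which is already enough to emit plausible-looking text with no understanding behind it.

```python
import random
from collections import defaultdict

# Made-up training text, purely for illustration.
corpus = (
    "the model predicts the next word . "
    "the next word is chosen by the model . "
    "the model knows nothing about the world ."
).split()

# Count which word follows which: the whole "model" is this table.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str = "the", length: int = 12) -> str:
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample from the observed distribution
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the model knows nothing about the next word ..."
```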

@dalias @Colman @milesmcbain @emilymbender “a statistical model for making text” defines most human communication (if you start saying statistically improbable things ppl will think you’re crazy)

LLMs don’t know whether they’re making up bullshit (neither do a lot of humans for that matter), but they will soon

One-shot and zero-shot prompting took scientists completely by surprise, as did chain-of-thought reasoning; these were emergent, previously unknown capabilities of LLMs

@Techronic9876 @Colman @milesmcbain @emilymbender Your first paragraph mixes up necessary & sufficient conditions (equivalently, proposition and its converse).

Of course the majority of communication is statistically probable in some probability distribution. But that doesn't mean probability in such a distribution "defines" it. Being probable given textual context does not make a statement non-batshit-wrong.

@Techronic9876 @dalias @Colman @milesmcbain @[email protected]

anything that shows these products can actually achieve excellence sometimes, and not just average mediocrity at the absolute best, and that's if you ignore that they have no ontologies whatsoever

@milesmcbain @Techronic9876 @dalias @Colman @emilymbender

Mincing words here. The advent of networked computing and mass data collection is what “started it” in my opinion.

@Colman @milesmcbain @emilymbender People lie best when they lie to themselves.
@milesmcbain @emilymbender Exactly the opposite. If you understand any of this at all on a technical level, it has no such magic and obviously has no relationship with intelligence.
@emilymbender unlike us who aren't text synthesis machines at all

@lritter there are a lot of machines that can do things we also do and we don't claim they are thinking or do anything like general intelligence

there are plenty of text synthesis systems we don't claim that about either, many not particularly architecturally different from the latest and greatest except in size

and even the very simplest text synthesizers if presented in chatbot form can make us think there's a person writing the text, but that says more about us than the text generators
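
(For anyone who hasn't played with one: a "very simplest text synthesizer" of that kind fits in a dozen lines. The reflection rules below are invented, in the spirit of 1960s ELIZA-style chatbots, and the effect is entirely on the reader's side.)

```python
import re

# A handful of invented reflection rules, in the spirit of early chatbots.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)\?", "What do you think?"),
]

def reply(message: str) -> str:
    text = message.lower().strip()
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(reply("I feel ignored by the search engine"))
# -> "Why do you feel ignored by the search engine?"
```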

@emilymbender the USA also has a very active political immune system that attacks any non-defense industrial policy that isn’t driven by the VC community. 🤷🏻‍♂️
@emilymbender
Is the mentioned article not an attempt to discuss "policy", and is it not better to engage in discussion than to just laugh and say they are living in a fantasy world? 🤔
@emilymbender
... especially if their thinking about AGI is flawed in your view, it's good to make that clear. Without agreeing with all parts of the article, I find it not that terrible
@emilymbender
... to start off, I don't think even OpenAI thinks a text synthesis machine alone will lead to AGI (whatever that may be)

@ErikJonker @[email protected]

why treat charlatans and fraudsters as if they deserve discussion?

@emilymbender
That is a valid comment on pretty much all things we'd call "technology". With an LLM, the ease with which we anthropomorphize it also makes us project our view of human abilities as complex and "God-created" onto it, and pretend we can't see them as "next word predictors" and "regurgitators" or (to use the term you coined) "stochastic parrots".
@emilymbender
The funny bit for me is that LLM output (based on older models at least) looks lame if you strictly use the word/token that was predicted as the one to follow with highest probability every time. So it looks like we're attributing "intelligence" and even "sentience" to randomness injected to display some variety.
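
A rough sketch of that decoding point, with made-up numbers: greedy decoding always picks the single most probable next token, while sampling at a temperature injects exactly the randomness being described.

```python
import math
import random

# Made-up scores for a handful of candidate next tokens (illustration only).
logits = {"the": 3.2, "a": 2.9, "banana": 1.1, "quantum": 0.4}

def softmax(scores: dict[str, float], temperature: float = 1.0) -> dict[str, float]:
    """Turn raw scores into a probability distribution; lower temperature sharpens it."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Greedy decoding: always the argmax token, hence the "lame", repetitive feel.
greedy = max(logits, key=logits.get)

# Temperature sampling: randomness injected to give the output some variety.
probs = softmax(logits, temperature=0.8)
sampled = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print("greedy:", greedy)    # always "the"
print("sampled:", sampled)  # usually "the" or "a", occasionally something else
```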
@emilymbender Putting the tech issues aside, I find it disturbing that their intention is to create a new class of enslaved sentient beings. I'm grateful the tech *can't* do what they claim it can.
@emilymbender the UNESCO Internet governance meeting this week was relatively sane. Do you think OpenAI rushed out chatgpt to try to derail EU AIA draft consolidation?
@emilymbender even if they are merely text-synthesis machines, should we not still be cautious of what they might be further developed into? Or, of what humans might put them in control of?
I mean, I think it could be risky to just shrug it off as "nah, they're just text-synthesis machines. Nothing to worry about."

@emilymbender

So far ChatGPT has been tried in:

1. Customer call centers (customers *loathe* chatbots)
2. News articles (paying subscribers start canceling their subscriptions)
3. Recommendation systems (users start harassing authors & libraries for non-existent books & articles)
4. Content farms (followers start unfollowing)

Are there any examples of "successful" launches of ChatGPT aside from dating & porn sites, Twitter, Facebook, & Instagram chatbots?

@Npars01 @emilymbender
It is still early to say, but companies are exploring all kinds of applications.
I heard of an interesting use case in which a human would be trained to perform a series of specific commands on an old system in order to fulfil a customer request; they used to spend months teaching people the usage of these commands.

ChatGPT was able to generate the list of commands from the customer request in plain words.
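
A rough sketch of how such an integration is typically wired up; the system prompt, command names, and example request below are all invented, and the model/library choice is just an assumption rather than a description of the actual project:

```python
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical command set for a hypothetical legacy system.
SYSTEM_PROMPT = (
    "You translate customer requests into commands for the LEGACY-ORDER system. "
    "Only use these commands: LOOKUP <account>, CANCEL <order-id>, "
    "REFUND <order-id> <amount>. Reply with one command per line and nothing else."
)

def request_to_commands(customer_request: str) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any chat model would do; the name is an assumption
        temperature=0,          # keep the output as deterministic as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": customer_request},
        ],
    )
    return response.choices[0].message.content.splitlines()

print(request_to_commands("Please cancel order 1234 and refund the 30 euros I paid."))
```

Given how unreliable the generation can be, the returned lines would still need to be validated against the allowed command set before anything actually gets executed.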
@Npars01 @emilymbender Yes, I have and am helping clients with it. The key thing is that they're being used as augmentation systems for humans, not replacements. For example, suggesting responses, or suggesting different tones (when asked!). Basically: powerful as an exoskeleton, not a cyborg.
@Npars01 @emilymbender Also it helps that the leaders involved all have a healthy dose of skepticism around AI and its limitations. Being excited because of how it can help, not how it can cut labor costs

@cory_foy @Npars01 @[email protected]

what you describe is absolutely about labour cost: it's about not having people talk to each other because it costs more

@Npars01 @emilymbender

I think it could be a useful aid to the current search engines and to organizing information (if they can get it right, that is ;-), but I wouldn't want the keyword search engines to be replaced (yet?).

I never get what I'm looking for by asking a question. With keywords I have to filter through quite a few pages, but at least I do find what I'm looking for.

Could never understand how people stop at page one of the search results.

@emilymbender

We’re at the very precipice of a major sociological shift. On the level of the printing press, which brought us things like modern science.

The problem being solved is the 8 billion minds out there: how do you understand what is known? Nobody since the 1700s has had a full grip on human knowledge.

When the internet hit we started accumulating vast stores of knowledge. AI now lets us search through the mass of information.

Still in the inception. Baby steps.

@PChoate Excuse me, but what did you think the totality of human knowledge was in the 1700s and who could possibly have had a "full grip" on it?

(To be very clear, this is me calling out what I suspect is Eurocentrism in your post.)

@emilymbender

I think you’re missing my point. The volume of knowledge out there got beyond the capacity of a human brain at some point, likely in the 1700s.

AI breaks that limit.

@emilymbender Is there really a necessary equivalence between AGI and thinking entities? I've mostly seen AGI defined as "general problem solving" rather than saying something about thought or sentience, which is a totally separate concept. You can prove general problem solving, but it's a harder task to prove thinking and sentience. Non-thinking general problem solvers are enough to create societal-level problems and power imbalances, which I think OpenAI acknowledges.
@emilymbender they do everything for keeping the investments flowing while profiting from the attention they are getting. Indeed, attention is all they have in the current situation. As all these transformer networks are unreliable, it will be difficult to use them for tasks requiring reliable responses. In other words, the technology is unsuitable for creating a billion dollar business. So OpenAI multiplies its promises in order to continue burning investors’ money.

@emilymbender
Out of curiosity, is this a generally used definition of AGI?
I'd consider a computer at the level of a child to still be AGI.

I know OpenAI has defined it that way in recent blog posts, but I think that's more an example of the hubris and hype you are rightly pointing out.