Am I the only one getting agitated by the word AI?

https://discuss.tchncs.de/post/10040937

Am I the only one getting agitated by the word AI (Artificial Intelligence)? Real AI does not exist yet; atm we only have LLMs (Large Language Models), which do not think on their own but pass Turing tests (fool humans into thinking that they can think). Imo AI is just a marketing buzzword, created by rich capitalistic a-holes who already invested in LLM stocks and are now looking for a profit.

AI is 100% a marketing term.

It’s a computer science term that’s been used for this field of study for decades; it’s like saying that calling a tomato a fruit is a marketing decision.

Yes, it’s somewhat common outside computer science to expect an artificial intelligence to be sentient, because that’s how movies use it. John McCarthy’s proposal, which coined the term in 1956, is available online if you want to read it.

“Quantum” is a scientific term, yet it’s used as a gimmicky marketing term.
Yes, perfect example. People use quantum as the buzzword in every film, so people think of it as a silly thing, but when CERN talk about quantum communication or circuit quantum electrodynamics, it’d be silly to try and tell them they’re wrong.
yep and it has always been a leading misnomer like most marketing terms

They didn’t just start calling it AI recently. It’s literally the academic term that has been used for almost 70 years.

The term “AI” is often attributed to John McCarthy of MIT (Massachusetts Institute of Technology). Marvin Minsky (Carnegie-Mellon University) defined it as “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning.” The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founding event of the discipline.

perceptual learning, memory organization and critical reasoning

i mean…by that definition nothing currently in existence deserves to be called “AI”.

none of the current systems do anything remotely approaching “perceptual learning, memory organization, and critical reasoning”.

they all require pre-processed inputs and/or external inputs for training/learning (so the opposite of perceptual), none of them really do memory organization, and none are capable of critical reasoning.

so OPs original question remains:

why is it called “AI”, when it plainly is not?

(my bet is on the faceless suits deciding it makes them money to call everything “AI”, even though it’s a straight up lie)

so OPs original question remains: why is it called “AI”, when it plainly is not?

Because a bunch of professors called it that 70 years ago, before the AI winter set in. Why is that so hard to grasp? Not everything is a conspiracy.

I had a class at uni called AI, and no one thought we were gonna be learning how to make thinking machines. In fact, compared to the stuff we did learn to make then, modern AI looks godlike.

Honestly you all sound like the people that snidely complain how it’s called “global warming” when it’s freezing outside.

just because the marketing idiots keep calling it AI, doesn’t mean it IS AI.

words have meaning; i hope we agree on that.

what’s around nowadays cannot be called AI, because it’s not intelligence by any definition.

imagine if you were looking to buy a wheel, and the salesperson sold you a square piece of wood and said:

“this is an artificial wheel! it works exactly like a real wheel! this is the future of wheels! if you spin it in the air it can go much faster!”

would you go:

“oh, wow, i guess i need to reconsider what a wheel is, because that’s what the salesperson said is the future!”

or would you go:

“that’s idiotic. this obviously isn’t a wheel and this guy’s a scammer.”

if you need to redefine what intelligence is in order to sell a fancy statistical model, then you haven’t invented intelligence, you’re just lying to people. that’s all it is.

the current mess of calling every fancy spreadsheet an “AI” is purely idiots in fancy suits buying shit they don’t understand from other fancy suits exploiting that ignorance.

there is no conspiracy here, because it doesn’t require a conspiracy; only idiocy.

p.s.: you’re not the only one here with university credentials…i don’t really want to bring those up, because it feels like devolving into a dick measuring contest. let’s just say I’ve done programming on industrial ML systems during my bachelor’s, and leave it at that.

These arguments are so overly tired and so cyclic that AI researchers coined a name for them decades ago - the AI effect. Or succinctly just: “AI is whatever hasn’t been done yet.”
AI effect - Wikipedia

i looked it over and … holy mother of strawman.

that’s so NOT related to what I’ve been saying at all.

i never said anything about the advances in AI, or how it’s not really AI because it’s just a computer program, or anything of the sort.

my entire argument is that the definition you are using for intelligence, artificial or otherwise, is wrong.

my argument isn’t even related to algorithms, programs, or machines.

what these tools do is not intelligence: it’s mimicry.

that’s the correct word for what these systems are capable of. mimicry.

intelligence has properties that are simply not exhibited by these systems, THAT’S why it’s not AI.

call it what it is, not what it could become, might become, will become. because that’s what the wiki article you linked bases its arguments on: future development, instead of current achievement, which is an incredibly shitty argument.

the wiki talks about people using shifting goal posts in order to “dismiss the advances in AI development”, but that’s not what this is. i haven’t changed what intelligence means; you did! you moved the goal posts!

I’m not denying progress, I’m denying the claim that the goal has been reached!

that’s an entirely different argument!

all of the current systems, ML, LLM, DNN, etc., exhibit a massive advancement in computational statistics, and possibly, eventually, in AI.

calling what we have currently AI is wrong, by definition; it’s like saying a single neuron is a brain, or that a drop of water is an ocean!

just because two things share some characteristics, some traits, or because one is a subset of the other, doesn’t mean that they are the exact same thing! that’s ridiculous!

the definition of AI hasn’t changed, people like you have simply dismissed it because its meaning has been eroded by people trying to sell you their products. that’s not ME moving goal posts, it’s you.

you said a definition from 70 years ago is “old” and therefore irrelevant, but that’s a laughably weak argument for anything, and it’s even weaker in a scientific context.

is the Pythagorean Theorem suddenly wrong because it’s ~2500 years old?

ridiculous.

Yes, your summary is correct; it’s just a buzzword.

You can still check if it’s a real human if you do something really stupid or speak or write gibberish. Almost every AI will try to reply to it or say “Sorry, I couldn’t understand it.” You can also ask about recent events (most of the LLMs aren’t trained on the newest events).

I call it a probability box.
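
To make “probability box” concrete, here’s a toy sketch (the vocabulary and probabilities are made up; a real LLM computes the distribution from billions of learned parameters): the core loop is just repeatedly sampling the next token from a probability distribution.

```python
import random

# Toy "probability box": repeatedly sample the next token from a
# probability distribution over a vocabulary. Vocabulary and
# probabilities are invented for illustration only.
vocab = ["the", "cat", "sat", "on", "mat", "."]

def fake_model(context):
    # A real model would condition on the context; this stub just
    # returns fixed, made-up probabilities.
    return [0.2, 0.3, 0.2, 0.1, 0.1, 0.1]

context = ["the"]
for _ in range(5):
    probs = fake_model(context)
    context.append(random.choices(vocab, weights=probs, k=1)[0])
print(" ".join(context))  # e.g. "the cat sat on mat ."
```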
A lot of the comments I’ve seen promoting AI sound very similar to ones made around the time GME was relevant, or cryptocurrency. Often, the conversations sounded very artificial, and the person just ended up repeating buzzwords/echo-chamber talking points instead of actually demonstrating that they have an understanding of what the technology is or its limitations.
I’ve ranted about this to several people too. Intelligence is hard to define and trying to define it has a horrible history linked to eugenics. That said, I feel like a minimum definition is that it has the capacity to understand the meaning and/or impact of what it is saying and/or doing, which current “AI” is so far from doing.
Yep, it says things but has no understanding of what it is saying: much like strolling through a pet shop, passing the parrot enclosure, and recoiling at the little-kid swear words it cheeps out.
The word “AI” has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer “thinking on its own”?
I think a good metric is once computers start getting depression.
An LLM can get depression, so that’s not a metric you can really use.

No it can’t.

LLMs can only repeat things they’re trained on.

Sorry, to be clear I meant it can mimic the conversational symptoms of depression as if it actually had depression; there’s no understanding there though.

You can’t use that as a metric because you wouldn’t be able to tell the difference between real depression and trained depression.

But will they be depressed or will they just simulate it because they're too lazy to work?
If they are too lazy to work, that would imply they have motivation and choice beyond “doing what my programming tells me to do, i.e. input, process, output”. And if they have the choice not to work because they don’t ‘feel’ like doing it (and it’s not a programmed/coded option given to them to use), then would they not be thinking for themselves?

simulate [depression] because they’re too lazy

Ahh man, are you my dad? I took damage from that one. Has any fiction writer done a story about a depressed AI where they talk about how its depression can’t be real because it’s all 1s and 0s? Cuz I would read the shit out of that.

It’s only tangentially related to the topic, since it involves brain enhancements, not ‘AI’. However, you may enjoy the short story “Reasons to Be Cheerful” by Greg Egan.
Not sure about that. An LLM could show symptoms of depression by mimicking depressed texts it was fed. A computer with a true consciousness might never get depression, because it has none of the hormones influencing our brain.

Me: Pretend you have depression

LLM: I'm here to help with any questions or support you might need. If you're feeling down or facing challenges, feel free to share what's on your mind. Remember, I'm here to provide information and assistance. If you're dealing with depression, it's important to seek support from qualified professionals like therapists or counselors. They can offer personalized guidance and support tailored to your needs.

Give it the right dataset and you could easily create a depressed-sounding LLM to rival Marvin the Paranoid Android.
Hormones aren’t depression, and for that matter they aren’t emotions either. They just cause them in humans. An analogous system would be fairly trivial to implement in an AI.
That’s exactly my point though: as OP stated, we could detect if an AI was truly intelligent if it developed depression. Without hormones or something similar, there’s no reason to believe it ever would develop those on its own. The fact that you could artificially give it depression is beside the point.

I don’t think we have the same point here at all. First off, I don’t think depression is a good measure of intelligence. But mostly, my point is that it doesn’t make it less real when hormones aren’t involved. Hormones are simply the mediator that causes that internal experience in humans. If a true AI had an internal experience, there’s no reason to believe that it would require hormones to be depressed. Do text-to-speech systems require a mouth and vocal cords to speak? Do robots need muscle fibers to walk? Do LLMs need neurons to form complete sentences? Do cameras need eyes to see? No, because it doesn’t matter what something is made of. Intelligence and emotions are made of signals. What those signals physically are is irrelevant.

As for giving it feelings vs it developing them on its own: you didn’t develop the ability to feel either. That was the job of evolution, or in the case of AI, it could be intentionally designed. It could also be evolved given the right conditions.

First off, I don’t think depression is a good measure of intelligence.

Exactly. Which is why we shouldn’t judge an AI’s intelligence based on whether it can develop depression. Sure, it’s feasible it could develop it through some other mechanism. But there’s no reason to assume it would, in the absence of the factors that cause depression in humans.

Oh. Maybe we did have the same point lol
It’ll probably happen when they get a terrible pain in all the diodes down their left hand side.
The real metric is whether a computer gets so depressed that it turns itself off.
Wait until they found my GitHub repositories.
it does not “think”
The best thing is that enemy “AI” usually needs to be made worse right after creating it. At first it’ll headshot everything across the map in milliseconds. The art is to make it dumber.
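
A hypothetical sketch of that “make it dumber” step (all names and numbers invented): take the pixel-perfect aimbot and scale in reaction delay and aim jitter with a skill knob.

```python
import random

# Hypothetical sketch of dumbing down a "perfect" enemy: scale aim
# jitter and reaction delay by a skill setting. All values invented.
def enemy_shot(target_x, target_y, skill):
    # skill in [0, 1]: 1.0 headshots instantly, 0.0 sprays wildly
    jitter = 60.0 * (1.0 - skill)   # pixels of aim error
    delay = 1.2 * (1.0 - skill)     # seconds before firing
    aim_x = target_x + random.uniform(-jitter, jitter)
    aim_y = target_y + random.uniform(-jitter, jitter)
    return aim_x, aim_y, delay

print(enemy_shot(320.0, 240.0, skill=0.25))
```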

I assume you’re referring to the sci-fi kind of self-aware AI because we’ve had ‘artificial intelligence’ in computing for decades in the form of decision making algorithms and the like. Whether any of that should be classed as AI is up for debate as again, it’s still all a facade. In those cases, people only really cared about the outputs and weren’t trying to argue they were alive or anything.

But yeah, I get what you mean.

It really depends on how you define the term. In the tech world, AI is used as a general term to describe many sorts of generative and predictive models. At one point in time you could’ve called a machine that can solve arithmetic problems “AI”, and now here we are. Feels like the goalpost gets moved further every time we get close, so I guess we’ll never have “true” AI?

So, the point is, what is AI for you?

Adobe Illustrator
hahaha couldn’t resist huh?

This has been a thing for a long time

Clippy was an assistant. Cortana was an intelligent assistant. Copilot is AI.

None of these are accurate, it’s always like a generation behind

Clippy just was. Cortana was an assistant. And Copilot is an intelligent assistant. The next one they make could actually be AI.

The distinction between AI and AGI (Artificial General Intelligence) has been around long before the current hype cycle.
What agitates me is all the people misusing the words and then complaining about what they don’t actually mean.

Yes, the term AI is used for marketing, though it didn’t start with LLMs; a couple of years before, any ML algorithm was called AI, together with the trendy data scientist job.

However, I do think LLMs are very useful, just try them for your daily tasks, you’ll see. I’m pretty sure they will become as common as a web search in the future.

Also, how can you tell that the human brain is not mostly a very powerful LLM hosting machine?

@Rikj000

which do not think on their own,
but pass turing tests
(fool humans into thinking that they can think).


How do you know that?

“somewhat old” person opinion warning ⚠️

When I was in university (2002 or so) we had an “AI” lecture and it was mostly “if”s and pathfinding algorithms like A*.
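
For anyone curious, that kind of course material fits in a few lines. A minimal sketch of A* on a grid (illustrative only, not from any actual course):

```python
import heapq

# Minimal A* on a 4-connected grid: 0 = free cell, 1 = wall.
# The heuristic is Manhattan distance, which is admissible here.
def a_star(grid, start, goal):
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start, [start])]  # (f, g, pos, path)
    visited = set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in visited:
            continue
        visited.add(pos)
        r, c = pos
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                heapq.heappush(frontier,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```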

So I would argue that we engineers have been using the term to cover a wider set of use cases long before LLMs, CEOs and marketing people did. And I think that’s fine, as categorising algorithms/solutions as AI helps people understand what they will be used for, and we (at least the engineers) don’t tend to assume an actual self-aware machine when we hear that name.

nowadays they call that AGI, but it wasn’t always like that, back in my time it was called science fiction 😉

I saw a streamer call a procedurally generated level “ai generated” and I wanted to pull my hair out
I think these two fields are very closely related and have some overlap. My favorite procgen algorithm, Wavefunction Collapse, can be described using the framework of machine learning. It has hyperparameters, it has model parameters, it has training data and it does inference. These are all common aspects of modern “AI” techniques.
I thought “Wavefunction Collapse” is just misnamed Monte Carlo. Where does it use training data?
WFC is a full method of map generation. Monte Carlo is not.

WFC is a full method of map generation. Monte Carlo is not afaik.

MC is a statistical method; it doesn’t have anything to do with map generation specifically. WFC is a form of MC.

To answer your question, the original paper on WFC uses training data, hyperparameters, etc. They took a grid of pixels (training data), scanned it using a kernel of varying size (model parameter), and used that as the basis for the wavefunction probability model. I wouldn’t call it AI though, because it doesn’t train or self-improve like ML does.
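
Roughly, that “training” step looks like this (a toy sketch, not the actual repo code): slide an n×n kernel over the sample bitmap and count pattern frequencies, which become the weights used during generation.

```python
from collections import Counter

# Toy sketch of WFC's "fitting" step: slide an n-by-n kernel over a
# sample bitmap and count how often each pattern occurs. The counts
# act as the weights of the wavefunction's probability model.
def extract_patterns(bitmap, n=2):
    counts = Counter()
    rows, cols = len(bitmap), len(bitmap[0])
    for r in range(rows - n + 1):
        for c in range(cols - n + 1):
            pattern = tuple(bitmap[r + i][c:c + n] for i in range(n))
            counts[pattern] += 1
    return counts  # pattern -> frequency ("model parameters")

sample = ["XX.",
          ".X.",
          ".XX"]
for pattern, count in extract_patterns(sample).items():
    print(pattern, count)
```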

Could you share the paper? Everything I read about WFC is “you have tiles that are stitched together according to rules with a bit of randomness”, which is literally MC.

Ok so you are just talking about MC the statistical method. That doesn’t really make sense to me. Every random method will need to “roll the dice” and choose a random outcome like a MC simulation. The statement “this method of map generation is the same as Monte Carlo” (or anything similar, ik you didn’t say that exactly) is meaningless as far as I can tell. With that out of the way, WFC and every other random map generation method are either trivially MC (it randomly chooses results) or trivially not MC (it does anything more than that).

The original Github repo, with examples of how the rules are generated from a “training set”: github.com/mxgmn/WaveFunctionCollapse A paper referencing this repo as “the original WFC algorithm” (ref. 22): [www.google.com/url?sa=t&source=web&rct=j&…](long google link to a PDF)

Note that I don’t think the comparison to AI is particularly useful; it’s only technically correct that they share some similarities.

GitHub - mxgmn/WaveFunctionCollapse: Bitmap & tilemap generation from a single example with the help of ideas from quantum mechanics

I don’t think WFC can be described as an example of a Monte Carlo method.

In a Monte Carlo experiment, you use randomness to approximate a solution, for example to solve an integral where you don’t have a closed form. The more you sample, the more accurate the result.

In WFC, the number of random experiments is fixed by your map size; it is not freely variable.
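
For example, a standard textbook Monte Carlo sketch: estimate the integral of sqrt(1 - x^2) on [0, 1], which equals pi/4, by averaging random samples. You can keep sampling as long as you like and the estimate keeps improving; WFC has no equivalent knob.

```python
import math
import random

# Classic Monte Carlo integration: approximate the integral of
# sqrt(1 - x^2) over [0, 1] (= pi/4) by averaging random samples.
# More samples give a more accurate result.
def mc_integral(n_samples):
    total = sum(math.sqrt(1.0 - random.random() ** 2)
                for _ in range(n_samples))
    return total / n_samples

for n in (100, 10_000, 1_000_000):
    print(n, 4 * mc_integral(n))  # converges toward pi ~ 3.14159
```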

Sorry, I should have been more specific - it’s an application of Markov Chain Monte Carlo. You define a chain and randomly evaluate it until you’re done - is there anything beyond this in WFC?
I’m not an expert on Monte Carlo methods, but reading the Wikipedia article on Markov Chain Monte Carlo, this doesn’t fit what WFC does, for the reasons I mentioned above. In MCMC, you get a better result by taking more steps; in WFC, the number of steps is given by the map size and can’t be changed.
I’m not talking about repeated application of MCMC, just a single round. In this single round, the number of steps is also given by the map size.

it doesn’t train or self-improve like ML does

I think the training (or fitting) process is comparable to how a support vector machine is trained. It’s not iterative like SGD in deep learning; it’s closer to the traditional machine learning techniques.

But I agree that this is a pretty academic discussion, it doesn’t matter much in practice.