What is something that AI will NEVER be able to replicate?
An artist and an AI, when given the same prompt, will produce similar outputs.
yeah that's what art is about, you got it
Is there a Turing test for art, and what's the detection rate?
I think any clear definition will either positively identify lots of AI works as art (along with collections of random junk), or deny the qualifier to lots of supposed artworks from human artists.
Coming from theater, I agree it is about “conveying a meaning beneath the surface”. Having studied computer science, I note that this is very much not meant in a strict sense, but is very vague. It seems to be a feature, not a bug, that everyone in the audience can see something different.
I think you can pretty much present random nonsense, and someone will still find it brilliant and inspiring, and a lot more people will tell you what patterns they saw and what it reminded them of. The meaning is created in the minds of the observers, even if the creator deliberately put no particular meaning, or any meaning at all, into the “art”.
this is an interesting one cause it feels like a moving philosophical goalpost, what would classify as ‘feeling enough’ for you?
Definitely the AI is able to understand the meaning behind a prompt and expand on it. Before, I’ve asked it for a picture of a cartoon cat and it instinctively put a ruler beside it to show it was only a couple cm across
It certainly is a very efficient form of this compared to what we’re used to, cutting about as many corners as you can - but then again it still produces the output, and what other goalposts can we reliably argue for?
Since AI is trained by us, using the fruit of human labor as input, it’ll have to be something we can’t train it to do.
Something biological or instinctual… Like being in close proximity to an AI will never result in synchronized menstruation since an AI can’t and won’t ever menstruate.
So… That 👍
Computers will never consistently beat humans, and humans will never consistently beat computers, at snakes and ladders.
Or rock-paper-scissors, for that matter.
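A quick Monte Carlo sketch of why that holds: in a pure-chance game like rock-paper-scissors, if either side plays uniformly at random, no strategy on the other side changes the expected outcome, so each player wins about a third of the rounds (the names and setup below are just for illustration):

```python
import random

# Sketch: two uniformly random players. Against uniform random play,
# every strategy yields the same expected result, so neither side can
# "consistently beat" the other.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play_round(rng):
    moves = list(BEATS)
    a, b = rng.choice(moves), rng.choice(moves)
    if a == b:
        return "tie"
    return "a" if BEATS[a] == b else "b"

rng = random.Random(0)
results = [play_round(rng) for _ in range(100_000)]
for outcome in ("a", "b", "tie"):
    # each fraction comes out close to 1/3
    print(outcome, results.count(outcome) / len(results))
```

Snakes and ladders is the same idea taken further: there are no decisions at all, so "skill" cannot enter the game.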
I don’t know, there are a couple pretty good ones here by chatgpt:
Of course! Here are some classic dad jokes for you:
Stupid comments like this one
And this one
And that one
And those ones over there
I guess a good part also comes from learned experiences. Having a body, growing up, feeling pain, being mortal.
And yes, the brain is an incredibly complex system not only of neurons, but also transmitters, receptors, a whole truckload of biochemistry.
But in the end, both are just matter in patterns, excitation in coordination. The effort to simulate is substantial, but I don’t see how that would NEVER succeed, if someone with the capabilities insisted on it. However, it might be fully sufficient for the task (whatever that is, probably porn) to simulate 95% or so, technically still not the real deal.
What makes you say that so definitely?
Funnily enough I have the opposite opinion: human brains are the kind of thinking we have the most experience with - so we’ve devised our input methods around what we notice most, and so will be able to most easily train the AI.
I also believe that we’ll be able to reduce the noise to a level lower than actual person-to-person variation fairly easily, cause an AI has the benefit of being able to scale to population size - no human even has that much experience with humans
I used to do research on the microscopic mechanisms of the brain, and I work in AI.
Human thoughts derive from extremely complex microscopic mechanisms that do not “average out” when moving to the macroscopic world, but instead create the very complex non-linear stochastic processes that are thoughts.
Unless some scientific miracle happens, human thoughts will stay human.
But an AI does anything but average out, else we wouldn’t be any more advanced than the earliest mathematicians.
Its skill comes from being able to have millions to billions of parameters if required, and then encode data across all of them.
It doesn’t seem entirely unreasonable that it could use those (riding off our surprisingly good math skills) and create a model that represents a human with low enough noise that we wouldn’t even notice.
(but also I’m in a similar, more chemically focused field, nanotechnology, so I have experience with nanoscopic-to-microscopic structures and what we can artificially build from them without killing the biological side of things)
As you are in nanotechnologies, when I say average out I am talking in a statistical mechanics way, i.e. the macroscopic phenomenon arising from averaging over the multiple accessible microscopic configurations. Thoughts, on the other hand, are multiple complex non-linear stochastic signals. They depend on a huge number of single microscopic events that are not replicable in a computer, and likely not reproducible in a parametrized function. Nothing wrong with that; we might be able to approximate human thoughts, but most likely not reproduce them.
What area of nanotechnology are you in? The main problem of nanotechnologies is that they cannot reproduce the complexity of their biological counterparts. Take carbon nanotubes: we cannot reproduce the features of the simpler ion channels with them, let alone the more complex human ones.
We could build nice models with interesting functionality, as we are doing with current AI. Machines that can do logic, make decisions, and so on. Even a machine that can predict human thoughts. But the real human thoughts will most likely stay human, as the processes from which they arise are very human.
nano engineering, and of course we’re talking some years in the future, but if anything nano’s convinced me we’re all just math when you break it down - it just depends on how much math we can do.
Even a simple conversation was recently broken down into tokenizable words and bam, ChatGPT; reasonably the rest of our ‘humanity’ could be modeled following a similar trend until the Turing test is useless
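A toy illustration of that "break language into tokens" step. Real models like ChatGPT use learned subword vocabularies (byte-pair encoding), not whitespace splitting; this naive sketch only shows the principle of mapping text to integer IDs a model can consume:

```python
# Naive word-level tokenizer sketch (assumed for illustration only;
# production tokenizers use learned subword vocabularies).
def tokenize(text, vocab):
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # assign the next free ID
        ids.append(vocab[word])
    return ids

vocab = {}
print(tokenize("the cat sat on the mat", vocab))  # → [0, 1, 2, 3, 0, 4]
```

Once text is a sequence of integers, "the rest" is a statistical model over those sequences, which is the trend the comment is pointing at.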
What I mean is different. A dog thinks as a dog, a human thinks as a human, an AI will think as an AI. It will likely be able to pretend to think as a human, but it won’t.
It won’t understand Proust’s madeleine, have the need to travel to some “sacred” location looking for spirituality, miss the hometown where it grew up; its thinking won’t be driven by fear of spiders, the need for social recognition, the pleasure of seeing a naked woman.
These are simple examples, but in general it will think in a different way. Humans will tune it to pretend to be “as human as possible”, but humans will remain unique.
An exact 1:1 realtime copy of itself emulated within a simulated universe.
Pretty much everything else mentioned in this thread falls into the “never say never” category.
Probably still a never say never problem:
In their new paper, the five computer scientists prove that interrogating entangled provers makes it possible to verify answers to unsolvable problems, including the halting problem.
Which is why I said it was still a “never say never” and not an already solved problem.
The halting problem is impossible for Turing machines, but if hypercomputation ends up possible, it isn’t impossible.
For example, an oracle machine as proposed by Turing, or a ‘real’ computer using actual real values.
The latter in particular may even end up a thing in the not-too-distant future, assuming neural networks continue to move into photonics in such a way that networks run while their internals are never directly measured. In that case the issue would be verifying the result - the very topic of the paper in question.
Effectively, while it is proven that we can never directly measure a solution to the halting problem, I wouldn’t take a bet that within my lifetime we won’t have ended up being able to indirectly measure a solution to the problem and directly validate the result.
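For reference, the classical impossibility result being discussed is Turing's diagonal argument, which can be sketched in a few lines. The `halts` oracle below is hypothetical: the whole point is that no real program can implement it.

```python
# Sketch of Turing's diagonal argument. Assume a perfect halting
# oracle halts(program, argument) existed; then diagonal(diagonal)
# would halt if and only if it doesn't -- a contradiction. The oracle
# is hypothetical, so here it just raises.
def halts(program, argument):
    raise NotImplementedError("no such total, correct function can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:  # oracle says "halts", so loop forever
            pass
    return  # oracle says "loops", so halt immediately
```

This rules out a Turing machine computing `halts`, but, as the thread notes, it says nothing about hypercomputation models such as oracle machines.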