For those who like sci-fi, how have real life LLMs changed how you view robots and AI in science fiction?

https://sh.itjust.works/post/45505166


Not at all, just as Boston Dynamics' Atlas didn't change how I viewed RoboCop.

Text generators just have very little in common with intelligent, autonomous artificial entities.

This doesn’t really answer the question, but I was reading an Asimov short story the other day, “Belief”, and it felt like he’d hit the nail on the head such a long time ago.
Real life LLMs have shown me the potential for the world to be just as miserable and dystopian as in a lot of sci-fi, but also, if this is where we are now, then maybe most sci-fi doesn’t take it far enough. People will stop thinking for themselves, rely on AI for everything, and blindly believe what it tells them.

In sci-fi, AI devices (like self-driving cars or ships, or androids) seem like an integrated unit where any controls or sensors they have are like human limbs and senses. The AI “wills” the engine to start. I always imagined AI would be like a single organism where neurons are connected directly to the body.

Given the development of LLMs and how they are used, it now seems more likely that AI will be an additional “smart layer” on top of the dumb machinery, with actions performed by emitting tokens/commands (“raise arm 35 degrees”) that are sent to APIs. The interaction will be indirect, the way we control the TV with the remote.
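The “smart layer” idea above can be sketched in a few lines: the model emits a textual command, and a thin adapter parses it and forwards it to the dumb hardware API. All names here (`machinery_api`, `dispatch`, the command format) are hypothetical illustrations, not any real robotics interface.

```python
# Hypothetical sketch: a model emits text commands ("raise_arm 35")
# that an adapter parses and forwards to a dumb machinery API.

def machinery_api(action: str, value: float) -> str:
    """Stand-in for the dumb hardware layer; just acknowledges the call."""
    return f"{action} set to {value}"

def dispatch(command: str) -> str:
    """Parse one emitted command string and call the machinery API."""
    action, raw_value = command.split()
    return machinery_api(action, float(raw_value))

print(dispatch("raise_arm 35"))  # the model never touches the motor directly
```

The point is the indirection: the model only produces tokens, and a conventional parser/API boundary sits between those tokens and the actuators.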

I now consider it stupid and destructive to treat AI as having emotion just because they act human.
In other words, the bad guys in Blade Runner were right all along.
Blade Runner’s a bit different since the replicants are flesh and blood, just not naturally born.
Who’re the bad guys in Blade Runner? The giant corporation that creates human-like entities only to enslave them?

They’ve made fictional AI seem that much more far-fetched.

Obviously, we all learn by imitation and instruction - but LLMs have shown that’s only part of the puzzle

I think LLMs could provide a human friendly interface for robots. There’s a lot of interesting work happening with embodied AI now, and in my opinion embodiment is the key ingredient for making AI intelligent in a human sense. A robot has to interact with the environment and it builds an internal model of the world for making decisions. This creates a feedback loop where the robot can learn the rules of the world and do meaningful interaction, and that’s precisely what’s missing with LLMs.
So an LLM with realtime learning/updating?
Not necessarily just an LLM on its own. The key part is that the internal model is coupled with reinforcement learning where it becomes rooted in the behaviors of the physical world. Real time continuous learning is the way to get there, but it can be done using different approaches. For example, neurosymbolic AI combines deep neural networks with symbolic logic. The LLM is used to parse and classify noisy input data, while a logic engine is used to make decisions about it. My expectation is that we’ll see more of these types of approaches where different machine learning techniques are combined together going forward. LLMs will just be one part of the bigger whole.
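The neural/symbolic split described above can be illustrated with a toy example: a learned component turns noisy input into clean symbols, and a separate, inspectable rule engine makes the decision. The “classifier” below is a trivial stand-in for the LLM side, and the rules are made up for illustration.

```python
# Toy neurosymbolic split: perception (neural stand-in) -> symbols,
# then explicit symbolic rules -> decision. All logic is hypothetical.

def perceive(raw: str) -> str:
    """Stand-in for the neural/LLM side: noisy text -> a clean symbol."""
    text = raw.lower()
    if "red" in text or "stop" in text:
        return "OBSTACLE"
    return "CLEAR"

RULES = {  # the symbolic side: explicit, inspectable decision logic
    "OBSTACLE": "halt",
    "CLEAR": "advance",
}

def decide(raw_observation: str) -> str:
    return RULES[perceive(raw_observation)]

print(decide("uh, I see a RED light ahead"))  # -> halt
```

In a real system the perception step would be a trained network and the rule layer could be a full logic engine, but the division of labor is the same: the network handles noisy input, the symbolic layer makes the auditable decision.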
Neurosymbolic AI -- Why, What, and How

Humans interact with the environment using a combination of perception - transforming sensory inputs from their environment into symbols, and cognition - mapping symbols to knowledge about the environment for supporting abstraction, reasoning by analogy, and long-term planning. Human perception-inspired machine perception, in the context of AI, refers to large-scale pattern recognition from raw data using neural networks trained using self-supervised learning objectives such as next-word prediction or object recognition. On the other hand, machine cognition encompasses more complex computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions. This seems to require the retention of symbolic mappings from perception outputs to knowledge about their environment. For example, humans can follow and explain the guidelines and safety constraints driving their decision-making in safety-critical applications such as healthcare, criminal justice, and autonomous driving. This article introduces the rapidly emerging paradigm of Neurosymbolic AI, which combines neural networks and knowledge-guided symbolic approaches to create more capable and flexible AI systems. These systems have immense potential to advance both algorithm-level (e.g., abstraction, analogy, reasoning) and application-level (e.g., explainable and safety-constrained decision-making) capabilities of AI systems.


It’s more that the last few months of AI-related sci-fi are oversaturated with Commentary^TM^ and Discourse^TM^ on the dangers of LLMs, and I’m getting bored with it.

I had known about development of the technology for a couple years before it hit the mainstream. I have been completely unsurprised with how it has gone.

No, but actually studying Artificial Intelligence a decade ago in college did.

We had language models back then, too, they just weren’t as good.

AI in fiction is a boring concept to me. It’s presented either as “What is a person?” or “What if we create an evil god?”. To me, anything with feelings is a person, and the other is just a chrome paint job on the evil-god characters of non-sci-fi genres, so it’s a speculative dead end.

AI in real life is much more interesting, and its proliferation makes fictional AI seem even more bland. Real-life AI is first and foremost not intelligent, and probably not even close; then again, we have no rubric to grade it by, because we don’t even really know what intelligence is yet. That said, machine learning algorithms highlight patterns in the world and in our behaviors that are fascinating precisely because they show how complicated the world and people are, in ways our brains passively process without consideration. Kind of like how QWOP highlights just how difficult and complicated walking is.

It hasn’t. I don’t know what an LLM is.
Then how do you know you haven’t been influenced?
It stands for Large Language Model, and that’s what ChatGPT, Gemini, Grok, etc. are. They are all LLMs. They are also called ‘AI’ (Artificial Intelligence), but they are not at all intelligent; they just match patterns and produce one word at a time, like a very complex autocomplete on a phone keyboard.
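The “one word at a time” point can be shown with a toy autocomplete: a lookup table (made-up bigram data, purely illustrative) repeatedly picks a likely next word. A real LLM does the same repeated next-token step, only with a large neural network and probabilities instead of a fixed table.

```python
# Toy "autocomplete" generation: pick the next word from a table,
# append it, and repeat. The table contents are made up.

NEXT_WORD = {  # hypothetical bigram statistics
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def generate(start: str, max_words: int = 4) -> str:
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORD:
        words.append(NEXT_WORD[words[-1]])  # emit one word, then repeat
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```

Nothing in the loop plans ahead or “knows” the sentence; each step only looks at what came before and emits one more token, which is the mechanism the comment above is describing.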
I think we might actually get Star Wars style droids in our lifetimes.
I noticed that authors are mostly completely wrong about everything. They can’t write machines. They can’t write animals either. And of course they can’t write aliens. They can only write humans, and then use that for “the machine has feelings” bs. Those things in the stories are not machines; they are badly written humans.

It is not now, nor will it ever be, anything like the way it’s depicted in sci-fi fantasy. We are never going to achieve anything close to a Star Trek-level of symbiosis with tech. Everything we ever do will be weaponized, and what can’t be turned on our adversaries, and ultimately ourselves, will be used to make the less intelligent even more so.

It’s going to drain our last vestige of creativity as it runs headlong through every culture, and in its wake will be the unmotivated remains of what passion for the arts we once had, until one day we will be nothing more than animals walking in and out of rooms.

Trust that nothing good lies that way.

It’s given me an idea of how we get there. Clearly, modern LLMs aren’t near the level seen in movies, but we will get there. We will move on from LLMs within a few years to a more adaptive model, as we further increase our understanding of AI and neural networks.

I see modern LLMs as task tools: they can interpret our requests and pass them on to a more intelligent model type, which will reduce the processing power needed by the newer AIs.

People in this thread seem to have a lot of bias; they can’t see how the tech will evolve. You need to keep an open mind and look at where the tech is being developed. With AI, it will be new architectures.

Their bias is a direct response to the rhetoric from the ‘leaders’ of the AI industry, who have collected billions of dollars and turned it into BS expectations.
Trade it for a pc?
My favorite character is a robot, and while she sometimes sounds like an LLM, she’s much more than that. She actually learns how humans are, and it’s beautiful, and I love her.
After I learned how LLMs function, the “AI” we use in reality was categorized within my mind as something entirely new and different from the fictional, cognizant, sapient artificial intelligence in my favorite novels.
Hate is a strong word… I feel like humans and machines coexist a little too well in the movies, except when the lack of coexistence IS the plot.
I keep thinking our AI will lead us to something like the Eloi of the Time Machine, and the Morlocks will be the machines that run everything.

We calculated in the 70s that the algorithm the LLMs run on will only get us so far. We’ve nearly reached that point. Related article that basically covers it all: venturebeat.com/…/llms-are-stuck-on-a-problem-fro…

So basically no different view. Still waiting for my cyborg buddy.