For those who like sci-fi, how have real life LLMs changed how you view robots and AI in science fiction?
Not at all, just as Boston Dynamics' Atlas didn't change how I viewed RoboCop.
Text generators just have very little in common with intelligent, autonomous artificial entities.
In sci-fi, AI devices (like self-driving cars or ships, or androids) seem like an integrated unit where any controls or sensors they have are like human limbs and senses. The AI “wills” the engine to start. I always imagined AI would be like a single organism where neurons are connected directly to the body.
Given the development of LLMs and how they are used, it now seems more likely that AI will be an additional "smart layer" on top of the dumb machinery, with actions performed by emitting tokens/commands ("raise arm 35 degrees") that are sent to APIs. The interaction will be indirect, the way we control a TV with a remote.
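The "smart layer" idea above can be sketched in a few lines. This is a toy illustration, not a real robotics stack: `ArmAPI`, `raise_arm`, and the command format are all hypothetical names invented for the example, and the regex stands in for whatever structured output an actual model would emit.

```python
import re

class ArmAPI:
    """Hypothetical 'dumb machinery': the arm only understands explicit commands."""
    def __init__(self):
        self.angle = 0

    def raise_arm(self, degrees):
        self.angle += degrees
        return self.angle

def smart_layer(model_output, arm):
    """Parse a command token emitted by the smart layer (e.g. an LLM)
    and forward it to the machinery's API -- indirect control,
    like pressing buttons on a TV remote."""
    match = re.match(r"raise arm (\d+) degrees", model_output)
    if match:
        return arm.raise_arm(int(match.group(1)))
    raise ValueError(f"unrecognized command: {model_output!r}")

arm = ArmAPI()
smart_layer("raise arm 35 degrees", arm)
print(arm.angle)  # 35
```

The point of the sketch is the indirection: the model never touches the motor; it only emits text that a thin parser translates into API calls.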
They’ve made fictional AI seem that much more far-fetched.
Obviously, we all learn by imitation and instruction - but LLMs have shown that’s only part of the puzzle

Humans interact with the environment using a combination of perception, which transforms sensory inputs from the environment into symbols, and cognition, which maps symbols to knowledge about the environment to support abstraction, reasoning by analogy, and long-term planning.

In the context of AI, machine perception inspired by human perception refers to large-scale pattern recognition from raw data using neural networks trained with self-supervised learning objectives such as next-word prediction or object recognition. Machine cognition, on the other hand, encompasses more complex computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions, which seems to require retaining symbolic mappings from perception outputs to knowledge about the environment. For example, humans can follow and explain the guidelines and safety constraints driving their decision-making in safety-critical applications such as healthcare, criminal justice, and autonomous driving.

This article introduces the rapidly emerging paradigm of Neurosymbolic AI, which combines neural networks with knowledge-guided symbolic approaches to create more capable and flexible AI systems. These systems have immense potential to advance both the algorithm-level (e.g., abstraction, analogy, reasoning) and application-level (e.g., explainable and safety-constrained decision-making) capabilities of AI systems.
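The perception/cognition split described above can be sketched very crudely: a "neural" stage maps raw data to a symbol, and a symbolic stage checks that symbol against explicit, auditable rules. Every name here (`neural_perception`, `SAFETY_RULES`, the toy heuristic) is a hypothetical stand-in, not any real system's API.

```python
def neural_perception(raw_pixels):
    """Stand-in for a trained network: maps raw data to a symbol.
    A toy threshold replaces real large-scale pattern recognition."""
    return "pedestrian" if sum(raw_pixels) > 100 else "clear_road"

# Symbolic layer: explicit safety constraints that can be followed
# and explained, unlike weights inside a network.
SAFETY_RULES = {
    "pedestrian": "brake",
    "clear_road": "proceed",
}

def decide(raw_pixels):
    symbol = neural_perception(raw_pixels)   # perception: data -> symbol
    action = SAFETY_RULES[symbol]            # cognition: symbol -> knowledge-guided action
    explanation = f"saw '{symbol}', rule says '{action}'"
    return action, explanation

action, why = decide([60, 70])  # sums to 130 -> "pedestrian"
print(action, "-", why)         # brake - saw 'pedestrian', rule says 'brake'
```

The design point is that the decision is explainable: the symbolic mapping from perception output to action is retained and can be inspected, which is exactly what the paragraph argues pure neural systems lack.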
It’s more that the last few months of AI-related sci-fi are oversaturated with Commentary™ and Discourse™ on the dangers of LLMs, and I’m getting bored with it.
I had known about development of the technology for a couple years before it hit the mainstream. I have been completely unsurprised with how it has gone.
No, but actually studying Artificial Intelligence a decade ago in college did.
We had language models back then, too, they just weren’t as good.
AI in fiction is a boring concept to me. It’s presented either as “What is a person?” or “What if we create an evil god?”. To me, anything with feelings is a person, and the other is just a chrome paint job on the evil-god characters of non-sci-fi genres, so it’s a speculative dead end.
AI in real life is much more interesting, and its proliferation makes fictional AI seem even more bland. Real-life AI is first and foremost not intelligent, and probably not even close; that said, we have no rubric to grade it by, because we don’t really know what intelligence is yet. Still, machine learning algorithms highlight patterns in the world and in our behaviors that are fascinating precisely because they show how complicated the world and people are, in ways our brains passively process without consideration. Kind of like how QWOP highlights just how difficult and complicated walking is.
It is not now, nor will it ever be, anything like the way it’s depicted in sci-fi fantasy. We are never going to achieve anything close to a Star Trek level of symbiosis with tech. Everything we ever do will be weaponized, and what can’t be turned on our adversaries, and ultimately ourselves, will be used to make the less intelligent even more so.
It’s going to drain our last vestige of creativity as it runs headlong through every culture, and in its wake will be the unmotivated remains of what passion for the arts we once had, until one day we will be nothing more than animals walking in and out of rooms.
Trust that nothing good lies that way.
It’s given me an idea of how we get there. Clearly, modern LLMs aren’t near the level seen in movies, but we will get there. We will move on from LLMs to a more adaptive model within a few years, as we further increase our understanding of AI and neural networks.
I see modern LLMs as task tools: they can interpret our requests and pass them on to a more intelligent model type, which will save the processing power needed by the newer AIs.
People in this thread seem to have a lot of bias; they can’t see how the tech will evolve. You need to keep an open mind and look at where the tech is being developed. With AI, that means new architectures.
We calculated back in the 70s that the algorithm the LLMs run on would only get us so far, and we’ve nearly reached that point. A related article that basically covers it all: venturebeat.com/…/llms-are-stuck-on-a-problem-fro…
So basically no different view. Still waiting for my cyborg buddy.