The mental tyranny of AI writing, an arduously long blog post
https://meresophistry.substack.com/p/the-mental-tyranny-of-ai-writing
@sarahdalgulls Arduously long, but one point struck me: the use of ambiguity in AI responses when it doesn't know what to say. In effect, the reader supplies the meaning of the sentence.
Does this mean that we are being gaslit, and AI is not nearly as advanced as we think? Has it just learnt to produce text that is plausible to anyone and fits multiple interpretations, just like the tabloid astrology columns?
I don't think there's a simple answer to this. On relatively closed domains like writing computer code it can definitely produce real, direct answers. On ambiguous social questions, perhaps it does produce ambiguous answers for us to project meaning onto?
I'm quite surprised that I hadn't noticed or thought of this possibility before. I think I will ask some questions and look at the answers while asking myself "how would someone with a different worldview understand this?"
@sarahdalgulls I know that generative AI applies a set of rules, and is just a "text rearranger". However, I don't see that as necessarily precluding intelligent results, as there are many examples of emergent complexity in mathematics. The Mandelbrot set, for one, has infinite complexity, yet arises from applying a simple rule to each point. A classic example is Conway's Game of Life (https://en.m.wikipedia.org/wiki/Conway%27s_Game_of_Life), where different starting states can give different complex outcomes.
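To make the "simple rule, complex outcome" point concrete, here is a minimal Python sketch of the Game of Life update rule (the function name and the set-of-cells representation are just one illustrative way to code it):

```python
from collections import Counter

def life_step(live):
    """Apply one Game of Life step to `live`, a set of (x, y) live cells.

    The entire rule: a live cell with 2 or 3 live neighbours survives;
    a dead cell with exactly 3 live neighbours is born; all else dies.
    """
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
step1 = life_step(blinker)   # → {(1, 0), (1, 1), (1, 2)} (vertical)
step2 = life_step(step1)     # → back to the original horizontal row
```

The rule itself fits in a couple of lines, yet from it arise gliders, oscillators, and even patterns that compute; nothing in the code "plans" those behaviours.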
So, I would not be surprised if a complex system produced more than you'd expect from just rearranging words. If and when we do create real artificial intelligence, it won't be something planned and programmed but an emergence from a complex system, just as our own intelligence emerges from neurons that fire according to complex rules. It is likely to be quite different from anything we predicted the system would do, and may not be obviously recognisable as intelligence at first.