I'm increasingly of the feeling that the people who believe AI to be intelligent are the same as the people who believe computers to be magic.

Which is to say, they’re not *entirely* wrong — it’s incredible what this technology is capable of. But I don’t exactly want to hear what they think when they’ve demonstrated that they don’t understand.

Would be nice if there was a quicker way of saying “I am fascinated by this technology on a theoretical level, but would rather eat paper than look at the random output you generated using an off-the-shelf tool made by a shady megacorp”

@rutherford IaDbtToaTLbWREPtLatROyGUaOtSTMbaSm

Acronyms always simplify things.

@rutherford I was talking to an expert on this yesterday and he expressed a similar sentiment. AI researchers are now very busy addressing the criticisms, e.g. that an LLM has no sense of purpose. So they will give it a sense of purpose. Or that they are often wrong. They will make them self-correcting. But they never stop to think about the problem of not understanding the ramifications of what this thing will actually be doing when you do all that.
@wim_v12e “our search chatbot keeps getting trivia wrong... but it’s fine, for Patch 1.1 we’ll just program in objective truth”

@rutherford at work we have the occasional Slack thread where some engineers are waxing poetic about and borderline worshipping ChatGPT with weird expressions of confirmation-bias-fuelled awe lol

I don't think there's a bit of consciousness (yet?) in any of these models and any semblance of sentience is merely projected by the observer.

or does Sydney actually love me uwu

@joshavanier men will really claim Bing Search developed sentience to avoid admitting they got a crush on predictive text