Meta. OpenAI. Google.

Your AI chatbot is not *hallucinating*.

It's bullshitting.

It's bullshitting, because that's what you designed it to do. You designed it to generate seemingly authoritative text "with a blatant disregard for truth and logical coherence," i.e., to bullshit.

@ct_bergstrom perfect for writing hollow but good-sounding corporate PR flack pieces. Or the Trader Joe's Fearless Flyer product descriptions.
@bikerglen @ct_bergstrom Our university is going through a "strategic planning process" so I asked it to write up a strategic plan and guided it along with a few of our institution's general parameters. The output text read like I expect the eventual true product will read... and like every other university strategic plan that I've read before with a few institution-specific spice words. (We could save a lot of money on consultant fees this way.)
@dezene @bikerglen @ct_bergstrom Maybe the insight is that you don't need the plan document at all.
@tob @bikerglen @ct_bergstrom The only benefit that I can see from these exercises is that they keep senior administrators and their small empires busy, and not inventing new forms for faculty and staff to fill out.
@dezene @bikerglen @ct_bergstrom You definitely don't want them automating their processes using AI.
@dezene @tob @bikerglen @ct_bergstrom If anything good comes out of this, it may be a better understanding of the mechanics of BS production, human BS included. And for some folks, developing a better eye for spotting it.
@martinvermeer @dezene @tob @bikerglen @ct_bergstrom Is there an algorithm for measuring the extent to which a human text, say a campus strategic plan, *deviates* from a series of AI attempts? Then we might grade ourselves on originality.
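
A toy sketch of what such an "originality grade" might look like, assuming a very crude proxy: compare the human text against each AI draft using bag-of-words cosine similarity, and score originality as one minus the closest match. The function names (`originality`, `_bow`, `_cosine`) are invented for illustration; a real measure would need embeddings or something far more semantic than word counts.

```python
import math
import re
from collections import Counter

def _bow(text):
    # Crude bag-of-words vector: lowercased word counts.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def _cosine(a, b):
    # Cosine similarity between two Counter vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def originality(human_text, ai_drafts):
    # 1.0 = shares no vocabulary with any AI draft; 0.0 = identical word mix.
    return 1.0 - max(_cosine(_bow(human_text), _bow(d)) for d in ai_drafts)
```

A strategic plan that scores near zero against a handful of LLM attempts would, on this (admittedly shallow) measure, be indistinguishable boilerplate.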

@jimproctor @martinvermeer @dezene @bikerglen @ct_bergstrom IMO, this is the (awkward) way forward for these LLM-based attempts at "AI". Take the LLM generated content, run it through some sort of meta-analysis, and then present that to the user.

I mean, in a way they're already doing that to suppress the nazi stuff.

In your case, this hypothetical tool would be capable of doing a meta-analysis of your document and the LLM-generated document on the same subject.

@jimproctor @martinvermeer @dezene @bikerglen @ct_bergstrom Of course, you can't ask the LLM to do the meta-analysis. So we're relying on an as-yet non-existent technology to make the LLM useful in a practical way.

At which point, why are we using the LLM?

@tob @jimproctor @martinvermeer @bikerglen @ct_bergstrom

Really all I want is an AI that could retrieve, fill, and send forms to the next approval stage with a few simple typed or spoken commands.