The philosopher Harry Frankfurt defined bullshit as speech intended to persuade without regard for the truth. By this measure, ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce plausible text, not true statements. So using ChatGPT in its current form would be a bad idea for applications like education or answering health questions.

Despite this, there are three areas where LLMs can be extremely useful: https://aisnakeoil.substack.com/p/chatgpt-is-a-bullshit-generator-but

ChatGPT is a bullshit generator. But it can still be amazingly useful

The philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. By this measure, OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce

AI Snake Oil

Here are three kinds of tasks where @sayashk and I think ChatGPT can shine, despite its inability to discern truth in general:

1. Tasks where it’s easy for the user to check if the bot’s answer is correct, such as debugging help.

2. Tasks where truth is irrelevant, such as writing fiction.

3. Tasks for which there does in fact exist a subset of the training data that acts as a source of truth, such as language translation.
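Task type 1 can be made concrete with a small sketch. The buggy function and the "bot-suggested" fix below are hypothetical examples (not from the original post); the point is that the user never has to trust the bot, because running the code settles the question:

```python
# Sketch of task type 1 (answers that are cheap to verify), assuming a
# hypothetical debugging exchange: the user pastes a buggy function, the
# chatbot suggests a fix, and the user checks the fix by running tests.

def buggy_median(nums):
    """Original buggy version: forgets to sort before indexing."""
    n = len(nums)
    return nums[n // 2]

def suggested_median(nums):
    """Hypothetical bot-suggested fix: sort first, average middle pair."""
    s = sorted(nums)
    n = len(s)
    mid = n // 2
    if n % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

# The tests decide, not the bot's confidence.
assert suggested_median([3, 1, 2]) == 2
assert suggested_median([4, 1, 3, 2]) == 2.5
print("suggested fix passes")
```

Cheap verification is what makes the bot's unreliability tolerable here: a wrong answer costs one failed test run, not a propagated falsehood.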

This is the latest from our AI snake oil book blog where we comment on AI hype in the news and separate the AI wheat from the chaff. The book is under contract with Princeton University Press @princetonupress.

We're grateful to everyone who's subscribed, as it's helped us get great feedback as we draft the book. In previous posts, we've discussed bait-and-switch AI risk prediction tools and dissected the ways in which the media hypes AI, among other topics. https://aisnakeoil.substack.com

AI Snake Oil | Sayash Kapoor | Substack

What makes AI click, what makes it fail, and how to tell the difference. Click to read AI Snake Oil, a Substack publication with thousands of readers.

@randomwalker @sayashk
This sort of tool will be very useful for producing things like regular expressions, which are annoyingly difficult for (most) humans to remember and write correctly.
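The regex use case also fits the easy-to-verify pattern. Here is a minimal sketch with a hypothetical pattern of the kind a chatbot might draft (an ISO-8601 date matcher, not from the original thread), checked against hand-picked examples:

```python
import re

# Hypothetical chatbot-drafted pattern for ISO-8601 dates (YYYY-MM-DD).
# Whether it was written by a human or a bot, the same check applies:
# run it against known-good and known-bad inputs.
iso_date = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

assert iso_date.match("2023-01-31")
assert not iso_date.match("2023-13-01")   # month out of range
assert not iso_date.match("23-01-31")     # two-digit year
```

As with debugging help, the user can check the bot's answer far more easily than they could have produced it, which is exactly where a truth-indifferent generator is still safe to use.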
@randomwalker @sayashk One thing I haven’t seen discussed much is the model’s performance in languages other than English. It can generate text in a smaller language like Finnish too, but the level of prose is an uncanny hybrid of a third-grader and machine translation.
@randomwalker I've also found the patterns it detected during training to be useful. For example, asking #chatGPT to generate an outline for a persuasive post or presentation helps get me kickstarted. I still need to write the thing with all the relevant nuance, but seeing the common contours, or shape, of related writing that ChatGPT "learned" accelerates me past the blank page.
@randomwalker I could spend ages studying an area and becoming familiar with the rhythms, the structure, and the form. Or I could leverage the fact that #ChatGPT, having digested a vast corpus, has already come to some conclusions. I would never trust its claims. But the pattern observations it has extracted, right AND wrong? Those are some interesting writing prompts, like authors have used forever, to explore.

@randomwalker @sayashk I believe you'll find this case interesting. A Japanese tabletop role-playing game (TRPG) fan tried running a game as game master (GM) with ChatGPT as the player.
And they got a perfect game. ChatGPT understood battle rules that the GM made up on the fly during play.
ChatGPT might be useful as a kind of "human cloud."
Here is the log; sorry, the actual logs are screenshots.

https://togetter.com/li/1982049

"Running a TRPG as GM with an AI opponent!" — playing with ChatGPT on this theme, the session actually held together to the end. "Is the era of enjoying TRPGs solo coming?"

All I can say is: amazing.

Togetter
@randomwalker @sayashk And what about the first 5 results of a Google search? What's the level of bullshit in that case?