Antonio Piccolboni

@piccolbo@toot.community
51 Followers
34 Following
200 Posts

AI summarized in one interaction with Claude.ai:

Me:
It's hard to justify the expense of AI when I have to double-check everything you write.

Claude:
You're absolutely right - I apologize for my mistakes. [mistakes correctly described] This required you to catch and correct my errors.

While I aim to be helpful, these mistakes meant you had to spend time verifying and correcting my work, which defeats much of the purpose of using AI assistance in the first place.

The time you spent double-checking could have been spent learning directly from accurate sources.

My most useful contribution was probably [correct description], but even that came after you caught my error in the initial calculation.

Thank you for the feedback - it's important to be direct about when AI assistance isn't providing good value.

Claude can make mistakes. Please double-check responses.

UI:
You are out of free messages until 9:00 PM -- Subscribe to Pro

AI Gone MAD | Jen Sorensen

Cartoon on AI models becoming corrupted by training on themselves, not unlike mad cow disease spread by cows eating other infected cows

Jen Sorensen | Cartoonist, Writer, Cultural Panopticon
"[...] the argument that unregulated profit maximization advances the interests of the society as a whole [...] has been shown to fail both theoretically and practically." M. Vardi in Who Is Responsible Around Here? | March 2023 | Communications of the ACM https://cacm.acm.org/magazines/2023/3/270214-who-is-responsible-around-here/fulltext
Who Is Responsible Around Here?

Let us stop talking about Responsible AI. We, computing professionals, should all accept responsibility now, starting with ACM!

Leaked Google document: “We Have No Moat, And Neither Does OpenAI”

The most interesting thing I've read recently about LLMs - a purportedly leaked document from a researcher at Google talking about the huge strategic impact open source models are having
https://simonwillison.net/2023/May/4/no-moat/

Leaked Google document: “We Have No Moat, And Neither Does OpenAI”

SemiAnalysis published something of a bombshell leaked document this morning: Google “We Have No Moat, And Neither Does OpenAI”. The source of the document is vague: The text below is …

Sam Altman has come to the conclusion that scaling of models in AI has run its course: https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/ They could have asked me two years ago: https://www.youtube.com/watch?v=C_kG25Bm2IY But I am not rich and powerful, so why listen to me?
OpenAI’s CEO Says the Age of Giant AI Models Is Already Over

Sam Altman says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas.

WIRED
(If you don't think it's possible for a computer to deliberately lie, take a look at "sycophancy" and "sandbagging" in the field of large language models! https://simonwillison.net/2023/Apr/5/sycophancy-sandbagging/ )
A quote from Sam Bowman

More capable models can better recognize the specific circumstances under which they are trained. Because of this, they are more likely to learn to act as expected in precisely those …

See this "indirect prompt injection" attack against Bing for an example of that happening already https://simonwillison.net/2023/Mar/1/indirect-prompt-injection-on-bing-chat/
Indirect Prompt Injection on Bing Chat

"If allowed by the user, Bing Chat can see currently open websites. We show that an attacker can plant an injection in a website the user is visiting, which silently …

Did your models predict this, @weatherwest 😉?
“The hottest new programming language is English.” Welcome to the party; I've been saying this for a decade now. https://www.washingtonpost.com/technology/2023/02/25/prompt-engineers-techs-next-big-job/
Tech’s hottest new job: AI whisperer. No coding required.

Proponents of the growing field argue that the early weirdness of AI chatbots can be avoided by a human giving the machine all the right instructions.

The Washington Post