Jordan B. L. Smith

74 Followers
98 Following
305 Posts
Researcher in music and AI/ML. Hoping to understand why music is less like other sound and more like a crossword puzzle.

Clickbait Yogi-ism! (Compare with: "Nobody goes there anymore — it's too crowded.")

FYI the town in question is St Leonard's, just next door to Hastings.

According to #spotify2023wrapped, my top five songs of the year include two by Unknown Mortal Orchestra: "Layla" and "Multi-Love", and three electronic tracks. But I'm sure I spent far longer listening to this:

https://www.tiktok.com/@zeo_choons/video/7281101236194020615

[Embedded video: "Zeo on TikTok — Replying to @cinerd3lla duet with @LIZZIE 🦸🏻‍♀️ #harrypotterchessscene #harrypotterremix #harrypotterclub #houseremix #techhousemusic #techhousedj"]

The "Group pics, perfected" feature, in the new Pixel 8 product page, is dystopian, right? Sure, AI-based photo retouching is a useful feature, but I find it horrifying to imagine looking at someone's family photo album to find identical, polite smiles on every face in every picture. It's more subtle than an Aphex Twin video but no less disturbing, and could/should form the basis of an upcoming A24 film.

@dfeldman’s tweet — from June 2022 — left me curious whether the current version of ChatGPT (3.5) would also fail the syllogism. It did! So did Anthropic's Claude.

So, can text-completion systems solve logic puzzles? Well, solving a syllogism is a prerequisite to solving a logic puzzle. ChatGPT cannot solve a syllogism. Therefore... 🤗

Footnote [2] is more puzzling. To support the claim that LLMs exhibit the “qualitatively new behaviour [of] solving logic puzzles”, the author cites a tweet by @dfeldman — which shows an LLM *failing* to do logic. The tweet asks: "Can GPT-3 solve simple logic puzzles?" and shows a series of GPT-3 prompt completions, beginning with:
Prompt: "Q: Alice is shorter than Bob. Bob is taller than Charlie. Is Alice shorter than Charlie? A:"
Completion: "Yes, Alice is shorter than Charlie."
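
A quick way to see why that completion is wrong: the two premises don't determine Alice's height relative to Charlie's. A minimal sketch in Python (the numeric heights are just illustrative stand-ins):

```python
from itertools import permutations

# Premises from the puzzle: Alice < Bob and Bob > Charlie.
# Enumerate every strict ordering of three distinct heights and
# record whether "Alice is shorter than Charlie" holds in each.
outcomes = set()
for alice, bob, charlie in permutations([1, 2, 3]):
    if alice < bob and bob > charlie:
        outcomes.add(alice < charlie)

print(outcomes)  # {False, True}: the premises leave the answer open
```

So the only defensible answer is "it cannot be determined"; GPT-3's confident "Yes" is a guess, not a deduction.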

First, here is the blog post: https://thegradient.pub/othello/

Footnote [1] is a tweet in praise of GitHub Copilot, which was trained on a large database of code. When Copilot writes code, this is not a “qualitatively new behaviour”; this is a model doing what it has been trained to do.

[Link preview: "Large Language Model: world models or surface statistics?" — The Gradient]
Another way that ChatGPT impresses is that, even when it makes mistakes, it can be prompted to “check its work”, and often seems to do so correctly. This can even make it seem “self-reflexive”! But it can just as easily be prompted to revise replies that were already correct and introduce mistakes. (Or, in this case, fix the wrong thing!)
When ChatGPT succeeds, it can seem uncanny, and seduce us into viewing it as an interface to a general AI that has vast knowledge about the world, even if it’s only imperfectly accessible. Failures like this remind us that it doesn’t “have” knowledge. It doesn’t “know” that Paris is the capital of France; it has just often seen those words together. Similarly, it doesn’t even “know” that “askew” contains the letter ‘w’; it might seem to, but gives away the game when it tries to explain why:

A conversation with ChatGPT in which I ask it to write a rap about spaghetti without using the letter 'e'.

It gets off to a good start, but then makes careless mistakes — and cheats!

When asked to check its work, it apologizes, but misdiagnoses the issue.

When asked to revise, it replaces ‘stir-fry’ with ‘cook’ — swapping out a word that was already legal.

Finally I prompt it again, pointing out the first error, and it does well! But in the 2nd stanza, it gets careless (and cheats) again.
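
For contrast, the check that ChatGPT keeps fumbling is a one-liner for a program that operates on characters rather than tokens. A minimal sketch (the sample lyric fragment is invented):

```python
def words_containing(text, letter):
    """Return the words in `text` that contain `letter`, case-insensitively."""
    return [w for w in text.split() if letter in w.lower()]

# An invented lyric fragment, checked against the no-'e' constraint:
print(words_containing("A rap about spaghetti, stir-fry and sauce", "e"))
# → ['spaghetti,', 'sauce']
```

One plausible reason these checks fail is that a model trained on subword tokens never directly "sees" the individual letters.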

With Twitter, you could ask famous people simple, respectful questions, and hope for a reply. But I never got one from Mr. Steinman or Mr. Loaf, and now they've both passed and I'm afraid I'll never get an answer to this question. That's maybe as good a reason as any to give up on Bird Site. I'm done there, officially!