Here in one paper is the probable reason why Apple abruptly pulled out of OpenAI's current funding round a week ago, after previously being expected to buy at least a billion bucks of equity.

(AI is peripheral to Apple's business model, and not tarnishing their brand in the long term matters more to them than jumping on a passing fad.)
https://appdot.net/@jgordon/113294630427550275

John Gordon (@[email protected])

“current LLMs cannot perform genuine logical reasoning; they replicate reasoning steps from their training data” https://machinelearning.apple.com/research/gsm-symbolic “Adding a single clause that seems relevant to the question causes significant performance drops (up to 65%) across all state-of-the-art models” Reassuring! Best news in months. #jgshare
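The quoted finding comes from the GSM-Symbolic setup: take a grade-school math problem, templatize its names and numbers, and optionally splice in a relevant-sounding but irrelevant clause (the "GSM-NoOp" variant) that should not change the answer — then watch model accuracy drop. A toy illustration of that perturbation, with made-up names, numbers, and distractor text (not the benchmark's actual templates):

```python
# Toy sketch of GSM-Symbolic-style template perturbation.
# The template, names, and distractor sentence are illustrative inventions.
import random

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have?")

# GSM-NoOp-style distractor: sounds relevant, changes nothing.
DISTRACTOR = "{c} of the apples were slightly smaller than average. "

def make_variant(seed, with_distractor=False):
    """Generate one symbolic variant of the problem plus its ground truth."""
    rng = random.Random(seed)
    name = rng.choice(["Mia", "Omar", "Lena"])
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    question = TEMPLATE.format(name=name, a=a, b=b)
    if with_distractor:
        c = rng.randint(1, min(a, b))
        # Splice the no-op clause in before the final question sentence.
        head, _, tail = question.rpartition("How many")
        question = head + DISTRACTOR.format(c=c) + "How many" + tail
    return question, a + b  # the correct answer is unchanged either way

q_plain, ans_plain = make_variant(0)
q_noop, ans_noop = make_variant(0, with_distractor=True)
assert ans_plain == ans_noop  # the distractor never alters the ground truth
```

A genuinely reasoning system scores the same on both variants; the paper's point is that current LLMs often don't.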

@cstross though they still went ahead with adding it to their products. Curious to see how that goes

@fl0_id @cstross that may in part be a campaign to gather information on possible interaction workflows.

They're maybe also prototyping API designs for piping semantic information between different applications.

@Sevoris @cstross but that only makes sense if you're still adding it? My point was: if you're trying to save face against AI garbage, how does that work when you're still adding it to your products? Your points are more about the details, and only relevant if you care about / add it.

@fl0_id @cstross that depends on how you think about the interaction workflows around people manipulating and moving information between applications.

I mean yes it's a poisoned data source, and it's definitely a compromise, but it's not like "I want this from my digital assistant" isn't data that can be used in some shape or form.

@Sevoris @fl0_id @cstross

> I want [this] from my digital assistant

is 1000% valuable product-direction research data

Current systems are basically outsourcing to call-center-style labeling factories — with a bit of ML laundering in between — and this fully pre-dates the LLM boom (Facebook M, and Siri when it was new)

Recent LLM innovation has let them do more of the ML laundering and time shifting by throwing compute at it

But they're still mostly emulating what a call-center drone can do

@Sevoris @fl0_id @cstross

That "emulate form without doing actual reasoning" is so effective at charming the executive class into throwing money at it…

…is kinda damning of the executive class' own "genius" intellectual self-regard

They themselves have used prose-based discriminative pattern-matching (& privilege) to get where they are, not actual reasoning

"That guy fast-talks like he knows what he's talking about, put him in charge of a division" is no way to run a system of governance

@trochee @fl0_id @cstross yeah, this whole thing is certainly part of the pathology, though I wouldn't necessarily accuse Apple's engineering team of being infected with it outright.

(that said, the user groups Apple targets may well be... the money for Apple products is certainly correlated with fast-talking people...)

@Sevoris

I used to joke — until it became Not Really That Funny — that all the digital assistants were solving Silicon Valley techbro problems caused by the Silicon Valley commute

— going through email on the drive to work
— someone to talk to/give you therapy on the drive to work
— someone to write your memos on the drive to work
— driving directions that avoid traffic jams
— Uber; the fantasy of self-driving cars; imagining public transit as private

Not much has changed


@trochee @Sevoris @fl0_id @cstross Something tells me the fast-talking guy in the last paragraph was *really* involved in debating societies!