Ctrl-Alt-Speech: Let Fly The Claudes Of War, With Casey Newton
Turning ChatGPT into the control room of a user’s digital life
I think this is spot on from Casey Newton about the vision guiding OpenAI’s recent development. It would be easy to read their developments as throwing a million things at the world to see what sticks (social video, online shopping, pulse, ad tech etc) but they are explicitly saying these are all part of a more or less unified vision:
OpenAI seems more likely to monetize its platform through revenue-sharing deals or auctioning off placement. Maybe you ask for help with algebra, OpenAI loops in the Coursera app, and takes a finder’s fee if you become a paid user of the latter.
To OpenAI executives, the move helps them pursue what they describe as the goal they had before they got sidetracked by ChatGPT’s success: building a highly competent assistant.
“What you’re gonna see over the next six months is an evolution of ChatGPT from an app that is really useful into something that feels a little bit more like an operating system,” Nick Turley, the head of ChatGPT, told reporters in a Q&A session on Monday. “Where you can access different services, you can access software — both the existing software that you’re used to using, but … most exciting to me, new software that has been built natively on top of ChatGPT.”
https://www.platformer.news/openai-dev-day-2025-platform-chatgpt/?ref=platformer-newsletter
What will optimisation look like for them on this model? It’s not quite user engagement in the same way as social media platforms but equally there will be an incentive structure facing the firm and a range of data-intensive methods through which to act on these incentives.
And I think he’s right there’s a huge risk of a massive data privacy scandal:
At launch, OpenAI is promising a more rigorous approach to data privacy. OpenAI will share only what it needs to with developers, executives said. (They essentially hand-waved through the details, though, so the actual mechanics will bear scrutiny.) Unlike Facebook, though, OpenAI has no friend graph to worry about — whatever might go wrong between you, ChatGPT, and a developer, it will likely not involve giving away the contact information of all of your friends.
At the same time, the AI graph may prove even riskier. ChatGPT stores many users’ most private conversations. Leaky data permissions, either intentional or accidental, could prove disastrous for users and the company. It only took one real privacy disaster to end Facebook’s platform ambitions; I can’t imagine it would take much more to end OpenAI’s.
#CaseyNewton #ChatGPT #generativeAI #openAI #platform #platformisation
The visibility of academics will be shaped through LLMs as much as social media in future
This observation by the tech journalist Casey Newton got me thinking about how LLMs are increasingly shaping the visibility of academics:
Thinking models have gotten surprisingly good at identifying potential sources — potentially academic ones. When writing about Grok last month, I wanted to talk to someone who had studied relationships between people and chatbots. ChatGPT led me to Harvard’s Center for Digital Thriving, and suggested someone to talk to, along with their email address. I wound up interviewing them for the piece. The fact that thinking models can quickly analyze the academic literature about any subject and identify prominent researchers on the subject, along with their email addresses and phone numbers, is beginning to save me a lot of Googling.
I realised early on that I was more visible in model responses (ChatGPT and Claude) than other academics of a comparable age, career stage and influence*, which I assumed was because 6000 blog posts hosted on wordpress.com were gobbled up in training. It could talk at greater length, with more accuracy, about my work than it could about other academics because my online visibility translated into model visibility.
I suspect this also means I’m more prone to being suggested by the model for a topical discussion in the way that Casey points to when looking for experts to interview, though I’m unsure how to go about establishing this. The value of a long-term blog also means that I figure prominently as a source for ChatGPT and software like Perplexity. Interestingly, I don’t recall ever seeing a single referral from Claude. In the last year I’ve had more referrals to this blog from ChatGPT than I have from Facebook or Bluesky, though LinkedIn drives more traffic than any of them.
In other words, there’s a complex relationship between online visibility and model visibility. Given that online visibility was the key driver which led social media to be institutionalised into higher education in the UK, this is very significant for academic careers, even if it takes a long time to consolidate into a widely recognised incentive structure.
What other factors lead to increased model visibility? Ultimately this is a matter of visibility within the training data, but the patterns of visibility produced by this are challenging to conceptualise. What are the positive and negative outcomes of increased model visibility? Casey illustrates one in terms of visibility to journalists but there are many others.
*I did this in a very impressionistic way but it would be interesting to do this as a robust quantitative exercise.
This is an interesting overview of the rapidly developing field of SEO for LLMs: https://www.seerinteractive.com/insights/how-to-get-your-brand-in-chatgpts-training-data
#CaseyNewton #GenerativeAIForAcademics #higherEducation #SocialMedia #socialMediaForAcademics #trainingData #visibility #wordpress
"[…] and the company's rivals, including Google and OpenAI, seem to have a much stronger idea of what they are doing. […]"
I certainly believe that too.
"It's not clear why #CaseyNewton believes a large userbase means it'll sustain a large and profitable industry. Even though #OpenAI’s #ChatGPT is one of the largest consumer products on the Internet, it is burning through billions with no profit anytime soon.
…concerns raised about #generativeAI that touch on burn rate, energy input, scaling models, training data seem to be summarily dismissed by Newton as whining about #AI being “fake.”"
https://thetechbubble.substack.com/p/the-phony-comforts-of-useful-idiots
"Requiring that AI models add digital watermarks to their output disclosing their provenance."
https://www.platformer.news/thursday-newsletter-3/
Oooh. That's clever. Because it can apply to your home and garden (not actually) "Open Source" versions too.
"... some huge part of... [the SubStack founders] actually enjoys being part of a Culture War, and they like fighting it."
#CaseyNewton, 2024
Seldom have I heard a better example of the principle known in psychology as "projection".
It's people like Casey who enjoy fighting Culture Wars. So much so that they keep trying to drag SubStack into them.
(1/?)
"In particular, it will turn to the development of artificial intelligence. The rise of AI has already been the story during the relatively calm administration of Joe Biden. But with Trump likely offstage for good, reporters like us will have more room to explore the race to build super-intelligence."
#CaseyNewton, 2024
https://www.platformer.news/leaving-substack-platformer-year-four/
Calling #MOLE training "AI" is investor bait, not a serious attempt to "build super-intelligence". Has Casey really not noticed that yet?
"... Microsoft, Alphabet, Amazon and Meta all increased expenditures on AI dramatically in the first half of this year, to a collective $106 billion."
#CaseyNewton, 2024
https://www.platformer.news/ai-bubble-tech-stock-decline/
(1/?)