Last week I asked you all to tell me how to cover AI. Lots of you had ideas — and here are seven principles you asked me to follow https://www.platformer.news/p/how-you-want-me-to-cover-artificial
How you want me to cover artificial intelligence

Seven principles for journalism in the age of AI

Platformer
@caseynewton Can you please add: stop covering Google PR so much and focus on covering when Google makes stuff real.
@darryl_ramm what do you mean by 'real'? Bard is available in 180 countries
@darryl_ramm @caseynewton real, like you mean transformers? like gpt?

@caseynewton

The bottom line is that one cannot believe everything they hear. Or read. Or see.

Nothing has really changed.

One must trust the source.

@caseynewton Casey Newton, the most responsible AI reporter! I really love seeing these principles put out into the open.
@caseynewton One more. We look at AI from an American perspective. There are national security implications, with multiple nations — including adversaries — pursuing AI independently. Step hard on the brakes, and there will be transformative security and economic damage. There’s an arms race between US/Western private-sector companies and public sectors abroad. AI investments by unfriendly countries should be instructive about the tradeoffs we accept. International competitors merit attention.
@caseynewton I missed your initial call for feedback, apparently, but the thing that bothers me the most in discussions of this stuff (especially LLMs) is when people anthropomorphize these things. By saying things like it "hallucinates" or "understands" or even "lies" you imply that there is a mind capable of doing those things. Today, these tools don't really do those things as we understand them when applied to people.

@caseynewton

Thanks, that all sounds really sensible!

@caseynewton Good stuff!

One thing I always point out as crucial context is that big tech learned from GDPR that regulatory safeguards can be good for their business. The incumbents push for safety after they had the time to build it, deepening the moat.

Having these execs before Congress is a signal about their perception of AI risk... but there's noise coming from a vested interest in a more bureaucratic environment.

@caseynewton In a world full of AI, as a journalist, you could/should focus on identity tech. Sam Altman is investing in Worldcoin — it seems he controls both the problem and the solution. Worldcoin comes with an eyeball identity scanner, I think.
@caseynewton You have some smart readers! This is good stuff.
@caseynewton Thanks for writing this. I do think more attention needs to be paid to the leaked "we have no moat" article, explaining why we aren't going to have an OpenAI/Google monopoly on capable LLMs for long. Makes me wonder if one purpose of OpenAI asking for regulation is to lock in their position and hamper open source models. See https://simonwillison.net/2023/May/4/no-moat/
Leaked Google document: “We Have No Moat, And Neither Does OpenAI”

SemiAnalysis published something of a bombshell leaked document this morning: Google “We Have No Moat, And Neither Does OpenAI”. The source of the document is vague: The text below is …

@not2b That seems pretty clearly the intent of the calls for regulation coming from the Big Tech corporations. They don't want to be regulated; they want to write regulations that benefit them and harm any competition. I suspect they'd prefer the regulating not be done by the FTC, which has people who understand LLMs and consumer protection. @caseynewton
@caseynewton this is great, thank you for writing it!
@caseynewton I don't even fully endorse calling any of this trash AI. Defining intelligence and the basic structures of how we think are less-than-solved problems. Have any of the people running these gimmicks demonstrated even a current standard of comprehension in this field, much less an impressive new level?

I think some of your most interesting reporting was the Cruise trip in SF.
They would have mapped that micro-region with sub-millimeter radar, and they could/should have hard-coded the road boundaries (e.g. yellow lines, median) — and yet the car still couldn’t stay in the lane!
We all underestimate the final 20%, from an 80% test to a working product.
What’s the ‘stay-in-lane’ basics equivalent in AI? Before we talk about job impacts, weird philosophical questions, etc., can the AI get the basics right?