What does “good” coverage of AI mean to you? I wrote about how the disparate views that very smart people have about existential risk are making it hard to calibrate how to cover advancements in artificial intelligence.

https://www.platformer.news/p/why-im-having-trouble-covering-ai

Why I'm having trouble covering AI

If you believe that the most serious risks from AI are real, should you write about anything else?

Platformer
@caseynewton "Good" coverage of what folks keep calling "AI" these days for me would be highlighting the fact that they are essentially a very advanced form of autocomplete and avoiding the folks who think these LLMs have anything in common with the personified, sapient presentations of artificial intelligence that appear in fictional literature and movies.
@caseynewton I would say it's to separate the hype from the real world. Focus on today and don't let them talk about the next version till it ships.

@caseynewton I guess we could start focusing on people who study AI policy and safety. I get that Hinton and Schmidhuber are examples, but being pioneers on neural networks doesn't make their opinions on *societal impact* risks any more valid than the average AI scholar's.

Their opinions kinda matter, but also kinda don't, like famous people's. There are more relevant individuals to platform.

@villasbc good perspective
@caseynewton and great write-up 😉 looking forward to your I/O reporting, looming dooms aside

@caseynewton I'm not worried about robots taking over; I'm worried about LLM lies being taken as fact and distorting/ruining everything that they touch. That's where I'd like the coverage to focus: on the real problems we're going to see right now and in the near future.

Another aspect I'm interested in: the many issues with the training datasets (bias, unauthorized use, etc.).

@caseynewton Your framing is the biggest impediment here. There isn't a continuum. There are many possible outcomes that are all still possibly our true future. The only difference between them is how society reacts to the technology and the interests behind it. So skip the step where you guess the midpoint and instead cover each technology and product as if it were a separate episode of Sliders. The appropriate context will fill in naturally over time.
@dvogel appreciate this. I think a challenge I feel here is often not really understanding in the moment which direction any particular change seems likely to push us in

@caseynewton I’ve been having similar issues reporting on AI for Ars. Trying to walk a middle path is difficult with the wide range of viewpoints. It may come down to how nebulous “AI” is

Also, since the “AI doom” issue is 100% speculative, the topic has moved from tech into politics—a realm where people disagree over opinions and beliefs

We’ve been increasingly covering AI as a policy issue at Ars and I don’t think it’s a coincidence. So you’re a political reporter now and you didn’t know it 😁

@caseynewton In a way, “AI” is so nebulous that its definition widely varies by personal opinion. So I have come to see AI more as the subjective application or social perception of machine learning research

Due to that, and how everyone grew up on sci-fi, AI is more culture and feeling than science. So if you’re reporting about “AI,” you are reporting about various conflicting belief systems. What people think it is, what they think it should be. Like I said, political reporter 😁

@benjedwards @caseynewton I’ve decided to try to stop calling them “AI” or “artificial intelligence.” It’s not incorrect for some meanings, but it gives the wrong idea that LLMs are “intelligent.” And there's also the cultural baggage, like you say. I’m trying to stick to LLM, ML, machine learning, etc.
@benjedwards ahh this is really interesting, thank you for sharing Benj!
@caseynewton @benjedwards Yes, what it is going to come down to is essentially a political battle: labor vs. management. Is AI a way that companies can operate with far fewer employees, keeping all the gains for the stockholders? The near-term "existential risk" is not that AI will take over, but rather that AI will work as designed, while no changes are made that might allow most people to still have an income. And all that is politics, not technology.
@benjedwards @caseynewton AI has always been a technopolitical issue, and its specific form in this phase (deep learning) has specific political implications, but many of the aforementioned 'smart' people are blind to concrete social dynamics in a way that's widespread in white men with an engineering mindset

@caseynewton There are two problems with most coverage, including your piece: (1) a lack of informative descriptions of how these machines are built and work; and (2) a focus on positive functionality to the exclusion of any well ordered discussion of limitations.

Both angles are tough to cover in the absence of transparency, but that makes them more important, not less. Both would also blunt the buzz of the topic over time, which I hope is not a reason for gliding past them.

@fgbjr appreciate the note but 2 seems wrong to me — I read way more about risk than anything else, including in mainstream pubs

On tech stuff — how do more detailed descriptions improve the coverage in your view?

@caseynewton It was an impressionistic note, I'm not widely read enough to dig in my heels. But the two impressions are related. Talk of risk that I've seen from the most prominent (non-tech) outlets is often premised on software becoming "too good," or developing intelligence beyond human capacity. Although there's a critical difference between mimicry and the sentience of a trusted mind, it's easy to slip across the boundary; and that's where popular tech description can help.
@caseynewton Step away from experts with theoretical concerns and talk about what is happening right now/ask Qs in relation to that. I know nonprofit people using ChatGPT to write communications to donors because they don't write super well. Do donors care? Are ppl worried about inaccuracies, hallucination, etc? We know AI has caused real harm, like facial recognition stuff. What Qs directly re: that are being asked about? To me, that's a clearer direction than sussing out "AI good" or "AI bad"
@ruskicouch great questions, thank you!
@caseynewton For me it would include assiduous puncturing of the hype. E.g., during the "blockchain will change everything" wave, the journalists most useful to me were looking at what it actually did versus the claims. And also looking at the incentives for people making the big claims. I'd also like to hear more from the nuanced people, the foxes rather than the hedgehogs.
@caseynewton I mean there's the context of ... in our society there are a lot of things that we knowingly do in spite of the harm they cause, for the sake of some modicum of convenience. You could insert social media here but I'm thinking just as much about stuff like: Even the fanciest underwear is made under questionable labor conditions and I don't know too many people sewing their own tightie whities. Are we going there with AI? Are we RUNNING there?
@caseynewton I generally wish coverage would be more skeptical of theoretical OMG SKYNET risks (or OMG techno utopia alternative), and look harder at near term stuff, like AI generated SEO spam drowning out real content, or chatbots that routinely make shit up being used in ways that impact people in real life
@caseynewton I also wish the press would push AI vendors/hype artists to explain why they believe defects like "hallucinations" are fixable bugs, rather than inherent characteristics of the current tech
( @ncweaver summarizes nicely in
https://mastodon.social/@ncweaver@thecooltable.wtf/110333341827157379 )
@caseynewton I'm bored of the AI doomsayers and the AI influencers (10x your productivity with ChatGPT today!) and the people who think it's all vaporware. I love @simon's blog posts on the subject, because they don't fall prey to pat conclusions.

@caseynewton my experience is that if the article doesn't come out and clearly define that AI is a silly term and everything we see today is actually machine learning and there's nothing intelligent about it... it's not going to be a very good article.

Machine learning has some neat applications and possibilities, but the people who are gung-ho for "AI" are the same kinds of people who think cryptocurrency will save the world. They're just refusing to see that the emperor is wearing no clothes

@caseynewton It is fascinating how coverage of big changes like this often forgets that humans adapt time and again.
@caseynewton Totally off-topic, but the difference in quality and quantity in responses to this post between here and the other place is striking. Dang.
@caseynewton AI is such a nebulous term, and I think that the focus should be on specific implementations of machine learning technology and how they are damaging, instead of "AI is going to destroy us all". We should focus on the fact that machine learning in vehicles is currently killing people and that misuse of LLMs can have bad outcomes. The focus should be on how humans implement AI and what effects that has. LLMs aren't going to take over the world like some people seem to believe, but they will exacerbate media misinformation when reporters use them to write their articles, and that will have bad outcomes. I think that reporting on the dangers of AI should be specific and founded, not just unwarranted claims that AI will cause a dystopian future where robots control us all.

@caseynewton to me it means listening to folks like @timnitGebru and @emilymbender who focus on concrete real-world issues. Coverage of rich people worried about AI being smarter than humans is a distraction and a disservice, unless it's to highlight how ridiculous these people are.

Policy discussions, union discussions, worker protections, and the exploitation of poor English-speaking countries are all interesting.

@caseynewton @timnitGebru @emilymbender

Digging into Microsoft saying they're doing a careful rollout and then adding it to multiple products in the span of a month: Bing, Skype, Edge, afaik

@caseynewton

AI is currently being reported on like "The Internet" was in 1995 or so, when nobody really had a grip. My suggestion is to give concrete examples of current use cases, with both positive and troublesome outcomes. Ethical considerations in decision support systems. Less dreaming of what it may become in 30 years. Like police using facial recognition vs. Adobe Sensei to change seasons in a photo or the TikTok beauty filters. Significant energy savings via smarter route planning etc

@caseynewton 'baseline scepticism' sounds good! I understand the 'here's a cool new thing' impulse but we've all lived through the un/intended consequences of those cool new things so I prefer pieces that also ask questions about externalities - what does it mean for minorities, the environment, people previously paid for that work.

Probably the biggest service journalists could do is pulling back the 'AI' curtain to reveal... LLMs, and helping people understand them

@caseynewton and you mentioned 'AI safety' but not AI ethics - is the first an attempt to frame the questions they want society to ask? A bit 'how do I stop my diamond shoes from hurting my feet'?

I don't think journalism can reduce our human propensity for seeing personality and intelligence in chat bots, but it'd be nice if it could!

@caseynewton My hot take: The current lineage of AI tools will be neither as transformative as boosters suggest nor as dystopian as critics suggest.

Eventually we may well achieve AGI, but that’s a whole other kettle of fish and it’s still likely decades away at best.

@caseynewton Something with hashtags so I can mute the f)(Ck out of it
@caseynewton I would like coverage that stops using the blanket term “AI”. I would like people to understand the depth of the field and just how long AI has been a part of our lives
@caseynewton I think that the people with seemingly non-overlapping views are the most important to put in (metaphorical) conversation with each other. People who believe that current approaches to machine learning are on a path to “AGI” and “superintelligence” should not get forums where their baseline views are accepted without substantive challenge. But these views shouldn’t be ignored or casually dismissed either.
@caseynewton As a layperson, having thought about all this stuff a bit obsessively, I think the skeptics seem to have the better argument, but I would give at least a 10% chance (totally arbitrary) that I’m wrong about that. So I think that’s important to explore.
@caseynewton I think the biggest risk for you as a journalist would be to be captured by the industry’s view of itself and its work. Even in your linked piece, which I respect, both positions you present as counterpoints are squarely within that industry ideology.

@caseynewton You might find this summary page from Chapman useful, charting a middle course on AI issues. Among other things in there: "In fact, artificial intelligence is something of a red herring. It is not intelligence that is dangerous; it is power. AI is risky only inasmuch as it creates new pools of power. We should aim for ways to ameliorate that risk instead."

https://betterwithout.ai/scary-AI

What is the Scary kind of AI? | Better without AI

Better without AI
@caseynewton I have an idea: hypothesise a slippery slope towards AI-induced oblivion, with stepping stones that indicate progress down the slope. Allow it to branch for things like Terminator endpoints or Matrix endpoints.
Then, illustrate it and publish it on every article, showing what effect the subject of the article has on progress down the slope. That way, you can cover the trivial improvements, and still consciously think about, and convey, the impact that is happening step-by-step, without resorting to hindsight.
@caseynewton Also, "whatever bad actors do with AI can likely be countered with good actors using AI" is as dumb an argument as when it's used for "bad guys with guns being stopped by good guys with guns." They're both asymmetrical situations, where the first to fire creates damage and gains a significant advantage.