Nice piece in Vanity Fair, drawing together some industry perspectives on AI, recent developments around ChatGPT, and where we might go from here.

What it does well is point to concerns about plagiarism, accuracy, and the quality of information—all issues that come up in the context of large language models and their increasing use in news work, and all topics we should be thinking about (as some people already are).

https://www.vanityfair.com/news/2023/01/chatgpt-journalism-ai-media

ChatGPT’s Mind-Boggling, Possibly Dystopian Impact on the Media World

Is artificial intelligence “useful for journalism” or a “misinformation superspreader”? With CNET mired in controversy, Jonah Peretti promising “endless opportunities,” and Steven Brill warning of AI’s weaponization, the industry is only just coming to grips with this jaw-dropping technology.

What it misses is that AI is already in use in news organisations around the world...just in ways that are far less flashy to the outside observer.

One reason we see this hype around ChatGPT and generative AI is that these models have become very good at communicating with us and at mimicking activities once thought to be the exclusive domain of humans. They still do not "understand" and likely never will, but they are good at fooling us into believing that they do. Hence the hype.

However, AI is used by news organisations in countless ways—detecting stories, improving production processes, and targeting distribution—which, taken together, already make a difference to journalism.

It just happens in more places and in ways that seem less worrisome to most of us (even though, for example, the use of AI in recommendation is potentially just as problematic as its use in writing texts, if not more so).

@FelixSimon Can you please explain the point about recommendation? Assuming we mean other texts from the same newspaper, as opposed to random posts from junk sites/Facebook?
@Dubikan Yes, that’s exactly what I meant :) AI (well, machine learning really) as applied to the recommendation of news orgs’ own content through their own channels/products.
@FelixSimon yeah, why is that dangerous?
@Dubikan Because we have conflicting interests at work. The business case is to give readers more of what they want. The democratic case is that readers should have a balanced information diet. Then there is the argument about autonomy (should readers receive automated recommendations against their will?). Ideally, all this is kept in balance. But the risk is that the business case wins and the rest is treated as an afterthought.
@FelixSimon As long as something was deemed worthy of publication, I don't see how recommendations based on one's interests are bad. They could even allow people to break out of episodic reporting and gain greater context, or be better exposed to follow-ups on past stories they read. I'm far more worried about how media has already pivoted to 100% clickbait headlines and the impact of that on autonomy.
@Dubikan Well, recommendations are "bad" if the systems are not designed the way you imagine (and whether they are comes down to specific design choices). As for clickbait headlines, I am with you!
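
To make the point about design choices concrete, here is a minimal sketch of how a news recommender might trade off predicted engagement against the diversity of a reader's information diet. It is purely illustrative: the Article class, the topics, the engagement scores, and the diversity_weight parameter are all invented for this example, not taken from any real system.

```python
# Illustrative sketch: greedy re-ranking of news items that blends
# predicted engagement with a penalty for topics already shown.
# All names, scores, and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topic: str
    engagement: float  # predicted click-through from some upstream model

def rerank(articles, diversity_weight=0.4):
    """Pick items one by one; each pick scores engagement minus a
    penalty proportional to how often its topic was already chosen,
    so repeated topics gradually lose ground."""
    ranked, topic_counts = [], {}
    pool = list(articles)
    while pool:
        def blended(a):
            penalty = topic_counts.get(a.topic, 0)
            return a.engagement - diversity_weight * penalty
        best = max(pool, key=blended)
        pool.remove(best)
        ranked.append(best)
        topic_counts[best.topic] = topic_counts.get(best.topic, 0) + 1
    return ranked

if __name__ == "__main__":
    candidates = [
        Article("Transfer rumours latest", "sport", 0.90),
        Article("More transfer rumours", "sport", 0.85),
        Article("Budget vote tonight", "politics", 0.60),
        Article("Heatwave and the grid", "climate", 0.50),
    ]
    for a in rerank(candidates, diversity_weight=0.4):
        print(f"{a.topic:8s} {a.title}")
```

Setting diversity_weight to 0 reproduces the pure business case (sport, sport, politics, climate); at 0.4 the ranking interleaves topics (sport, politics, climate, sport). The point is not this particular formula but that the balance between engagement and a broader information diet is an explicit design choice someone has to make.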