"When people search for information online today, they are presented with an array of options, and they can judge for themselves which results are reliable. A chat #AI like #ChatGPT removes that “human assessment” layer and forces people to take results at face value, says Chirag Shah."

https://www.technologyreview.com/2023/01/17/1067014/heres-how-microsoft-could-use-chatgpt/

Hold on:

1/n

Here’s how Microsoft could use ChatGPT

Plus: Roomba testers feel misled after intimate images ended up on Facebook.

MIT Technology Review

"Language models could be integrated into Word to make it easier for people to summarize reports, write proposals, or generate ideas, Shah says. They could also give email programs and Word better autocomplete tools, he adds. And it’s not just all word-based. Microsoft has already said it will use OpenAI’s text-to-image generator DALL-E to create images for PowerPoint presentations too."

2/n

**But here’s the important question people aren’t asking enough: Is this a future we really want?**

"Adopting these technologies too blindly and automating our communications and creative ideas could cause humans to lose agency to machines. And there is a risk of “regression to the meh,” where our personality is sucked out of our messages, says [Melanie] Mitchell."

3/n

“The bots will be writing emails to the bots, and the bots will be responding to other bots,” she says. “That doesn’t sound like a great world to me.”

Concerns:

1. Would they manipulate us to buy stuff or act in a certain way?

2. People will still have to edit and double-check the accuracy of AI-generated content.

3. People may blindly trust it, which is a known problem with new technologies.

“We'll all be the beta testers for these things,” Mitchell says.

4/4

https://www.technologyreview.com/2023/01/17/1067014/heres-how-microsoft-could-use-chatgpt/


@arjen re: point 2: I fear/expect that the “brave” ones will simply go with it, and that double-checking will be forced onto / outsourced to the other party.

E.g., if an error occurs at the other end, I may face the following:

1) spending increased effort on validation

2) if it causes me damage, I'll either absorb the cost or try to extract compensation from the other side

3) if it gives me an advantage, I need to consider whether it's fair to keep it (say, as compensation for 1+2, …)