How do we wake up regulators about the risks of the upcoming 'conversational' search engine war?
I have, like many in NLP worldwide over the last two months, talked to many journalists and academics from other fields about #ChatGPT. Many of us warned right away (e.g., [1]) that commercial interests & competition in big tech would lead to this technology being rolled out prematurely, without proper independent risk assessment. Is anyone doing anything about it?
[1] https://www.volkskrant.nl/nieuws-achtergrond/een-moordscene-a-la-nicci-french-gemaakt-met-kunstmatige-intelligentie-en-meer-iedereen-is-verbijsterd-over-de-kwaliteit~b6e6df4e/

A murder scene à la Nicci French made with artificial intelligence, and more: 'Everyone is stunned by the quality'
Drafting a well-constructed marketing plan, writing a newspaper column, or improving existing computer code: the language program ChatGPT amaz...
de Volkskrant

I worry, however, that many don't take those risks very seriously: a bit of plagiarism, some misinformation, a bit of misogyny -- nothing new, right? That's a mistake when we talk about search engines: they are the entry point to the internet. When Facebook, another key entry point, changed its policies several times over the last decade, this had massive direct effects (e.g., on internet traffic to newspapers) & indirect ones. When Bing/Perplexity/You & others now integrate search with
#ChatGPT-like techniques & force Google to do the same, we don't know the consequences, but accidents are bound to happen: companies go bankrupt, crucial information fails to reach key people, while misinformation does. It's as if a company controlling a major highway suddenly redirected all traffic onto a secret road; surely governments would want a say? I'm no expert on regulation, but I wish regulators would hit a pause button & create an opportunity for proper independent auditing of this technology.