https://www.history.co.uk/article/the-real-story-of-the-chernobyl-divers
In this paper, we study how well humans can detect text generated by commercial LLMs (GPT-4o, Claude, o1). We hire annotators to read 300 non-fiction English articles, label them as either human-written or AI-generated, and provide paragraph-length explanations for their decisions. Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such "expert" annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization. Qualitative analysis of the experts' free-form explanations shows that while they rely heavily on specific lexical clues ('AI vocabulary'), they also pick up on more complex phenomena within the text (e.g., formality, originality, clarity) that are challenging to assess for automatic detectors. We release our annotated dataset and code to spur future research into both human and automated detection of AI-generated text.
I say that #SyntheticChernobyl occurred in late November 2022: the poisoning was real, first responders were the most affected, and it will take decades to learn the side effects.
I don’t think I’m wrong.
My cyberpunk pastime in Midjourney is to imagine thousands of salt-and-thorium mini-reactors powering desalination plants supporting walkable American villages with e-rickshaws. Microsoft is training language models to generate documentation to build nuclear reactors. That is the solarpunk future.
The bleeding edge of #technology #journalism is struggling to grok #SyntheticChernobyl
https://www.fastcompany.com/91117543/google-generative-ai-seo-spam
@joannastern hacked her bank, family, and doctor for a piece in the WSJ in Summer 2023.
The #BotShit will poison our information wells long before quantum computing comes online to hack all security.
The leaking, ballooning, unmitigated poisoning is why I call November 2022 the detonation of #SyntheticChernobyl.
https://spore.social/@awpeet/112181291229052728
aaaand voice-cloning technology can be used to break into bank accounts that use voice authentication 😨 https://arstechnica.com/information-technology/2024/03/openai-holds-back-wide-release-of-voice-cloning-tech-due-to-misuse-concerns/
“This has gone hand in hand with the dismantling of the journalistic apparatus, which seems to be reaching its apotheosis over the last 12 months. Not to mention the rise of #AI and the collapse of #internet searchability.”
#NarcotizedThinking #NarcotizingBlanket #journalism #SyntheticChernobyl
In the era of “AI”, high-quality user-generated content (i.e. #UGC) becomes as valuable as #PreWarSteel.