Hey, I work in LLM research (computational linguistics).
People should understand that we researchers are not the ones marketing the technology as a replacement for workers, let alone artists, or as search engines, friends, therapists, or other things it is not.
Often, we are the ones who understand the technology well enough to see why current trends are misguided or dangerous.
One of the most outspoken anti-AI-hype activists is Emily Bender ( @emilymbender ), a computational linguist who works in the field herself.
Research is not the enemy. The technology isn't, either. It's the marketers, the venture capitalists, the investors, the managers, the grifters.
@lianna @anolandria @emilymbender It's being weaponized against people today. It should be pulled out of the workplace, stopped from generating porn from photos of victims and deepfakes of anyone, and stopped from drinking half the fresh water in a desert while consuming multiple power plants' worth of energy.
Literally the only valid use cases today are in providing assistance for the disabled.
More research isn't what's needed now. More responsibility is. An unfathomable amount more.
@targetdrone @anolandria @emilymbender Cameras should not be banned because upskirting exists. The GPS network should not be abolished because stalkers can use it to track their victims. The internet should not be shut down because it makes it easier for propaganda to spread.
Your enemy is not the technology; it is the people abusing it. The office worker letting ChatGPT write a generic status-report e-mail to her supervisor because she struggles with formal writing is not your enemy.
If you think "research" translates directly to "people improving the technology's capabilities", you're wrong. AI research includes ethics, linguistics, and sociology. If you want more "responsibility" for AI misuse, that requires research. Or do you want governments to ban things without concrete data? Where do you think the data on machine-learning misuse comes from, if not from research?