@anolandria A strong disconnect between prior mathematical principles of proof and rigor and the way statistical curve fitting is alleged to work now.
No, I don't wish to elaborate.
@anolandria Current "AI" is not (and will never be) actual A.I. and should be written in quotes. Facetiously and seriously.
LLM/GenAI deserves as much consideration as any other technology, but only if ethical, economic, environmental, and other as-yet-unidentified concerns are addressed.
It's not research that's needed; what we need is less assholery. Current "AI" is designed and deployed as a weapon.
@anolandria This poll isn't clear about terms.
"AI" lumps together several distinct technologies, some of which are well-founded and useful.
LLMs and "generative AI" are tremendously wasteful and destructive and are being used as ideological weapons to assault human dignity.
In the abstract, could they be used only for benign applications? Perhaps, but the resource costs are still so extreme that I doubt they'll ever be worth the cost.
Meanwhile, the ideological violence is extreme.
Hey, I work in LLM research (computational linguistics).
People should understand that we researchers are not the ones marketing the technology as a replacement for workers, let alone artists, or as search engines, friends, therapists, or other tools it is not.
Often enough, we are the ones who know enough about it to understand why current trends are wrong or dangerous.
One of the most outspoken anti-AI-hype activists is Emily Bender ( @emilymbender ), a computational linguist who is working in the field herself.
Research is not the enemy. The technology isn't, either. It's the marketers, the venture capitalists, the investors, the managers, the grifters.
@anolandria @emilymbender If you "abandon" LLM research, what does that even mean?
Cut state funding to everything that's LLM related? Then you also cut funding to everything researching it critically. You cut funding to research about AI ethics, about AI biases. You also likely cut funding to other linguistic applications, like accessibility software or normal machine learning like voice recognition or digital assistants.
Ban it by law? Then you push the technology into illegal, unregulated spaces or even other countries.
Science is never the enemy. The poll results so far scare me.
@lianna @anolandria @emilymbender It's being weaponized against people today. It should be pulled out of the workplace, out of generating porn from victims' photos and deepfakes of anyone, and should be stopped from drinking half the fresh water in a desert and eating multiple power plants' worth of energy.
Literally the only valid use cases today are in providing assistance for the disabled.
More research isn't what's needed now. More responsibility is. An unfathomable amount more.
@targetdrone @anolandria @emilymbender All technology can be abused. The solution is not to abolish technology.
The instances you're talking about are not even related to LLMs; they're machine learning in general, the same technology powering medical research and applications, for example.
@targetdrone @anolandria @emilymbender Cameras should not be banned because upskirting exists. The GPS network should not be abolished because stalkers can use it to track their victims. The internet should not be shut down because it makes it easier for propaganda to spread.
Your enemy is not technology, it is the people abusing it. The office worker letting ChatGPT write a generic status report e-mail to her supervisor because she struggles with formal writing is not your enemy.
If you think "research" is a straight translation for "people improving the technology's capabilities", you're wrong. AI research includes ethics, linguistics, sociology. If you want more "responsibility" for AI misuse, that requires research. Or do you want governments to ban things without concrete data? Where do you think the data for machine learning misuse comes from if not from research?
@anolandria Current AI is unethical and I don't believe any more research can fix that. Completely different things called AI can be ok.
This is the "Risch algorithm is good AI" view. (With useful incomplete implementations.)
I want tools that provide verifiable, reliable solutions to specific problems, that people can understand, and that run on reasonable computers when shared as complete and corresponding source code.
@anolandria I think it's very important to make a distinction between machine-learning AI and LLMs/GenAI. The machine learning stuff has shown real potential and real-world impact in health and science. As far as I know, those models have been trained with the knowledge and consent of the subjects.
The LLM stuff is a people-pleaser fed on every work imaginable without any regard for law, ethics, or safety rails, it seems.
So training an AI to better detect cancer, for instance? Yeah, that's current and ethical.
Asking ChatGPT to think up a story about a duck and his nieces? That's most likely not going to end up ethical, to me.
@anolandria The topic is way too complex to summarize in a social media post. The tech behind LLMs is interesting and in some cases very useful. But the environment around them has been very toxic and harmful for so many reasons.

I feel as if there are almost two separate worlds on the topic of LLMs. On one side we have ChatGPT, spam, awful generated code, a really absurd economic bubble that is about to burst, manipulation of the masses, etc. On the other side we have local LMs, most of which you can run on an average PC with a small open source program, and which can do some useful things like translations, or just serve as pure entertainment, without sending private data to any company or being tracked.
Would the world be better now if LLMs never existed? Yes, no doubt. But they do exist and we have to deal with that reality.
In general, I don't think it's unethical to use open-weights models that already exist. What's unethical is to give money to OpenAI or any other company that is contributing to the bubble, using mind-boggling amounts of energy in its datacenters, saturating our web servers with requests, manipulating the public, etc., etc.
Then there are the future risks of AI development. One of the biggest is well explained in Rational Animations' latest video [0], narrated by AI safety researcher Robert Miles [1].
Coming back to my "two separate worlds" concept, I'm frustrated by my attempts to use small language models without having to interact with the BS capitalist hellhole of "AI". I talk a bit more about this here [2].
[0] https://www.youtube.com/watch?v=uiPhOk1t3GU
[1] https://www.youtube.com/@RobertMilesAI
[2] https://valenciapa.ws/@starsider/115060692481293281
@anolandria Should the training of the technology and the technology itself be separate conversations? Or separate from conversations around the tech-bro hype and the greed-based decision-making that forces it into everything, regardless of need, to generate profit?
The core tech is interesting for sure, but there are so many bad/unethical/greedy decisions around it that it's hard to separate them.
It is DEFINITELY a Pandora's box, though. Any good is being vastly overshadowed by BS and greed.