I'm genuinely curious what you think about current AI/LLM. I see a lot of justified frustration towards AI, but I don't often see the details of people's thoughts on the matter of AI generally. Shares would be appreciated, and I'd love to know if you have thoughts beyond the four options (polls are limited) ❤️
AI research should be abandoned entirely.
17.1%
CURRENT AI is unethical. More research is needed.
51.2%
Current AI can be used ethically in some cases.
17.8%
Problems with AI are the same as any new tech.
14%
Poll ended.

@anolandria A strong disconnect between prior mathematical principles of proof and rigor and the way statistical curve fitting is alleged to work now.

No, I don't wish to elaborate.

@anolandria Current "AI" is not (and will never be) actual A.I. and should be written in quotes. Facetiously and seriously.

LLM/GenAI deserves as much consideration as any other technology, but only if ethical, economic, environmental, and other as-yet-unidentified concerns are addressed.

@afreytes @anolandria
I remember, back when AI ambitions were serious at the turn of the century, the distinction between Virtual Intelligence and the (perhaps impossible but noble) goal of Artificial Intelligence. Now the whole thing's just a marketing term. The delusion that what they're doing will ever miraculously become more than it already is strikes me as a bizarre bit of religious fervor. (The "singularity" idea just reminds me of The Claw from Toy Story, only less motivated by reality.)
@anolandria it's hard to answer because "AI" is such a vague term, and all the things they're calling AI right now aren't actually artificial intelligence in my book. I don't think there would be anything wrong with researching machine learning and training algorithms to do specific tasks if they weren't building a fuckton of data centers and wasting all this power and water. But the reason they're building all that is that they're not just researching it, they're trying to force everyone to USE it 💀
@anolandria I mean, in the end the real problem is capitalism, as always. The companies investing in this are only interested in increasing their profits and/or participating in their insane robot cult, so they don't care that they're hurting people. But there could be different ways to do the research, and it could be adopted more responsibly. But like...only if society was completely different
@sofiav This is absolutely one of the "options" that didn't fit in the small amount of text and 4 options I had available for a survey! 💯 This is, in fact, closer to my feelings exactly. Capitalism is the problem. 😡
@sofiav @anolandria Yep. The real problem is that Google et al. have got more money than they know what to do with, and improving life for everybody is beneath them.
@sofiav I do think a lot of the first LLM research was done with honest and good intentions, especially since it was mostly open source, but, like most everything else, capitalists work tirelessly to search out anything that can benefit them personally, and turn it evil.

@anolandria

It's not research that's needed, what we need is less assholery. Current "AI" is designed and deployed as a weapon.

@anolandria One problem is that "AI" is a marketing buzzword slapped on a bunch of technologies, some of which have been in use for decades. But the massive LLM projects currently prompting construction of massive data centers just to mimic human interaction are an abomination.
@fgbjr True, and I do wish we didn't call them AI, so the least I could do was call it AI/LLM in the question. Machine learning is just one possible aspect of an actual AI, so I'm not opposed to research on the subject, philosophically speaking.

@anolandria This poll isn't clear about terms.

"AI" lumps together several distinct technologies, some of which are well-founded and useful.

LLMs and "generative AI" are tremendously wasteful and destructive and are being used as ideological weapons to assault human dignity.

In the abstract, could they be used only for benign applications? Perhaps, but the resource costs are still so extreme that I doubt they'll ever be worth the cost.

Meanwhile, the ideological violence is extreme.

@foolishowl I appreciate this. I definitely would have put more answers into the survey if it allowed.

@anolandria

Hey, I work in LLM research (computational linguistics).

People should understand that we researchers are not the ones marketing the technology as a replacement for workers, let alone artists, or as search engines, friends, therapists, or other tools that it is not.

Often enough, we are the ones who know enough about it to understand why current trends are wrong or dangerous.

One of the most outspoken anti-AI-hype activists is Emily Bender ( @emilymbender ), a computational linguist who is working in the field herself.

Research is not the enemy. The technology isn't, either. It's the marketers, the venture capitalists, the investors, the managers, the grifters.

@anolandria @emilymbender If you "abandon" LLM research, what does that even mean?

Cut state funding to everything that's LLM related? Then you also cut funding to everything researching it critically. You cut funding to research about AI ethics, about AI biases. You also likely cut funding to other linguistic applications, like accessibility software or normal machine learning like voice recognition or digital assistants.

Ban it by law? Then you push the technology into illegal, unregulated spaces or even other countries.

Science is never the enemy. The poll results so far scare me.

@lianna @emilymbender I mean... I'm kinda surprised that the "abandon" option is only at just over 20%. I honestly thought it would be higher.

@lianna @anolandria @emilymbender It's being weaponized against people today. It should be pulled out of the workplace, out of generating porn from victims' photos and deepfakes of anyone, and should be stopped from drinking half the fresh water in a desert and eating multiple power plants' worth of energy.

Literally the only valid use cases today are in providing assistance for the disabled.

More research isn't what's needed now. More responsibility is. An unfathomable amount more.

@targetdrone @lianna @emilymbender Yes, well said. You are right, it's not research that's the problem. At the risk of sounding like a broken record, it's letting corporations put profits over everything else.
@anolandria @lianna @emilymbender Deepfakes are used by thieves, scammers, spammers, misogynists, racists, politicians, fascists, all kinds of loathsome people. Training datasets have plundered human efforts that should never have been used. They're corrupting education. People wanting to get rich is an important but tiny sliver of the problem; the lack of responsibility across the board is the real killer.

@targetdrone @anolandria @emilymbender All technology can be abused. The solution is not to abolish technology.

The instances you're talking about are not even related to LLMs; they're machine learning in general - the same technology powering medical research and applications, for example.

@targetdrone @anolandria @emilymbender Cameras should not be banned because upskirting exists. The GPS network should not be abolished because stalkers can use it to track their victims. The internet should not be shut down because it makes it easier for propaganda to spread.

Your enemy is not technology, it is the people abusing it. The office worker letting ChatGPT write a generic status report e-mail to her supervisor because she struggles with formal writing is not your enemy.

If you think "research" is a straight translation for "people improving the technology's capabilities", you're wrong. AI research includes ethics, linguistics, sociology. If you want more "responsibility" for AI misuse, that requires research. Or do you want governments to ban things without concrete data? Where do you think the data for machine learning misuse comes from if not from research?

@anolandria Current AI is unethical and I don't believe any more research can fix that. Completely different things called AI can be ok.

This is the "Risch algorithm is good AI" view. (With useful incomplete implementations.)

I want things providing verifiable and reliable solutions to some problems, that can be understood by people, that can run on reasonable computers when shared as complete and corresponding source code.

@anolandria there are definitely diverse unethical aspects to current models of various types. More research, but also more legislation, is required.
@arielwip I would agree with that. 🤔💯

@anolandria I think it's very important to make a distinction between machine learning AI and the LLM / GenAI. The machine learning stuff has shown real potential and real-world impact in health and science. As far as I know those models have been trained with knowledge and consent of the subjects.

The LLM stuff is a people pleaser fed on every work imaginable, without any regard for law, ethics, or safety rails, it seems.

So training an AI to better detect cancer for instance? Yeah current and ethical.

Asking ChatGPT to think up a story about a duck and his nieces? That's most likely not going to end up ethical to me.

@da_kink A very valid and appropriate distinction I think. 💯

@anolandria The topic is way too complex to summarize it in a social media post. The tech behind LLMs is interesting and in some cases very useful. But the environment around them has been very toxic and harmful for so many reasons. I feel as if there's almost two separate worlds on the topic of LLMs. On one side we have chatGPT, SPAM, awful generated code, a really absurd economic bubble that is about to burst, manipulation of the masses, etc. On the other side we have local LMs, most of which you can run in an average PC with a small open source program, and it can do some useful things like translations, or just for pure entertainment, without sending private data to any company or being tracked, etc.

Would the world be better now if LLMs never existed? Yes, no doubt. But they do exist and we have to deal with that reality.

In general, I don't think it's unethical to use open-weights models that already exist. What's unethical is to give money to OpenAI or any other company that is contributing to the bubble, using mind-boggling amounts of energy in its datacenters, saturating our web servers with requests, manipulating the public, etc., etc.

Then there's the future risks of AI development. One of the biggest ones is well explained in Rational Animations' latest video [0] narrated by AI safety researcher Robert Miles [1].

Coming back to my "two separate worlds" concept, I'm frustrated by my attempts to use small language models without wanting to interact with the BS capitalist hellhole of "AI". I talk a bit more about this here [2].

[0] https://www.youtube.com/watch?v=uiPhOk1t3GU
[1] https://www.youtube.com/@RobertMilesAI
[2] https://valenciapa.ws/@starsider/115060692481293281


@anolandria Should the training of the technology and the technology itself be separate conversations? Or should the conversation be about the tech-bro hype and the greed-driven decision-making that forces it into everything, regardless of need, to generate profit?

The core tech is interesting for sure, but there are so many bad/unethical/greedy decisions around the core tech it's hard to separate.

It is DEFINITELY a Pandora's box, though. Any good is being vastly overshadowed by BS and greed.