People defending AI-generated “art” because AI is also used to find medical cures is like saying that smashing people’s toes with hammers should be acceptable because hammers are also used to build hospitals.
@StefanThinks also not sure how many 'cures' we have from AI projects so far. AFAIK they're mostly used in analysis and in drug molecule design - would be interesting to see a decent analysis of a) what kinds of things people are producing with it and b) how much of a role it plays - just time-saving on otherwise brute-force stuff? Cool, but maybe not exactly revolutionary.
@noodlemaz @StefanThinks From what I've read, AI has been used successfully to detect early signs of disease in lungs (earlier than human eyes have managed), to identify individual animals based on scale patterns etc., and generally to sort through massive amounts of data in scientific research - the kind of work that would be very time-consuming and mechanical.
I absolutely despise how it's being used to produce "art" etc, and hate all companies who push these unethical uses.

@OiskaE @StefanThinks yeah that's what I was saying, analysis, and big data sifting in my other comment. (I'm in cancer research so that's mainly what I see)

GenAI slop can get in the bin.

@noodlemaz @StefanThinks Ah, I missed your other comment, we're exactly on the same page with AI.
Also good luck with your research!
@OiskaE @StefanThinks ah just on the funding/fundraising side now, left the lab many years ago! Cheers

@noodlemaz @StefanThinks I suspect it is more like what you are saying, and there are very specialized data sets used with isolated AI instances that are working out all the protein configurations.

It sure as hell isn't a chat bot.

And I equally expect it is mostly being used so some companies can patent every variation possible so they can block future research, not enable it. This is the US dystopia after all.

@Urban_Hermit @StefanThinks I'm not in the US or talking about the US on the research side (Trump is making sure to hamstring all your research pretty much)
Here it's not all in the hands of companies, but also universities and other research institutions.

I still question the hype, and think people generally should be more skeptical of its use and whether it needs shoehorning into every project.

But my problem is genAI, not machine learning applications overall

@Urban_Hermit @noodlemaz @StefanThinks AI is being used to find new proteins; examples are Google's DeepMind & Stanford's Folding@home.
@stevewfolds @Urban_Hermit @StefanThinks they're not 'finding new proteins', they're predicting how known proteins fold. Which is cool and has helped a bit in some fields, but still needs experimental confirmation.
Also a big problem with these tools is that they've been allowed to publish without making their code public, ostensibly still a requirement for Nature - for some reason the billionaire-owned corps are let off the hook and get to publish secretive science. It's anti-shared progress.
@noodlemaz @Urban_Hermit @StefanThinks
Folding@home has published about 100 peer reviewed papers.

@stevewfolds @Urban_Hermit @StefanThinks yes... But they're not 'finding new proteins'
Clue in the name! The simulation is re: protein *folding*

This is an old bio problem - we know the DNA, the RNA, and what sequence of amino acids they can create. What we don't know is the 'conformation', or folding. How does that protein curl up into a final shape? What modifications happen to it in a cell?

Sims can't answer every q definitively, but they can help predict likely solutions
= most of their publishing

@StefanThinks That is assuming it does find cures and does not hallucinate entirely new illnesses.
@StefanThinks A huge part of the problem is that generative AI has become a tool for nearly everyone, when it should have remained something you run yourself, on your own devices. It's one of those things that turns users into consumers even more.
@StefanThinks Finding medical cures is an appropriate use for AI. Making “AI art” is just a waste of resources.

@modris @StefanThinks porn. Let's just admit that most AI art is used for making grotesque porn, and they are using the faces of your kids that they bought from Facebook or Reddit. AI Photoshop and blending.

And tech bros want to say that if the bot uses tiny enough pieces from multiple sources and they refuse to keep track of sources then they shouldn't have to pay anyone or be liable for damages.

@StefanThinks half the replies mixing the general ambiguous term "AI" with LLMs and diffusion models are missing the point in a way that proves said point.

Ain't this a whacky time to live in

As a side-note: No, generative AI is not used to find cures for anything. They use models focused exclusively on data analysis, and that kind of AI is proving to be an absolute game changer. No hallucinations or stealing involved. Remember how we got covid vaccines in less than a year? That was a lot of hard work and proper use of machine learning.

@jnk @StefanThinks
> generative AI is not used to find cures for anything. ... Remember how we got covid vaccines in less than a year? That was a lot of hard work and proper use of machine learning.

AlphaFold was instrumental in finding the structures of SARS-CoV-2 proteins, and that is generative AI, with very similar maths to the early LLMs at least.

@DavyJones @StefanThinks correct, AlphaFold was relevant; but that is not a generative AI model, and has little to do with LLMs.

Yes, both are based on neural networks, which have similar math at first. Both look for patterns, and LLMs do recognise (not to be confused with understand) language incredibly well, just like AlphaFold recognises protein patterns incredibly well; but that's literally everything they have in common.

After that, both LLMs and diffusion models try to reconstruct and make up data based on the mess they just made. The data that AlphaFold returns still has to be studied by professionals to be useful, making it just a valuable tool; which gets us back to @StefanThinks 's post

@jnk @StefanThinks
> AlphaFold was relevant; but that is not a generative AI model

Accurate structure prediction of biomolecular interactions with AlphaFold 3 https://doi.org/10.1038/s41586-024-07487-w

> this is a generative training procedure that produces a distribution of answers
> The use of a generative diffusion approach comes with some technical challenges that we needed to address
> We note that the switch from the non-generative AF2 model to the diffusion-based AF3 model introduces the challenge of spurious structural order (hallucinations) in disordered regions

@StefanThinks slippery slope strawman imo.
@StefanThinks I still say the best response to slop is 'AI 🤮 ' and no further response. That will get through better than a well reasoned argument.

@StefanThinks

HELP- I'VE NEVER SEEN A BETTER ANALOGY

@StefanThinks No, it's like saying that a hammer is okay to be used to smash toes because a wrench can be used to build hospitals.

Generative AI and image recognition and pattern analysis are very different tools.

@StefanThinks Tools aren’t the problem—it’s how they’re used. A hammer can build or harm. Same with AI.
@StefanThinks I am not against the use of AI to make (bland) illustrations, videos or novels.
The problem is that the training source material is used without consent or attribution, from a commons that is made available by the creators.
AI companies use the appropriated data and then proceed to classic enclosure: paywalling access and enshittifying it.
If all of us creators had a say and a share of the benefits, I would have fewer issues with it.

@StefanThinks AI is not finding cures for anything, they are just number crunching at faster rates than current number crunchers.

Current AI is no more than a faster version of Colossus and the Bombe, with as much intellect.

An LLM is not an AI, it is just an infinite monkey machine.

@StefanThinks it’s almost like it has to do with how it’s being used, not that it exists…
@StefanThinks That's kinda like saying that gene edited bacteria cultivated to produce insulin means that gene edited herbicide tolerant crops are great.

@StefanThinks It's more like saying that smashing toes with a hammer is okay, because screwdrivers can turn screws.

LLMs are just one type of AI, specializing in creating text. Medical research AIs are different, in that they're designed to find answers to problems.

@StefanThinks Honestly this is one of the best posts I've seen in my life.
@StefanThinks Well, they are great medical researchers, so when they stole that Rembrandt we have to forgive and forget

@StefanThinks

I had an art teacher, Mme LeMaire. We admire some art and detest other art - the good artist is the archer who shoots at far targets, who works at the limit of his skills.

AI is just a point-blank shot. Nobody respects that

@StefanThinks I'm open to a discussion about who would be worthy of toehammering and who would not, tho.
@StefanThinks @weirdmustard pet peeve of mine: reminding people that algorithmic tools for analyzing data are not the same as statistics-based generation of text or pictures. I can see the reasoning behind giving them the same name when it comes to getting funding, but...
@StefanThinks And they're not even the same thing anyway.
Until Altman et al stretched the definition of AI all the way out to encompass algorithms we've had forever, so they could pretend they achieved something.