@OiskaE @StefanThinks yeah that's what I was saying, analysis, and big data sifting in my other comment. (I'm in cancer research so that's mainly what I see)
GenAI slop can get in the bin.
@noodlemaz @StefanThinks I suspect it is more like what you are saying, and that there are very specialized data sets used with isolated AI instances that are working out all the protein configurations.
It sure as hell isn't a chat bot.
And I equally expect it is mostly being used so some companies can patent every variation possible so they can block future research, not enable it. This is the US dystopia after all.
@Urban_Hermit @StefanThinks I'm not in the US or talking about the US on the research side (Trump is making sure to hamstring all your research pretty much)
Here it's not all in the hands of companies, but also universities and other research institutions.
I still question the hype, and think people generally should be more skeptical of its use and whether it needs shoehorning into every project.
But my problem is genAI, not machine learning applications overall
@stevewfolds @Urban_Hermit @StefanThinks yes... But they're not 'finding new proteins'
Clue in the name! The simulation is re: protein *folding*
This is an old bio problem - we know the DNA, the RNA, and what sequence of amino acids they encode. What we don't know is 'conformation', or folding. How does that protein curl up into a final shape? What modifications happen to it in a cell?
Sims can't answer every q definitively, but they can help predict likely solutions
= most of their publishing
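(For anyone outside bio, a toy sketch of the split being described: the sequence half of the problem is a deterministic table lookup, while the folding half is the open question tools like AlphaFold try to predict. This is my own illustration with an abridged codon table, not anything from a real pipeline.)

```python
# The "easy" half: mRNA -> amino-acid chain is a fixed lookup
# (the genetic code). Predicting how that chain folds into a
# 3-D shape is the hard part. Codon table abridged for brevity.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UGG": "Trp", "UAA": "STOP",
}

def translate(mrna):
    """Read the mRNA three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUGGCAAAUGGUAA"))  # ['Met', 'Phe', 'Gly', 'Lys', 'Trp']
```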
@modris @StefanThinks porn. Let's just admit that most AI art is used for making grotesque porn, and they are using the faces of your kids that they bought from Facebook or Reddit. AI Photoshop and blending.
And tech bros want to say that if the bot uses tiny enough pieces from multiple sources and they refuse to keep track of sources then they shouldn't have to pay anyone or be liable for damages.
@StefanThinks half the replies mixing the general, ambiguous term "AI" with LLMs and diffusion models are missing the point in a way that proves said point.
Ain't this a whacky time to live in
As a side-note: No, generative AI is not used to find cures for anything. They use models focused exclusively on data analysis, and that kind of AI is proving to be an absolute game changer. No hallucinations or stealing involved. Remember how we got covid vaccines in less than a year? That was a lot of hard work and proper use of machine learning.
@jnk @StefanThinks
> generative AI is not used to find cures for anything. ... Remember how we got covid vaccines in less than a year? That was a lot of hard work and proper use of machine learning.
AlphaFold was instrumental in finding the structure of SARS-CoV-2, and that is generative AI with very similar maths to the early LLMs, at least.
@DavyJones @StefanThinks correct, AlphaFold was relevant; but that is not a generative AI model, and has little to do with LLMs.
Yes, both are based on neural networks, which share similar math at first. Both look for patterns, and LLMs do recognise (not to be confused with understand) language incredibly well, just like AlphaFold recognises protein patterns incredibly well; but that's literally everything they have in common.
After that, both LLMs and diffusion models try to reconstruct and make up data based on the mess they just made. The data that AlphaFold returns still has to be studied by professionals to be useful, making it just a valuable tool; which gets us back to @StefanThinks's post
@jnk @StefanThinks
> AlphaFold was relevant; but that is not a generative AI model
Accurate structure prediction of biomolecular interactions with AlphaFold 3 https://doi.org/10.1038/s41586-024-07487-w
> this is a generative training procedure that produces a distribution of answers
> The use of a generative diffusion approach comes with some technical challenges that we needed to address
> We note that the switch from the non-generative AF2 model to the diffusion-based AF3 model introduces the challenge of spurious structural order (hallucinations) in disordered regions
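(In case "a generative training procedure that produces a distribution of answers" is opaque: here is a deliberately toy sketch of the idea, my own illustration and nothing to do with AlphaFold's actual architecture.)

```python
import random

# A "diffusion-style" sampler: start from random noise and
# repeatedly nudge it toward a target, the way a trained denoiser
# nudges noisy atom coordinates toward a plausible structure.
# Because the starting noise differs each run, you get a
# *distribution* of answers rather than one deterministic output,
# which is also why ambiguous (disordered) regions can come back
# looking spuriously ordered.
def sample(target, steps=50, seed=None):
    rng = random.Random(seed)
    x = rng.gauss(0.0, 10.0)       # start from pure noise
    for _ in range(steps):
        x += 0.2 * (target - x)    # "denoising" pull toward the target
        x += rng.gauss(0.0, 0.1)   # residual noise keeps samples distinct
    return x

samples = [sample(target=5.0, seed=i) for i in range(5)]
# Each sample lands near 5.0, but no two are identical.
```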
HELP. I'VE NEVER SEEN A BETTER ANALOGY
@StefanThinks No, it's like saying that a hammer is okay to be used to smash toes because a wrench can be used to build hospitals.
Generative AI and image recognition and pattern analysis are very different tools.
@StefanThinks AI is not finding cures for anything; it is just number crunching at faster rates than current number crunchers.
Current AI is no more than a faster version of Colossus and the Bombe, with as much intellect.
An LLM is not an AI, it is just an infinite monkey machine.
@StefanThinks It's more like saying that smashing toes with a hammer is okay, because screwdrivers can turn screws.
LLMs are just one type of AI, specializing in creating text. Medical research AIs are different, in that they're designed to find answers to problems.
I had an art teacher, Mme LeMaire. We admire art and detest other art - the good artist is the archer who shoots at far targets, who works at the limit of his skills.
AI is just a point-blank shot. Nobody respects that