hearing gullible 20-somethings say "this technology DOES have good use-cases, like in medicine for example…"
is going to turn me into the fucking Joker
"But SnoopJ have you considered just not watching trash"
I mean, I *have* considered it
oh right, also:
mumble mumble Therac-25
@OliviaVespera I am not really "into" protein folding enough to hold opinions about that space, or tools that aim for it (e.g. AlphaFold)
I'm open to tools that accelerate the search over protein structures, especially since "it either conforms or it doesn't" and other clear criteria for goodness apply.
All the ones I've ever heard anybody say good things about predate the current craze for "AI", and I think it would be generous to assume that's what people mean when they talk about applications in medicine, but maybe some of them do think of this.
@OliviaVespera the "generate new stuff" thing I have a lot more skepticism for
especially after google's stunt with GNoME which turned out to basically be a sort of advanced academic spamming of the materials community
@OliviaVespera @SnoopJ I mean it's just as prone to inventing new proteins and wasting researchers' time trying to replicate them. There is no use case that doesn't face all the challenges the common uses do; it's a question of whether or not it can be quicker and less wasteful of time/energy/environment than historic methods.
I'd like to see it compared to, say, Folding@home for instance.
@OliviaVespera it is a good use of machine learning
the discussion is about generative AI like Large Language Models (LLMs) or audio, image, and video generators that are trained on copyrighted material
@musevg
LLM Boosters have been quite belligerent about trying to conflate all of them, mainly to lump the stuff that might plausibly work some day in with their bullshit fabricators.
@musevg you would have to go pretty deep into the topic to understand all the different approaches
Wikipedia probably has a good neutral overview
https://en.wikipedia.org/wiki/Artificial_intelligence
and here LLMs specifically
https://en.wikipedia.org/wiki/Large_language_model
it is a big research field going back to 1943
I think the technology and research are not the problem, but rather for-profit companies acting unethically by ignoring copyright and making billions from the stolen work of (small) artists
@davidak @OliviaVespera @SnoopJ
Back in the day, we had expert systems, PROLOG, LISP.
Now I'm just looking for the right terms to discern the different types of "AI"… I guess image creators like Midjourney and music generators like Suno aren't LLMs. Or are they? And the correct term for "AI" used to analyze images (medical or geo/spatial) is...?
@musevg LLMs are for text, and the big LLMs are multimodal, which means they can also work with images and audio: you can speak to one or show it something with your webcam, and it can understand you, recognize it, and answer you with voice, way more natural than text-to-speech (like OpenAI ChatGPT Voice Mode)
https://chatgpt.com/features/voice
those models predict one token after another
the models that generate media from text are diffusion models.
but there are now language diffusion models too.....
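For what it's worth, the "predict one token after another" part can be sketched in a few lines of Python. This is a toy stand-in, not a real model: the hypothetical `next_token` lookup table here replaces the neural network that would output a probability distribution over the vocabulary, but the autoregressive loop (generate a token, feed it back in as context, repeat) is the same shape.

```python
def next_token(context):
    """Pretend model: a fixed bigram table that only looks at the last token.
    A real LLM would run the whole context through a neural network."""
    table = {"the": "cat", "cat": "sat", "sat": "down"}
    return table.get(context[-1], "<eos>")

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok == "<eos>":  # model signals it is done
            break
        tokens.append(tok)  # each output token becomes part of the context
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```

Diffusion models work differently: instead of emitting tokens left to right, they start from noise and iteratively denoise the whole output at once.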
@musevg I think usable terms to separate those two areas are Generative AI vs Machine Learning
where GenAI is the stealing slop machine and ML is what scientists do or pattern recognition in products like OCR
@musevg @davidak @OliviaVespera @SnoopJ
I think the name you are looking for is somewhere around "deep neural network", "recurrent neural network" and "artificial neural network".
So "artificial deep recurrent neural networks" maybe?
Or maybe just "neural networks"?
