@Ruth_Mottram
I assume you are aware and it's just your wording that may be misunderstood.
So, not to criticise you - you're the expert, but to clarify the LLM/ML/AI confusion you mention:
These scientific applications of "AI" - Machine Learning (ML) - are older than LLMs, and we use them in a wide range of applications with great success.
They have nothing to do with LLMs or generative AI, apart from both being forms of AI.
But very often these success stories are used by genAI-bros to validate their narrative, in an attempt to build a justification for genAI.
On the other hand, many opponents of genAI/LLMs reject these applications in a knee-jerk reaction.
One can (and should) have a more nuanced view: I wholly support useful scientific applications of ML: peer-reviewed, ethics-reviewed, and with scientific integrity. I am violently against any genAI application in its current form. They still need to prove their ethical, ecological, and societal benefit, and I don't see how they can.
Note: there are useful applications of LLMs as well (in language recognition, translation, grammar), but those don't need the massive scale of the commercial ones and couldn't be sold to the general public. So I count them under scientific (natural language processing is its own field, after all).
Sorry for the rant.
Anti-LLM, pro-scientific AI should not be difficult