Machine Learning techniques are upending multiple scientific fields. This paper from Chinese researchers demonstrates operational 5-day forecasting of air quality in 1 minute.

This is awesome work with very clear public health implications.

EDIT for clarity: I am not suggesting LLMs have anything to do with this work, but many people hear AI and imagine LLMs. And many of them are, perhaps rightly, sceptical of AI as a result.
But AI or ML techniques can be useful for lots of things, not just chatbots. And we should probably invest more in those.


https://www.nature.com/articles/s41586-026-10234-y

Advancing operational global aerosol forecasting with machine learning - Nature

Reliable 5-day, 3-hourly forecasts of aerosol optical components and surface concentrations are obtained in 1 minute using a machine-learning-driven forecasting system.

@Ruth_Mottram this article does not consider LLMs, as far as I understand.

@precariousmind @Ruth_Mottram
πŸ‘†πŸ»
And that is another problem!
LLMs and generative "AI" are so prevalent that other useful machine learning techniques get completely overshadowed and often starved of resources.

So I would change the central declaration.

We need to get over the LLM (for everything) hype and get back to using and funding different machine learning ("AI") tools.

@realn2s @Ruth_Mottram that is the point.
Besides many other negative points, LLM/genAI enables Silicon Valley evil companies to achieve evil purposes quicker. At the end of the day, they enhance personal data collection, surveillance, and (just seen with Anthropic) military use.
@precariousmind @realn2s This was exactly my point! And probably why we need to start regularly calling the chatbots LLMs rather than "AI"
@Ruth_Mottram @precariousmind @realn2s I suggest you please edit your post to reword it. I was confused as to why you were suggesting that people should get over their skepticism of LLMs based on this application of ML that has nothing to do with LLMs. Several other people have been similarly confused.
@Ruth_Mottram @realn2s (late response) generative AI (genAI/GAI) is also the more precise term (if we accept that we can call it "AI").
@Ruth_Mottram "AI" is a marketing term, and I will never help blur the distinction between genuinely useful tools and the ongoing LLM scam by using it.
@Ruth_Mottram Even just the tooling: running large inversions can be orders of magnitude faster when using tinygrad or torch instead of numpy and the code basically looks the same. Being able to do something in a few seconds instead of hours means you can run more/different scenarios.
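A minimal sketch of the kind of thing meant here (hypothetical example, not code from the article): a tiny least-squares "inversion" solved by gradient descent in NumPy. Swapping `numpy` for `torch` or `tinygrad` leaves the code nearly unchanged (`np.array` → `torch.tensor`, the `@` matmuls stay identical) while moving the work to a GPU.

```python
import numpy as np

def solve_inversion(A, b, lr=0.05, steps=500):
    """Minimise ||A x - b||^2 by plain gradient descent.

    The loop body is the part that looks the same in torch/tinygrad;
    only the array constructors differ.
    """
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ x - b)  # gradient of the squared residual
        x -= lr * grad
    return x

# Toy 2x2 system with known solution x = [2, 3]
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 3.0])
x = solve_inversion(A, b)
```

For problems this small NumPy is fine; the speedups mentioned above come in when the matrices are large enough that GPU execution and fused kernels dominate.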

@Ruth_Mottram As one of the skeptics who you want to help "get over" my skepticism, the framing of it in your last sentence was pretty off-putting and made me double-check whether you have a history of being an AI shill.

Right now, as a layperson, my skepticism is keeping me from investing time, money, and my personal info into some dangerous schemes, and I have no intention of learning a whole new field of technology just so I can pick out the good ones from the bad ones in a field truly over-saturated with con-artists.

I suggest your enemy isn't "AI skepticism", and it isn't the public's ignorance: it's the firehose of shit being sprayed at us by the rich and powerful right now as we speak. And some of that firehose spray of misinfo is telling us how ignorant we are of the really real and really cool stuff their snake oil can do and we should "get over" it.

"And a very good example of how we need to get over LLM scepticism when people start talking about #AI tools"

@Ruth_Mottram I am very confused about this take. Just skimmed the article and while it is very much outside my domain, I know enough about machine learning to see that there is no LLM involved here at all. So yes, obviously statistical prediction models can be very useful and I trust your assessment here that this specific model is impressive for the domain. But this development has nothing to do with LLMs.

@Ruth_Mottram

I assume you are aware and it's just your wording that may be misunderstood.

So, not to criticise you - you're the expert, but to clarify the LLM/ML/AI confusion you mention:

These scientific applications of "AI" - Machine Learning (ML) - are older than LLMs, and we use them in a wide range of applications with great success.

They have nothing to do with LLMs or generative AI, apart from both being forms of AI.

But very often these success stories are used by genAI-bros to validate the narrative driving genAI, in an attempt to build a justification for it.

On the other hand, many opponents of genAI/LLMs reject these applications in a knee-jerk reaction.

One can (and should) have a more nuanced view: I wholly support useful scientific applications of ML: peer-reviewed, ethics-reviewed, and with scientific integrity. I am violently against genAI applications in their current form. They still need to prove their ethical, ecological, and societal benefit, and I don't see how they can.

Note: there are useful uses of LLMs as well (in language recognition, translation, grammar), but those don't need the massive scale of the commercial ones and couldn't be sold to the general public. So I count them under scientific (natural language processing is its own field, after all).

Sorry for the rant.

Anti-LLM, pro-scientific AI should not be difficult 😉

@tschenkel I think we are in agreement.

@Ruth_Mottram

That's how I understood your comment

@Ruth_Mottram - but also a good example of why we need to un-conflate SlopAI, like the kind They™️ want to make billions on while disenfranchising us all, from actual, useful computer tools which may or may not be based on the same underlying technology and/or science.

They try to ride the hype wave where I work, too, but if your area is expertise & your tools are expert systems, this conflation is about as useful as marketing microwave TV dinners when you're a chef.

@jwcph hah yes, that's a nice analogy, and that's exactly why I posted. The fediverse doesn't have nearly enough positive stories around the application of AI/ML* tools and Europe seriously risks being left behind as a result.

*In principle I think I prefer Machine Learning as a terminology, but I'm not against AI if it is strictly defined as on wikipedia: "Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals"

@Ruth_Mottram I'm not super-worried; I think useful applications are pursued with appropriate vigor regardless of the hype cycle. The effort could use more money but those projects aren't getting it now either...

I prefer to stay away from anthropomorphisms entirely; computers don't learn & are not intelligent. It's called programming, algorithms etc. Perception, decision, intent, goals... none of that exists in The Machine; it's all simulation at best & we have no path to the real thing.