AI Risks "Hypernormal" Science

https://www.asimov.press/p/ai-science

Designing AI for Disruptive Science

Why scaling AI won’t automatically lead to paradigm shifts.

Asimov Press

The article presumes that the models we have today, which describe nearly everything, could still be subject to a major paradigm shift.

Maybe they could be, but it seems unlikely. The edges of much of current scientific understanding now lie past practical applicability; they are essentially models of things that are impossible to test. Even relativity was only recently fully backed up by experimental data, with the direct detection of gravitational waves in 2015.

> article presumes ... everything could still be subject to a major paradigm shift. ...seems pretty unlikely

Alternatively: there's plenty of mainstream, accepted science that is plainly, flat-out, provably wrong. Yet it is against good taste (job security, people's feelings, status-quo bias, etc.) to point this out.

Hence, it can actually be tricky even to catch wind of, or get a grasp on, such issues in the first place, much less pursue them toward a meaningful, published, recognized change in understanding (that is to say: a paradigm shift).

I'd name some examples, but you wouldn't believe me.

With respect to the article: current LLMs can (though they obviously do not have to) return text that reasons, quite plausibly, about paradigm shifts when given the required context and nudged forcefully in a particular direction. But, as the article indicates, LLMs do not seem to tend toward finding, investigating, and reporting on paradigm shifts on their own. (Perhaps part of that is intrinsic to how they are trained and/or prompted?)

> ... there's plenty of mainstream, accepted science that's plain, flat out, provably wrong
> ...
> I'd name some examples, but you wouldn't believe me.

I probably would not, and you would probably be wrong.