The #NSF has reduced spending on the basic sciences by 50% or more in 2025, with similar cuts proposed for next year as well. For instance, through May 21 of this year, funding awarded to the mathematical sciences stands at $32 million, compared to a 10-year average of $113 million over the same period. These are of course large numbers for any individual; but spread over a US population of 340 million, this amounts to spending less than 22 cents per American per year on basic #mathematics research, compared to the 10-year average of 80 cents per American per year. https://www.nytimes.com/interactive/2025/05/22/upshot/nsf-grants-trump-cuts.html
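To spell out the arithmetic (a quick back-of-the-envelope sketch; I am reading the per-year figures as full-year equivalents of the May 21 year-to-date totals):

```python
# Back-of-the-envelope check of the figures above; assumes the 22-cent and
# 80-cent per-year figures are full-year equivalents of the May 21 totals.
ytd_2025 = 32e6      # math sciences awards through May 21, 2025 (USD)
ytd_avg10 = 113e6    # 10-year average of awards through May 21 (USD)
population = 340e6   # approximate US population

print(f"2025 so far:  ${ytd_2025 / population:.3f} per American")   # ~$0.094
print(f"10-yr avg:    ${ytd_avg10 / population:.3f} per American")  # ~$0.332
print(f"year-on-year: {ytd_2025 / ytd_avg10:.0%}")                  # ~28%
```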

I myself have been fortunate to be supported by a small fraction of this 80 cents for almost the entirety of my professional career, allowing me to conduct research in the summer, invite speakers to my department, and support graduate students. For now, I can continue these activities at a minimal level using my existing (and relatively modest) NSF grant https://www.nsf.gov/awardsearch/showAward?AWD_ID=2347850 , but I already lack the resources to commit to any future long-term projects. For instance, my experiments with using new technologies for mathematical workflows are currently being conducted by myself alone, together with contributions from unpaid online volunteers; I am applying for funding from several sources to scale these projects up beyond the proof-of-concept level, but I expect the process to be extremely competitive. (1/3)

Trump Has Cut Science Funding to Its Lowest Level in Decades

The lag in funding extends far beyond D.E.I. initiatives, affecting almost every area of science: chemistry, computing, engineering, materials and more.

The New York Times

Basic mathematical research pursues questions that are often quite far from practical application; but it contributes, in a largely invisible way, to the broader research ecosystem that eventually does generate such applications. For instance, consider the problem of packing spheres in space as efficiently as possible - a question first posed by Kepler in 1611. At a practical level, the solution to this problem has been "known" to greengrocers for centuries - one should stack the spheres in a hexagonal close packing. But mathematicians spent decades working out how to establish the optimality of this packing, culminating in a computer-assisted proof by Hales and a formal verification of that proof completed in 2014.
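Stated precisely, Kepler's conjecture asserts that no packing of congruent spheres in three-dimensional space has density exceeding that of the greengrocer's arrangement:

```latex
\[
  \Delta_3 \;=\; \frac{\pi}{3\sqrt{2}} \;=\; \frac{\pi}{\sqrt{18}} \;\approx\; 0.74048,
\]
```

a bound attained by the hexagonal close packing (and equally by the face-centered cubic packing and various hybrids of the two).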

Mathematicians have also explored variants of this sphere packing problem in geometries other than three-dimensional Euclidean space: for instance in higher dimensions, with the famous recent breakthroughs of Viazovska (and her collaborators) in 8 and 24 dimensions, or in more discrete geometries over finite fields. Such curiosity-driven questions appear to lack immediate application - nobody needs to pack eight-dimensional oranges together, for instance. (2/3)
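(For the record, the optimal densities in these two dimensions are now known exactly:

```latex
\[
  \Delta_8 \;=\; \frac{\pi^4}{384} \;\approx\; 0.25367,
  \qquad
  \Delta_{24} \;=\; \frac{\pi^{12}}{12!} \;\approx\; 0.00193,
\]
```

attained by the E8 lattice and the Leech lattice respectively.)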

But when cellphone adoption became widespread, it became necessary to figure out how to efficiently encode the signals of multiple cellular devices in the wireless spectrum so that they do not interfere with each other. As it turns out, many of the mathematical techniques and insights generated by exploring these discrete and high-dimensional versions of the sphere packing problem have been of immense value for this problem - not just in the "positive" sense of designing efficient signal encoding methods, but also in the "negative" sense of giving theoretical upper bounds on such efficiency, thus setting the right benchmarks for evaluating progress and avoiding wasted resources on encodings that are mathematically impossible.
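To illustrate the "negative" direction with one deliberately simple example of my own choosing: the classical sphere-packing bound in coding theory treats codewords as centers of non-overlapping Hamming balls, and thereby caps the size of any error-correcting code of a given length and error tolerance. A minimal sketch:

```python
from math import comb

def hamming_bound(n: int, d: int) -> int:
    """Sphere-packing (Hamming) upper bound for binary codes of length n
    and minimum distance d: codewords are centers of disjoint Hamming
    balls of radius t = (d-1)//2, so there are at most 2^n / |ball| of them."""
    t = (d - 1) // 2
    ball_volume = sum(comb(n, i) for i in range(t + 1))
    return 2**n // ball_volume

# Binary codes of length 7 correcting one error (distance 3) can have at
# most 16 codewords - and the Hamming(7,4) code attains this, packing its
# Hamming balls perfectly.
print(hamming_bound(7, 3))  # 16
```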

(As a side note, the successful formalization of the proof of the Kepler conjecture has also inspired and informed many further collaborative formal projects, including my own experiments in this area, even if those projects do not directly involve sphere packing.)

Such contributions to tangible technological advances are subtle and indirect; but without such basic research, many of these advances would have taken far longer to develop, and some might not have been pursued at all. The cuts to funding for such research - which will particularly impact the next generation of researchers - may save a few cents per American per year in the short term, but will greatly reduce the capacity to solve many challenging technological problems of significant real-world impact in the future. (3/3)

@tao
Unfortunately, "the future" is "up to the next election"*

* If there's ever going to be one...

@tao - the EU is acting in its self-interest by luring American scientists away, having budgeted $500,000,000 for this so far:

https://arstechnica.com/science/2025/05/europe-launches-program-to-lure-scientists-away-from-the-us/

Europe launches program to lure scientists away from the US

EU will spend over $500 million to recruit researchers and scientists.

Ars Technica
@johncarlosbaez @tao Can't help but wonder where this money was when European scientists needed jobs. The problem was never that we were short of talented people.

@rossquantum @johncarlosbaez @tao it's a question of politics. The money - a lot of it - is there; it's a question of distribution.

At the moment the EU is convinced, and not without reason, that it can scoop up a number of top-notch people relatively cheaply. That is easy to sell, politically.

@johncarlosbaez @tao is it wrong or bad for the EU to act in its own self-interest?
@Leif573 @tao - of course not! What I mean is that this is a smart move on their part, not just charity, so we may expect it to continue.
@johncarlosbaez @tao ok sorry, just wasn't sure and wanted to understand. thx for answering
@tao while I think funding for many "interdisciplinary" pseudosciences should be eliminated, I think basic science research should be funded a lot more.
@tao Apparently not the sexy side of science fiction
@tao Republicans can't govern
@tao I should have thought of this when I looked at the data, but cutting "$6 per person" down to "$3 per person" sounds like barely a difference. If people want to do street interviews, they should frame the statistic like this.

@tao That diagram looks like an illustration of the Banach-Tarski paradox.

How does it go again?

Ah yes: if you accept the Axiom of You Have No Choice, it follows that for all agencies under public ownership, there exists some series of political manoeuvres such that an agency can be disassembled and the pieces looted, ransacked, destroyed, pillaged, and privatised until the agency provides no more than half of its previous services; all the while claiming that the remnants are somehow at least as effective as the original.

The only way to avoid the paradox is to reject the Axiom of You Have No Choice. #HandsOff

@tao to put it bluntly, using chatbots which require calling out to third party corporate servers to regurgitate mathematical fragments without credit or citation is not a terribly good example of the kind of research actually under threat right now since it's indistinguishable from corporate PR and part of a field receiving immense levels of funding at the moment for this reason. you could certainly do better than to siphon from the same competitive sources that failed to allocate funding for my own phd application this year.

it's not terribly convincing to claim that an effective response to government imposed austerity is to invest in tools like LLMs which are directly intentionally created to foster said austerity—to deskill and to replace expensive researchers with graduate degrees. it's insult added to injury to claim that this is a form of fundamental research desperately worth preserving, in an environment where LLMs are literally the only thing getting grant money at the moment, causing many researchers to simply leave the field they have developed a unique and irreplicable expertise in. none of this is remotely contentious to people who have read a single book.

did you know all production LLMs use regular expressions to filter and censor their outputs, since the token prediction machine is fundamentally incapable of understanding the intricacies necessary to do so? do you know how difficult those filter expressions are to write and how difficult they are to debug? this task is so thankless because the field of parsing and formal languages has starkly diverged from practice for half a century and extant regular expression engines measure only farcical ideals of decontextualized performance, much like the "objective" function which substitutes for intelligence in LLMs.

that's my field, and that's my thesis (i actually have a novel formulation of linguistic complexity and a grammar language with qualities you wouldn't be able to appreciate). research like your project proposal in OP displaces exactly that which holds it up. calling LLM usage the kind of fundamental research under threat in this environment steals valor from those who would actually advance many separate fields at once. reconsider.

@hipsterelectron My experiments actually make very little use of large language models; they focus more on accelerating formalization tasks in languages such as Lean, using collaboration tools such as GitHub as well as more classical AI tools such as subgraph matching. In any event, I am not applying for funding for such projects from the NSF, but rather through a DARPA program https://www.darpa.mil/research/programs/expmath-exponential-mathematics that specifically targets such projects.

As you note, AI-related research projects are still receiving significant funding; while I did mention my own research as an example of possible impact, my post was aimed more at the broader impact of these funding cuts on the rest of the basic research environment.

expMath: Exponentiating Mathematics | DARPA
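(For the curious, here is a toy illustration - mine, not taken from the actual project - of the kind of classical subgraph matching mentioned above, using the VF2 matcher in the networkx library:)

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Pattern: a triangle; target: the complete graph on 5 vertices.
pattern = nx.cycle_graph(3)
target = nx.complete_graph(5)

# VF2 searches for copies of the pattern inside the target.
matcher = isomorphism.GraphMatcher(target, pattern)
print(matcher.subgraph_is_isomorphic())  # True: K5 contains triangles
print(sum(1 for _ in matcher.subgraph_isomorphisms_iter()))  # number of matches
```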

@tao As a DARPA-funded project, does it have any restrictions on who can and cannot be involved?

@tao thanks for your patient and thoughtful reply. i apologize for wildly misunderstanding your post. you had indeed intended to use your position to champion innovation from NSF grantees, which i find quite admirable, and exactly what i had hoped for when i began reading your post.

i admire your work and your bravery to advocate for the NSF at this critical time. i won't make this mistake again and i hope you have a great day.