The #NSF has reduced spending on the basic sciences by 50% or more in 2025, with similar cuts proposed for next year as well. For instance, through May 21 of this year, the funding awarded to the mathematical sciences is $32 million, compared to the 10-year average of $113 million. These are of course large sums for an individual; but spread across the US population of 340 million, this amounts to spending less than 10 cents per American per year on basic #mathematics research, compared to the 10-year average of 33 cents per American per year. https://www.nytimes.com/interactive/2025/05/22/upshot/nsf-grants-trump-cuts.html

I myself have been fortunate to be supported by a small fraction of this amount for almost the entirety of my professional career, allowing me to conduct research in the summer, invite speakers to my department, and support graduate students. For now, I can continue these activities at a minimal level using my existing (and relatively modest) NSF grant https://www.nsf.gov/awardsearch/showAward?AWD_ID=2347850 , but I already lack the resources to put toward any future long-term projects. For instance, my experiments with using new technologies for mathematical workflows are currently being conducted purely by myself, together with contributions from unpaid online volunteers; I am applying for funding from several sources to scale these projects up beyond the proof-of-concept level, but I expect the process to be extremely competitive. (1/3)


@tao to put it bluntly, using chatbots which require calling out to third-party corporate servers to regurgitate mathematical fragments without credit or citation is not a terribly good example of the kind of research actually under threat right now, since it's indistinguishable from corporate PR and is part of a field receiving immense levels of funding at the moment for precisely that reason. you could certainly do better than to siphon from the same competitive sources that failed to allocate funding for my own phd application this year.

it's not terribly convincing to claim that an effective response to government-imposed austerity is to invest in tools like LLMs that were intentionally created to foster that austerity: to deskill, and to replace expensive researchers with graduate degrees. it adds insult to injury to claim that this is a form of fundamental research desperately worth preserving, in an environment where LLMs are literally the only thing getting grant money at the moment, causing many researchers to simply leave fields in which they have developed unique and irreplaceable expertise. none of this is remotely contentious to people who have read a single book.

did you know all production LLMs use regular expressions to filter and censor their outputs, since the token prediction machine is fundamentally incapable of understanding the intricacies necessary to do so? do you know how difficult those filter expressions are to write and how difficult they are to debug? this task is so thankless because the field of parsing and formal languages has starkly diverged from practice for half a century and extant regular expression engines measure only farcical ideals of decontextualized performance, much like the "objective" function which substitutes for intelligence in LLMs.
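to make this concrete, here is a minimal sketch of the kind of regex output filter i'm describing (the blocklist and redaction scheme below are hypothetical, invented purely for illustration, and not taken from any real production system):

```python
import re

# hypothetical blocklist -- invented for illustration, not from any real system
BLOCKLIST = [
    re.compile(r"\bfoo\b", re.IGNORECASE),
]

def filter_output(text: str) -> str:
    """redact any blocklisted pattern from a model's output."""
    for pat in BLOCKLIST:
        text = pat.sub("[redacted]", text)
    return text
```

even this toy shows the fragility: the \b word boundaries let a spaced-out variant like "f o o" slip straight through, while dropping them would instead redact innocent substrings (the classic scunthorpe problem). every pattern in a real filter is a hand-debugged tradeoff like that.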

that's my field, and that's my thesis (i actually have a novel formulation of linguistic complexity and a grammar language with qualities you wouldn't be able to appreciate). research like your project proposal in OP displaces exactly that which holds it up. calling LLM usage the kind of fundamental research under threat in this environment steals valor from those who would actually advance many separate fields at once. reconsider.

@hipsterelectron My experiments are actually not using large language models much at all, but are instead focused on accelerating formalization tasks in languages such as Lean, using collaboration tools such as GitHub as well as more classical AI tools such as subgraph matching. In any event, I am not applying for funding for these projects from the NSF, but rather through a DARPA program https://www.darpa.mil/research/programs/expmath-exponential-mathematics specifically targeted at such projects.
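To give a rough sense of what subgraph matching refers to, here is a toy brute-force sketch; it is purely illustrative and not the actual algorithm used in any of these projects (real matchers use far more sophisticated methods, such as VF2-style algorithms).

```python
from itertools import permutations

def has_subgraph(graph_edges, pattern_edges):
    """Return True if the pattern graph (given as a list of undirected
    edges over hashable node labels) embeds into the larger graph via
    some injective mapping of pattern nodes to graph nodes."""
    g_nodes = sorted({v for e in graph_edges for v in e})
    p_nodes = sorted({v for e in pattern_edges for v in e})
    g_adj = {frozenset(e) for e in graph_edges}
    # Try every injective assignment of pattern nodes to graph nodes.
    for image in permutations(g_nodes, len(p_nodes)):
        mapping = dict(zip(p_nodes, image))
        if all(frozenset((mapping[a], mapping[b])) in g_adj
               for a, b in pattern_edges):
            return True
    return False

# A triangle embeds in a 4-cycle with a chord, but not in a plain 4-cycle.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
triangle = [("a", "b"), ("b", "c"), ("c", "a")]
```

The exponential search here is exactly what practical matchers avoid with pruning heuristics, but the input/output behaviour is the same: given a small pattern, decide whether it occurs inside a larger structure.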

As you note, AI-related research projects are still receiving significant funding; while I did mention my own research as an example of possible impact, my post was aimed more at the broader impact of these funding cuts on the rest of the basic research environment.


@tao thanks for your patient and thoughtful reply. i apologize for wildly misunderstanding your post. you had indeed intended to wield your position to champion innovation from NSF grantees, which i find quite admirable, and exactly what i had hoped for when i began to read your post.

i admire your work and your bravery to advocate for the NSF at this critical time. i won't make this mistake again and i hope you have a great day.