AI accelerates mathematical discovery through the AI for Math project. Google DeepMind is partnering with mathematicians to develop technology that helps solve complex problems and discover new theorems, opening an era of more efficient mathematics research powered by artificial intelligence. #AI #Mathematics #DeepMind #ScientificDiscovery #AIForMath

https://www.reddit.com/r/singularity/comments/1oj97kc/accelerating_discovery_with_the_ai_for_math/

Congratulations to the EPFL team selected for the AI for Math Fund by Renaissance Philanthropy with support from XTX Markets! 🎉

Their project, Document-Level Autoformalization, uses AI to bridge human and machine understanding of mathematics.

👉 Learn more: https://ai.epfl.ch/advancing-mathematics-with-ai/
#AIforMath

MIT researchers just snagged 'AI for Math' grants to turbocharge mathematical discovery! They're bridging the gap between massive databases like LMFDB and formal proof systems such as Lean's Mathlib. This isn't just about faster calcs; it's about making unformalized knowledge accessible to AI for new breakthroughs.

#AIforMath #Mathematics #TechNews #Research #LLMs
https://news.mit.edu/2025/ai-for-math-grants-accelerate-mathematical-discovery-0922

Will AI make us better mathematicians, or just better at validating AI's math?

MIT affiliates win AI for Math grants to accelerate mathematical discovery

An MIT-based team will use Renaissance Philanthropy and XTX Markets’ AI for Math grant to accelerate mathematical discovery. The team will use AI to integrate LMFDB and mathlib for automated theorem proving.

MIT News | Massachusetts Institute of Technology
Readings shared June 19, 2025

The readings shared in Bluesky on 19 June 2025 are:
Galois energy games (in Isabelle/HOL). ~ Caroline Lemke. #ITP #IsabelleHOL
Chomsky-Schützenberger representation theorem (in Isabelle/HOL). ~ Moritz

Vestigium
Reviving DSP for advanced theorem proving in the era of reasoning models. ~ Chenrui Cao, Liangcheng Song, Zenan Li, Xinyi Le, Xian Zhang, Hui Xue, Fan Yang. https://arxiv.org/abs/2506.11487v1 #AI #Math #AIforMath #LLMs #ITP #LeanProver

Recent advancements, such as DeepSeek-Prover-V2-671B and Kimina-Prover-Preview-72B, demonstrate a prevailing trend in leveraging reinforcement learning (RL)-based large-scale training for automated theorem proving. Surprisingly, we discover that even without any training, careful neuro-symbolic coordination of existing off-the-shelf reasoning models and tactic step provers can achieve comparable performance. This paper introduces DSP+, an improved version of the Draft, Sketch, and Prove framework, featuring a fine-grained and integrated neuro-symbolic enhancement for each phase: (1) In the draft phase, we prompt reasoning models to generate concise natural-language subgoals to benefit the sketch phase, removing thinking tokens and references to human-written proofs; (2) In the sketch phase, subgoals are autoformalized with hypotheses to benefit the proving phase, and sketch lines containing syntactic errors are masked according to predefined rules; (3) In the proving phase, we tightly integrate symbolic search methods like Aesop with step provers to establish proofs for the sketch subgoals. Experimental results show that, without any additional model training or fine-tuning, DSP+ solves 80.7%, 32.8%, and 24 out of 644 problems from miniF2F, ProofNet, and PutnamBench, respectively, while requiring a smaller budget than state-of-the-art systems. DSP+ proves imo_2019_p1, an IMO problem in miniF2F that is not solved by any prior work. Additionally, DSP+ generates proof patterns comprehensible by human experts, facilitating the identification of formalization errors; for example, eight wrongly formalized statements in miniF2F are discovered. Our results highlight the potential of classical reasoning patterns alongside RL-based training. All components will be open-sourced.
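The draft–sketch–prove loop described in the abstract can be outlined as a minimal Python sketch. Everything here is illustrative, not taken from the DSP+ codebase: the function names, the `?`-based masking rule, and the `prover_stub` tactic are all hypothetical stand-ins for the real LLM and Lean/Aesop components.

```python
def draft(problem: str) -> list[str]:
    """Draft phase: in DSP+ a reasoning model emits concise natural-language
    subgoals (thinking tokens stripped). Stand-in: split on semicolons."""
    return [s.strip() for s in problem.split(";") if s.strip()]

def sketch(subgoals: list[str]) -> list[str]:
    """Sketch phase: subgoals are autoformalized with hypotheses, and lines
    with syntactic errors are masked by rule. Stand-in rule: drop any line
    containing a '?' placeholder."""
    formal = [f"have h{i} : {g} := by prover_stub" for i, g in enumerate(subgoals)]
    return [line for line in formal if "?" not in line]

def prove(sketch_lines: list[str]) -> bool:
    """Proving phase: a step prover plus symbolic search (e.g. Aesop) would
    discharge each subgoal. Stand-in: accept every well-formed 'have' line."""
    return all(line.startswith("have") for line in sketch_lines)

def dsp_plus(problem: str) -> bool:
    """Run the three phases end to end on one problem statement."""
    return prove(sketch(draft(problem)))
```

The point of the structure, per the paper, is that each phase cleans up the previous one's output before handing it on, so no RL training of the underlying models is required.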

arXiv.org
Review of «Can A.I. quicken the pace of math discovery?»

The article «Can A.I. quicken the pace of math discovery?» presents DARPA's "Exponentiating Mathematics" initiative. Its goal is to develop an AI co-author to accelerate research and

Vestigium
Readings shared June 15, 2025

The readings shared in Bluesky on 15 June 2025 are:
BiCoq: Bigraphs formalisation with Coq. ~ Cécile Marcon et al. #ITP #CoqProver #Rocq #Math
Introduction to competitive programming in Haskell. ~ Br

Vestigium
Review of «Hardest problems in mathematics, physics & the future of AI»

In the interview "Hardest problems in mathematics, physics & the future of AI", Terence Tao shares his reflections on various fundamental unsolved problems in analysis and number theory

Vestigium
Readings shared June 13, 2025

The readings shared in Bluesky on 13 June 2025 are:
Formalizing zeta and L-functions in Lean. ~ David Loeffler, Michael Stoll. #ITP #LeanProver #Math
Formal verification of relational algebra transfor

Vestigium
The abc conjecture almost always — autoformalized. ~ Jesse Michael Han et al. https://github.com/morph-labs/lean-abc-true-almost-always #Autoformalization #AIforMath #ITP #LeanProver
GitHub - morph-labs/lean-abc-true-almost-always


GitHub