Thursday's #paperOfTheDay is "Tropical Mathematics" from 2009.
I'm currently developing a version of #QuantumFieldTheory called #tropicalFieldTheory . The present article is background on what "tropical" means in #mathematics : The term first appeared in the context of #computerScience in the 1970s, and it was coined in honor of the early work being done in São Paulo, Brazil. The basic idea is to consider a special type of (mathematical) ring: A typical example of a ring would be the real numbers, together with addition and multiplication. The "tropical semiring" consists of the real numbers together with infinity, but "addition" is replaced by "taking the minimum", while "multiplication" is replaced by "addition". This strange object behaves well in many ways. For example, in the usual ring of real numbers one would have
7 + 2*3 = 7 + 6 = 13
in the tropical semiring, the same equation becomes
min{7, 2+3} = min{7, 5} = 5.
The tropical semiring is only a SEMIring because taking the minimum does not always have an inverse: There is no x such that min{x, 5} = 8.
In the following decades, tropical arithmetic has been developed into a full mathematical theory. In particular, one has tropical polynomials, where the conventional addition of monomials is replaced by taking minima. This is exactly what we do in tropical field theory: The #FeynmanIntegral s are integrals over rational functions, and we replace their numerators and denominators by tropical polynomials.
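The min-plus rules above fit in a few lines of Python (a toy illustration of my own, not tied to any particular library):

```python
import math

INF = math.inf  # "zero" of the tropical semiring: min{a, inf} = a

def t_add(a, b):
    """Tropical 'addition' is taking the minimum."""
    return min(a, b)

def t_mul(a, b):
    """Tropical 'multiplication' is ordinary addition."""
    return a + b

# The example from the text: 7 + 2*3 becomes min{7, 2+3}
print(t_add(7, t_mul(2, 3)))  # -> 5

# A tropical polynomial replaces the monomial sum c_0 + c_1*x + c_2*x^2 + ...
# by min over the tropical monomials c_i + i*x:
def t_poly(coeffs, x):
    return min(c + i * x for i, c in enumerate(coeffs))

print(t_poly([7, 2, 0], 3))  # min{7, 2+3, 0+6} -> 5
```
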
Today's article was written before tropical field theory, but it discusses a nice application from #biology : One can compute phylogenetic trees with the help of tropical algebraic geometry.
https://arxiv.org/abs/math/0408099
Tropical Mathematics

These are the notes for the Clay Mathematics Institute Senior Scholar Lecture which was delivered by Bernd Sturmfels in Park City, Utah, on July 22, 2004. The topic of this lecture is the "tropical approach" in mathematics, which has gotten a lot of attention recently in combinatorics, algebraic geometry and related fields. It offers an elementary introduction to this subject, touching upon Arithmetic, Polynomials, Curves, Phylogenetics and Linear Spaces. Each section ends with a suggestion for further research. The bibliography contains numerous references for further reading in this field.

#paperOfTheDay for Wednesday is "Dimensional renormalization: The number of dimensions as a regularizing parameter" from 1972. As the title suggests, this is one of the articles that first introduced dimensional regularization.
In perturbative #QuantumFieldTheory (or statistical physics), one encounters #FeynmanIntegral s which are divergent. These divergences are eventually removed through #renormalization , but in order to even get to that point, one first needs to assign some value to these integrals. This is called regularization. Various methods of regularization are known, but the typical problem is that they destroy symmetries of the theory. Dimensional regularization was a breakthrough for practical computation of Feynman integrals because it respects many symmetries.
The basic idea is to define an integral for non-integer dimension of spacetime. This is done, essentially, by analytic continuation: We know what it means to take a first, second, third etc. derivative of a function, and to integrate it once, twice, thrice etc. If the function is spherically symmetric (i.e. depends only on the radial coordinate), then the "count" of the integrals or derivatives appears as an explicit number in intermediate steps. For example, the volume element in 3-dimensional spherical coordinates is r^2*dr*(angular part), where the exponent "2" reflects the dimension D=2+1=3. Basically, one can insert any number in place of the "2" and declare the result to be the D-dimensional integral. Of course, in reality this is more sophisticated, but the basic idea is very much in this spirit.
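The "replace the exponent" idea can be checked on a Gaussian toy integral: the radial integral of r^(D-1)*exp(-r^2) equals Gamma(D/2)/2 for any real D > 0, integer or not. A quick numerical sketch of my own (not the actual regularization machinery):

```python
import math

def radial_integral(D, r_max=12.0, steps=200_000):
    """Midpoint-rule approximation of the integral of
    r^(D-1) * exp(-r^2) from 0 to infinity."""
    dr = r_max / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr
        total += r ** (D - 1) * math.exp(-r * r) * dr
    return total

# The closed form Gamma(D/2)/2 analytically continues to non-integer D:
for D in (3, 2.5, 4 - 0.1):  # D = 4 - epsilon with epsilon = 0.1
    print(D, radial_integral(D), math.gamma(D / 2) / 2)
```

The numerical integral and the Gamma-function formula agree for all three values, including the non-integer ones.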
https://link.springer.com/article/10.1007/BF02895558
Dimensional renormalization: The number of dimensions as a regularizing parameter - Il Nuovo Cimento B (1971-1996)

We perform an analytic extension of quantum electrodynamics matrix elements as (analytic) functions of the number of dimensions of space (ν). The usual divergences appear as poles for ν integer. The renormalization of those matrix elements (for ν arbitrary) leads to expressions which are free of ultraviolet divergences for ν equal to 4. This shows that ν can be used as an analytic regularizing parameter with advantages over the usual analytic regularization method. In particular, gauge invariance is maintained for any ν.

The #paperOfTheDay is 99 years old: "Winkelvariable und kanonische Transformationen in der Undulationsmechanik" by Fritz London, from 1927.
If classical #physics is formulated in terms of Hamilton's canonical equations, the variables (q,p) initially are position and momentum, but they can be transformed in various ways by what are called "canonical transformations". Usually, one tries to find "action-angle variables", which are such that the transformed momentum becomes a constant (and the corresponding transformed position is then a linear function of time, hence an "angle"). Typical simple examples of this kind are rotationally symmetric problems, where the action-angle variables are built from spherical coordinates, and the transformed (and conserved) momentum is the angular momentum. In particular, the classical action S is the generator of a canonical transformation.
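As a standard worked example (my own illustration, not from the paper): for the one-dimensional harmonic oscillator the action-angle variables can be written in closed form.

```latex
% Harmonic oscillator H = p^2/(2m) + m\omega^2 q^2/2 with energy E.
% The action variable is the phase-space orbit area divided by 2\pi:
I = \frac{1}{2\pi} \oint p \, dq = \frac{E}{\omega},
\qquad H = \omega I .
% Hamilton's equations in the new variables (I, \theta):
\dot{I} = -\frac{\partial H}{\partial \theta} = 0, \qquad
\dot{\theta} = \frac{\partial H}{\partial I} = \omega
\;\Rightarrow\; \theta(t) = \omega t + \theta_0 .
```

The momentum I is conserved and the new "position" θ grows linearly in time, exactly the structure described above.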
The present paper is one of the very early papers of #quantumMechanics , and it deals with the question of how canonical transformations can be realized in quantum mechanics. An important special case is the transformation generated by e^(i/h S). There are two conceptual challenges: Firstly, in Schrödinger's formulation, instead of a "particle position", which is a number, one has a "wave function", which is a function of the spatial coordinate. So, the canonical transformations operate on an infinite-dimensional Hilbert space of functions, instead of an ordinary vector space. Secondly, the momentum p becomes the differential operator -ih d/dq, and the canonical transformation must transform this object. Since these concepts were very new and foreign at that time, the paper includes many clarifying comments.
https://link.springer.com/article/10.1007/BF01400361
Winkelvariable und kanonische Transformationen in der Undulationsmechanik - Zeitschrift für Physik A Hadrons and nuclei

The transfer of the transformation theory of matrix mechanics to Schrödinger's eigenvalue theory leads to a very generalized…

#paperOfTheDay : "Phase transitions for phi^4_2 quantum fields" from 1975. This article is about the field theory with quartic interaction (hence, symmetric under a change of sign of the field variable) in two dimensions. This theory belongs to the same universality class as the Ising model, which is known to have a phase transition. The purpose of the present article is to prove the existence of this phase transition from the perspective of #quantumFieldTheory .
The proof, essentially, proceeds by mapping the field to a lattice model: Introduce small cubes and compute the average field in them. Then, distinguish whether this average is positive or negative, and examine the length of the boundary between regions of positive and negative mean field. This construction yields a rather coarse bound: The true field fluctuates more than the averaged cubes, but for questions of long-range correlation, the cubes are sufficient. An estimate on the possible length of boundaries shows that the probability of two adjacent cubes having opposite sign is rather small, which in turn implies that there is non-vanishing long-range correlation, and hence an ordered phase. https://link.springer.com/article/10.1007/BF01608328
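The block-averaging step described above can be sketched on a toy lattice of random field values (an illustration of the idea only, invented here; the actual proof works with the continuum measure):

```python
import random

def block_signs(field, b):
    """Average a 2D field over b-by-b blocks and keep only the sign,
    as in the coarse-graining step described above."""
    n = len(field)
    signs = []
    for bi in range(0, n, b):
        row = []
        for bj in range(0, n, b):
            avg = sum(field[i][j]
                      for i in range(bi, bi + b)
                      for j in range(bj, bj + b)) / (b * b)
            row.append(1 if avg >= 0 else -1)
        signs.append(row)
    return signs

# A fluctuating field with a slight positive bias, like an ordered phase:
random.seed(0)
field = [[random.gauss(0.5, 1.0) for _ in range(8)] for _ in range(8)]
for row in block_signs(field, 4):
    print(row)
```

Boundaries between +1 and -1 blocks are the "contours" whose total length the proof estimates.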
This #paperOfTheDay is #computerScience : "A polynomial time, numerically stable integer relation algorithm" from 1992. This is the article that introduces the PSLQ algorithm.
Given an n-component vector of real numbers x=(x_1, ..., x_n), an integer relation is a vector of integers (m_1, ..., m_n) such that m_1*x_1 + ... + m_n*x_n = 0. Phrased differently, this means that one of the entries of x is a linear combination of the other entries, with rational coefficients. The task is to either find this integer vector m, or to establish that no such vector exists with entries below a certain size (since the numbers x are given to finite floating-point precision, it is always possible to find some vector m with gigantic entries: Say the x have 100 digits of precision, then multiply x by 10^100, and the resulting numbers will be integers within the given precision).
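What an integer relation is can be seen with a brute-force search (my own illustration; this is emphatically not PSLQ, which does this in polynomial time and with guaranteed exclusion bounds):

```python
import itertools
import math

def naive_relation(xs, bound=3, tol=1e-9):
    """Brute-force search for a small integer relation m.x = 0.
    Exponential in len(xs) -- illustration only."""
    rng = range(-bound, bound + 1)
    for m in itertools.product(rng, repeat=len(xs)):
        if any(m) and abs(sum(mi * xi for mi, xi in zip(m, xs))) < tol:
            return m
    return None

# sqrt(2) satisfies x^2 - 2 = 0, i.e. the relation (-2, 0, 1)
# holds on the vector (1, sqrt(2), sqrt(2)^2):
x = math.sqrt(2)
print(naive_relation([1.0, x, x * x]))  # -> (-2, 0, 1)
```
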
Before PSLQ, there were already algorithms for the same task, most notably LLL and HJLS. The great innovation of PSLQ is the "numerically stable" part: It does not require many more digits for internal computations than the input data has, and it is reliable in determining that no relation exists below a given threshold. This also makes it effectively faster than previous algorithms, because it can run at lower floating-point precision.
PSLQ has important applications in perturbative #QuantumFieldTheory and broader #physics : Often one has complicated integrals, where one knows that the result is some rational linear combination of a finite number of known constants (e.g. pi, e, values of the zeta function, sqrt(2), log(2), etc.). Then, one can solve the integral numerically, and use PSLQ to find the linear combination. https://www.davidhbailey.com/dhbpapers/pslq.pdf
The #paperOfTheDay is "Quantum Ostrogradsky theorem" from 2020.
Classical #physics is based on functions such as the Lagrangian, action, or Hamiltonian (=energy), which depend on at most the first time derivative of the quantity in question (such as a field or a particle position). The classical equations of motion -- the Hamilton canonical equations -- are first-order differential equations in the variables "position" and "momentum", and the time evolution is determined by giving the position and the velocity at an initial time.
Structurally, it would be easy to write down a Lagrangian that depends on the second time derivative. If one then sets up first-order canonical equations, there is a second canonical momentum and a second canonical position for each variable. This would be a bit unintuitive (it would allow infinitely many distinct future time evolutions if only the position and velocity of a particle are given), but why not? The problem is that the so-obtained Hamiltonian is unbounded from below, and such a system has no ground state. This is Ostrogradsky's theorem: A classical system with higher-order time derivatives is inconsistent.
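In formulas (a textbook sketch of Ostrogradsky's construction, not taken from the present paper): for a Lagrangian L(q, q', q'') nondegenerate in q'', one introduces two canonical pairs, and the resulting Hamiltonian comes out linear in one of the momenta.

```latex
% Two canonical pairs for L(q, \dot q, \ddot q):
Q_1 = q, \qquad Q_2 = \dot q, \qquad
P_1 = \frac{\partial L}{\partial \dot q}
      - \frac{d}{dt}\frac{\partial L}{\partial \ddot q}, \qquad
P_2 = \frac{\partial L}{\partial \ddot q}.
% Nondegeneracy lets one solve for \ddot q = a(Q_1, Q_2, P_2); then
H = P_1 Q_2 + P_2 \, a(Q_1, Q_2, P_2)
    - L\bigl(Q_1, Q_2, a(Q_1, Q_2, P_2)\bigr).
% H is linear in P_1, hence unbounded from below.
```

Since P_1 appears only linearly, H can be made arbitrarily negative, which is exactly the absence of a ground state described above.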
The present article proves that the same problem persists in quantum theory if one follows the usual path of canonical quantization. This is perhaps not surprising, but also not trivial: For example, the 1/r potential of the hydrogen atom has no classical minimum, but it does have a finite-energy ground state in #quantum mechanics. Such an effect is absent in the Ostrogradsky case, where the potential decays only linearly, so that quantum fluctuations have no chance of making it bounded. https://link.springer.com/article/10.1007/JHEP09(2020)032
Quantum Ostrogradsky theorem - Journal of High Energy Physics

The Ostrogradsky theorem states that any classical Lagrangian that contains time derivatives higher than the first order and is nondegenerate with respect to the highest-order derivatives leads to an unbounded Hamiltonian which linearly depends on the canonical momenta. Recently, the original theorem has been generalized to nondegeneracy with respect to non-highest-order derivatives. These theorems have been playing a central role in the construction of sensible higher-derivative theories. We explore the quantization of such non-degenerate theories, and prove that the Hamiltonian is still unbounded at the level of quantum field theory.

#paperOfTheDay for Friday is "Unitarity violation at the Wilson-Fisher fixed point in 4-epsilon dimensions" from 2016.
Statistical #physics and #quantumFieldTheory usually involve two parameters that physically only allow integer values: The dimension of space(time), and the dimension of internal symmetry groups (such as the SU(3) of QCD in the standard model, or the O(N) symmetry of scalar fields). On the other hand, it is routine to formally assign non-integer values to them: Dimensional regularization sets D=4-epsilon, where epsilon is not assumed to be integer, and the #FeynmanDiagram s of O(N)-symmetric theories are polynomials in N, hence allowing any value.
The present article points out that even a free, 1-component scalar field theory contains states with negative norm if one lets D be non-integer. The argument is surprisingly simple: Consider operators which are built from spacetime derivatives d_mu acting on fields, and in particular those operators which are fully antisymmetric in their indices. In D integer dimensions, there are only D coordinate directions, and hence an operator with n>D derivatives cannot be fully antisymmetric: The antisymmetric operators vanish when n>D for integer D. This does not hold for non-integer D, where the operator is nonzero and merely has zeros at the integer values of D. One can then see, by explicit calculation, that the 2-point functions of such operators (in the free theory!) flip sign at integer D, hence they are sometimes negative, hence the theory is not unitary.
This shows that the extension to non-integer D is very subtle; similar trouble exists for the Dirac matrices gamma_mu.
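The counting step of the argument can be checked directly: a fully antisymmetric combination of n vectors is (up to normalization) a determinant, and its Gram determinant vanishes whenever the vectors live in fewer than n dimensions. A small sketch of my own:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Three vectors confined to D = 2 dimensions: the Gram determinant
# vanishes, mirroring the vanishing of fully antisymmetric operators
# with n = 3 > D = 2 indices.
vs = [[1.0, 2.0], [3.0, -1.0], [0.5, 4.0]]
gram = [[dot(u, v) for v in vs] for u in vs]
print(det3(gram))  # -> 0.0 (up to rounding)
```

For non-integer D there is no such vanishing mechanism, which is exactly where the negative norms come from.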
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.93.125025
Yesterday's #paperOfTheDay was "Critical Equation of State from the Average Action" from 1996. This paper is one of the first applications of the Wetterich equation: It presents a numerical solution of the local potential approximation for the vector model, which is physically interesting in 3 dimensions.
In conventional perturbative #QuantumFieldTheory, one would probably start in 4-2epsilon dimensions, and compute a power series in epsilon with #FeynmanDiagram s, to then arrive at a 3-dimensional theory with epsilon=1/2. The Feynman diagrams have 4-valent vertices, which is thought of as a microscopic point-like "collision" between 4 "particles".
In the functional/statistical #physics perspective on field theory, the same situation is interpreted quite differently. One is in 3 dimensions throughout, and considers a system (e.g. lattice) where the constituents do not move. A phi^4-term is then a potential, i.e. every individual particle oscillates in its own local quartic potential. Additionally, the particles are coupled to neighbours. This is the "microscopic" theory, represented by the classical action. At longer distances, one effectively merges many of the lattice sites, and the so-obtained average quantities have all sorts of complicated interactions. The present paper uses the "local potential" approximation, which says that the only coupling to neighbours is still an elastic next-neighbour interaction (i.e. standard kinetic term p^2 in momentum space), and all that changes is that the particles are now in a more complicated potential. In particular, this might have non-trivial minima, which reflects a broken symmetry. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.77.873
#paperOfTheDay is "Lips: p-adic and singular phase space" from 2023. This article is quite different from the ones I usually read, in that it is not about a computation, but rather it describes a piece of software. It is normal for research projects in theoretical #physics to involve large amounts of programming and computer use. Almost all of these computations rely on purpose-built #scientificSoftware , and there is a great number of open-source libraries for all kinds of specialist physics computations.
The present article describes the package "Lips" (short for "Lorentz invariant phase space"), whose primary purpose is to generate valid sets of momenta for scattering amplitudes: A scattering amplitude is a function of the masses and momenta of a set of particles, and it usually imposes various constraints on these (e.g. masses should be positive, momenta should be conserved, individual momenta should square to given values, etc.). This makes it a non-trivial task to produce concrete numerical values of momenta that satisfy all the constraints. Beyond that, one might also require specific types of numbers (rational, complex, p-adic, etc.), or represent the momenta as spinors. The package can also do further calculations that arise in this context, such as evaluating spinor-helicity expressions.
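This is not the lips API, but the constraint problem itself can be illustrated by building a 2 -> 2 massless configuration by hand (all names below are my own; a sketch, not the package's method):

```python
import math

def massless(E, theta, phi):
    """Four-momentum (E, px, py, pz) with p.p = E^2 - |p|^2 = 0."""
    return [E,
            E * math.sin(theta) * math.cos(phi),
            E * math.sin(theta) * math.sin(phi),
            E * math.cos(theta)]

def minkowski_sq(p):
    return p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2

# Centre-of-mass frame: incoming particles along z, outgoing ones
# back-to-back at angle theta, so momentum conservation holds by
# construction and every momentum squares to zero.
E, theta = 1.0, 0.7
p1 = massless(E, 0.0, 0.0)
p2 = massless(E, math.pi, 0.0)
p3 = massless(E, theta, 0.0)
p4 = massless(E, math.pi - theta, math.pi)

total = [p1[i] + p2[i] - p3[i] - p4[i] for i in range(4)]
print([round(minkowski_sq(p), 12) for p in (p1, p2, p3, p4)], total)
```

Already here the constraints (masslessness plus conservation) fix most of the freedom; lips automates this for many particles and over exotic number fields.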
https://arxiv.org/abs/2305.14075
Lips: p-adic and singular phase space

I present new features of the open-source Python package lips, which leverages the newly developed pyadic and syngular libraries. These developments enable the generation and manipulation of massless phase-space configurations beyond real kinematics, defined in terms of four-momenta or Weyl spinors, not only over complex numbers ($\mathbb{C}$), but now also over finite fields ($\mathbb{F}_p$) and p-adic numbers ($\mathbb{Q}_p$). The package also offers tools to evaluate arbitrary spinor-helicity expressions in any of these fields. Furthermore, using the algebraic-geometry submodule, which utilizes Singular [1] through the Python interface syngular, one can define and manipulate ideals in spinor variables, enabling the identification of irreducible surfaces where scattering amplitudes have well-defined zeros and poles. As an example application, I demonstrate how to infer valid partial-fraction decompositions from numerical evaluations.

#paperOfTheDay "Non-Wilsonian ultraviolet completion via transseries" from 2021. A #quantumFieldTheory with a marginally renormalizable coupling, such as the standard model of particle #physics , usually leads to a power-series solution that is divergent in two different ways: The number of #FeynmanDiagram s grows too fast, and the renormalized value of individual diagrams grows too fast. The latter effect is called a #renormalon , and it can also be found in many other frameworks of QFT.
The present paper uses an analysis based on the renormalization group equation, truncated to low-order terms, to argue that the presence of the renormalon implies an ambiguity in the resummation. The technical machinery for this is called "resurgence", but the basic mechanism is quite intuitive: The Borel resummation is an integral along the positive real line, and the renormalon is an algebraic singularity on that line, hence there is an ambiguity in how (on which side, how often, etc.) one passes the singularity. The paper arrives at two possible, mutually exclusive, interpretations of this finding.
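The flavor of this can be seen in the model series sum of n!*x^n (a toy of my own, not the paper's calculation): its Borel transform sum of t^n = 1/(1-t) has a pole at t=1, right on the integration contour, and the terms of the original series shrink only until n is about 1/x before blowing up.

```python
import math

x = 0.1
terms = [math.factorial(n) * x ** n for n in range(25)]

# Terms decrease until n ~ 1/x = 10, then grow without bound; the
# minimal term is of order exp(-1/x) up to prefactors, which is the
# size of the resummation ambiguity.
n_min = min(range(len(terms)), key=lambda n: terms[n])
print(n_min, terms[n_min])

# Borel transform: sum(t^n) = 1/(1-t), with a pole at t = 1 sitting
# on the contour of the Borel integral -- the source of the ambiguity.
```

Truncating at the minimal term is the best one can do with the series alone; the exponentially small leftover is exactly what the transseries machinery tries to pin down.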
I find these considerations really exciting, and they are closely related to my own work. However, I think it is fair to say that the many papers that have been written about renormalon chain resummation often raise more new questions than they answer, and at least to me the "big picture" of how this is supposed to work beyond leading-order is largely unclear. https://doi.org/10.1142/S0217751X21500160