Thursday's #paperOfTheDay is "Tropical Mathematics" from 2009.
I'm currently developing a version of #QuantumFieldTheory called #tropicalFieldTheory . The present article is background on what "tropical" means in #mathematics : The term first appeared in the context of #computerScience in the 1970s, and it was coined in honor of early work done in São Paulo, Brazil. The basic idea is to consider a special type of (mathematical) ring: A typical example of a ring would be the real numbers, together with addition and multiplication. Now, the "tropical semiring" consists of the real numbers together with infinity, but "addition" is replaced by "taking the minimum", while "multiplication" is replaced by "addition". This strange object behaves well in many ways. For example, in the usual ring of real numbers one would have
7 + 2*3 = 7+6 = 13
in the tropical semiring, the same equation becomes
min{ 7, 2+3 } = min{ 7, 5 } = 5.
The tropical semiring is only SEMI because taking the minimum does not always have an inverse: There is no x such that min{ x, 5 } = 8.
In the following decades, tropical arithmetic has been developed into a full mathematical theory. In particular, one has tropical polynomials, where the conventional addition of monomials is replaced by taking minima. This is exactly what we do in tropical field theory: The #FeynmanIntegral s are integrals over rational functions, and we replace their numerators and denominators by tropical polynomials.
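To make the min-plus rules concrete, here is a tiny Python sketch (my own illustration, not from the paper):

```python
# Tropical (min-plus) semiring: "addition" is min, "multiplication" is +.
# The additive neutral element is +infinity, the multiplicative one is 0.
def t_add(a, b):
    return min(a, b)

def t_mul(a, b):
    return a + b

# The example from the text: 7 + 2*3 becomes min{7, 2+3}.
print(t_add(7, t_mul(2, 3)))  # -> 5

# A tropical polynomial replaces the sum of monomials by a minimum:
# classical p(x) = c0 + c1*x + c2*x^2 turns into
# min{ c0, c1 + x, c2 + 2x } -- a piecewise-linear function of x.
def tropical_poly(coeffs, x):
    return min(c + k * x for k, c in enumerate(coeffs))

print(tropical_poly([7, 2, 0], 3))  # min{7, 2+3, 0+6} = 5
```

Note that a tropical polynomial in one variable is a concave, piecewise-linear function; its "roots" are the kinks where the minimum is attained twice, which is the entry point into tropical geometry.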
Today's article was written before tropical field theory, but it discusses a nice application from #biology : One can compute phylogenetic trees with the help of tropical algebraic geometry.
https://arxiv.org/abs/math/0408099
Tropical Mathematics

These are the notes for the Clay Mathematics Institute Senior Scholar Lecture which was delivered by Bernd Sturmfels in Park City, Utah, on July 22, 2004. The topic of this lecture is the ``tropical approach'' in mathematics, which has gotten a lot of attention recently in combinatorics, algebraic geometry and related fields. It offers an elementary introduction to this subject, touching upon Arithmetic, Polynomials, Curves, Phylogenetics and Linear Spaces. Each section ends with a suggestion for further research. The bibliography contains numerous references for further reading in this field.

arXiv.org
#paperOfTheDay for Wednesday is "Dimensional renormalization: The number of dimensions as a regularizing parameter" from 1972. As the title suggests, this is one of the articles that first introduced dimensional regularization.
In perturbative #QuantumFieldTheory (or statistical physics), one encounters #FeynmanIntegral s which are divergent. These divergences are eventually removed through #renormalization , but in order to even get to that point, one first needs to assign some value to these integrals. This is called regularization. Various methods of regularization are known, but the typical problem is that they destroy symmetries of the theory. Dimensional regularization was a breakthrough for practical computation of Feynman integrals because it respects many symmetries.
The basic idea is to define an integral for non-integer dimension of spacetime. This is done, essentially, by analytic continuation: We know what it means to take a first, second, third etc. derivative of a function, and to integrate it once, twice, thrice etc. If the function is spherically symmetric (i.e. depends only on the radius in spherical coordinates), then the "count" of the integrals or derivatives appears as an explicit number in intermediate steps. For example, the volume element in 3-dimensional spherical coordinates is r^2*dr*(angular part), where the exponent "2" represents dimension D=2+1=3. Basically, you could insert any number in place of the "2", and declare this to be the D-dimensional integral. Of course, in reality this is more sophisticated, but the basic idea is very much in this spirit.
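As a toy version of this (my own sketch; the cutoff and step count are arbitrary choices): the integral of a spherically symmetric function over "D-dimensional" space reduces to S_D times a radial integral with measure r^(D-1) dr, where S_D = 2 pi^(D/2)/Gamma(D/2) is the surface of the unit sphere, and the Gamma function happily accepts non-integer D.

```python
import math

def sphere_area(D):
    """Surface of the unit sphere in D dimensions, S_D = 2 pi^(D/2) / Gamma(D/2).
    The Gamma function makes sense for any real D > 0 -- this is the hook
    that dimensional regularization exploits."""
    return 2 * math.pi ** (D / 2) / math.gamma(D / 2)

def gaussian_integral(D, r_max=10.0, steps=100_000):
    """Integrate exp(-r^2) over 'D-dimensional' space by using the radial
    measure S_D * r^(D-1) dr, with D any positive real (midpoint rule).
    The exact answer is pi^(D/2)."""
    h = r_max / steps
    total = 0.0
    for i in range(1, steps + 1):
        r = (i - 0.5) * h
        total += r ** (D - 1) * math.exp(-r * r)
    return sphere_area(D) * total * h

for D in (1, 2, 3, 3.5):   # non-integer D works just as well as integer D
    print(D, gaussian_integral(D), math.pi ** (D / 2))
```

For the integer values this reproduces the familiar Gaussian integrals, and D=3.5 simply interpolates between them.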
https://link.springer.com/article/10.1007/BF02895558
Dimensional renormalization: The number of dimensions as a regularizing parameter - Il Nuovo Cimento B (1971-1996)

We perform an analytic extension of quantum electrodynamics matrix elements as (analytic) functions of the number of dimensions of space (ν). The usual divergences appear as poles for ν integer. The renormalization of those matrix elements (for ν arbitrary) leads to expressions which are free of ultraviolet divergences for ν equal to 4. This shows that ν can be used as an analytic regularizing parameter with advantages over the usual analytic regularization method. In particular, gauge invariance is maintained for any ν.

SpringerLink
#paperOfTheDay : "Phase transitions for phi^4_2 quantum fields" from 1975. This article is about the field theory with quartic interaction (hence, symmetric under a change of sign of the field variable) in two dimensions. This theory has the same universality class as the Ising model, which is known to have a phase transition. The purpose of the present article is to prove the existence of this phase transition from the perspective of #quantumFieldTheory .
The proof, essentially, proceeds by mapping the field to a lattice model: Introduce small cubes and compute the average field in them. Then, distinguish whether this average is positive or negative, and examine the length of the boundary between areas of positive and negative mean spin. This construction yields a rather coarse bound: The true field fluctuates more than the averaged cubes, but for questions of long-range correlation, the cubes will be sufficient. An estimate on the possible length of boundaries shows that on average, the probability of two adjacent cubes having opposite sign is bounded to be rather small, which in turn implies that there is non-vanishing long-range correlation, and hence an ordered phase. https://link.springer.com/article/10.1007/BF01608328
This #paperOfTheDay is #computerScience : "A polynomial time, numerically stable integer relation algorithm" from 1992. This is the article that introduces the PSLQ algorithm.
Given an n-component vector of real numbers x=(x_1, ..., x_n), an integer relation is a vector of integers (m_1, ..., m_n) such that m_1*x_1 + ... + m_n*x_n = 0. Phrased differently, this means that one of the entries of x is a linear combination of the other entries, with rational coefficients. The task is to either find this integer vector m, or to establish that no such vector exists with entries below a certain size (since the numbers x are given to finite floating-point precision, it is always possible to find some vector m with gigantic entries: say the x_i are given to 100 digits, then multiply x by 10^100, and the resulting numbers will be integers within the given precision).
Before PSLQ, there had been algorithms for the same task, most notably LLL and HJLS. The great innovation of PSLQ is the "numerically stable" part: It does not require many more digits for internal computations than the input data has, and it is reliable in determining that no relation exists below a given threshold. This also makes it effectively faster than previous algorithms, because it can run at lower floating-point accuracy.
PSLQ has important applications in perturbative #QuantumFieldTheory and broader #physics : Often one has complicated integrals, where one knows that the result is some rational linear combination of a finite number of transcendentals (e.g. pi, e, values of the zeta function, sqrt(2), log(2), etc). Then, one can solve the integral numerically and use PSLQ to find the linear combination. https://www.davidhbailey.com/dhbpapers/pslq.pdf
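For illustration only, here is what the problem looks like (my own brute-force toy, emphatically NOT the PSLQ algorithm -- PSLQ's point is to do this efficiently and stably for large bounds and high precision): finding the integer relation satisfied by powers of the golden ratio.

```python
import itertools
import math

def integer_relation(x, bound=5, tol=1e-9):
    """Brute-force search for a small integer vector m with m . x ~ 0.
    Among all candidates within the bound, keep the one with the smallest
    sum of |entries|.  This is exponentially slow in len(x) and bound;
    PSLQ solves the same problem in polynomial time."""
    best = None
    for m in itertools.product(range(-bound, bound + 1), repeat=len(x)):
        if not any(m):
            continue  # skip the trivial all-zero vector
        if abs(sum(mi * xi for mi, xi in zip(m, x))) < tol:
            if best is None or sum(abs(c) for c in m) < sum(abs(c) for c in best):
                best = m
    return best

phi = (1 + math.sqrt(5)) / 2          # golden ratio: phi^2 = phi + 1
rel = integer_relation([1.0, phi, phi * phi])
print(rel)  # a relation proportional to (1, 1, -1), since 1 + phi - phi^2 = 0
```

In practice one would call an actual PSLQ implementation (e.g. the one in the mpmath library) on high-precision values of the transcendentals.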
#paperOfTheDay for Friday is "Unitarity violation at the Wilson-Fisher fixed point in 4-epsilon dimensions" from 2016.
Statistical #physics and #quantumFieldTheory usually involve two parameters that physically only allow integer values: The dimension of space(time), and the dimensions of internal symmetry groups (such as the SU(3) of QCD in the standard model, or the O(N) symmetry of scalar fields). On the other hand, it is routine to formally assign non-integer values to them. Dimensional regularization sets D=4-epsilon, where epsilon is not assumed to be integer, and the #FeynmanDiagram s of O(N) symmetric theories are polynomials in N, hence allowing any value.
The present article points out that even a free, 1-component scalar field theory contains states with negative norm if one lets D be non-integer. The argument is surprisingly simple: Consider operators which are built from spacetime-derivatives d_mu acting on fields. In particular, we are interested in those operators which are antisymmetric in their indices. However, in D integer dimensions, there are only D coordinate directions, and hence an operator with n>D derivatives cannot be fully antisymmetric. Hence, the antisymmetric operators vanish when n>D for integer D. This does not hold for non-integer D, so that the operator, viewed as a function of D, has zeros at all the integers. One can then see, by explicit calculation, that the 2-point functions of such operators (in the free theory!) flip sign at integer D, hence they are sometimes negative, hence the theory is not unitary.
This shows that the extension to non-integer D is very subtle; similar trouble exists for the Dirac matrices gamma_mu.
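One can check the pigeonhole step of the argument numerically; this toy code (mine, not from the paper) builds the fully antisymmetrized tensor of n vectors in D integer dimensions and shows that it vanishes identically once n > D:

```python
import itertools
import math

def perm_sign(p):
    # sign of a permutation, computed via the number of inversions
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def antisym_max(vectors, D):
    """Largest |component| of the fully antisymmetrized tensor
    A[i1...in] = sum_s sign(s) * v_{s(1)}[i1] * ... * v_{s(n)}[in]
    built from n vectors in D (integer) dimensions.  The components are
    the n x n minors of the coordinate matrix; for n > D every index
    tuple must repeat an index, so A vanishes identically."""
    n = len(vectors)
    biggest = 0.0
    for idx in itertools.product(range(D), repeat=n):
        comp = sum(perm_sign(p) * math.prod(vectors[p[k]][idx[k]] for k in range(n))
                   for p in itertools.permutations(range(n)))
        biggest = max(biggest, abs(comp))
    return biggest

vecs = [[1, 2, 3], [0, 1, 4], [5, 6, 0], [1, 1, 1]]
print(antisym_max(vecs[:3], 3))  # 3 vectors in D=3: nonzero (the 3x3 determinant)
print(antisym_max(vecs, 3))      # 4 vectors in D=3: every component is zero
```

For non-integer D there is no such index-counting obstruction, which is exactly why the analytically continued correlators can fail to vanish between the integers.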
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.93.125025
Yesterday's #paperOfTheDay was "Critical Equation of State from the Average Action" from 1996. This paper is one of the first applications of the Wetterich equation: A numerical solution of the local potential approximation for the vector model. The vector model is physically interesting in 3 dimensions.
In conventional perturbative #QuantumFieldTheory, one would probably start in 4-2epsilon dimensions, and compute a power series in epsilon with #FeynmanDiagram s, to then arrive at a 3-dimensional theory with epsilon=1/2. The Feynman diagrams have 4-valent vertices, each of which is thought of as a microscopic point-like "collision" between 4 "particles".
In the functional/statistical #physics perspective on field theory, the same situation is interpreted quite differently. One is in 3 dimensions throughout, and considers a system (e.g. a lattice) where the constituents do not move. A phi^4-term is then a potential, i.e. every individual particle oscillates in its own local quartic potential. Additionally, the particles are coupled to their neighbours. This is the "microscopic" theory, represented by the classical action. At longer distances, one effectively merges many of the lattice sites, and the resulting averaged quantities have all sorts of complicated interactions. The present paper uses the "local potential" approximation, which says that the only coupling to neighbours remains an elastic nearest-neighbour interaction (i.e. the standard kinetic term p^2 in momentum space), and all that changes is that the particles now sit in a more complicated potential. In particular, this potential might have non-trivial minima, which signals a broken symmetry. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.77.873
#paperOfTheDay "Non-Wilsonian ultraviolet completion via transseries" from 2021. A #quantumFieldTheory with marginally renormalizable coupling, such as the standard model of particle #physics , usually leads to a power series solution that is divergent in two different ways: The number of #FeynmanDiagram s grows too fast, and the renormalized value of individual diagrams grows too fast. The latter is called #renormalon , and it can also be found in many other frameworks of QFT.
The present paper uses an analysis based on the renormalization group equation, truncated to low-order terms, to argue that the presence of the renormalon implies an ambiguity in the resummation. The technical machinery for this is called "resurgence", but the basic mechanism is really intuitive: The Borel resummation is an integral along the positive real line, the renormalon is an algebraic singularity on that line, hence there is an ambiguity of which side (and how often, etc.) one passes the singularity on. The paper arrives at two possible, mutually exclusive, interpretations of this finding.
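The mechanism can be seen in a self-contained toy computation (my own sketch; the model series, contour angle and cutoffs are chosen purely for illustration): the divergent series sum_n n! x^n has Borel transform 1/(1-t), with a renormalon-like pole at t=1 on the integration axis. The two "lateral" Borel sums, taken along rays slightly above or below the axis, differ by the exponentially small, non-perturbative residue contribution 2*pi*exp(-1/x)/x.

```python
import cmath
import math

def lateral_borel_sum(x, theta, s_max=40.0, steps=80_000):
    """Lateral Borel sum of the model series sum_n n! x^n, whose Borel
    transform 1/(1 - t) has a pole at t = 1 on the positive real axis.
    The contour is the ray t = s * exp(i*theta) (midpoint rule); a small
    theta > 0 passes above the pole, theta < 0 below it."""
    h = s_max / steps
    phase = cmath.exp(1j * theta)
    total = 0j
    for i in range(1, steps + 1):
        t = (i - 0.5) * h * phase
        total += cmath.exp(-t / x) / (1.0 - t)
    return total * phase * h / x

x = 0.2
above = lateral_borel_sum(x, +0.05)
below = lateral_borel_sum(x, -0.05)
ambiguity = (above - below).imag
# the two lateral sums agree on the real part; their imaginary parts
# differ by the residue contribution 2*pi*exp(-1/x)/x:
print(ambiguity, 2 * math.pi * math.exp(-1 / x) / x)
```

Note that the ambiguity is invisible at every order of the power series in x, which is what makes renormalons genuinely non-perturbative.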
I find these considerations really exciting, and they are closely related to my own work. However, I think it is fair to say that the many papers that have been written about renormalon chain resummation often raise more new questions than they answer, and at least to me the "big picture" of how this is supposed to work beyond leading-order is largely unclear. https://doi.org/10.1142/S0217751X21500160
#paperOfTheDay is "Effective field equations for expectation values" from 1986. Scattering in #quantumFieldTheory is usually defined as a time evolution from some infinite-past to infinite-future state, both of which are plane waves (=#particles on straight trajectories). In this setup, a natural definition for an "expectation value" of a field variable is the expectation between these states, i.e. <in | phi(x) | out> . The present paper introduces another class of expectation values, of the form <in | phi(x) | in>, and derives various equations for them. These equations are different from, but structurally equivalent to, the usual ones (e.g. if one uses perturbation theory, there still are the usual #FeynmanDiagram s, but one uses a retarded propagator in place of a Feynman propagator). The crucial difference is the interpretation of these quantities: The paper contains a nice example of a system where the |in> and |out> states have different plane wave bases (i.e. a harmonic oscillator whose spring constant changes over time). In that case, it can be hard to interpret the conventional expectation values, whereas the new ones always refer to the |in> plane wave basis. Also, the conventional setup requires boundary conditions at both the past and the future to compute the evolution of the mean field, while the new setup can compute the time evolution from just initial conditions in the past, which is more natural in certain quasi-classical setups. https://journals.aps.org/prd/abstract/10.1103/PhysRevD.33.444
#paperOfTheDay for Friday was "Tree hook length formulae, Feynman rules and B-series" from 2014. This is a #mathematics paper, more specifically #combinatorics . It deals with rooted trees, that is, connected graphs without cycles and with one distinguished vertex. For a given vertex, its subtree is the unique tree obtained by taking that vertex as a new root, together with all its descendants in the original tree. One then defines a "hook length formula" to be a mapping from rooted trees to some ring, given by evaluating some function on the subtree of each vertex and multiplying the results. The classical example is the "tree factorial", where the function on the subtree is the number of vertices, so that the entire tree evaluates to the product of the vertex counts of all subtrees (which equals the ordinary factorial if the tree is a path). This construction might seem obscure, but it is widely used, and the present paper makes an effort to unify these results. For example, Runge-Kutta schemes for numerical integration of differential equations have an algebraic form called B-series, which essentially is a hook length formula. Also, renormalization of divergent subdiagrams in #quantumFieldTheory has this structure. The present paper discovers various new closed-form expressions for hook length formulae. From the perspective of QFT, what they do is invent new toy model Feynman rules that give rise to nice closed-form Green functions. I find this quite useful for a systematic qualitative understanding of QFT, even if these particular Feynman rules don't have an immediate physical interpretation. https://arxiv.org/abs/1412.6053
Tree hook length formulae, Feynman rules and B-series

We consider weighted generating functions of trees where the weights are products of functions of the sizes of the subtrees. This work begins with the observation that three different communities, largely independently, found substantially the same result concerning these series. We unify these results with a common generalization. Next we use the insights of one community on the problems of another in two different ways. Namely, we use the differential equation perspective to find a number of new interesting hook length formulae for trees, and we use the body of examples developed by the combinatorial community to give quantum field theory toy examples with nice properties.

arXiv.org
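The tree factorial mentioned above fits in a few lines of Python (my own illustration; the paper treats general weight functions, not just the vertex count):

```python
import math

# Encode a rooted tree as the tuple of its root's subtrees; () is a leaf.
def size(t):
    """Number of vertices of a rooted tree."""
    return 1 + sum(size(c) for c in t)

def tree_factorial(t):
    """The classical hook length formula with f(subtree) = number of
    vertices: the product, over all vertices, of the size of the subtree
    rooted at that vertex."""
    return size(t) * math.prod(tree_factorial(c) for c in t)

path4 = ((((),),),)   # a path with 4 vertices: subtree sizes 4, 3, 2, 1
star4 = ((), (), ())  # a root with 3 leaves: subtree sizes 4, 1, 1, 1
print(tree_factorial(path4))  # -> 24, the ordinary factorial 4!
print(tree_factorial(star4))  # -> 4
```

The star example already shows why the tree factorial is at most the ordinary factorial: branching makes the subtrees smaller, which is also why symmetry factors of nested Feynman subdiagrams take this product form.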
Thursday's #paperOfTheDay: "Almost zero-dimensional quantum field theories" from 1992. That paper considers the behaviour of #quantumMechanics and #quantumFieldTheory close to zero spacetime dimensions. The actual limit D=0, the zero-dimensional field theory, is well understood. The authors now study a (radially symmetric) Schrödinger equation, and then a free field theory, close to D=0. They find that this limit exists (i.e. there is a continuous family of theories for real parameter D which interpolates between the physical and the 0-dimensional theory), and that the linear approximation in D already gives numerically meaningful estimates of the physical theory.
This setup is in the same spirit as our #tropicalFieldTheory , but the difference is that the older paper varies D alone, whereas the tropical limit arises when one reduces D and the power of the kinetic term (i.e. the spatial decay rate of propagators) simultaneously. Secondly, we now have a much better understanding of the analytic properties of #FeynmanIntegrals than 30 years ago, so that we can perform this limit in a mathematically clean way for all graphs of an interacting field theory. https://journals.aps.org/prd/abstract/10.1103/PhysRevD.46.5557