#paperOfTheDay for Wednesday is "Form factors in quantum gravity: Contrasting non-local, ghost-free gravity and Asymptotic Safety" from 2022.
Unlike all other elementary forces, #gravity does not straightforwardly make sense as a perturbative #quantumFieldTheory . This has given rise to a number of alternative approaches over the decades, two of which are being compared in today's paper.
The first one is "asymptotic safety", which, roughly, asserts that the conventional Einstein-Hilbert action is indeed the correct low-energy description, but that at higher energies, it does not simply blow up as naive power counting would suggest. Instead, the strong gravitational interaction at high energies (or equivalently, at short distances) produces a state that is essentially scale invariant: an interacting fixed point. To study this behaviour, one usually resorts to numerical integration of the flow equations of the functional renormalization group.
The second approach is non-local, ghost-free gravity, where one assumes that, in perturbation theory, the propagator secretly carries an exponentially decaying factor that only becomes relevant at high energies. This renders the theory renormalizable because it eliminates UV divergences.
The two approaches can also be interpreted in terms of two different, momentum-dependent, wave-function #renormalization factors. They correspond to rather different high-energy behaviour, which, however, is far beyond the current range of experimental data.
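For intuition, here is a minimal numerical sketch (my own illustration, not a calculation from the paper): the ghost-free propagator is the standard one dressed with an exponential form factor, here taken as exp(-p^2/M^2). The two agree at low momenta and differ drastically in the UV.

```python
import math

# Euclidean scalar propagators, in units where the non-locality scale M = 1.
# standard:    G(p) = 1 / p^2
# ghost-free:  G(p) = exp(-p^2 / M^2) / p^2   (one common choice of form factor)

def g_standard(p2):
    return 1.0 / p2

def g_ghost_free(p2, m2=1.0):
    return math.exp(-p2 / m2) / p2

# At low momenta the form factor is ~1, so the two propagators agree;
# far above the scale M, the ghost-free one is exponentially suppressed.
for p2 in (0.001, 1.0, 10.0):
    print(p2, g_standard(p2), g_ghost_free(p2))
```

The exponential suppression at large p^2 is what tames the UV divergences, while leaving the low-energy (infrared) physics untouched.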
https://www.sif.it/riviste/sif/ncc/econtents/2022/045/02/article/3
Vector Meson Dominance

I’m only now learning about ‘vector meson dominance’—a big idea put forth by Sakurai and others around 1960. Here’s a family of 9 mesons called the ‘vector nonet…

Azimuth
#paperOfTheDay for Tuesday: "Effective chiral Lagrangians for nucleon-pion interactions and nuclear forces" from 1991. This is one of the foundational papers of chiral effective field theory.
In principle, the interactions of nucleons (i.e. protons and neutrons), like any other interaction on small scales, are governed by the standard model of #particle physics, in particular #quantum chromodynamics. However, it is highly impractical to do calculations this way, because below a certain energy (around 1 GeV) the QCD force is so strong that it creates bound states, which one cannot easily handle in perturbative #quantumFieldTheory . The way out is to use an effective field theory: the resulting objects, however they may arise, of course follow the usual laws of quantum mechanics, and they have certain symmetries governing their possible interactions. One takes these objects -- in the present case, nucleons and pions -- as "elementary particles", writes down an ansatz for a Lagrangian, and works with this as usual.
In order to do perturbation theory, one needs a way to determine which terms are important and which are small corrections, and how the various terms scale under e.g. a change in energy. This "power counting" is trickier in chiral effective theory than usual, because one has multiple mass scales and their ratios, but one possible way to do it is described in the paper.
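As a toy illustration of the kind of counting involved (a naive dimensional-analysis version of my own, not the paper's full scheme): each loop integral d^4p contributes four powers of momentum, each pion propagator minus two, each static (heavy) nucleon propagator minus one, and each vertex as many powers as it has derivatives or pion-mass insertions.

```python
def momentum_power(loops, pion_props, nucleon_props, vertex_dims):
    """Naive superficial momentum scaling p^nu of a diagram:
    +4 per loop integral d^4p, -2 per pion propagator,
    -1 per static nucleon propagator, +d per vertex with
    d derivatives (or pion-mass insertions)."""
    return 4 * loops - 2 * pion_props - nucleon_props + sum(vertex_dims)

# One-pion exchange between two nucleons: no loops, one pion propagator,
# two vertices carrying one derivative each -> scales as p^0.
print(momentum_power(0, 1, 0, [1, 1]))  # 0
```

Higher loops and more derivatives raise the power, so such diagrams are suppressed at low momenta; the subtlety the paper addresses is that the nucleon mass is itself a large scale that can spoil this naive counting.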
https://www.sciencedirect.com/science/article/abs/pii/055032139190231L

One morning I was watching a TikTok and it talked about how a particle traveling at the speed of light does not experience time or distance.

That made me wonder "If there is no time and distance, maybe it is not what is moving?"

#physics #science #quantumfieldtheory #staticmotifs #noor

https://github.com/NoorMathematica/phys-core-002_static_motifs_dynamic_spacetime

Thursday's #paperOfTheDay is "Tropical Mathematics" from 2009.
I'm currently developing a version of #QuantumFieldTheory called #tropicalFieldTheory . The present article is background on what "tropical" means in #mathematics : the term first appeared in the context of #computerScience in the 1970s, and was coined in honor of early work done in São Paulo, Brazil. The basic idea is to consider a special type of (mathematical) ring: a typical example of a ring would be the real numbers, together with addition and multiplication. Now, the "tropical semiring" is the real numbers plus infinity, but "addition" is replaced by "taking the minimum", while "multiplication" is replaced by "addition". This strange object behaves well in many ways. For example, in the usual ring of real numbers one would have
7 + 2*3 = 7+6 = 13
in the tropical semiring, the same equation becomes
min{7, 2+3} = min{7, 5} = 5.
The tropical semiring is only a SEMIring because taking the minimum does not always have an inverse: there is no x such that min{x, 5} = 8.
In the following decades, tropical arithmetic has been developed into a full mathematical theory. In particular, one has tropical polynomials, where the conventional addition of monomials is replaced by taking minima. This is exactly what we do in tropical field theory: the #FeynmanIntegral s are integrals over rational functions, and we replace their numerators and denominators by tropical polynomials.
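The tropical operations fit in a few lines of code; a minimal sketch (my illustration, including an invented example polynomial):

```python
import math

INF = math.inf  # the additive identity of the tropical semiring

def t_add(a, b):
    """Tropical "addition" = taking the minimum."""
    return min(a, b)

def t_mul(a, b):
    """Tropical "multiplication" = ordinary addition."""
    return a + b

# Ordinary ring:  7 + 2*3 = 13.  Tropical:  min(7, 2+3) = 5.
print(t_add(7, t_mul(2, 3)))  # 5

# A tropical polynomial: the classical "x^2 + 3x + 1" becomes
# min(x + x, 3 + x, 1), a piecewise-linear function of x.
def trop_poly(x):
    return min(t_mul(x, x), t_mul(3, x), 1)

print(trop_poly(0))  # min(0, 3, 1) = 0
print(trop_poly(5))  # min(10, 8, 1) = 1
```

Note that a tropical polynomial is piecewise linear: each monomial becomes a linear function of the exponents, and the minimum selects whichever is smallest.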
Today's article was written before tropical field theory, but it discusses a nice application from #biology : One can compute phylogenetic trees with the help of tropical algebraic geometry.
https://arxiv.org/abs/math/0408099
Tropical Mathematics

These are the notes for the Clay Mathematics Institute Senior Scholar Lecture which was delivered by Bernd Sturmfels in Park City, Utah, on July 22, 2004. The topic of this lecture is the "tropical approach" in mathematics, which has gotten a lot of attention recently in combinatorics, algebraic geometry and related fields. It offers an elementary introduction to this subject, touching upon Arithmetic, Polynomials, Curves, Phylogenetics and Linear Spaces. Each section ends with a suggestion for further research. The bibliography contains numerous references for further reading in this field.

arXiv.org
#paperOfTheDay for Wednesday is "Dimensional renormalization: The number of dimensions as a regularizing parameter" from 1972. As the title suggests, this is one of the articles that first introduced dimensional regularization.
In perturbative #QuantumFieldTheory (or statistical physics), one encounters #FeynmanIntegral s which are divergent. These divergences are eventually removed through #renormalization , but in order to even get to that point, one first needs to assign some value to these integrals. This is called regularization. Various methods of regularization are known, but the typical problem is that they destroy symmetries of the theory. Dimensional regularization was a breakthrough for practical computation of Feynman integrals because it respects many symmetries.
The basic idea is to define an integral for non-integer dimension of spacetime. This is done, essentially, by analytic continuation: We know what it means to take a first, second, third etc. derivative of a function, and to integrate it once, twice, thrice etc. If the function is spherically symmetric (i.e. depends only on the radius in spherical coordinates), then the "count" of the integrals or derivatives appears as an explicit number in intermediate steps. For example, the volume element in 3-dimensional spherical coordinates is r^2 dr (angular part), where the exponent "2" reflects the dimension D = 2+1 = 3. Basically, you could insert any number in place of the "2" and declare the result to be the D-dimensional integral. Of course, in reality this is more sophisticated, but the basic idea is very much in this spirit.
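The angular part of such a spherically symmetric integral can be continued the same way: the surface area of the unit sphere, Omega_D = 2 pi^(D/2) / Gamma(D/2), multiplies the radial integral over r^(D-1), and the Gamma function makes it perfectly well defined for non-integer D. A quick sketch:

```python
from math import pi, gamma

def sphere_area(D):
    """Surface area of the unit sphere in D dimensions:
    Omega_D = 2 * pi^(D/2) / Gamma(D/2).
    The Gamma function continues this to non-integer D."""
    return 2 * pi ** (D / 2) / gamma(D / 2)

print(sphere_area(2))        # 2*pi: the circumference of the unit circle
print(sphere_area(3))        # 4*pi: the area of the unit 2-sphere
print(sphere_area(4 - 0.1))  # perfectly sensible at D = 3.9
```

In dimensional regularization one evaluates Feynman integrals with exactly such D-dependent factors and expands around D = 4, so UV divergences show up as poles in epsilon = 4 - D.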
https://link.springer.com/article/10.1007/BF02895558
Dimensional renormalization: The number of dimensions as a regularizing parameter - Il Nuovo Cimento B (1971-1996)

We perform an analytic extension of quantum electrodynamics matrix elements as (analytic) functions of the number of dimensions of space (ν). The usual divergences appear as poles for ν integer. The renormalization of those matrix elements (for ν arbitrary) leads to expressions which are free of ultraviolet divergences for ν equal to 4. This shows that ν can be used as an analytic regularizing parameter with advantages over the usual analytic regularization method. In particular, gauge invariance is maintained for any ν.

SpringerLink
#paperOfTheDay : "Phase transitions for phi^4_2 quantum fields" from 1975. This article is about the field theory with quartic interaction (hence, symmetric under a change of sign of the field variable) in two dimensions. This theory has the same universality class as the Ising model, which is known to have a phase transition. The purpose of the present article is to prove the existence of this phase transition from the perspective of #quantumFieldTheory .
The proof, essentially, proceeds by mapping the field theory to a lattice model: introduce small cubes and compute the average field in each of them. Then distinguish whether this average is positive or negative, and examine the length of the boundary between regions of positive and negative mean spin. This construction yields a rather coarse bound: the true field fluctuates more than the cube averages, but for questions of long-range correlation, the cubes are sufficient. An estimate of the possible length of boundaries shows that the probability of two adjacent cubes having opposite signs is rather small, which in turn implies that there is non-vanishing long-range correlation, and hence an ordered phase. https://link.springer.com/article/10.1007/BF01608328
This #paperOfTheDay is #computerScience : "A polynomial time, numerically stable integer relation algorithm" from 1992. This is the article that introduces the PSLQ algorithm.
Given an n-component vector of real numbers x=(x_1, ..., x_n), an integer relation is a vector of integers (m_1, ..., m_n) such that m_1*x_1 + ... + m_n*x_n = 0. Phrased differently, this means that one of the entries of x is a linear combination of the other entries, with rational coefficients. The task is either to find this integer vector m, or to establish that no such vector exists with entries below a certain size (since the numbers x are given to finite floating-point precision, it is always possible to find some vector m with gigantic entries: say the x_i are given to 100 digits, then 10^100 * x_i are integers within that precision, and such huge integer coefficients can always be combined into an exact zero).
Before PSLQ, there had been algorithms for the same task, most notably LLL and HJLS. The great innovation of PSLQ is the "numerically stable" part: it does not require many more digits for internal computations than the input data has, and it is reliable in determining that no relation exists below a given threshold. This also makes it effectively faster than previous algorithms, because it can run at lower floating-point accuracy.
PSLQ has important applications in perturbative #QuantumFieldTheory and broader #physics : often one has complicated integrals where one knows that the result is some rational linear combination of a finite number of transcendentals (e.g. pi, e, values of the zeta function, sqrt(2), log(2), etc.). Then one can solve the integral numerically and use PSLQ to find the linear combination. https://www.davidhbailey.com/dhbpapers/pslq.pdf
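The mpmath library ships an implementation of PSLQ, so this workflow can be tried directly. A small sketch (the "numerical result" 3*pi + 2*log(2) is an invented test value, standing in for the output of some integration):

```python
from mpmath import mp, pi, log, pslq

mp.dps = 50  # work with 50 decimal digits of precision

# Pretend this came out of a numerical integration, and we suspect
# it is a rational linear combination of pi and log(2):
value = 3 * pi + 2 * log(2)

# PSLQ searches for integers (m1, m2, m3) with
#   m1*pi + m2*log(2) + m3*value = 0.
rel = pslq([pi, log(2), value])
print(rel)  # a short integer vector, here proportional to (3, 2, -1)
```

In practice one computes the integral to high precision, lists the candidate transcendentals, and reads the rational coefficients off the relation PSLQ returns; if PSLQ reports no relation below its coefficient bound, the candidate basis was wrong.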
#paperOfTheDay for Friday is "Unitarity violation at the Wilson-Fisher fixed point in 4-epsilon dimensions" from 2016.
Statistical #physics and #quantumFieldTheory usually involve two parameters that physically only allow integer values: the dimension of space(time), and the dimensions of internal symmetry groups (such as the SU(3) of standard-model QCD, or the O(N) symmetry of scalar fields). On the other hand, it is routine to formally assign non-integer values to them: dimensional regularization sets D=4-epsilon, where epsilon is not assumed to be integer, and the #FeynmanDiagram s of O(N)-symmetric theories are polynomials in N, hence allow any value.
The present article points out that even a free, 1-component scalar field theory contains states with negative norm if one lets D be non-integer. The argument is surprisingly simple: consider operators built from spacetime derivatives d_mu acting on fields, in particular those operators which are antisymmetric in their indices. In D integer dimensions there are only D coordinate directions, so an operator with n>D derivatives cannot be fully antisymmetric: the antisymmetric operators vanish when n>D for integer D. This does not hold for non-integer D, so such an operator actually has zeros at all the integer D. One can then see, by explicit calculation, that the 2-point functions of such operators (in the free theory!) flip sign at integer D, hence they are sometimes negative, hence the theory is not unitary.
This shows that the extension to non-integer D is very subtle; similar trouble exists for the Dirac matrices gamma_mu.
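The vanishing for integer D is easy to check directly: fully antisymmetrizing a tensor with more index slots than the number of coordinate directions gives identically zero, because some index value must repeat. A small numpy sketch (my illustration, not from the paper):

```python
import itertools
import math
import numpy as np

def perm_sign(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def antisymmetrize(T):
    """Full antisymmetrization of a tensor over all of its indices."""
    n = T.ndim
    out = np.zeros_like(T, dtype=float)
    for perm in itertools.permutations(range(n)):
        out += perm_sign(perm) * np.transpose(T, perm)
    return out / math.factorial(n)

rng = np.random.default_rng(0)

# 3 antisymmetrized indices in D = 3 dimensions: generically non-zero.
print(np.allclose(antisymmetrize(rng.normal(size=(3, 3, 3))), 0))  # False

# 3 antisymmetrized indices in D = 2 (n > D): identically zero.
print(np.allclose(antisymmetrize(rng.normal(size=(2, 2, 2))), 0))  # True
```

In non-integer D there is no such index representation at all; the antisymmetric operators are instead defined by analytic continuation of their correlators, which is exactly where the negative norms sneak in.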
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.93.125025
Yesterday's #paperOfTheDay was "Critical Equation of State from the Average Action" from 1996. This paper is one of the first applications of the Wetterich equation: A numerical solution of the local potential approximation for the vector model. The vector model is physically interesting in 3 dimensions.
In conventional perturbative #QuantumFieldTheory, one would probably start in 4-2epsilon dimensions and compute a power series in epsilon with #FeynmanDiagram s, to then arrive at a 3-dimensional theory at epsilon=1/2. The Feynman diagrams have 4-valent vertices, each of which is thought of as a microscopic, point-like "collision" between 4 "particles".
In the functional/statistical #physics perspective on field theory, the same situation is interpreted quite differently. One stays in 3 dimensions throughout and considers a system (e.g. a lattice) where the constituents do not move. A phi^4 term is then a potential, i.e. every individual particle oscillates in its own local quartic potential. Additionally, the particles are coupled to their neighbours. This is the "microscopic" theory, represented by the classical action. At longer distances, one effectively merges many of the lattice sites, and the averaged quantities obtained this way have all sorts of complicated interactions. The present paper uses the "local potential approximation", which says that the only coupling between neighbours is still an elastic nearest-neighbour interaction (i.e. the standard kinetic term p^2 in momentum space), and all that changes is that the particles now sit in a more complicated potential. In particular, this potential might have non-trivial minima, which reflects a broken symmetry. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.77.873