Truncation error is the error from approximating an exact solution (e.g., infinite series) with a finite one. Ex: `e^x ≈ 1 + x + x^2/2!` (we 'truncated' it!). Pro-Tip: Use more terms or smaller step sizes to reduce this error!
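A minimal sketch of that example in plain Python (standard library only), comparing the truncated Taylor polynomial for e^x against math.exp to show the truncation error shrinking as terms are added:

```python
import math

def exp_taylor(x, n_terms):
    """Truncated Taylor series for e^x: sum of x**k / k! for k < n_terms."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 0.5
# Keeping more terms reduces the truncation error, as the pro-tip says.
for n in (3, 5, 10):
    err = abs(math.exp(x) - exp_taylor(x, n))
    print(f"{n:2d} terms: truncation error = {err:.2e}")
```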
📄 Comparing Models of Rapidly Rotating Relativistic Stars Constructed b…
Quicklook:
Stergioulas, Nikolaos et al. (1995) · The Astrophysical Journal
Reads: 100 · Citations: 521
DOI: 10.1086/175605
🔗 https://ui.adsabs.harvard.edu/abs/1995ApJ...444..306S/abstract
#Astronomy #Astrophysics #ComputationalAstrophysics #ComputerizedSimulation #NumericalAnalysis
We present the first direct comparison of codes based on two different numerical methods for constructing rapidly rotating relativistic stars. A code based on the Komatsu-Eriguchi-Hachisu (KEH) method (Komatsu et al. 1989), written by Stergioulas, is compared to the Butterworth-Ipser code (BI), as modified by Friedman, Ipser and Parker. We compare models obtained by each method and evaluate the accuracy and efficiency of the two codes. The agreement is surprisingly good. A relatively large discrepancy recently reported (Eriguchi et al. 1994) is found to arise from the use of two different versions of the equation of state. We find, for a given equation of state, that equilibrium models with maximum values of mass, baryon mass, and angular momentum are (generically) all distinct and either all unstable to collapse or are all stable. Our implementation of the KEH method will be available as a public domain program for interested users.
📄 A Comparison of Numerical Methods for the Study of Star Cluster Dynam…
Quicklook:
Aarseth, S. J. et al. (1974) · Astronomy and Astrophysics
Reads: 5 · Citations: 254
DOI: N/A
🔗 https://ui.adsabs.harvard.edu/abs/1974A&A....37..183A/abstract
#Astronomy #Astrophysics #AstronomicalModels #ComputerizedSimulation #NumericalAnalysis
We compare the results of three different numerical methods for computing the evolution of a spherical star cluster from a given initial state, under the influence of internal relaxation: the N-body integration, the Monte Carlo method, and the fluid-dynamical approach. The general features of the evolution are very similar in all cases. The rates of evolution differ somewhat; for stars of equal masses, taking the N-body integrations as a reference, the Monte Carlo models evolve too fast by a factor 1.5, and the fluid-dynamical models by a factor 2 to 3.
Your college professor teaches you "A-stable methods are required for stiff ODEs". But PSA, the most commonly used stiff ODE solvers (adaptive order BDF methods) are not A-stable. #sciml #numericalanalysis #diffeq

New preprint https://arxiv.org/abs/2511.06957
A #perspective discussing Moreau-Yosida (MY) techniques in #densityfunctionaltheory.
MY regularisation has made it possible to import tools from #convexanalysis into #dft
providing a new mathematical understanding of the most important atomistic simulation approach
and new robust algorithms for Kohn-Sham #dft.
Thanks to my co-authors from the #hylleraas centre and #oslomet for insightful discussions.
Within density-functional theory, Moreau-Yosida regularization enables both a reformulation of the theory and a mathematically well-defined definition of the Kohn-Sham approach. It is further employed in density-potential inversion schemes and, through the choice of topology for the density and potential space, can be directly linked to classical field theories. This perspective collects various appearances of the regularization technique within density-functional theory alongside possibilities for their future development.
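As a toy numerical illustration of Moreau-Yosida regularization outside the DFT setting (my own sketch, not from the preprint): the Moreau envelope of f(x) = |x| is the Huber function, smooth where |x| had a kink.

```python
import numpy as np

def moreau_envelope(f, x, eps, grid):
    """Moreau-Yosida regularization of f at x:
    f_eps(x) = min_y [ f(y) + (x - y)**2 / (2*eps) ],
    evaluated here by brute-force minimization over a grid of y values."""
    return np.min(f(grid) + (x - grid) ** 2 / (2 * eps))

eps = 0.5
grid = np.linspace(-3, 3, 200001)

# Closed form for f = |.|: the Huber function
#   x**2 / (2*eps)   if |x| <= eps,
#   |x| - eps/2      otherwise.
for x in (0.1, 2.0):
    exact = x**2 / (2 * eps) if abs(x) <= eps else abs(x) - eps / 2
    approx = moreau_envelope(np.abs, x, eps, grid)
    print(f"x = {x}: envelope = {approx:.4f}, Huber = {exact:.4f}")
```

The regularized function is everywhere differentiable while keeping the same minimizers, which is the kind of robustness the MY technique brings to the density-functional setting.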
Automatic Differentiation Can Be Incorrect
#HackerNews #AutomaticDifferentiation #Incorrectness #NumericalAnalysis #Simulation #MachineLearning
ISCL Seminar Series: "The Numerical Analysis of Differentiable Simulation: How Automatic Differentiation of Physics Can Give Incorrect Derivatives". Scientific machine learning (SciML) relies heavily on automatic differentiation (AD), the process of constructing gradients of models that integrate machine learning into mechanistic simulations, for the purpose of gradient-based optimization. While these differentiable programming approaches pitch the idea of “simply put the simulator into a loss function and use AD”, it turns out there are many more subtle details to consider in practice. In this talk we will dive into the numerical analysis of differentiable simulation and ask the question: how numerically stable and robust is AD? We will use examples from the Python-based Jax (diffrax) and PyTorch (torchdiffeq) libraries in order to demonstrate how canonical formulations ...
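A tiny self-contained illustration of one such subtlety (my own toy example, not from the talk): differentiating through a fixed-step solver yields the derivative of the discretization, which only approaches the true parameter sensitivity as the step size shrinks.

```python
import math

def euler_solution(a, h, t_end, y0=1.0):
    """Forward Euler for y' = -a*y: y_{n+1} = (1 - a*h) * y_n."""
    n = round(t_end / h)
    return y0 * (1.0 - a * h) ** n

def euler_sensitivity(a, h, t_end, y0=1.0):
    """Exact d/da of the *discrete* solution, i.e. what AD applied
    to the Euler loop would compute."""
    n = round(t_end / h)
    return -n * h * y0 * (1.0 - a * h) ** (n - 1)

a, t_end = 2.0, 1.0
true_sens = -t_end * math.exp(-a * t_end)  # d/da of y0 * exp(-a*t)
for h in (0.1, 0.01, 0.001):
    err = abs(euler_sensitivity(a, h, t_end) - true_sens)
    print(f"h = {h}: |AD-of-discretization - true sensitivity| = {err:.2e}")
```

With adaptive solvers the picture gets murkier still, since the step sequence itself depends on the parameters, which is part of what the talk digs into.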
Implicit ODE Solvers Are Not Universally More Robust Than Explicit ODE Solvers
#HackerNews #ImplicitOdeSolvers #ExplicitOdeSolvers #NumericalAnalysis #ComputationalMath #Robustness #Algorithms
A very common adage in ODE solvers is that if you run into trouble with an explicit method, usually some explicit Runge-Kutta method like RK4, then you should try an implicit method. Implicit methods, because they do more work (solving an implicit system via a Newton method) and have “better” stability, should be the thing you go to on the “hard” problems. This is at least what I heard at first, and then I learned about edge cases. Specifically, you hear people say “but for hyperbolic PDEs you need to use explicit methods”. You might even intuit from this “PDEs can have special properties, so sometimes special things can happen with PDEs… but ODEs, those should use implicit methods if you need more robustness”. This turns out to not be true, and really understanding the ODEs will help us understand better ...
@chrisrackauckas The excellent blog post above explains in detail why implicit ODE solvers are considered more robust than explicit ODE solvers (because they do better on linear problems) and why this is NOT true for all problems (roughly speaking, nonlinear problems can behave very differently from linear ones; see the blog post for a better explanation, which does not fit here).
An extreme example is given by exponential integrators, which have perfect stability for linear problems (because they use the analytical solution of linear ODEs). Nevertheless, exponential integrators still suffer from stability problems for nonlinear problems.
#NumericalAnalysis #ODEsolver #NumericalIntegration #ExponentialIntegrator
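A minimal sketch of that "perfect linear stability" point (my own toy code, using the standard exponential-Euler update): on the linear test equation y' = λy, the exponential integrator reproduces the exact solution for any step size, while explicit Euler blows up once |1 + λh| > 1.

```python
import math

lam, h, n_steps = -50.0, 0.1, 20  # |1 + lam*h| = 4 > 1: explicit Euler unstable

y_explicit = 1.0
y_exponential = 1.0
for _ in range(n_steps):
    y_explicit *= 1.0 + lam * h          # explicit Euler update
    y_exponential *= math.exp(lam * h)   # exponential Euler: exact on linear ODEs

exact = math.exp(lam * h * n_steps)
print("explicit Euler:   ", y_explicit)      # grows without bound
print("exponential Euler:", y_exponential)   # matches the exact solution
print("exact:            ", exact)
```

The catch, as the post above notes, is that this exactness covers only the linear part: once a nonlinear term is split off and treated separately, the stability guarantee no longer carries over.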
Here are more details on the winners of the Leslie Fox Prize for Numerical Analysis, copied from the announcement in NA-Digest at https://na-digest.coecis.cornell.edu/na-digest-html/25/v25n27.html#3
All gave excellent talks and I hugely enjoyed listening to them.
1st Prizes:
James Foster (Bath) for "High order splitting methods for SDEs satisfying a commutativity condition"
Tizian Wenzel (LMU Munich) for "Analysis of target data-dependent greedy kernel algorithms"
2nd Prizes:
Sara Fraschini (Vienna) for "Stability of conforming space-time isogeometric methods for the wave equation"
Georg Maierhofer (Cambridge) for "Bridging the gap: symplecticity and low regularity in Runge-Kutta resonance-based schemes"
Wenqi Zhu (Oxford) for "Cubic-quartic regularization models for solving polynomial subproblems in third-order tensor methods"
David Persson (NYU) for "Randomized low-rank approximation of monotone matrix functions"
Yo! They used the BFGS algorithm, and the "S" was Shanno! That reminded me who it was who tried to recruit me into numerical analysis. It was this very Professor Shanno!
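For anyone curious, BFGS (Broyden-Fletcher-Goldfarb-Shanno) is a one-liner away in SciPy; a quick sketch on the Rosenbrock test function (assuming SciPy is installed):

```python
from scipy.optimize import minimize, rosen

# BFGS builds a quasi-Newton approximation to the inverse Hessian from
# successive gradient differences; Shanno is the "S" in the name.
result = minimize(rosen, x0=[-1.2, 1.0], method="BFGS")
print(result.x)  # should approach the minimizer [1, 1]
```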
https://kholub.com/projects/uniform_plinko.html