Whoa, hold onto your protractors! 🤓 Rohan's blog post just made Gaussian integration the hip new thing for the cool kids of numerical analysis. Because nothing screams "party" like Chebyshev-Gauss quadrature and evaluating definite integrals! 🎉
https://rohangautam.github.io/blog/chebyshev_gauss/ #GaussianIntegration #ChebyshevGauss #NumericalAnalysis #MathIsCool #PartyWithMath #HackerNews #ngated
Gaussian integration is cool

Brief discussion of Gaussian quadrature and Chebyshev-Gauss quadrature
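For the weight 1/sqrt(1-x^2) on [-1, 1], Chebyshev-Gauss quadrature is especially simple: all n weights equal pi/n and the nodes are cosines of equally spaced angles. A minimal sketch of the rule (mine, not code from the linked post):

```python
import math

def chebyshev_gauss(f, n):
    """Approximate the integral of f(x)/sqrt(1-x^2) over [-1, 1]
    with n-point Chebyshev-Gauss quadrature: equal weights pi/n at
    the Chebyshev nodes x_k = cos((2k-1)pi/(2n))."""
    nodes = (math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1))
    return (math.pi / n) * sum(f(x) for x in nodes)

# Example: the integral of x^2 / sqrt(1 - x^2) over [-1, 1] is pi/2,
# and the rule is exact for polynomials of degree up to 2n - 1.
approx = chebyshev_gauss(lambda x: x**2, 8)
```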

New publication https://doi.org/10.1103/PhysRevB.111.205143

New algorithm for the #inverseproblem of Kohn-Sham #densityfunctionaltheory (#dft), i.e. to find the #potential from the #density.

Outcome of a fun collaboration between @herbst and the group of Andre Laestadius at #oslomet to derive the first mathematical error bounds for this problem

#condensedmatter #planewave #numericalanalysis #convexanalysis #dftk

That first implementation didn't even support the multi-GPU and multi-node features of #GPUSPH (it could only run on a single GPU), but it paved the way for the full version, which took advantage of the whole GPUSPH infrastructure in multiple ways.

First of all, we didn't have to worry about how to encode the matrix and its sparseness, because we could compute the coefficients on the fly and operate with the same neighbor-list traversal logic used in the rest of the code; this allowed us to minimize memory use and increase code reuse.

Secondly, we gained control over the accuracy of intermediate operations, allowing us to use compensated summation wherever needed.
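"Compensated summation" means schemes like Kahan summation, which carry the rounding error of each addition forward explicitly instead of discarding it. A generic illustration in plain Python (not the GPUSPH implementation, which is GPU code):

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: keep a running correction term
    so that low-order bits lost in each addition are re-added later."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y  # what was lost (or gained) in total + y
        total = t
    return total

# Adding many small terms to one huge term: the naive running sum
# drops every 1.0, while the compensated sum recovers all of them.
vals = [1e16] + [1.0] * 1000
accurate = kahan_sum(vals)
```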

Thirdly, we could leverage the multi-GPU and multi-node capabilities already present in GPUSPH to distribute computations across all available devices.

And last but not least, we actually found ways to improve the classic #CG and #BiCGSTAB linear solvers to achieve excellent accuracy and convergence even without preconditioners, while making the algorithms themselves more parallel-friendly:

https://doi.org/10.1016/j.jcp.2022.111413
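The matrix-free idea above, applying the operator by computing coefficients on the fly rather than storing a sparse matrix, plugs directly into conjugate gradient. A schematic Python sketch (GPUSPH itself is GPU code; the function names here are invented for illustration):

```python
import numpy as np

def cg_matrix_free(apply_A, b, tol=1e-10, maxiter=1000):
    """Conjugate gradient where the matrix exists only as an operator
    apply_A(x) -> A @ x; no sparse matrix is ever stored, mirroring the
    compute-coefficients-on-the-fly approach described above."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Hypothetical example operator: a 1-D Laplacian applied stencil-wise,
# analogous to evaluating coefficients during a neighbor-list traversal.
def laplacian_1d(x):
    y = 2.0 * x
    y[1:] -= x[:-1]
    y[:-1] -= x[1:]
    return y

b = np.ones(50)
x = cg_matrix_free(laplacian_1d, b)
```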

4/n

#LinearAlgebra #NumericalAnalysis

People in the market for a postdoc position in numerical linear algebra should look at the advert for a postdoc in Edinburgh "devoted to research on Randomized Numerical Linear Algebra for Optimization and Control of Partial Differential Equations."

The mentors are John Pearson (Edinburgh) and Stefan Güttel (Manchester), both excellent people, and the topic is fascinating. I even fantasised about leaving my permanent job and doing this instead ...

More info: https://www.jobs.ac.uk/job/DNA984/postdoctoral-research-associate

#NumericalAnalysis #optimization #PartialDifferentialEquations #postdoc


Thanks to the Manchester NA group for organizing a seminar by David Watkins, one of the foremost experts on matrix eigenvalue algorithms. I often find numerical linear algebra talks too technical, but I could follow David's talk quite well even though I did not get everything, so thanks for that.

David spoke about the standard eigenvalue algorithm, which is normally called the QR algorithm. He does not like that name, because the QR decomposition is not actually important in practice, and he calls it the Francis algorithm (after John Francis, who developed it). It is better to think of the algorithm as an iterative process that reduces the matrix to triangular form in the limit.
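The "iterative process" view is easiest to see in the unshifted textbook variant, which already converges to triangular form (practical codes use the shifted Hessenberg formulation Watkins calls the Francis algorithm). A small sketch, not from the talk:

```python
import numpy as np

def qr_iteration(A, steps=100):
    """Unshifted QR iteration: factor A_k = Q_k R_k, form A_{k+1} = R_k Q_k.
    Each step is a similarity transform, so eigenvalues are preserved while
    the iterates drift toward upper triangular form in the limit."""
    for _ in range(steps):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return A

# Small symmetric matrix with well-separated eigenvalues, so the
# unshifted iteration converges quickly to (near-)diagonal form.
A = np.array([[3.0, 0.1, 0.0],
              [0.1, 2.0, 0.1],
              [0.0, 0.1, 1.0]])
T = qr_iteration(A)
# The diagonal of T approximates the eigenvalues of A.
```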

#NumericalAnalysis #eigenvalue #LinearAlgebra

Three Hundred Years Later, a Tool from Isaac Newton Gets an Update | Quanta Magazine

A simple, widely used mathematical technique can finally be applied to boundlessly complex problems.
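The 300-year-old tool in the headline is, presumably, Newton's root-finding iteration. As a reminder of the classic method the article starts from (a generic sketch, not the new result it describes):

```python
def newton(f, df, x0, tol=1e-12, maxiter=50):
    """Classic Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k):
    repeatedly replace f by its tangent line and jump to its root."""
    x = x0
    for _ in range(maxiter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: sqrt(2) as the positive root of x^2 - 2, starting from 1.0.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```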

SUperman: Efficient Permanent Computation on GPUs

#CUDA #MPI #HPC #NumericalAnalysis #Package

https://hgpu.org/?p=29806

The permanent is a function, defined for a square matrix, with applications in various domains including quantum computing, statistical physics, complexity theory, combinatorics, and graph theory. …
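For context (the paper's own GPU algorithm is not shown here, and it may well use a different scheme): the standard exact baseline for the permanent is Ryser's inclusion-exclusion formula, which costs O(2^n · n^2) instead of the naive n! expansion. A small reference implementation:

```python
from itertools import combinations
from math import prod

def permanent(A):
    """Permanent of a square matrix via Ryser's formula: like the
    determinant expansion but with all signs positive, computed by
    inclusion-exclusion over column subsets in O(2^n * n^2) time."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            # Product over rows of the row-sum restricted to this subset.
            total += (-1) ** r * prod(sum(row[j] for j in cols) for row in A)
    return (-1) ** n * total

# Example: perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10.
p = permanent([[1, 2], [3, 4]])
```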

Apparently we weren't having enough issues of context collapse with #SPH as an acronym for #SmoothedParticleHydrodynamics, since I'm now seeing #STI as an acronym for #SymplecticTimeIntegrator. And of course these articles are more often than not written with #LaTeX.

(No, Mastodon, I really do not want you to normalize the case of *that* tag.)

One of these days I'm going to create a quiz game: #kink #fetish or #numericalAnalysis?