
Here's a question: let \(M\) be a \(0\times 0\) matrix with entries in the field \(\mathbb{F}\). What is \(\det(M)\)?
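(A quick numerical check, assuming NumPy follows the standard empty-product convention here:)

```python
import numpy as np

# The Leibniz formula sums over permutations of the empty set: there is
# exactly one (the empty permutation), and its product is empty, i.e. 1.
M = np.empty((0, 0))
print(np.linalg.det(M))  # 1.0
```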
That first implementation didn't even support the multi-GPU and multi-node features of #GPUSPH (it could only run on a single GPU), but it paved the way for the full version, which took advantage of the whole GPUSPH infrastructure in multiple ways.
First of all, we didn't have to worry about how to encode the matrix and its sparseness, because we could compute the coefficients on the fly, operating with the same neighbor-list traversal logic used in the rest of the code; this allowed us to minimize memory use and increase code reuse.
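As a rough sketch of the matrix-free idea (in Python/SciPy for readability; the real GPUSPH code is CUDA, and `neighbors`/`coeff` below are hypothetical stand-ins for the SPH neighbor list and the per-pair coefficients):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

def make_operator(n, neighbors, coeff):
    """Apply the system matrix without ever storing it: each
    coefficient is recomputed on the fly while traversing the
    neighbor list, like any other SPH interaction loop."""
    def matvec(x):
        y = np.empty(n)
        for i in range(n):
            acc = coeff(i, i) * x[i]    # diagonal term
            for j in neighbors[i]:      # off-diagonal terms
                acc += coeff(i, j) * x[j]
            y[i] = acc
        return y
    return LinearOperator((n, n), matvec=matvec)
```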
Secondly, we gained control over the accuracy of intermediate operations, allowing us to use compensated sums wherever needed.
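For those who haven't met compensated (Kahan) summation, a minimal sketch of the idea:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: a running compensation term
    recovers the low-order bits lost to rounding at each step."""
    total = 0.0
    comp = 0.0
    for v in values:
        y = v - comp            # apply the previous correction
        t = total + y           # low-order bits of y may be lost here
        comp = (t - total) - y  # ...and are recovered here
        total = t
    return total

vals = [1.0] + [1e-16] * 10
print(sum(vals))        # 1.0 -- the small terms are swallowed
print(kahan_sum(vals))  # ~1.000000000000001
```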
Thirdly, we could leverage the multi-GPU and multi-node capabilities already present in GPUSPH to distribute computations across all available devices.
And last but not least, we actually found ways to improve the classic #CG and #BiCGSTAB linear solvers, achieving excellent accuracy and convergence even without preconditioners while making the algorithms themselves more parallel-friendly:
https://doi.org/10.1016/j.jcp.2022.111413
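For reference, this is what a textbook unpreconditioned #CG loop looks like on top of a matrix-free operator like the sketch above (the classic algorithm, not the modified variant from the paper; see the DOI for that):

```python
import numpy as np

def cg(apply_A, b, tol=1e-10, max_iter=1000):
    """Textbook unpreconditioned conjugate gradient for SPD systems.
    `apply_A` can be any matrix-free matvec, e.g. the operator above."""
    x = np.zeros_like(b, dtype=float)
    r = b - apply_A(x)
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x
```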
4/n
#datascience cheatsheets for #python #probability #linearalgebra #calculus and #scipy
(Not necessarily in that order)
I made a thing!
MLMathr is a free, visual way to learn the math behind machine learning.
From vectors to matrix transformations, it's interactive, beginner-friendly, and fun.
Try it out: https://mlmathr.com