#today I did some recreational code #parallelisation (hey, that's what I did my MSc in, kinda, before it was fashionable), while recovering from the #nightclubbing!
https://www.earth.org.uk/note-on-site-technicals-104.html#2026-01-18
I am using dask to run some parallel calculations on a massive matrix. After the calculation, I just want the data written as int16 into a binary file (f.bin). The array is too large to compute in full and write out with numpy's tofile etc.
Any help/ideas/tips?
Why is this so hard O__O
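One pattern that may fit here (a sketch, not the asker's actual code): cast the dask array to int16 and let da.store() stream it chunk by chunk into a numpy memmap backed by f.bin, so only roughly one chunk per worker is ever in memory. The shape, chunking, and scaling below are made-up placeholders.

```python
import numpy as np
import dask.array as da

# Stand-in for the real computation: a large, lazy dask array.
# (Hypothetical shape/chunks; substitute your own result here.)
result = da.random.random((20000, 20000), chunks=(5000, 5000))

# Cast to int16 lazily; nothing is computed yet.
result16 = (result * 1000).astype(np.int16)

# np.memmap exposes the output file as a writable on-disk array
# without ever holding the whole thing in RAM.
out = np.memmap("f.bin", dtype=np.int16, mode="w+", shape=result16.shape)

# da.store() evaluates the graph chunk by chunk and writes each
# computed chunk into its slice of the memmap.
da.store(result16, out)
out.flush()
```

Note that this writes in C (row-major) order to match the memmap's default layout; if downstream readers expect something else, that has to be handled before storing.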
The detectCores() function of the parallel package is probably one of the most used functions for setting the number of parallel workers in R. In this blog post, I'll try to explain why using it is not always a good idea. Right away, I am going to make a bold request: please avoid using parallel::detectCores() in your package! By reading this blog post, I hope you become more aware of the different problems that arise from using detectCores() and how they might affect you and the users of your code.
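A minimal sketch of the kind of alternative often suggested: default to a worker count the user can override, derived from parallelly::availableCores(), which respects container, job-scheduler, and user-set limits where a raw detectCores() does not. (The function name below is hypothetical.)

```r
# Hypothetical package function: the caller can always override the
# worker count; the default comes from parallelly::availableCores(),
# which honours cgroups/scheduler/user limits, unlike detectCores().
square_all <- function(x, workers = parallelly::availableCores()) {
  cl <- parallel::makeCluster(workers)
  on.exit(parallel::stopCluster(cl), add = TRUE)
  parallel::parLapply(cl, x, function(xi) xi^2)
}

square_all(1:10)
```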