Which backend do you end up using (local, like multicore or multiprocess, or distributed, like Slurm)?
#RStats Rainy Day Dev Diary
Progress on {zap} - bespoke serialization infrastructure for R
Writing a 255-level factor with 10% NAs to file (N = 10000 random values):
* faster than writing uncompressed `saveRDS()`
* more compressed than `saveRDS(compress = "xz")` (just by a smidge!)
* 10x faster to write than `saveRDS(compress = "xz")`
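For reference, a minimal sketch of the test data described above and the two `saveRDS()` baselines it is compared against. This is my own reconstruction, not the package's benchmark harness, and zap's write function itself is not shown:

```r
# Test data as described: a 255-level factor with ~10% NAs, N = 10000
set.seed(1)
n    <- 10000
lvls <- sprintf("lvl%03d", 1:255)
x    <- factor(sample(lvls, n, replace = TRUE), levels = lvls)
x[sample(n, n * 0.10)] <- NA   # exactly 10% NAs (distinct indices)

# Baseline writers to benchmark against
f1 <- tempfile(); saveRDS(x, f1, compress = FALSE)  # fast, uncompressed
f2 <- tempfile(); saveRDS(x, f2, compress = "xz")   # smallest, but slow
```

Timing the two `saveRDS()` calls (e.g. with `bench::mark()`) gives the reference points for the "faster than uncompressed" and "10x faster than xz" claims.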
The US media is too cowardly to cover the [Hands off!] protests for fear of Trump.
I assume you mean on broadcast TV. I get my news on the web, not on TV. Here is what I found:
Fox News isn't covering it. USA Today says there are thousands, not hundreds of thousands. The Washington Post, NBC, and the New York Times are completely mute. However, CBS and ABC have stories. PBS is full-throated, on the spot with live feeds. CNN has below-the-fold coverage (10+ stories down). Axios, AP, and Reuters are covering it prominently. Local Los Angeles affiliate television websites are also following it.
#USPol #Politics #Protests #HandsOff #News #Author #Writer #WritersOfMastodon #Journalism #WritingCommunity
mirai users get a free upgrade by updating to the latest nanonext 1.5.1 on CRAN.
You can now return a (resolved) mirai from a mirai.
```
> m <- mirai::mirai(
+   mirai::call_mirai(mirai::mirai("mirai"))
+ )
> m[]
< mirai [$data] >
> m$data
< mirai [$data] >
> m$data$data
[1] "mirai"
```
#rstats
{tinyplot} 0.3.0 is out!
It's a lightweight #rstats package to draw beautiful and complex plots, using an ultra-simple and concise syntax. #dataviz
This is a massive release!
@gmcd & @vincentab worked hard to add tons of new themes and plot types. I also helped a bit with the spine/mosaic and ridge plots.
Check it out!
grantmcdermott.com/tinyplot/
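A hedged taste of the syntax; the formula interface and the `"ridge"` type name are my reading of the package docs, so treat this as a sketch rather than a definitive example:

```r
library(tinyplot)

# Grouped scatter via the formula interface
tinyplot(Sepal.Length ~ Petal.Length | Species, data = iris)

# One of the new 0.3.0 plot types mentioned above (assumed type name)
tinyplot(Species ~ Sepal.Width, data = iris, type = "ridge")
```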
Introducing {oomph}, an #RStats pkg: a technical demonstration of 500x faster named subsetting of vectors and lists
https://github.com/coolbutuseless/oomph
Given a static named list/vector, `oomph` subsets 100 elements from an n = 200k list ~500x faster than R's standard method, with ~1000x less memory allocation.
Notes:
* Uses an order preserving minimal perfect hash
* Suited to static objects only - the hash would need to be recalculated after every addition/removal
* A dynamic minimal perfect hash would be welcomed
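The idea behind the speedup can be illustrated in base R (this is not oomph's API, and an environment's hash table is not a minimal perfect hash, but the lookup pattern is the same): pay a one-off cost to hash the static names, then fetch by key instead of matching names linearly.

```r
# A large static named list, as in the oomph benchmark
nms <- sprintf("key%06d", 1:200000)
x   <- setNames(as.list(seq_along(nms)), nms)

# One-off hash build over the static names
e <- list2env(x, hash = TRUE)

# Subset 100 elements by name
wanted <- sample(nms, 100)
r1 <- x[wanted]                # standard named subset (scans/matches names)
r2 <- mget(wanted, envir = e)  # hashed lookup, O(1) per key
```

Both calls return the same named list; benchmarking them (e.g. with `bench::mark()`) shows why hashed lookup wins as n grows.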
{insitu} #RStats Dev Diary
This is a bit of a niche package for avoiding memory allocations by performing ops on numeric vectors by reference. Lower memory pressure means fewer allocations, less garbage collection, and a speed increase for certain classes of computation.
In the classic convolution example, the by-reference calculation is faster than vectorised R, and faster than an fft()-based solution.
Note: function prefix is now "br_" (which stands for "by-reference")
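For context, here are the two baselines the post compares against, in base R; the `br_*` in-place version itself is not sketched here. `convolve(x, rev(k), type = "open")` is base R's fft route to the standard convolution:

```r
set.seed(1)
x <- rnorm(1e4)            # signal
k <- c(0.25, 0.5, 0.25)    # small smoothing kernel

# fft-based solution (base R)
res_fft <- convolve(x, rev(k), type = "open")

# vectorised R: accumulate shifted copies of x, one per kernel tap
res_direct <- numeric(length(x) + length(k) - 1)
for (i in seq_along(k)) {
  idx <- seq_along(x) + i - 1
  res_direct[idx] <- res_direct[idx] + x * k[i]
}
```

The vectorised loop allocates a fresh result vector and intermediate copies on every run, which is exactly the pressure an in-place `br_*` variant avoids.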