Performing some quick statistical analyses in classic #RStats and neatly "knitting" them into a PDF using #RMarkdown, #knitr, and #MacTeX #texLaTeX.

Call me old-fashioned, but I really enjoy this workflow. 

#OpenSource #FOSS #statistics

BTW, so far I have not encountered any scenario in which #tidyR offers solutions superior to #baseR.

I can't speak for anyone else, but in my line of work, I achieve everything I want to do in base R with fewer lines of code than tidyR, dplyr, and the like would require.

@kernpanik Interesting. For me, dplyr is a lifesaver for filtering and summarising tabular data. I have never achieved the same clarity with base R, but that may be a lack of proficiency on my side. I would like to see some real-life examples.

@larry77 I think that might be one of its superpowers I personally don't need - because in my line of work, I don't have to apply complex filters to large data sets. It's usually just very simple stuff like 'groupingvariable==1', which is fine in base R.
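A minimal sketch of that kind of simple filter, on a made-up data frame (`df`, `groupingvariable`, and the values are placeholders, not from the thread):

```r
# Toy data frame for illustration
df <- data.frame(groupingvariable = c(1, 1, 2),
                 value            = c(10, 20, 30))

# base R: plain logical subsetting
df[df$groupingvariable == 1, ]

# or, slightly more readable
subset(df, groupingvariable == 1)

# dplyr equivalent, for comparison
# dplyr::filter(df, groupingvariable == 1)
```

For a filter this simple, the base forms are one-liners, which is the point being made above.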

I totally understand that there are many useful applications for tidyR, just not for me. What sometimes surprises me is people writing long chunks of tidyR code where a single line of base R will do the job.

@kernpanik Usually, I also try to stick to base #rstats or lightweight packages (#tinyplot, #tinytable, #rdatatable, ...). Methinks, since most tutorials promote the tidyverse, some do not know the base equivalents. However, base data frame operations may require more careful handling of row order, factor levels, and preservation of the data frame structure. dplyr maintains consistent behavior across grouped operations.

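A small sketch of the row-order and structure caveats mentioned above (toy data, column names made up):

```r
df <- data.frame(g = c("b", "a", "a"),
                 x = c(1, 2, 3))

# base R: aggregate() returns groups sorted by g (original row
# order is not preserved) and keeps only the columns named in
# the formula; rows with NA in x are silently dropped by default
aggregate(x ~ g, data = df, FUN = mean)

# single-column subsetting drops to a vector unless you ask
# for a data frame explicitly
df[df$g == "a", "x"]               # numeric vector
df[df$g == "a", "x", drop = FALSE] # still a data frame

# dplyr keeps the same tibble structure throughout:
# df |> dplyr::group_by(g) |> dplyr::summarise(x = mean(x))
```

These are exactly the kinds of defaults (`drop`, sorting, NA handling) that base R users learn to handle explicitly, and that dplyr standardises away.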
@kernpanik Agree. Of course, I'm so old that wrangling data into shape with Perl was best practice when I started, and that's what I still do.