| twitter | https://www.twitter.com/renatagerecke |
| github | https://www.github.com/rgerecke |
| letterboxd | https://www.letterboxd.com/queermath |
if we had a conversation this year, it probably came up that I'm in my ✨movies era✨. i've watched a lot of movies! so obviously i have to write a year in review, which i will not publish until January so that it covers the true full spectrum of what i watched in 2022.
TODAY IN QUEER TV HISTORY
HOMOSEXUALS - 12/18/1979, ABC
A 1-hour prime-time ABC News documentary built around first-person narratives by a diverse range of gay women and men in different parts of the U.S.
Here, Gwendolyn Rogers tells how she felt the first time she entered a lesbian bar in 1969.
#LGBTQ #lesbian #gay #LesbianHistory #LGBTQHistory #queer #QueerHistory #MediaStudies
Hey friends, #AdventOfCode starts TONIGHT! I've organized a friendly leaderboard every year for the #rstats (and friends) community, and you can join 2022's with this code:
1032765-5d428d59
@emilhvitfeldt did a great talk on why AoC is so awesome: https://www.youtube.com/watch?v=HnHAIdqULd0
and you can see my take on the leaderboard here: https://rstats-aoc.netlify.app/
was helping someone establish data management & a dashboard for department KPIs and they ghosted me for a month.......
............they have returned with a list of 107 indicators
My favourite trick for working with huge data sets in R. If your dataset is larger than memory and the query result is also larger than memory, you can still use dplyr/arrow pipelines. Example:
library(arrow)
library(dplyr)

# open the files lazily -- nothing is read into memory yet
nyc_taxi <- open_dataset("nyc-taxi/")

# the filter is pushed down to the scan, and group_by() sets the
# partitioning columns write_dataset() uses to split the output
nyc_taxi |>
  filter(payment_type == "Credit card") |>
  group_by(year, month) |>
  write_dataset("nyc-taxi-credit")
Input is 1.7 billion rows (70GB), output is 500 million rows (15GB). Takes 3-4 mins on my laptop.
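As a follow-up sketch: the partitioned output can be reopened the same lazy way and summarised without ever loading the full 15GB, calling collect() only on the small result. This assumes the "nyc-taxi-credit" directory written above exists on disk; the column names are illustrative.

```r
library(arrow)
library(dplyr)

# reopen the partitioned output; year/month come back as partition columns
credit <- open_dataset("nyc-taxi-credit")

credit |>
  group_by(year) |>
  summarise(rides = n()) |>
  collect()  # only the tiny per-year summary is pulled into memory
```

collect() is the only step that materialises data in R, which is why the same pipeline shape works no matter how large the dataset on disk is.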