Facebook (aka thefacebook) launched at US colleges in 2004 & 2005.
Can this staggered rollout — plus survey data on students' mental health — provide clear evidence on the effects of social media on mental health?
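The staggered rollout here is the classic two-way fixed-effects difference-in-differences setup: colleges that got access later act as controls for earlier adopters. A minimal sketch with simulated (entirely hypothetical) data, assuming a made-up effect of -0.3 on a mental-health index:

```python
# Hypothetical simulation of a staggered rollout: colleges adopt at
# different semesters; estimate the effect with two-way fixed effects.
import numpy as np

rng = np.random.default_rng(0)
n_colleges, n_periods = 40, 8
adopt = rng.integers(2, 7, size=n_colleges)  # semester each college gets access
true_effect = -0.3                           # assumed effect (illustrative only)

college_fe = rng.normal(0, 1, n_colleges)
period_fe = np.linspace(0, 0.5, n_periods)   # common time trend

rows = []
for i in range(n_colleges):
    for t in range(n_periods):
        treated = float(t >= adopt[i])
        y = college_fe[i] + period_fe[t] + true_effect * treated + rng.normal(0, 0.5)
        rows.append((i, t, treated, y))
i_idx, t_idx, D, y = map(np.array, zip(*rows))

# Two-way fixed effects via college and period dummies
X = np.column_stack([
    D,
    np.eye(n_colleges)[i_idx.astype(int)],
    np.eye(n_periods)[t_idx.astype(int)][:, 1:],  # drop one period dummy
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"TWFE estimate of the adoption effect: {beta[0]:.2f}")
```

With homogeneous effects this recovers the truth; with heterogeneous timing-varying effects, TWFE is known to be biased, which is part of why the design question in this thread is not trivial.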
Seen this plot?
Watch out for confounding from survey format changes
http://justthesocialfacts.blogspot.com/2023/03/whats-wrong.html
Neat illustration of the bias–variance tradeoff in analysis of a regression discontinuity...
But then the variance turns into bias via the file drawer
https://vincentbagilet.github.io/causal_exaggeration/summary.html
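The variance-into-bias mechanism can be sketched in a few lines: if only statistically significant estimates get published, then in a low-powered setting the published estimates systematically exaggerate the true effect (Gelman's Type M error). Numbers below are illustrative, not from any actual study:

```python
# Hypothetical simulation: replications of one noisy study of a small
# true effect, filtered on significance as a stand-in for the file drawer.
import numpy as np

rng = np.random.default_rng(1)
true_effect, se = 0.1, 0.1  # low power: effect is only 1 SE from zero
estimates = rng.normal(true_effect, se, size=100_000)

# Keep only estimates significant at the 5% level ("published" ones)
significant = estimates[np.abs(estimates) > 1.96 * se]

power = significant.size / estimates.size
exaggeration = significant.mean() / true_effect
print(f"power: {power:.2f}")
print(f"mean published estimate: {significant.mean():.2f} (truth: {true_effect})")
print(f"exaggeration factor: {exaggeration:.1f}x")
```

The significance filter turns high-variance estimates into upward-biased ones: here the average published estimate is more than double the truth.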
There are analyses like, e.g., http://controversiasbarcelona.com/2019/EvansLacko2017.pdf but with this kind of thing, I often find that meta-analyses obscure huge methodological variation.
What are the most credible studies here?
When presenting visualizations of experimental results, scientists often choose to display either inferential uncertainty (e.g., uncertainty in the estimate of a population mean) or outcome uncertainty (e.g., variation of outcomes around that mean) about their estimates. How does this choice impact readers' beliefs about the size of treatment effects? We investigate this question in two experiments comparing 95% confidence intervals (means and standard errors) to 95% prediction intervals (means and standard deviations). The first experiment finds that participants are willing to pay more for and overestimate the effect of a treatment when shown confidence intervals relative to prediction intervals. The second experiment evaluates how alternative visualizations compare to standard visualizations for different effect sizes. We find that axis rescaling reduces error, but not as well as prediction intervals or animated hypothetical outcome plots (HOPs), and that depicting inferential uncertainty causes participants to underestimate variability in individual outcomes.
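The two interval types from the abstract differ only in whether the half-width uses the standard error or the standard deviation, so a quick sketch (with hypothetical outcome data) shows how much narrower the confidence interval is:

```python
# Contrast a 95% confidence interval (inferential uncertainty, mean +/- 1.96*SE)
# with a 95% prediction interval (outcome uncertainty, mean +/- 1.96*SD).
import numpy as np

rng = np.random.default_rng(2)
outcomes = rng.normal(5.0, 2.0, size=400)  # hypothetical treatment-group outcomes

mean = outcomes.mean()
sd = outcomes.std(ddof=1)
se = sd / np.sqrt(outcomes.size)

ci = (mean - 1.96 * se, mean + 1.96 * se)  # uncertainty about the mean
pi = (mean - 1.96 * sd, mean + 1.96 * sd)  # variation of individual outcomes
print(f"95% CI: ({ci[0]:.2f}, {ci[1]:.2f})  width {ci[1] - ci[0]:.2f}")
print(f"95% PI: ({pi[0]:.2f}, {pi[1]:.2f})  width {pi[1] - pi[0]:.2f}")
```

Since the SE is SD/sqrt(n), the prediction interval is sqrt(n) times wider (here 20x), which is the gap the paper's readers underestimate when shown only confidence intervals.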