Jake Anders

108 Followers
123 Following
58 Posts
Education, evaluation, economics, etc. | Professor of Quantitative Social Science @cepeo_ucl, UCL | PI of COSMO study | Dad to two | Reblog ≠ endorse.
Location: Oxford/London, UK
Website: https://jakeanders.uk
RePEc: https://ideas.repec.org/f/pan354.html
Gravatar: https://gravatar.com/jakeanders

Those of you who know me will know what a difficult and devastating year it has been for me in my personal life. Nothing can make up for our family’s loss.

I am determined, nevertheless, to be pleased with this professional success.

I have some news I'm very happy about.

In case you're specifically looking for the England-focussed content, I'll immodestly point out that you can find my contribution in chapter 3, starting on page 54.

Delighted to have contributed the chapter on England to this new book on the education policy response to the COVID-19 pandemic's unequal disruption to young people's education.

The full publication was launched today and is available here: https://op.europa.eu/en/publication-detail/-/publication/13381883-ec31-11ee-8e14-01aa75ed71a1

The pandemic, socioeconomic disadvantage, and learning outcomes - Publications Office of the EU

Merry Christmas to all from our newly expanded family! 🎄

This is awesome.

How can we convert from an effect size to a percentile point difference?

Just multiply by 37.

So simple. So useful.

https://edworkingpapers.com/ai23-829

Multiply by 37: A Surprisingly Accurate Rule of Thumb for Converting Effect Sizes from Standard Deviations to Percentile Points | EdWorkingPapers

Educational researchers often report effect sizes in standard deviation units (SD), but SD effects are hard to interpret. Effects are easier to interpret in percentile points, but conversion from SDs to percentile points involves a calculation that is not intuitive to educational stakeholders. We point out that, if the outcome variable is normally distributed, simply multiplying the SD effect by 37 usually gives an excellent approximation to the percentile-point effect.
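
For anyone who wants to sanity-check the rule, here's a quick sketch of the comparison (my own code, not the authors'; it treats the percentile-point effect as the movement of a student starting at the 50th percentile of a normal distribution, and assumes scipy is installed):

```python
# Compare the exact normal-theory conversion with the multiply-by-37 rule.
# Under normality, a median student shifted by d SDs moves from the 50th
# percentile to the 100 * Phi(d) percentile, a gain of 100 * (Phi(d) - 0.5).
from scipy.stats import norm

for d in [0.05, 0.1, 0.2, 0.3, 0.5, 0.8]:
    exact = 100 * (norm.cdf(d) - 0.5)  # exact percentile-point gain
    rule = 37 * d                      # the rule of thumb
    print(f"d = {d:.2f} SD: exact = {exact:5.2f} pp, rule = {rule:5.2f} pp")
```

For tiny effects the exact multiplier is nearer 40 (the slope of the normal CDF at the median is about 39.9 percentile points per SD), so 37 is a compromise that stays accurate across the range of effect sizes education research typically reports.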

You can also read the paper on EdWorkingPapers if you prefer their front cover, or something like that (the content is the same): https://edworkingpapers.com/ai23-821

Experimental education research: clarifying why, how and when to use random assignment | EdWorkingPapers

Over the last twenty years, education researchers have increasingly conducted randomised experiments with the goal of informing the decisions of educators and policymakers. Such experiments have generally employed broad, consequential, standardised outcome measures in the hope that this would allow decisionmakers to compare effectiveness of different approaches. However, a combination of small effect sizes, wide confidence intervals, and treatment effect heterogeneity means that researchers have largely failed to achieve this goal.

Read our working paper and let us know what you think: https://econpapers.repec.org/RePEc:ucl:cepeow:23-07

We're still actively working on this, so your thoughts and suggestions are much appreciated.

EconPapers: Experimental education research: rethinking why, how and when to use random assignment

By Sam Sims, Jake Anders, Matthew Inglis, Hugues Lortie-Forgues, Ben Styles and Ben Weidmann

A lot of money is spent conducting RCTs in education research.

We want to get the most out of this spending, and to ensure that the real challenges we identify don't become a pretext for those who oppose RCTs per se to talk down their important role in education research.

Many experiments aiming to inform decision-makers would be better replaced by rigorous quasi-experimental work or, where they are feasible, multi-site trials (individually randomised but carried out within multiple schools).

Experiments in education remain valuable, especially for testing theoretical models that can then inform educators' mental models or intervention design. These theory-informing experiments should be designed quite differently from those intended to inform decision-makers.
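
To put rough numbers on the "small effect sizes, wide confidence intervals" point, here is a back-of-the-envelope sketch (my own illustration, not from the paper; it assumes scipy) of the minimum detectable effect size for a simple two-arm, individually randomised trial with a standardised outcome:

```python
# Minimum detectable effect size (MDES) for a two-arm trial with equal
# allocation, outcome SD = 1, two-sided alpha = 0.05 and 80% power.
# The SE of the difference in means is sqrt(1/n_t + 1/n_c) = sqrt(4/n).
from scipy.stats import norm

alpha, power = 0.05, 0.80
multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # roughly 2.80

for n in [200, 400, 1000, 4000]:  # total pupils, split equally across arms
    mdes = multiplier * (4 / n) ** 0.5
    print(f"n = {n:5d}: MDES = {mdes:.2f} SD")
```

Even a 4,000-pupil trial of this idealised kind can only reliably detect effects of about 0.09 SD, and clustering in school-randomised designs pushes the detectable effect higher still, which is part of why we argue for multi-site trials and for clarity about what a given experiment can actually inform.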