| statistics | metrology |
| chemistry |
@steven You set up a hypothesis to the effect that a given difference is smaller than some chosen value and test it against the null that it isn’t.
This is sometimes called ‘equivalence testing’ and a common implementation is the ‘Two One-Sided Tests’ (TOST) procedure, so called because you’re effectively testing against both ends of an interval.
See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5502906/
Scientists should be able to provide support for the absence of a meaningful effect. Currently, researchers often incorrectly conclude an effect is absent based on a nonsignificant result. A widely recommended approach within a frequentist framework is to ...
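To make the idea concrete, here’s a minimal sketch of a one-sample TOST. The function name, data and bounds are made up for illustration, and it uses a large-sample normal approximation (via the standard library’s `statistics.NormalDist`) in place of the t distribution a proper TOST would use:

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

def tost_one_sample(xs, low, high):
    """Two One-Sided Tests for 'the mean lies inside [low, high]'.

    Large-sample normal approximation; a real analysis would use the
    t distribution (e.g. via scipy or a dedicated TOST package).
    Returns the larger of the two one-sided p-values: equivalence is
    claimed only if BOTH one-sided tests reject, i.e. max p < alpha.
    """
    n = len(xs)
    m = mean(xs)
    se = stdev(xs) / sqrt(n)
    nd = NormalDist()
    # H0a: mean <= low   vs  H1a: mean > low
    p_lower = 1 - nd.cdf((m - low) / se)
    # H0b: mean >= high  vs  H1b: mean < high
    p_upper = nd.cdf((m - high) / se)
    return max(p_lower, p_upper)
```

Data tightly clustered inside the bounds gives a small p (equivalence supported); data well outside gives a large one.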
@cherdt Interestingly, 3 sd doesn’t make sense for the Normal either; it’s a bit too much if you’re aiming at 99%. Shewhart knew that real distributions mostly have heavier tails than the Normal when he chose 3-sigma action limits. And as a rule of thumb for spotting the unusual it’s surprisingly robust.
But it can go wrong, of course, in quite enough ways to make thinking important.
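You can check the overshoot directly with the standard library’s `statistics.NormalDist`: under a Normal, ±3 sd covers about 99.7%, while 99% two-sided coverage only needs a multiplier of about 2.58:

```python
from statistics import NormalDist

nd = NormalDist()  # standard Normal

# two-sided coverage of mean +/- 3 sd under a Normal
cover_3sd = nd.cdf(3) - nd.cdf(-3)   # ~0.9973, i.e. more than 99%

# the multiplier that actually gives 99% two-sided coverage
z_99 = nd.inv_cdf(0.995)             # ~2.576
```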
@hishamzerriffi I cannot say I have an answer.
It is certainly important to recognise the dark side of figures like Pearson and Galton when looking at their history and the history of statistics. But that is about the men. Mathematically, the methods do not depend on that history, and most were built on older foundations and have been extended by many others.
That suggests teaching the history honestly but perhaps remembering it’s about the men, not the statistics per se.
@hishamzerriffi Yes, the history there is salutary. Galton in particular seems to have bent his research to support his preconceptions.
But there are few academic pursuits that have not been turned to evil at some point. Does that mean we should have a health warning on every academic textbook, listing the evils done by past practitioners? I’m not sure we shouldn’t, of course. But since we don’t for most, it seems reasonable to ask where we should draw the line for statistics.
@hishamzerriffi A problem is that as far as I can see, _everyone_ in the early 20th century was a neo-Darwinist, a racist, a eugenicist, or some combination of all three. The Empire was still there; we English were out there bravely helping the poor savages learn Christianity, law and trousers. It was all a “Good Thing”.
The challenge, sadly, is not to find and unmask the racists among researchers back then. It’s to find someone who wasn’t.
@pastelbio Spitballing a bit, but if it’s just a trend, that sounds like a case for a GLM ... possibly a GLMM if you want the individuals treated as random effects and there are multiple counts per individual. You could also consider a simpler linear mixed model on log-transformed counts. The snag there is that any data transform makes a fitted trend harder to interpret; I’d rather not do that for continuous predictors.
If you’re looking at seasonal time series, though, I’m officially out of my depth :(
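For the simple-trend case, here’s a bare-bones sketch of what a log-link Poisson GLM (no random effects) is doing under the hood, fitted by IRLS. The function name and data are invented for illustration; in practice you’d reach for statsmodels in Python or `glm()`/`lme4` in R rather than hand-rolling this:

```python
import math

def fit_poisson_trend(x, y, iters=25):
    """Fit E[y] = exp(b0 + b1 * x) by IRLS -- the log-link Poisson GLM.

    Illustrative only: one predictor, no random effects, no convergence
    checks or standard errors.
    """
    # start from a flat fit at the mean count
    ybar = sum(y) / len(y)
    b0, b1 = math.log(max(ybar, 0.1)), 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # working response and weights for the log link
        z = [(b0 + b1 * xi) + (yi - mi) / mi
             for xi, yi, mi in zip(x, y, mu)]
        w = mu
        # weighted least squares of z on (1, x), solved in closed form
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swz = sum(wi * zi for wi, zi in zip(w, z))
        swxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
        det = sw * swxx - swx * swx
        b0 = (swxx * swz - swx * swxz) / det
        b1 = (sw * swxz - swx * swz) / det
    return b0, b1
```

On counts generated from a known log-linear trend, the recovered slope sits on the multiplicative scale directly (exp(b1) is the per-unit rate ratio), which is the interpretability win over log-transforming the data first.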