"not articulate enough" http://odonnellgroup.github.io
George Box famously said that "all models are wrong, but some are useful"; what he forgot to add was that usefulness doesn't just depend on the model.
A model is useful *only with respect to a given target problem*.
updated version of our paper on Bayesian modelling for whole-brain cell count data: https://elifesciences.org/reviewed-preprints/102391
People spend 1-2 years collecting these kinds of gene-expression/anatomy/IEG data... what's another 1-2 months learning + applying Bayes to get more stats bang for your buck 😍
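To make the "Bayes for count data" point concrete, here is a minimal sketch of my own (an illustration, not the model from the paper): a conjugate Gamma-Poisson update for per-region cell counts, about the simplest Bayesian treatment of this kind of data.

```python
# Minimal sketch (illustrative, not the paper's model): conjugate
# Gamma-Poisson inference for a Poisson rate from observed cell counts.
import numpy as np

def gamma_poisson_posterior(counts, alpha0=1.0, beta0=1.0):
    """Posterior over a Poisson rate given observed counts.

    Prior: rate ~ Gamma(alpha0, beta0); likelihood: counts ~ Poisson(rate).
    Returns the posterior Gamma parameters and the posterior mean.
    """
    counts = np.asarray(counts)
    alpha = alpha0 + counts.sum()    # shape parameter: add the total count
    beta = beta0 + counts.size       # rate parameter: add the number of observations
    return alpha, beta, alpha / beta # posterior mean of the rate

# Hypothetical example: cell counts from 5 animals in one brain region
alpha, beta, mean = gamma_poisson_posterior([12, 9, 15, 11, 13])
print(alpha, beta, round(mean, 2))
```

The payoff over a plain sample mean is that you get a full posterior distribution over the rate, so uncertainty propagates into any downstream comparison between regions or conditions.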
What I find really interesting is papers that were ignored for years and then suddenly gained a lot of citations, sustained over a long time. I only know of these kinds of papers because I signed one of them myself. Gave me the confidence to work on whatever I think is right without expecting any immediate splash.
Papers that make a splash do so because they deliver not just within the adjacent possible but, closer still, within the adjacent imaginable: what many thought would be desirable and not against any physical laws.
excellent write-up by Shelby Bradford in The Scientist on the MICrONS project to build large-scale connectivity maps of the mouse brain - with some small comments by me.
I have been blown away by all the various connectome projects and really do think they will change neuroscience forever... and maybe AI too who knows!
Lord of the Rings characters: screen time vs mentions in the book.
The further from the dotted line, the further off trend.
By reddit user austinw-8 https://www.reddit.com/r/dataisbeautiful/s/Dw7XqDxyEB
gave a short lecture this morning on principles of computational modelling; I always try to stress the point made by @romainbrette that adding details to a model does not automatically make it more realistic.
The wooden airplane model has more 'details', but only the paper model can fly.