#statstab #467 Hypothesis testing, model selection, model comparison some thoughts

Thoughts: An excellent (but too short) discussion of Bayesian inference.

#bayesian #bayesfactor #modelselection #inference #NBHT #BF #ROPE #primer

https://discourse.mc-stan.org/t/hypothesis-testing-model-selection-model-comparison-some-thoughts/19163

Hypothesis testing, model selection, model comparison - some thoughts

EDIT: This was an attempt to write guidance. It turns out I stepped quite far out of my depth and the text sounded much more conclusive than it should. I think it is correct to currently just classify it as “some thoughts” rather than guidance. I still think it is useful to have a place to list possible approaches, but the text definitely needs more work. Sorry for the confusion. Coming from a classical statistics background, Stan users often want to be able to test some sort of null hypothesis. S...

The Stan Forums
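One of the approaches flagged in the tags (#ROPE) is to check how much of the posterior falls inside a region of practical equivalence around the null value. A minimal sketch of that idea, assuming posterior draws for an effect are available as an array; the draws and the ±0.1 bounds below are illustrative, not from the thread:

```python
import numpy as np

def rope_fraction(draws, low=-0.1, high=0.1):
    """Fraction of posterior draws inside the region of practical equivalence."""
    draws = np.asarray(draws)
    return np.mean((draws >= low) & (draws <= high))

# Illustrative posterior: effect centered at 0.05 with sd 0.02
rng = np.random.default_rng(0)
draws = rng.normal(0.05, 0.02, size=10_000)
inside = rope_fraction(draws)  # most of the mass sits inside [-0.1, 0.1]
```

If `inside` is close to 1 the effect is practically equivalent to zero under the chosen bounds; the hard part, as the thread stresses, is justifying those bounds.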
Beyond Standard LLMs

Linear Attention Hybrids, Text Diffusion, Code World Models, and Small Recursive Transformers

Ahead of AI
Not so Prompt: Prompt Optimization as Model Selection

Here's a framework for prompt optimization.

Defining Success: Metrics and Evaluation Criteria

Before collecting any data, establish what success looks like for your specific use case. Choose a primary metric that directly reflects business value: accuracy for classification, F1 for imbalanced datasets, BLEU/ROUGE for generation tasks, or custom domain-specific

Gojiberries
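Treating prompt optimization as model selection means scoring every candidate prompt with the chosen primary metric on held-out data and keeping the winner. A minimal sketch using the F1 metric mentioned above; the prompt names and label arrays are hypothetical:

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Pick the candidate prompt whose predictions score highest on validation labels
y_true = [1, 0, 1, 1, 0, 1]
candidates = {"prompt_a": [1, 0, 1, 0, 0, 1], "prompt_b": [1, 1, 1, 1, 1, 1]}
best = max(candidates, key=lambda k: f1_score(y_true, candidates[k]))
```

Note that "predict all positive" (`prompt_b`) gets perfect recall but loses on precision, which is exactly why F1 rather than accuracy is suggested for imbalanced data.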

#statstab #393 Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements [actual post]

Thoughts: #392 has the comments, but this is where the magic happens.

#modelselection #modelcomparison #variance #effectsize #tutorial

https://www.fharrell.com/post/addvalue/

Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements – Statistical Thinking

Researchers have used contorted, inefficient, and arbitrary analyses to demonstrate added value in biomarkers, genes, and new lab measurements. Traditional statistical measures have always been up to the task, and are more powerful and more flexible. It’s time to revisit them, and to add a few slight twists to make them more helpful.

Statistical Thinking

#statstab #392 Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements (forum thread)

Thoughts: Forums can be great for asking the author for exact answers to complex questions.

#modelselection #causalinference #prediction #bias #information

https://discourse.datamethods.org/t/statistically-efficient-ways-to-quantify-added-predictive-value-of-new-measurements/2013/1

Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements

This topic is for discussions about Statistically Efficient Ways to Quantify Added Predictive Value of New Measurements

Datamethods Discussion Forum

#statstab #358 What are some of the problems with stepwise regression?

Thoughts: Model selection is not an easy task, but maybe don't naively try stepwise regression.

#stepwise #regression #QRPs #issues #phacking #modelselection #bias

https://www.stata.com/support/faqs/statistics/stepwise-regression-problems/

Stata | FAQ: Problems with stepwise regression

What are some of the problems with stepwise regression?
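A hedged illustration of one of the problems the FAQ lists: after selecting the "best" of many predictors, the reported p-value ignores the selection step, so pure noise can look significant. Synthetic data, numpy only:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 50, 50
X = rng.normal(size=(n, p))   # pure-noise predictors
y = rng.normal(size=n)        # outcome unrelated to all of them

# One forward-selection step: pick the predictor most correlated with y
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
best = int(np.argmax(np.abs(corrs)))
r = abs(corrs[best])

# Naive t statistic for the winner, as if it had been prespecified
t = r * np.sqrt((n - 2) / (1 - r**2))
```

With 50 chances to win, the selected |r| is the maximum of 50 null draws, so `t` frequently clears the usual 5% cutoff even though every predictor is noise. This is the bias/p-hacking mechanism behind several items on Stata's list.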

IRIS Insights I Nico Formanek: Are hyperparameters vibes?
April 24, 2025, 2:00 p.m. (CEST)
Our second IRIS Insights talk will take place with Nico Formanek.
🟦
This talk will discuss the role of hyperparameters in optimization methods for model selection (currently often called ML) from a philosophy of science point of view. Special consideration is given to the question of whether there can be principled ways to fix hyperparameters in a maximally agnostic setting.
🟦
This is a WebEx talk to which everyone who is interested is cordially invited. It will take place in English. Our IRIS speaker, Jun.-Prof. Dr. Maria Wirzberger, will moderate it. Following Nico Formanek's presentation, there will be an opportunity to ask questions. We look forward to active participation.
🟦
Please join this Webex talk using the following link:
https://lnkd.in/eJNiUQKV
🟦
#Hyperparameters #ModelSelection #Optimization #MLMethods #PhilosophyOfScience #ScientificMethod #AgnosticLearning #MachineLearning #InterdisciplinaryResearch #AIandPhilosophy #EthicsInAI #ResponsibleAI #AITheory #WebTalk #OnlineLecture #ResearchTalk #ScienceEvents #OpenInvitation #AICommunity #LinkedInScience #TechPhilosophy #AIConversations
LinkedIn

Can anyone help with understanding how best to do #modelselection in the context of #neuralnetworks? I'm trying to understand how to reduce #bias due to the selection of a particular test set.

More details here

https://stats.stackexchange.com/q/620547/582

Cross-validation and model selection of ANN

I have a neural network that I use to classify data into a number of classes; in my particular case, the classes are imbalanced, but I am trying to understand this for the general case. I am using F1

Cross Validated
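One standard answer to the test-set-selection worry in the question above is to rotate the held-out set with k-fold cross-validation and average the metric over folds (nesting a second loop inside each training fold when hyperparameters are also being tuned). A stdlib-only sketch of the fold bookkeeping; the `evaluate` callback is a placeholder for training and scoring the network:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k shuffled, near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(n, k, evaluate):
    """Average the metric over k rotations of the held-out fold."""
    folds = k_fold_indices(n, k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for m, f in enumerate(folds) if m != i for j in f]
        scores.append(evaluate(train_idx, test_idx))
    return sum(scores) / k
```

Because every example serves as test data exactly once, the resulting estimate no longer depends on one arbitrary split; for imbalanced classes, stratified folds (same class proportions per fold) are the usual refinement.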

7/10) This finding led to our #proposal: Can we use α for #modelSelection in an #SSL pipeline?

Two key advantages of α:

1. α doesn’t require labels

2. α is quick to #compute (compared to training a readout)

We study hyperparam selection in #BarlowTwins (Zbontar et al.) as a case study!

#AI #ML #deeplearning #neuroscience
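Assuming the α in this thread is the power-law decay exponent of the representation eigenspectrum (λ_i ∝ i^(−α), as in the α-ReQ line of work on SSL representation quality), both advantages follow: it is computed from the embeddings' covariance spectrum alone, with no labels and no readout training. A hedged sketch; the function names are mine:

```python
import numpy as np

def estimate_alpha(eigvals):
    """Fit lambda_i ~ i^(-alpha): negative slope of log-eigenvalue vs log-rank."""
    ranks = np.arange(1, len(eigvals) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(eigvals), 1)
    return -slope

def embedding_alpha(Z):
    """Alpha of the embedding covariance eigenspectrum (no labels needed)."""
    cov = np.cov(Z, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return estimate_alpha(eigvals[eigvals > 1e-12])

# Sanity check: an exact power-law spectrum recovers its exponent (≈ 1.5)
alpha = estimate_alpha(np.arange(1.0, 101.0) ** -1.5)
```

Hyperparameter selection then amounts to sweeping the Barlow Twins settings and preferring the run whose α lands in the favorable range, skipping the labeled readout entirely.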