I just released a new notebook that demonstrates one of my favorite libraries, ELI5, and how to version your models with model cards for experiment tracking using skops 🧑🏻‍🔬💜

📓 Notebook: https://kaggle.com/code/unofficialmerve/explainability-and-versioning
📔 Resulting repository: https://huggingface.co/scikit-learn/xgboost-example
A couple of cool things 🧵

When versioning your experiments, it's best to keep a few pieces of information for better reproducibility:

๐Ÿง‘๐Ÿปโ€๐Ÿ”ฌ Your hyperparameters and attributes of preprocessors and architecture (pipeline)
๐Ÿ“ˆ Metrics
๐Ÿ‘‘ Feature importances (which I used ELI5 for)
โœ… Requirements of your environment
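The four items above can be gathered in a few lines. This is only a sketch with a toy model: the notebook uses ELI5 for the feature importances, while here the estimator's own `feature_importances_` attribute is used as a stand-in, and all names are hypothetical.

```python
# Sketch: collecting the reproducibility info listed above for one experiment.
# Toy example; the post computes feature importances with ELI5, here the
# estimator's built-in feature_importances_ attribute stands in for it.
from importlib.metadata import version

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

experiment = {
    # hyperparameters / attributes of the estimator (or full pipeline)
    "params": model.get_params(),
    # metrics
    "metrics": {"accuracy": accuracy_score(y_test, model.predict(X_test))},
    # feature importances (ELI5's explain_weights gives richer output)
    "importances": dict(zip(load_iris().feature_names, model.feature_importances_)),
    # environment requirements
    "requirements": [f"scikit-learn=={version('scikit-learn')}"],
}
```

Dumping a dict like this next to every trained model already makes runs comparable; skops then takes care of most of it for you, as described below.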

skops already saves some of the above information for you, and the rest can be added (as tables, metrics, plots and more!), so you can simply write a training script once and run it every time you train a model.
And it's a very light dependency ☝️✨
A cool thing about versioning on the Hugging Face Hub is that you can access the information in the model card programmatically to analyze your experiments: e.g. run multiple experiments, automatically create and push model cards for all of them, then pull the info back from the cards and write a small script to find out which model is best! 🌟
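For example, metrics stored in a card's metadata can be read back with `huggingface_hub`. The repo name and metric are hypothetical, and the card below is parsed from a local string so the example runs offline; `ModelCard.load("user/repo")` works the same way against a Hub repository.

```python
# Sketch: reading metrics back out of a model card programmatically.
from huggingface_hub import ModelCard

card_text = """\
---
model-index:
- name: experiment-1
  results:
  - task:
      type: tabular-classification
    dataset:
      type: iris
      name: iris
    metrics:
    - type: accuracy
      value: 0.97
---
# experiment-1
"""

card = ModelCard(card_text)  # for Hub repos: ModelCard.load("your-username/experiment-1")
metadata = card.data.to_dict()
metrics = metadata["model-index"][0]["results"][0]["metrics"]
accuracy = next(m["value"] for m in metrics if m["type"] == "accuracy")
```

Loop this over a list of repo ids and you have a tiny leaderboard for your own experiments.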
You don't even need to host them openly: when pushing, set `private` to True and your repositories will be completely private (or, if you push to an organization, only the people in that org can see them, e.g. a lab, a company or an ML competition team).