Been expanding work benchmarking #TabPFN against more standard #tabular classification methods.

Here’s a comparison of accuracy on Census data as a function of training set size. #XGBoost matches the accuracy TabPFN reaches at its maximum training size (1024 rows) using just ~500 records - half(!) as many!

XGB does 1.5% better at an equal training set size (1024 rows), and 5% better using the full data (~30k points).
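If you want to poke at this yourself, here's a minimal sketch of the size-sweep setup (not my exact script): it assumes the `tabpfn` and `xgboost` packages, and uses OpenML's "adult" census dataset as a stand-in with a crude ordinal encoding.

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from tabpfn import TabPFNClassifier
from xgboost import XGBClassifier

# Census-style data; the real benchmark may use a different source/encoding.
X, y = fetch_openml("adult", version=2, return_X_y=True, as_frame=True)
X = OrdinalEncoder(encoded_missing_value=-1).fit_transform(X)  # crude but uniform
y = (y == ">50K").astype(int).to_numpy()
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

rng = np.random.default_rng(0)
for n in (128, 256, 512, 1024):  # 1024 = TabPFN's max training size
    idx = rng.choice(len(X_tr), size=n, replace=False)
    for name, clf in (("TabPFN", TabPFNClassifier()),   # slow on CPU
                      ("XGBoost", XGBClassifier())):
        clf.fit(X_tr[idx], y_tr[idx])
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(f"n={n:5d}  {name:8s}  test acc = {acc:.3f}")
```

Same subsample for both models at each size, default hyperparameters everywhere - so this is a floor for XGB, not a tuned comparison.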

#DataScience #MachineLearning

Hoping to get more results out this week, but the combination flooster (flu+COVID booster) got me big time today