TabPFN

AI foundation model for tabular data

TabPFN (Tabular Prior-data Fitted Network) is a machine learning model for tabular datasets proposed in 2022. It uses a transformer architecture.[1] It is intended for supervised classification and regression analysis on small- to medium-sized datasets, with TabPFN-2.5 supporting up to 50,000 rows and 2,000 features.[4]

Developers: Noah Hollmann, Samuel Müller, Lennart Purucker, Arjun Krishnakumar, Max Körfer, Shi Bin Hoo, Robin Tibor Schirrmeister, Frank Hutter, Leo Grinsztajn, Klemens Flöge, Oscar Key, Sauraj Gambhir[1]
Initial release: September 16, 2023 (2023-09-16)[2][3]
Stable release: v2.5 / November 6, 2025
Written in: Python[3]
Operating system: Linux, macOS, Microsoft Windows[3]
Type: Machine learning
License: Apache License 2.0
Website: github.com/PriorLabs/TabPFN

History

TabPFN was first introduced in a 2022 preprint and presented at ICLR 2023.[2] TabPFN v2 was published in 2025 in Nature by Hollmann and co-authors.[1] The source code is published on GitHub under a modified Apache License and on PyPI.[5] Writing for the ICLR blog, McCarter states that the model has attracted attention due to its performance on small-dataset benchmarks.[6] TabPFN v2.5, the next generation of the foundation model, was released on November 6, 2025.[4]

Prior Labs, founded in 2024, aims to commercialize TabPFN.[7]

Overview and pre-training

TabPFN supports classification, regression and generative tasks.[1] It applies the "prior-data fitted network"[8] approach to model tabular data.[1] By pre-training a transformer on synthetic tabular datasets,[2][6] TabPFN avoids benchmark contamination and the cost of curating real-world data.[2]

TabPFN v2 was pre-trained on approximately 130 million such datasets.[1] Synthetic datasets are generated using causal models or Bayesian neural networks; this can include simulating missing values, imbalanced data, and noise.[1] Random inputs are passed through these models to generate outputs, with a bias towards simpler causal structures.[1] During pre-training, TabPFN predicts the masked target values of new data points given training data points and their known targets, effectively learning a generic learning algorithm that is executed by running a neural network forward pass.[1] The new dataset is then processed in a single forward pass without retraining.[2] The model's transformer encoder processes features and labels by alternating attention across rows and columns.[9] TabPFN v2 handles numerical and categorical features, missing values, and supports tasks like regression and synthetic data generation,[1] while TabPFN-2.5 scales this approach to datasets with up to 50,000 rows and 2,000 features.[4]
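TabPFN's actual prior is far more elaborate than any short example can show, but the idea of drawing a synthetic supervised dataset from a random causal generator can be sketched as follows. Everything here (the tiny random network standing in for a structural causal model, the sizes, the noise and missingness rates) is illustrative, not the published pre-training procedure:

```python
import numpy as np

def sample_synthetic_dataset(n_rows=128, n_features=5, seed=0):
    """Draw one synthetic tabular dataset from a random 'causal' generator.

    A stand-in for TabPFN's prior: random inputs are passed through a
    randomly weighted network, and the output is thresholded to form a
    classification target. All names and sizes are illustrative only.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_rows, n_features))       # random input features
    # Random two-layer network playing the role of an unknown causal mechanism.
    W1 = rng.normal(size=(n_features, 8))
    W2 = rng.normal(size=(8,))
    latent = np.tanh(X @ W1) @ W2
    latent += rng.normal(scale=0.1, size=n_rows)    # observation noise
    y = (latent > np.median(latent)).astype(int)    # binary target
    # Simulate missing cells, as real-world tables often contain them.
    mask = rng.random(X.shape) < 0.05
    X[mask] = np.nan
    return X, y

X, y = sample_synthetic_dataset()
```

During pre-training, millions of such datasets would be generated, and the transformer would be trained to predict held-out targets from the rest of each table.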

Since TabPFN is pre-trained, in contrast to other deep learning methods, it does not require costly hyperparameter optimization.[9]
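The shape of this computation, where "fitting" merely stores the training table and prediction is a single vectorized pass in which each test row attends to all training rows, can be illustrated with a toy classifier. This sketch uses a single hand-written attention step over rows and is not TabPFN itself, which relies on pre-trained transformer weights:

```python
import numpy as np

class ToyInContextClassifier:
    """Illustrative stand-in for PFN-style in-context prediction.

    fit() only stores the training table; predict() is one forward pass
    in which each test row attends (softmax over similarities) to all
    training rows and averages their labels. This mimics the shape of
    the computation, not TabPFN's trained weights.
    """

    def __init__(self, temperature=1.0):
        self.temperature = temperature

    def fit(self, X, y):
        self.X_ = np.asarray(X, dtype=float)
        self.y_ = np.asarray(y, dtype=int)
        return self

    def predict_proba(self, X):
        X = np.asarray(X, dtype=float)
        # Negative squared distance acts as a query-key similarity score.
        d2 = ((X[:, None, :] - self.X_[None, :, :]) ** 2).sum(axis=-1)
        scores = -d2 / self.temperature
        scores -= scores.max(axis=1, keepdims=True)   # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)       # softmax over training rows
        p1 = attn @ self.y_                           # attention-weighted labels
        return np.column_stack([1 - p1, p1])

    def predict(self, X):
        return (self.predict_proba(X)[:, 1] >= 0.5).astype(int)

# No retraining happens in fit(); predict() is a single forward pass.
clf = ToyInContextClassifier().fit([[0, 0], [0, 1], [5, 5], [6, 5]], [0, 0, 1, 1])
print(clf.predict([[0.2, 0.1], [5.5, 5.2]]))  # prints: [0 1]
```

The real TabPFN package exposes a similar scikit-learn-style fit/predict interface, with the difference that its single forward pass runs a pre-trained transformer over the stored table.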

Research

TabPFN is the subject of ongoing research. Its applications have been investigated in domains such as chemoproteomics,[10] insurance risk classification,[11] and metagenomics.[12]

Limitations

TabPFN has been criticized for its "one large neural network is all you need" approach to modeling problems.[6] Its performance is also limited on high-dimensional and large-scale datasets.[13]

