<div style="background-color: #4B0082; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> {{BloomIntro}} Tabular deep learning applies neural network architectures to structured tabular data β the spreadsheet-format data that dominates enterprise applications, business analytics, and scientific databases. For decades, gradient boosting (XGBoost, LightGBM, CatBoost) has dominated tabular ML competitions and production systems, consistently outperforming neural networks. Recent tabular deep learning research challenges this status quo: architectures like TabNet, TabTransformer, FT-Transformer, and foundation models for tabular data (TabPFN, SAINT) are closing the gap. Understanding when deep learning beats gradient boosting β and when it doesn't β is essential knowledge for ML practitioners. </div> __TOC__ <div style="background-color: #000080; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> == <span style="color: #FFFFFF;">Remembering</span> == * '''Tabular data''' β Structured data organized in rows (samples) and columns (features); the dominant format in enterprise ML. * '''Heterogeneous features''' β Tabular data typically mixes numerical and categorical features of varying scales and semantics; unique challenge vs. images/text. * '''Feature interactions''' β Relationships between features that jointly predict the target; gradient boosting discovers these via trees; DL via attention. * '''Entity embedding''' β Representing categorical variables as learned dense vectors; a key technique enabling neural networks to handle high-cardinality categoricals. * '''TabNet''' β An attention-based neural network for tabular data with built-in feature selection; Arik & Pfister (2021). * '''TabTransformer''' β A transformer applying self-attention to categorical embeddings; Sheikh et al. (2021). * '''FT-Transformer (Feature Tokenizer + Transformer)''' β Embeds all features (numerical + categorical) as tokens; applies transformer; Gorishniy et al. (2021). * '''TabPFN''' β A pre-trained transformer that performs in-context learning on small tabular datasets; prior-fitted networks. * '''SAINT''' β Self-Attention and Intersample Attention Transformer; applies attention both within and across samples. * '''XGBoost / LightGBM / CatBoost''' β The dominant gradient boosting frameworks; still the baseline to beat on most tabular benchmarks. * '''Prior-Data Fitted Networks (PFN)''' β Models pre-trained on synthetic tabular datasets that can perform few-shot inference on new datasets. * '''Hyperparameter sensitivity''' β Neural networks for tabular data require careful tuning; GBDTs are more robust to hyperparameter choices. * '''Large Language Models for tables''' β Using LLMs for tabular tasks via serialization; surprisingly competitive on certain tasks. * '''AutoML''' β Automated ML pipeline search including architecture selection; FLAML, AutoGluon, H2O AutoML. </div> <div style="background-color: #006400; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> == <span style="color: #FFFFFF;">Understanding</span> == The "tabular gap": neural networks excel at images, text, and audio because these have spatial/sequential structure that convolutions and attention exploit efficiently. Tabular data lacks this structure β features are semantically heterogeneous, with no natural ordering. A column called "age" is fundamentally different from a column called "revenue" in ways that have no analog in pixels. 
'''Why GBDTs win''': Gradient boosted trees handle heterogeneous features natively, discover complex feature interactions via splits, are robust to irrelevant features (automatic feature selection), require minimal preprocessing, and train quickly. They are hard to beat on tabular benchmarks because they solve exactly the problems posed by tabular data, without the overhead of deep learning.

'''Where DL can win on tabular data''':
# '''Large datasets''' (>100K samples): neural networks keep improving with scale where GBDTs plateau.
# '''High-cardinality categoricals''': entity embeddings for user IDs or product IDs with millions of values.
# '''Multi-modal inputs''': when tabular data is combined with text, images, or other modalities.
# '''End-to-end learning''': when the tabular model is part of a larger differentiable system.
# '''Online learning''': neural networks update incrementally more easily than tree ensembles.

'''FT-Transformer – current SOTA among tabular DL models''': The Feature Tokenizer + Transformer embeds each feature as a token (using a linear layer for numerical features and an embedding table for categorical ones), prepends a CLS token, and applies standard transformer layers. It consistently outperforms TabNet and approaches GBDT performance on many benchmarks, while remaining a clean, generalizable architecture.

'''TabPFN – few-shot tabular ML''': Pre-trained on millions of synthetic tabular datasets, TabPFN uses in-context learning to make predictions on new small datasets (up to ~1,000 samples) in a single forward pass, with no training required. On small datasets it frequently matches or beats an XGBoost model tuned for minutes. This is a fundamentally different paradigm from standard ML.
</div>

<div style="background-color: #8B0000; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Applying</span> ==
'''FT-Transformer on a tabular benchmark:'''
<syntaxhighlight lang="python">
import torch
import torch.nn as nn
from rtdl import FTTransformer  # FT-Transformer via the rtdl library: pip install rtdl

# Separate numerical and categorical features
n_num_features = 8                # number of continuous features
cat_cardinalities = [5, 100, 10]  # cardinality of each categorical feature

model = FTTransformer.make_default(
    n_num_features=n_num_features,
    cat_cardinalities=cat_cardinalities,
    d_out=1,  # 1 for binary classification or regression; n_classes for multiclass
)
optimizer = model.make_default_optimizer()  # AdamW with the library's default settings

# Training loop
def train_epoch(model, loader, optimizer, task='classification'):
    model.train()
    total_loss = 0
    for X_num, X_cat, y in loader:
        logits = model(X_num, X_cat).squeeze(1)
        if task == 'classification':
            loss = nn.BCEWithLogitsLoss()(logits, y.float())
        else:
            loss = nn.MSELoss()(logits, y.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / len(loader)

# Quick comparison: TabPFN for small datasets (no training!)
from tabpfn import TabPFNClassifier
from sklearn.metrics import roc_auc_score

# Assumes X_train_small, y_train_small, X_test_small, y_test_small are
# already defined and the training set has at most ~1000 samples.
clf = TabPFNClassifier(device='cpu', N_ensemble_configurations=32)
clf.fit(X_train_small, y_train_small)  # instant: no gradient descent
preds = clf.predict_proba(X_test_small)
print(f"TabPFN AUC: {roc_auc_score(y_test_small, preds[:, 1]):.3f}")
# Often matches XGBoost on small datasets without any hyperparameter tuning!
</syntaxhighlight>
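For reference, a minimal GBDT baseline on the same split – a sketch assuming the xgboost package and the X_train_small / X_test_small arrays from the snippet above; the hyperparameters are illustrative, not tuned:
<syntaxhighlight lang="python">
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

# Lightly configured XGBoost baseline (values are illustrative, not tuned
# for any particular dataset).
xgb = XGBClassifier(n_estimators=500, learning_rate=0.05, eval_metric='logloss')
xgb.fit(X_train_small, y_train_small)
xgb_preds = xgb.predict_proba(X_test_small)
print(f"XGBoost AUC: {roc_auc_score(y_test_small, xgb_preds[:, 1]):.3f}")
</syntaxhighlight>
Running both gives the side-by-side comparison that the evaluation checklist below insists on: if the deep model does not beat this baseline, ship the GBDT.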
; Tabular DL framework selection guide
: '''Small data (<1K samples)''' – TabPFN (no training needed), or XGBoost with Bayesian HPO
: '''Medium data (1K–100K)''' – XGBoost/LightGBM baseline; try FT-Transformer
: '''Large data (>100K)''' – FT-Transformer, SAINT; can beat GBDTs
: '''High-cardinality categoricals''' – entity embeddings + any DL model; CatBoost is also strong
: '''Multi-modal (tabular + text)''' – TabTransformer/FT-Transformer + BERT fusion
: '''AutoML''' – AutoGluon-Tabular (tests multiple model families); a strong default
</div>

<div style="background-color: #8B4500; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Analyzing</span> ==
{| class="wikitable"
|+ Tabular ML Method Comparison (2024)
! Method !! Small Data !! Large Data !! Categorical Handling !! Training Speed !! Interpretability
|-
| XGBoost/LightGBM || Excellent || Good || Good (ordinal encoding) || Fast || Medium (SHAP)
|-
| CatBoost || Excellent || Good || Excellent (native) || Fast || Medium (SHAP)
|-
| TabNet || Good || Good || Medium || Slow || High (attention masks)
|-
| FT-Transformer || Good || Excellent || Excellent (embeddings) || Slow || Low
|-
| TabPFN || Excellent (≤1K) || N/A || Good || Instant (inference only) || Low
|-
| Random Forest || Good || Good || Poor || Medium || Medium
|}
'''Failure modes''':
* Overfitting small tabular datasets with deep learning (more parameters than samples).
* Forgetting to normalize numerical features for neural networks.
* Missing values – NNs require explicit imputation, while GBDTs handle them natively.
* Hyperparameter sensitivity of tabular NNs (learning rate and weight decay require tuning).
* Scale mismatch between features, causing slow convergence.
</div>

<div style="background-color: #483D8B; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Evaluating</span> ==
Tabular deep learning evaluation:
# '''Benchmark against a GBDT baseline''': always compare to tuned XGBoost or LightGBM – if DL doesn't beat it, use the GBDT.
# '''Multiple random seeds''': tabular DL has higher variance; report mean ± std over 5+ seeds.
# '''Cross-validation''': strict k-fold with stratification for classification.
# '''Calibration''': check whether predicted probabilities are well calibrated.
# '''Computation budget''': account for DL training time vs. GBDT; the gains from DL must justify the extra cost.
</div>

<div style="background-color: #2F4F4F; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;">
== <span style="color: #FFFFFF;">Creating</span> ==
Tabular ML production pipeline:
# Baseline: train XGBoost/LightGBM with default hyperparameters; measure AUC/RMSE.
# AutoML: run AutoGluon-Tabular for 1 hour; assess the best model it finds.
# If DL is worth pursuing: FT-Transformer with AdamW, learning-rate warmup, and cosine annealing.
# Feature engineering: log-transform skewed numeric features; frequency-encode very high-cardinality categoricals.
# Ensembling: stack GBDT and FT-Transformer predictions; the stack often beats either alone.
# Deployment: export XGBoost as ONNX or LightGBM in its native format; export FT-Transformer as TorchScript.

[[Category:Artificial Intelligence]]
[[Category:Machine Learning]]
[[Category:Tabular Data]]
</div>