Image from Grinsztajn, Oyallon and Varoquaux (arXiv:2207.08815)
Neural networks often struggle to learn irregular patterns in the target function, are less robust to uninformative features, and their rotation invariance hurts performance on tabular data, where each feature carries its own meaning. This gives practitioners building tabular-specific deep learning models some clear imperatives (a minimal code sketch follows the list):
1. Carefully select only important features.
2. Use adequate regularization and hyperparameter optimization to counter the bias toward overly smooth solutions.
3. Preserve the orientation of the data (for example, avoid rotation-based transforms such as PCA that mix features together).
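To make these imperatives concrete, here is a minimal sketch in Python with scikit-learn. It is not code from the paper: the dataset (`make_friedman1`), the number of features kept, and the search space are illustrative assumptions. The pipeline filters out uninformative features, tunes regularization, and scales features axis-by-axis without rotating them.

```python
# Illustrative sketch only: dataset, k, and search space are assumptions,
# not choices from Grinsztajn et al.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# make_friedman1 generates 10 features, of which only the first 5 are informative.
X, y = make_friedman1(n_samples=2000, n_features=10, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    # Imperative 3: scale each feature independently, but do not rotate the
    # data (no PCA), so the original feature orientation is preserved.
    ("scale", StandardScaler()),
    # Imperative 1: keep only the most informative features.
    ("select", SelectKBest(mutual_info_regression, k=5)),
    ("mlp", MLPRegressor(max_iter=500, early_stopping=True, random_state=0)),
])

# Imperative 2: regularization (alpha) plus a hyperparameter search to fight
# the bias toward overly smooth fits.
search = RandomizedSearchCV(
    pipe,
    param_distributions={
        "mlp__alpha": np.logspace(-6, -1, 20),          # L2 penalty strength
        "mlp__hidden_layer_sizes": [(64,), (128,), (256, 128)],
        "mlp__learning_rate_init": np.logspace(-4, -2, 10),
    },
    n_iter=20,
    cv=3,
    random_state=0,
)
search.fit(X_train, y_train)
print(f"held-out R^2: {search.score(X_test, y_test):.3f}")
```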
Explainable AI is as much a social challenge as a technical one. Through case-study examinations, the authors found that “tweet-length” local explanations, along with transparency into how other people interact with the AI system, go a long way toward encouraging end-user engagement. Ultimately, trust was what users needed to realize the full benefit of the AI system; high accuracy alone was not enough.
Want to dive deeper into papers like these? Join us for the next session of our paper reading group (join here), or hang out with fellow AI practitioners in our Slack community.