PyTorch Tabular, a framework for deep learning with tabular data, released its latest update, version 1.1.0, on January 15 of this year. The most notable addition in this release is the integration of the DANet model, a new architecture tailored for tabular data.
The update also brings explainability support through integration with Captum, a model interpretability library. For those looking to fine-tune their models, it adds a hyperparameter tuner offering both Grid Search and Random Search to find the best model parameters.
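To make the two search strategies concrete, here is a minimal plain-Python sketch of the idea behind a grid versus a random hyperparameter search. The search space, the `evaluate` stand-in, and all function names are illustrative assumptions for this sketch, not PyTorch Tabular's actual tuner API.

```python
import itertools
import random

# Hypothetical search space, purely for illustration.
search_space = {
    "learning_rate": [1e-3, 1e-2, 1e-1],
    "dropout": [0.0, 0.1, 0.3],
}

def evaluate(params):
    # Stand-in for training a model and returning a validation score
    # (higher is better); a real tuner would fit and score a model here.
    return -abs(params["learning_rate"] - 1e-2) - params["dropout"]

def grid_search(space):
    # Exhaustively score every combination in the grid.
    keys = list(space)
    combos = (dict(zip(keys, values)) for values in itertools.product(*space.values()))
    return max(combos, key=evaluate)

def random_search(space, n_trials=5, seed=0):
    # Score a fixed number of randomly sampled combinations.
    rng = random.Random(seed)
    trials = [{k: rng.choice(v) for k, v in space.items()} for _ in range(n_trials)]
    return max(trials, key=evaluate)

best_grid = grid_search(search_space)
best_random = random_search(search_space)
print(best_grid)   # grid search always finds the best point on the grid
print(best_random)
```

Grid search is exhaustive but its cost grows multiplicatively with each added parameter; random search caps the budget at `n_trials`, which is why tuners typically offer both.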
With this update, PyTorch Tabular now supports a “Model Sweep” method, allowing users to swiftly evaluate the performance of multiple models on a given dataset. The documentation has been made more user-friendly and informative, catering to both new and experienced users. The release also brings dependency updates for better compatibility and security, and introduces GhostBatchNorm into the library.
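The idea behind a model sweep can be sketched in a few lines: train and score several candidate models on the same data split, then rank them by their error. The toy “models” and names below are assumptions for illustration only, not PyTorch Tabular's sweep API.

```python
def mean_model(train, test):
    # Toy baseline: predict the training mean; return MSE on the test targets.
    mean = sum(train) / len(train)
    return sum((y - mean) ** 2 for y in test) / len(test)

def median_model(train, test):
    # Toy baseline: predict the training median; return MSE on the test targets.
    ordered = sorted(train)
    median = ordered[len(ordered) // 2]
    return sum((y - median) ** 2 for y in test) / len(test)

candidates = {"mean": mean_model, "median": median_model}
train, test = [1.0, 2.0, 3.0, 10.0], [2.0, 3.0]

# Sweep: evaluate every candidate on the same split, then sort by error.
results = sorted((fn(train, test), name) for name, fn in candidates.items())
best_error, best_name = results[0]
print(best_name, best_error)  # median 0.5
```

Because every candidate sees the identical split, the resulting ranking is a fair first-pass comparison before investing in per-model tuning.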
On the infrastructure and continuous-integration side, there have been improvements to CI actions and labels, alongside updates to dependency management. This release also includes some API changes, such as an SSL API change, and allows the use of a custom optimiser in the model configuration.
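The pattern behind a custom optimiser in a model configuration is that the config carries a user-supplied callable rather than a fixed optimiser name. The sketch below illustrates only that pattern; the `ModelConfig` class, `sgd_step` function, and field names are assumptions, not PyTorch Tabular's actual config API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelConfig:
    # Hypothetical config: the optimiser is injected as a callable,
    # so any update rule (e.g. a torch.optim class) could be swapped in.
    learning_rate: float
    optimizer_step: Callable

def sgd_step(params, grads, lr):
    # Toy optimiser: one plain gradient-descent update.
    return [p - lr * g for p, g in zip(params, grads)]

config = ModelConfig(learning_rate=0.1, optimizer_step=sgd_step)
params = config.optimizer_step([1.0, 2.0], [0.5, -0.5], config.learning_rate)
print(params)
```

Injecting the optimiser through the config keeps the training loop unchanged while letting users experiment with alternative update rules.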
PyTorch Tabular, developed by Manu Joseph and a team of contributors, is designed to simplify the process of using deep learning for tabular data. It is built on PyTorch and PyTorch Lightning, focusing on ease of use, customisation, scalability, and deployment. The framework supports various models, including FeedForward Networks, Neural Oblivious Decision Ensembles, TabNet, and more, making it a versatile tool for a wide range of applications in data science and machine learning.