Automated machine learning – or AutoML – is an approach that reduces the time spent on the iterative tasks of model development. AutoML tools help developers build scalable models with ease and minimal domain expertise.
AutoML is one of the most actively researched areas in the ML community. AutoML studies have found ways to constrain search spaces to isolated algorithmic aspects, such as the learning rule used during backpropagation, the gating structure of an LSTM, or the data augmentation policy. However, most of these algorithmic aspects remain hand-designed.
This approach may save compute time, but it has a few drawbacks.
- Search results can be biased in favor of human-designed components, which reduces the innovation potential of AutoML. Innovation is also limited because you cannot discover what you do not search for.
- Constrained search spaces need to be carefully composed, creating a new burden on researchers and curtailing the purported goal of saving their time.
To address this, a team at Google Brain has introduced AutoML-Zero, which automatically searches for complete machine learning algorithms with little restriction on their form, using only simple mathematical operations as building blocks.
Overview Of AutoML-Zero
Even before we jump into AutoML-Zero, we need to understand two crucial design aspects of AutoML:
- It should be flexible and modular enough that everyone can find what they are looking for, whether that is hyperparameter optimization, drift detection, or algorithm benchmarking.
- Since some domain experts consider the current level of AutoML not good enough, a lot of research aims at building AutoML products that improve feature engineering and reduce the time spent searching for hyperparameters – a crucial aspect of meta-learning.
AutoML-Zero aims to search a fine-grained space simultaneously for optimization procedures, model initializations, and other such components, while requiring minimal human design in the fundamental building blocks. Building from scratch in this way might also lead to the discovery of non-neural-network algorithms.
Such automatically discovered algorithms have the potential to perform well on a given set of machine learning tasks.
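To make the idea of "simple mathematical operations as building blocks" concrete, here is a minimal sketch of a candidate algorithm represented as straight-line code over a small register file. This is loosely in the spirit of AutoML-Zero's instruction-based programs; the operation names and register layout here are illustrative, not the paper's exact operation set.

```python
import numpy as np

# A toy "algorithm" is just a list of instructions of the form
# (operation, destination register, operand registers).
PROGRAM = [
    ("mul", 2, 0, 1),   # r2 = r0 * r1
    ("add", 3, 2, 2),   # r3 = r2 + r2
    ("neg", 4, 3, 3),   # r4 = -r3 (second operand unused)
]

def run(program, inputs, n_registers=8):
    """Execute an instruction list on a zero-initialized register file."""
    regs = np.zeros(n_registers)
    regs[:len(inputs)] = inputs
    for op, dst, a, b in program:
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]
        elif op == "neg":
            regs[dst] = -regs[a]
    return regs

out = run(PROGRAM, [2.0, 3.0])
print(out[4])  # -((2*3) + (2*3)) = -12.0
```

Because a program is just data, a search procedure can mutate it freely – inserting, deleting, or editing instructions – without any assumption that the result resembles a neural network.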
The approach of AutoML-Zero experiments can be summarized as follows:
- Search experiments randomly explore a vast space of algorithms; the search can also be evolutionary.
- During the search, the quality of the algorithms is measured on a subset of tasks, and each experiment stands a chance of producing a high-quality candidate algorithm.
- Once the search is done, the best candidates are selected by measuring their performance on another subset of tasks, similar to standard ML model selection with a validation set.
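The search-then-select loop above can be sketched as a small regularized-evolution routine: keep a population of candidates, score each on a proxy objective, and repeatedly replace the oldest member with a mutated copy of a strong one. The candidate encoding and fitness function below are stand-ins for illustration, not the paper's actual evaluation.

```python
import random

def mutate(candidate):
    # Toy mutation: nudge one randomly chosen coefficient by +/-1.
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def fitness(candidate):
    # Stand-in objective: prefer candidates whose coefficients sum to 10.
    return -abs(sum(candidate) - 10)

def evolve(pop_size=20, steps=500, tournament=5, seed=0):
    random.seed(seed)
    population = [[0, 0, 0] for _ in range(pop_size)]
    for _ in range(steps):
        # Tournament selection: pick the best of a small random sample.
        sample = random.sample(population, tournament)
        parent = max(sample, key=fitness)
        population.pop(0)                 # age out the oldest member
        population.append(mutate(parent)) # add a mutated child
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Aging out the oldest member (rather than the worst) keeps the search exploring instead of collapsing onto an early local optimum; in a real AutoML-Zero run the candidates would be instruction programs and fitness would be validation accuracy on proxy tasks.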
For their experiments, the authors used binary classification tasks extracted from CIFAR-10. To lower compute cost and achieve higher throughput, they used random projections to reduce the dimensionality of the features, creating small proxy tasks.
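The dimensionality-reduction step can be sketched with a random Gaussian projection: multiply the flattened image features by a random matrix to get a much smaller proxy representation. The sizes below mirror CIFAR-10's flattened 32x32x3 images but are illustrative, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, d_full, d_proxy = 100, 3072, 16
X = rng.standard_normal((n_samples, d_full))  # stand-in for image features

# Random projection matrix, scaled so projected norms stay comparable.
P = rng.standard_normal((d_full, d_proxy)) / np.sqrt(d_proxy)
X_small = X @ P

print(X_small.shape)  # (100, 16)
```

Evaluating candidate algorithms on 16-dimensional proxies instead of 3072-dimensional images makes each fitness evaluation cheap, which is what allows the search to test huge numbers of candidates.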
Randomly modifying the programs and periodically selecting the best-performing ones on given tasks/datasets reportedly paves the way for the discovery of reasonable algorithms.
With their experimental results, the authors demonstrate that evolutionary search has excellent potential for discovering nuanced ML algorithms.
Key Takeaways
With AutoML-Zero, the authors have tried to accomplish the following:
- Propose a new framework to automatically search for ML algorithms from scratch with minimal human design.
- Release the framework as open-source code, together with a search space that combines basic mathematical operations.
The authors conclude that evolutionary methods can find solutions in the AutoML-Zero search space despite its enormous size and sparsity.
The silver lining is that the AutoML-Zero search space provides ample room for algorithms to distinguish themselves. This opens the door for future work that improves on these results with more sophisticated evolutionary approaches, reinforcement learning, Bayesian optimization, and other methods that have helped AutoML in the past.
Know more about AutoML-Zero here.