When it comes to deep neural networks (DNNs), we are often confused about the right architecture (types of layers, number of layers, optimization method, etc.) for a specific problem. The shift toward using deep learning models for a wide variety of problems has made it even harder for researchers to design a new neural network and generalize it. In recent years, automated ML, or AutoML, has helped researchers and developers create high-quality deep learning models without human intervention. To extend its usability, Google has developed a new framework called Model Search.
Model Search is an open-source, TensorFlow-based Python framework for building AutoML algorithms at a large scale. This framework allows you:
- To run many AutoML algorithms, from searching for the right model architecture to finding the best distilled models.
- To compare different algorithms from the search space.
- To customize the neural network layers in the search space.
The idea of Model Search was presented by Google at Interspeech 2019 in the paper Improving Keyword Spotting and Language Identification via Neural Architecture Search at Scale, by Hanna Mazzawi, Javier Gonzalvo, Aleks Kracun, Prashant Sridhar, Niranjan Subrahmanya, Ignacio Lopez Moreno, Hyun Jin Park, and Patrick Violette. The key idea of Model Search is to develop a novel neural architecture search that aims at:
- Defining an incremental search.
- Making use of transferable training.
- Using generic neural network blocks.
The architecture of Model Search
At the start of every cycle, the search algorithm goes through all the completed trials and decides what to try next with the help of beam search. It then runs a mutation algorithm over the best architectures chosen from the search space and hands the resulting model back to the trainer. Here, S is the set of training and validation examples, and A is the set of all candidates used during training and search.
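The cycle described above can be sketched in plain Python. Everything in this snippet — the block names, the scoring function, and the mutation rules — is a toy stand-in for illustration, not the real Model Search implementation.

```python
import random

random.seed(0)

# Toy sketch of the search loop: a "model" is a list of block names,
# its "score" a made-up validation metric. None of this is the real API.
BLOCKS = ["conv3x3", "conv5x5", "dnn", "lstm"]

def evaluate(model):
    # Hypothetical trainer: returns a fake validation score in (0, 1).
    return sum(len(b) for b in model) / (10.0 * len(model)) + random.random() * 0.1

def mutate(model):
    # Mutation algorithm: swap one block, or grow the architecture by a block
    # (the "incremental search" idea from the key ideas above).
    child = list(model)
    if random.random() < 0.5:
        child[random.randrange(len(child))] = random.choice(BLOCKS)
    else:
        child.append(random.choice(BLOCKS))
    return child

def search(num_trials=20, beam_width=3):
    completed = [(["dnn"], evaluate(["dnn"]))]  # completed trials so far
    for _ in range(num_trials):
        # Beam search: keep only the best architectures seen so far...
        beam = sorted(completed, key=lambda t: t[1], reverse=True)[:beam_width]
        # ...then mutate one of them and hand the child back to the trainer.
        parent, _ = random.choice(beam)
        child = mutate(parent)
        completed.append((child, evaluate(child)))
    return max(completed, key=lambda t: t[1])

best_model, best_score = search()
print(best_model, round(best_score, 3))
```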
Installation & Requirements
This framework is not yet available on PyPI, so it has to be cloned using git:
!git clone https://github.com/google/model_search.git
%cd /content/model_search/
The requirements for Model Search can be installed using requirements.txt. The command is shown below:
!pip install -r requirements.txt
Compile all the proto files using the protoc compiler; the commands are shown below:
%%bash
protoc --python_out=./ model_search/proto/phoenix_spec.proto
protoc --python_out=./ model_search/proto/hparam.proto
protoc --python_out=./ model_search/proto/distillation_spec.proto
protoc --python_out=./ model_search/proto/ensembling_spec.proto
protoc --python_out=./ model_search/proto/transfer_learning_spec.proto
If you get an unparsed-flag error while importing Model Search, load the flags first. The code snippet is available below:
import sys
from absl import app

# Addresses `UnrecognizedFlagError: Unknown command line flag 'f'`
sys.argv = sys.argv[:1]

# `app.run` calls `sys.exit`
try:
  app.run(lambda argv: None)
except SystemExit:
  pass
Demo – Model Search for CSV data.
This demo shows how to use the Model Search framework on CSV data where all the features are numeric. The steps for a classification problem are as follows:
- Import all the required modules and packages.
import model_search
from model_search import constants
from model_search import single_trainer
from model_search.data import csv_data
- Model Search does not provide any pipeline for data cleaning and feature engineering. Users have to do this step manually.
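Since Model Search leaves cleaning and feature engineering to you, a minimal preprocessing sketch could look like the following. The inline CSV snippet and column names are made up for illustration; the imputation mirrors the record_defaults-style replacement of nulls with 0.

```python
import csv
import io

# Made-up CSV data with missing values, standing in for your real file.
raw = """label,f1,f2,f3
0,1.5,,3.0
1,,2.0,0.5
"""

rows = []
for row in csv.DictReader(io.StringIO(raw)):
    # Impute missing values with 0 and coerce every field to float,
    # since Model Search expects clean numeric features.
    rows.append({k: (float(v) if v else 0.0) for k, v in row.items()})

print(rows[0])  # {'label': 0.0, 'f1': 1.5, 'f2': 0.0, 'f3': 3.0}
```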
- Create a trainer instance and pass the CSV data to csv_data.Provider, where label_index is the column number of the labels in the dataframe, logits_dimension is the number of classes in the data, record_defaults is an array (of size equal to the number of features) used for data imputation, i.e., if any null value is present in the four columns, it gets replaced by 0, and filename specifies the data file. Finally, spec represents the search space; you can create your own or use the default as mentioned below.
trainer = single_trainer.SingleTrainer(
    data=csv_data.Provider(
        label_index=0,
        logits_dimension=2,
        record_defaults=[0, 0, 0, 0],
        filename="model_search/data/testdata/csv_random_data.csv"),
    spec="model_search/configs/dnn_config.pbtxt")
- Try out different models on the trainer object via try_models. This also covers the ensemble methods. The arguments are:
number_models: the number of models to try out.
train_steps: each model is trained for this many steps.
eval_steps: the model is evaluated every eval_steps steps.
root_dir: path to the directory where the results are saved.
batch_size: the batch size for the data.
experiment_name: the experiment name (additional information).
experiment_owner: the experiment owner (additional information).
Run the code below to start training, searching, and evaluating over the search space. The example tries out 200 different models for 1000 steps each and evaluates each model every 100 steps. It might take some time to run.
trainer.try_models(
    number_models=200,
    train_steps=1000,
    eval_steps=100,
    root_dir="/tmp/run_example",
    batch_size=32,
    experiment_name="example",
    experiment_owner="model_search_user")
The output of the above code will contain all the model IDs and their accuracy at each step.
- You can check out all the trials performed under the root_dir directory (/tmp/run_example in this example):
To get information about each model (accuracy, evaluation, etc.):
For each model, the tuner-1 directory contains the model architecture, checkpoints, evaluation data, etc. An example of reading the architecture of model id 1 is shown below:
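As an illustrative sketch of browsing such a results directory, the snippet below mocks a tuner-1/<model_id>/ layout matching the description above and walks it; the actual files and layout Model Search writes may differ.

```python
import os
import tempfile

# Mock a results directory shaped like the description above
# (root_dir/tuner-1/<model_id>/...). This is an assumption for illustration,
# not the guaranteed on-disk layout of Model Search.
root_dir = tempfile.mkdtemp()
for model_id in ("1", "2"):
    d = os.path.join(root_dir, "tuner-1", model_id)
    os.makedirs(d)
    # Stand-in artifact for the saved model architecture.
    open(os.path.join(d, "graph.pbtxt"), "w").close()

# List the per-model subdirectories to see which trials were run.
model_dirs = sorted(os.listdir(os.path.join(root_dir, "tuner-1")))
print(model_dirs)  # ['1', '2']
```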
- For non-CSV data: to create a custom trainer object, Model Search lets you define your own trainer by inheriting some abstract classes. An example of it is shown here.
- Add your Models & Architectures to Model Search Space
- Creating a training stand-alone binary without writing a main
- Distributed Computation
In this article, we have discussed Model Search, a flexible and domain-agnostic TensorFlow framework for automated ML. As the authors of Model Search put it:
By building upon previous knowledge for a given domain, we believe that this framework is powerful enough to build models with the state-of-the-art performance on well-studied problems when provided with a search space composed of standard building blocks.
– Hanna Mazzawi, Research Engineer, and Xavi Gonzalvo, Research Scientist, Google Research
This framework currently deals with classification problems only; support for regression models is yet to be released.
Official codes, Documentation & tutorials are available at: