What tests your patience more than coding a deep learning model? Training one. So how would it feel if I told you that you could build a deep learning model of your own with practically zero effort compared to traditional workflows?
This has been made possible by the research team at Google: with Google Teachable Machine, you can train and download a deep learning model of your own.
Let us see how that can be done.
What is Google Teachable Machine?
It is a web-based platform that lets you train a model to identify and recognize sounds, poses and even images.
It is mainly used for classification tasks, and the trained models can be exported to TensorFlow.js (tf.js), Coral, Arduino, Glitch, p5.js and more.
Let us take a tour of it.
It would look something like this:
The engine works by taking a dataset from the user, training on the data provided, and finally predicting the outcome. As mentioned earlier, the trained model can also be downloaded.
Scroll through the site to get a better understanding of the engine.
- Click on Get Started
You will see three options: Image Project, Audio Project, and Pose Project.
For now, we will go with the Image Project.
- It would look something like this.
- Next, we will add the dataset to the respective classes.
We will be building a face mask classifier of our own. You can drag and drop individual images or even an entire folder.
I have put together a dataset for this, so that is what I will use here.
After uploading the images for both classes, we move on to the training process.
Since we have a relatively small number of images, we will increase the number of training epochs. You can tweak all of these parameters in the advanced settings of the Training panel. Here, we will raise the number of epochs from the default 50 to 150.
Then click on Train Model to start training. Training over 150 epochs took less than 30 seconds, which is quite fast. You can adjust the settings to suit your needs.
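For the curious, these advanced settings map onto familiar training hyperparameters. Below is a minimal Keras sketch of a comparable setup (Teachable Machine's image projects reportedly use transfer learning on a pre-trained MobileNet); the dataset folder layout, image size and class count here are illustrative assumptions, not the tool's actual internals.

```python
# Rough transfer-learning sketch of what the advanced Training settings control
# (epochs, batch size, learning rate). Paths and sizes are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)   # MobileNet's expected input size
BATCH_SIZE = 16         # "Batch Size" in the advanced settings
EPOCHS = 150            # raised from the default 50, as in this article
LEARNING_RATE = 0.001   # "Learning Rate" in the advanced settings

# Expects a folder layout like dataset/mask/... and dataset/no_mask/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", image_size=IMG_SIZE, batch_size=BATCH_SIZE)

# Frozen MobileNetV2 backbone plus a small trainable classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),      # mask / no mask
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

model.fit(train_ds, epochs=EPOCHS)
```

The point of the sketch is simply that raising the epoch count gives a small dataset more passes through the network, which is exactly what the slider in the advanced settings does for you.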
Now let us test how good the model really is.
The webcam input is on by default; we can turn it off if we don't want the model to predict from the live webcam feed. You can also supply an image for it to classify: just change the input option from Webcam to File in the drop-down list.
Testing with an image, we see the following.
The model predicts with 99 per cent confidence that the person is wearing a mask.
Now, on to exporting the model, which lets you download it for further use. You can do this by clicking on Export Model.
Exporting the model:
You can export the model in the format of your choice. Here is a screenshot demonstrating the step. Click on Download my model to download it.
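Once downloaded, the model can be used outside the browser. As a hedged example, here is a small Python sketch that loads the Keras export (which typically ships as keras_model.h5 plus a labels.txt file) and classifies a single image; the test image filename is an assumption for illustration.

```python
# Sketch: load the downloaded Keras export and classify one image.
# File names follow the typical Keras export (keras_model.h5, labels.txt);
# the test image path is an illustrative assumption.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("keras_model.h5", compile=False)
class_names = [line.strip() for line in open("labels.txt")]

# Preprocess the image the same way as training: 224x224 pixels scaled to [-1, 1]
img = tf.keras.utils.load_img("test_mask.jpg", target_size=(224, 224))
x = tf.keras.utils.img_to_array(img)
x = (x / 127.5) - 1.0
x = np.expand_dims(x, axis=0)

probs = model.predict(x)[0]
best = int(np.argmax(probs))
print(f"{class_names[best]}: {probs[best]:.2%}")
```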
Conclusion
The aim of this article was to introduce aspirants and enthusiasts to code-less training of deep learning models.
I hope you liked it.