Deep learning is an integral part of artificial intelligence, and the contributions made in the field are immense. With increasing research and development in deep learning, the use of no-code platforms for deep learning has grown as well. Many platforms support machine learning and processes like data visualization and data processing, but few focus solely on deep learning. One such platform is DeepCognition.
In this article, we will learn a little about DeepCognition and build a model using the DeepCognition platform.
Who is DeepCognition.ai?
DeepCognition was founded with the aim of democratizing artificial intelligence. They have created a platform that can be used to create and deploy deep learning models with just a few clicks and no code at all.
The problem they are trying to solve is the shortage of AI expertise, which creates barriers to AI adoption in organizations; their goal is to make deep learning accessible to all.
Features of DeepCognition.ai
Some of the useful features provided by this platform are:
- Design, visualize and train deep learning models with absolutely no code.
- Advanced pre-trained models like Mask R-CNN, DenseNet and MobileNet, or the ability to build your own custom model.
- Improved security for AI workloads.
- AutoML support for hyperparameter tuning.
- Easy installation for Windows and Ubuntu users.
Installation of DeepCognition
To begin working with the platform, we need to install it first. Head over to this link and create an account. The platform is free of cost for all users. After creating your account, you will see this page.
Based on your operating system, select either Ubuntu or Windows. Once you click the icon, the download will begin automatically. It will take a few minutes for the dependencies to download and for the platform to open.
After the download is complete, click the DeepCognition icon on the desktop and you will see this box.
Here, type in `./dlsctl start` as shown above, and the platform will open after checking the available ports.
After the platform opens you can see this page.
Exploring a dataset
The platform comes with common public datasets such as MNIST and Titanic. You can see these by clicking the Datasets option on the right-hand side.
You can either use the public datasets or upload your own by clicking the My Datasets option. Here, you will see an option to upload datasets from your system.
Since this is a tutorial on the platform, I will be using a public dataset: CIFAR-10.
Create a project
The first step here is to click Project -> New and give your project a name and description.
After this, click the green arrow and your project will be created. Then click on your project to see the dashboard, which has multiple options.
Here, as you can see, the dataset is selected. You can change the train/test split and choose whether to load the dataset one batch at a time or all at once. After this, click the Next button and you will see a page where you can create your deep learning model.
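Behind the scenes, a train/test split of this kind is just a shuffled partition of the data. A minimal sketch in Python (the function name and arguments here are illustrative, not DeepCognition's actual API):

```python
import random

def train_test_split(data, train_fraction=0.8, seed=42):
    """Shuffle a dataset and split it into train and test portions.

    Illustrative only: this mimics what a train/test split option does
    behind the scenes, not DeepCognition's real implementation.
    """
    rng = random.Random(seed)
    shuffled = data[:]  # copy so the caller's list is left untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# CIFAR-10 has 60,000 images; an 80/20 split gives 48,000 / 12,000
samples = list(range(60000))
train, test = train_test_split(samples)
print(len(train), len(test))  # 48000 12000
```

The fixed seed makes the split reproducible, which matters when you want training runs to be comparable.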
As you see above, there are several options to choose from. You can first select the input layer, then convolutional layers, max-pooling, and even core layers like flatten and dense. This is great when you want to build custom models. But to use the real power of the tool, click the last icon in the bar, which opens the AutoML option.
This will automatically design a neural network for you once you specify the type of input and the type of output. Then click the Design button and you will see the entire model built for you.
The amazing part here is that as and when you build the model the code is automatically generated for you.
So, you will not only get the model but also the code for your own use.
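The generated code targets Keras. As a rough illustration only, a small CIFAR-10 model comparable in spirit to what the tool produces could look like this (the exact layers AutoML picks will differ):

```python
# Illustrative Keras model for CIFAR-10 classification; layer choices
# are my own sketch, not the platform's actual generated output.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),          # CIFAR-10: 32x32 RGB images
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # one probability per class
])
model.summary()
```

Having this code exported alongside the visual model means you can move the project into an ordinary Python workflow at any point.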
The next part before training is tuning the hyperparameters. Again, this happens with much ease in the platform.
As you can see, you can set the epochs, batch size, optimizer and loss. I have made changes to suit my program as follows.
After building the model and setting the parameters as needed, we can move to the training part.
To train the model, you first need to select the CPU or GPU you will use. These are available for free only for a limited time, so select whichever suits your project.
I have selected the 4 GB GPU. Once done, click Start Training and the training begins. You can see the metrics change on screen as training proceeds; depending on the GPU, the time taken to complete will vary.
Once I click the start button, you can see the changes below.
This process will continue until all epochs have been completed and the weights of the model are automatically stored.
Once the training is completed you will see the final accuracy score and the loss.
This seems to be a good accuracy for 10 epochs but we can get better results with more tuning of the model.
After training the model, we need to see how well it performed and whether any overfitting has occurred.
To do this, click the Results section and you can see the graphs.
You can see that there is no sudden spike and there is no overfitting of the model. The final part of the project is the inference of the model.
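What those graphs are telling you can be reduced to a rule of thumb: validation loss that turns upward while training loss keeps falling is the classic sign of overfitting. A small sketch of that check, using made-up loss values rather than the ones from my run:

```python
def looks_overfit(train_loss, val_loss, window=3):
    """Heuristic: validation loss rising over the last `window` epochs
    while training loss keeps dropping suggests overfitting."""
    if len(val_loss) < window + 1:
        return False
    val_rising = all(val_loss[i] < val_loss[i + 1]
                     for i in range(-window - 1, -1))
    train_falling = train_loss[-1] < train_loss[-window - 1]
    return val_rising and train_falling

# Hypothetical per-epoch losses, not the values from my training run
healthy = looks_overfit([2.1, 1.6, 1.3, 1.1, 1.0],
                        [2.2, 1.7, 1.4, 1.2, 1.1])
overfit = looks_overfit([2.1, 1.6, 1.3, 1.1, 1.0],
                        [2.2, 1.7, 1.8, 1.9, 2.0])
print(healthy, overfit)  # False True
```

In practice you would eyeball the curves as the platform shows them; this is just the same judgment written out explicitly.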
After training and getting the results, we will check the predictions the model has made and draw some inferences. To do this, click the Inference section, select the model weights, and click the Start Inference button.
Once the inference is completed after a few minutes the predictions appear on the screen.
Some of the predictions are wrong, but most are right. Not only that, the probabilities of the predictions, which are the outputs of the softmax function, are also shown here.
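Those probabilities come from the model's final softmax layer, which converts raw class scores into values that sum to 1. A minimal stand-alone sketch (the scores below are made up for illustration):

```python
import math

def softmax(scores):
    """Turn raw class scores (logits) into probabilities summing to 1.
    Subtracting the max score first keeps exp() numerically stable."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores for the 10 CIFAR-10 classes
logits = [2.0, 0.5, 0.1, 1.0, 0.0, 0.2, 0.1, 0.3, 0.0, 0.1]
probs = softmax(logits)
print(round(sum(probs), 6))     # 1.0
print(probs.index(max(probs)))  # 0 -> the highest-scoring class wins
```

The predicted class is simply the index with the highest probability, which is what the platform displays next to each image.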
You can keep these weights and deploy your project from the platform as well.
In this article, we saw an introduction to DeepCognition.ai and built an entire deep learning model step by step without writing a single line of code. This is of great use for non-programmers and business analysts who want to work on deep learning projects with ease, free of cost.
I am an aspiring data scientist with a passion for teaching. I am a computer science graduate from Dayananda Sagar Institute. I have experience in building models in deep learning and reinforcement learning. My goal is to use AI in the field of education to make learning meaningful for everyone.