The Gurgaon-Based Startup Focused on Building the Machine Brain

Olbrain uses the same technologies that AI bots use, but mixes reinforcement learning and deep learning models.



Have you ever wondered if machines could have a brain? And if they did, how similar would they be to the human brain? What tasks would they be able to perform? 

In an exclusive interview with Analytics India Magazine, the founder of ‘Olbrain’, Alok Gautam, addresses such curiosities and delves into the making of Olbrain, the problem statements it seeks to address, the challenges the startup faces and its future plans. 

AIM: Olbrain describes itself as a machine brain. How do you interpret that?

Alok: The defining feature of a brain is the ability to think clearly. For machines to operate in the human world, thinking is needed, and that is why we call it a ‘machine brain’. And why ‘machine’? Because it will never be identical to a human brain. It will work differently, and its intelligence will be entirely different from human intelligence. It is a single unit that will be able to perform all tasks, unlike AI, where you need to train a separate bot for each activity. In the case of the machine brain, the same architecture will be able to work on all kinds of data.

AIM: What would you say contributed most significantly to the birth of Olbrain?

Alok: After developing a good understanding of how the human brain works, I started working on AI, and I realised that the path to general intelligence is not a computer science problem, nor a mathematics problem, nor a philosophy problem. It is a trans-domain problem. We need to amalgamate understanding from philosophy, psychology, neuroscience, computer science and mathematics. That is why I started building this team. In 2017, we started working on this approach and came to the conclusion that it is very important for the machine brain to have its own theory of mind. And since the technology is very nascent and it cannot be a universal brain that understands everything about the human world, we need to create separate brains for specific problem statements. For example, for medical diagnostics, it can create a separate artificial theory of mind. For oil and gas, it can create an artificial theory of mind that understands oil and gas. It does not need to understand human emotions as of now; however, it needs to understand how the human world works in order to cater to specific use cases.

AIM: Could you elaborate on the problem that Olbrain is attempting to address?

Alok: Current AI technology can find patterns in given data, but by itself it is not intelligent at all. If the input deviates even slightly from the data the model was trained on, the model falls flat. So I wanted to build a technology that works even when the data deviates, or even on unseen data. It then occurred to me that general AI is the only thing that can work in these scenarios. Most real-world scenarios are like that; you get exactly the data you want only in laboratory conditions. In real-life conditions, it will always deviate.

AIM: Considering the mainstream dialogue around Artificial General Intelligence (AGI), is it plausible that the machine brain will never be able to work like a human brain? 

Alok: The idea that AGI is trying to mimic the human brain is not entirely correct. AGI is a very vague concept, and there are many schools of thought around it. Moreover, intelligence is not absolute. Every kind of intelligence is developed around a central core objective function. In the case of humans, the objective function of the brain is to ensure the survival of the human species. However, when we are talking about machines, the objective function cannot be their own survival. At the same time, human intelligence has developed over millions of years; it has been trained on the data and the experience of how to survive. The same is not the case with machines. They will not get a million years, and they are not here to survive by themselves. They are here to serve humans in their work. That is why the intelligence built into machines will never be like human intelligence.

AIM: ‘Bots are dead. Machine brain is the future.’ How do you differentiate between a machine brain and bots?

Alok: Bots are single models that perform one task. Initially, there were RDA bots, which were rudimentary, non-AI desktop scraping bots. Then came robotic process automation (RPA) bots, which were also non-AI but could mimic a task perfectly. Then came cognitive process automation (CPA) bots, which are the actual AI bots. All these bots are single models trained to perform one activity, for example, identifying the image of a dog. But these bots suffer from a few limitations. One is that they are very rigid: if you feed in data that is slightly different from the data on which they have been trained, they will fall flat and give random results. Another big problem is ‘model drift’. With time, these bots lose efficiency because of a shift in either the context of the data or the data itself. The third problem is ‘generalisation’. For example, if you train a bot to identify dogs and then show it a photo of a horse, it will fall flat. Then there is the problem of the huge amount of data required for training. Commercially, as well, training a model from scratch is often not viable.

The machine brain differs from bots in that it works on the principle of an artificial theory of mind, which works very much like the human theory of mind. As humans, we carry a body of knowledge about the human world in our own brain, and we see everything through it. The machine brain, likewise, keeps absorbing intelligence from every piece of data it goes through and interprets things accordingly. The second difference is ‘transfer learning’. While learning different kinds of tasks, it carries learning from one task over to another. For example, if it has learnt to identify a horse, then while learning to identify a dog, it does not need to learn what yellow is; it has already learnt about a few colours while learning to identify a horse. This ultimately reduces the data requirement for training.
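As a rough illustration of the transfer-learning idea Alok describes, the sketch below reuses a backbone pretrained on one visual task for a new one, so only a small new head needs fresh training data. It uses torchvision's ResNet-18 purely as a stand-in; Olbrain's actual architecture is not public.

```python
# A minimal transfer-learning sketch (illustrative only; not Olbrain's
# architecture). A backbone trained on one visual task is reused for a new
# one, so the new task needs far less data.
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on a generic image task (stand-in for "learning to
# identify a horse"): its early layers already encode colours, edges, textures.
backbone = models.resnet18(weights="IMAGENET1K_V1")

# Freeze what has already been learnt ...
for param in backbone.parameters():
    param.requires_grad = False

# ... and attach a small new head for the new task ("identify a dog"),
# which is the only part that needs fresh training data.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g. dog / not-dog

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)   # placeholder images
labels = torch.randint(0, 2, (8,))     # placeholder labels
optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```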

AIM: What is the technology behind the machine brain architecture?

Alok: We are using the same technologies that the AI bots use, but we are mixing reinforcement learning and deep learning models.
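Alok does not detail how the two are combined. A common form of this mix is deep reinforcement learning, where a neural network approximates the value function that reinforcement learning updates; the sketch below shows that general idea only, with illustrative names and hyperparameters, and is not Olbrain's implementation.

```python
# A generic deep-reinforcement-learning sketch: a small Q-network (deep
# learning) trained with a temporal-difference target (reinforcement learning).
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99

# Deep-learning part: a neural network approximates the action-value function.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(state, action, reward, next_state, done):
    """Reinforcement-learning part: one Q-learning update on a transition."""
    q_pred = q_net(state)[action]
    with torch.no_grad():
        q_target = reward + GAMMA * q_net(next_state).max() * (1.0 - done)
    loss = nn.functional.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Illustrative update on a dummy transition.
s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
td_update(s, action=1, reward=1.0, next_state=s_next, done=0.0)
```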

AIM: Could you please share with us a specific use case of the technology that Olbrain offers?

Alok: Olbrain is going to tackle three kinds of problems. The first is Sensory Processing Microskills (SPMs), which are about processing data from sensors. Sensory data means visual data such as images, videos, x-rays and thermograms, as well as audio. The aim is to identify, say, a chest infection from an x-ray or the possibility of oil from a thermogram. Microskills are basically the superpowers of the machine brain, i.e., the tasks a machine brain is proficient at accomplishing. The second is Subject Signifier Microskills (SSMs), where Olbrain will not only process data but also actively engage in gathering it. A relevant example would be drone surveillance that spots a tank movement in a border area and, based on that data, goes looking for similar movements in the vicinity. Finally, in the case of Object Affordance Microskills, Olbrain will have the ability to even grab objects and manipulate them.
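One way to read the three microskill families is as progressively richer capability layers: interpreting sensor data, deciding what data to gather next, and acting on objects. The sketch below is purely illustrative; the class and method names are hypothetical and are not Olbrain's API.

```python
# Hypothetical capability layers mirroring the three microskill families
# described in the interview (names are illustrative, not Olbrain's API).
from abc import ABC, abstractmethod
from typing import Any

class SensoryProcessingMicroskill(ABC):
    """SPM: interpret data from sensors (images, videos, x-rays, thermograms, audio)."""
    @abstractmethod
    def interpret(self, sensor_data: Any) -> dict: ...

class SubjectSignifierMicroskill(SensoryProcessingMicroskill):
    """SSM: additionally decide what data to gather next (e.g. redirect a drone)."""
    @abstractmethod
    def next_observation(self, interpretation: dict) -> Any: ...

class ObjectAffordanceMicroskill(SubjectSignifierMicroskill):
    """OAM: additionally act on objects in the world (grasping, manipulation)."""
    @abstractmethod
    def act(self, interpretation: dict) -> None: ...
```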

AIM: Could you talk about Olbrain’s current clients? Do you plan to target a specific sector in terms of your clientele?

Alok: We are sector agnostic and open to all. We have started commercialising very recently. At present, we have a healthcare company based out of Bengaluru and the US that uses our technology to interpret diagnostic test reports. We also have an edtech client who is using our offering to index lecture videos so that students can just type a keyword and jump right away to the portion of the video where the professor was talking about that topic.
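The edtech use case amounts to a keyword-to-timestamp lookup over a lecture transcript. The sketch below, which assumes transcript segments with start times (e.g., produced by speech recognition), shows only the basic idea; it is not the client's actual pipeline.

```python
# A minimal sketch of keyword-to-timestamp lookup over a lecture transcript
# (illustrative only; the client's actual pipeline is not described).
def build_index(transcript_segments):
    """transcript_segments: list of (start_seconds, text) pairs, e.g. from ASR."""
    index = {}
    for start, text in transcript_segments:
        for word in text.lower().split():
            index.setdefault(word, []).append(start)
    return index

def search(index, keyword):
    """Return the timestamps (in seconds) at which the keyword is spoken."""
    return index.get(keyword.lower(), [])

segments = [(0, "Today we cover gradient descent"),
            (95, "Backpropagation computes gradients layer by layer"),
            (240, "Finally an example of transfer learning")]
index = build_index(segments)
print(search(index, "backpropagation"))  # -> [95]
```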

AIM: One of the key goals of Olbrain is ‘Robolisation’. What is it, and why have you chosen it?

Alok: We have coined the term ‘Robolisation’ which stands for robotic colonisation of space. The idea behind the concept was that space is a hostile environment. For example, if humans go to Mars or the Asteroid belt, they won’t be able to work there. It needs to be machines first. Robots will go and make the space habitable for humans to come later. Thus, we say, before human colonisation, it will be robotic colonisation. We believe that the Earth cannot sustain us for long. Thus, the idea is to build the technology to make our species a multi-planetary species so that we survive even after the planet dies. 

AIM: What have been the biggest challenges for Olbrain so far? How has Olbrain dealt with them?

Alok: Initially, it looked like a pure R&D problem, but it is very difficult to sustain an R&D effort for that long, so we needed to find a commercial use case. That took a lot of our time. We arrived at a roadmap where we started developing small theories of mind for particular industry use cases that are immediately useful and can generate value for clients. The second challenge is that the ecosystem is not developed, so it was very difficult to even attract investors. However, we now generate our own revenue by selling microskills to clients who derive good value from them and are therefore willing to pay for them.

AIM: What is the roadmap ahead for Olbrain?

Alok: We started off in 2017 by trying to build the AGI architecture. In 2022, we launched the Sensory Processing Microskills. They are already live, and we have clients using them. Our next milestone is set for the year 2030: in the next eight years, we envision launching the Subject Signifier Microskills, and finally, in 2040, the Object Affordance Microskills.

Zinnia Banerjee

Zinnia loves writing, and it is this love that has brought her to the field of tech journalism.