
Interview with Ajinkya Bhave, Director (Engineering Services), Siemens

When you look at intelligence, it is evolving mainly because there is life and consciousness. There is the will to survive. My feeling is that unless you can teach AI about death, you will not make it intelligent.


Is AGI the true north star of artificial intelligence? What is the environmental impact of fast-moving technologies like machine learning? Are digital twins the next big thing? Analytics India Magazine caught up with Ajinkya Bhave, Director and Country Head (India) – Engineering Services at Siemens Digital Industries Software, to get answers to these questions and more.

Edited excerpts:

AIM: How did you get interested in artificial intelligence?

Ajinkya Bhave: In the final year of my Bachelor’s at Mumbai University, we had a course on robotics and AI; this ignited my interest in the field and, by extension, in machine learning. From there, I did my Master’s in Robotics at Carnegie Mellon University (CMU). I realised that robotics is a huge field, and I was more inclined toward its machine learning and control systems aspects. I went on to pursue a second master’s degree and a PhD in control systems, with autonomous systems as the backdrop. So, in my education, robotics was the focus of my interest and passion, which is the case even today, and machine learning is an enabler for that.

I have been working with Siemens for 10 years now in its Engineering Services (ES) group. We are a unique group within the Siemens Digital Industries Software business unit. We do not work directly on a product but offer state-of-the-art customised solutions to our clients, combining engineering domain expertise with Siemens tools and frameworks. Currently, I head the ES India group with three teams in the fields of control systems, systems simulation, and computational fluid dynamics. Each team has exceptionally talented and motivated members, led by an experienced technical manager. The control systems team focuses mainly on machine learning and autonomous driving.

I have gone from working on control systems to machine learning and artificial intelligence, first in academia and now in an industry role at Siemens. So, we went from traditional control and automation all the way to machine learning, and increasingly into AI.

AIM: Having been part of the AI and machine learning community for a long time, what are the most common misconceptions you would like to bust?

Ajinkya Bhave: There are two concerning trends I see in the AI/ML domain today. The first one is the use of AI as the buzzword for everything. At Carnegie Mellon, when we were taught about AI and machine learning, we were also made aware of the difference between the two – the important fact that ML is a subset of AI. Many of today’s systems use deep learning, but we call them AI generically, which is not completely correct. Our team at Siemens is very specific that we are a machine learning group and not a general AI group.

The second concern is when people say that a neural network is a model of the human brain. It’s not their mistake; the conversation around this topic has largely been misleading. If you talk to a biologist, they will tell you that the human neuron is extremely complex and advanced and that the machine learning model of a neuron does not even come close to that. There are synapses, dendrites and activation functions, and there are multiphase signals going on in a biological neuron. Each human neuron is like one deep neural network, as a recent study pointed out. A single human neuron is a computational factory by itself.

AIM: We all talk about how machine learning and deep learning have been breakthrough technologies. However, we often overlook the newer challenges they bring with themselves. One of them is the effect these fast technologies have on the environment. What are your views on that?

Ajinkya Bhave: As we compute more and more, we are consuming a lot of power, and this will only grow in the coming years. A classic example is the GPT-3 model, with ~175 billion parameters. At some point, it becomes an overfitting problem. As John von Neumann said, “With four parameters, I can fit an elephant, and with five, I can make him wiggle his trunk.”

It’s true because, at some point, models are no longer intelligent; they are just fitting the data. It depends on the model, the domain, and the data, but I have seen a trend where a larger model applied to a complex problem doesn’t really solve the problem; it gives an illusion of solving it. This is something machine learning practitioners must think about. There is a certain elegance in having the right model for the job. The new trend should be a move from large models to models that are just efficient enough to do the job.
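Von Neumann’s elephant quip can be made concrete with a toy experiment (a minimal sketch, not from the interview): a five-parameter polynomial fitted to five noisy samples of a straight line achieves zero training error by memorising the noise, while a two-parameter line captures the actual trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five noisy samples of a simple linear relationship y = 2x + noise.
x = np.linspace(0, 1, 5)
y = 2 * x + rng.normal(0, 0.1, size=5)

# A degree-4 polynomial (five parameters) passes through every training
# point exactly -- zero training error, but it has fitted the noise.
overfit = np.polyfit(x, y, deg=4)

# A degree-1 model (two parameters) captures the underlying structure.
simple = np.polyfit(x, y, deg=1)

# Compare predictions outside the training range, where memorised noise
# typically sends the large model far from the true value y = 2 * 1.5.
x_test = 1.5
print("overfit model:", np.polyval(overfit, x_test))
print("simple model: ", np.polyval(simple, x_test))
print("ground truth: ", 2 * x_test)
```

The larger model is not “more intelligent”; it simply has enough capacity to fit anything, including the noise.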

AIM: Artificial general intelligence is considered to be the true north star of AI for some. What are your views on this?

Ajinkya Bhave: Given the way we do machine learning and AI today, I don’t think we can have machines that think like humans. Fundamentally, when you look at intelligence, it is evolving mainly because there is life and consciousness. There is the will to survive. My feeling is that unless you can teach AI about death, you will not make it intelligent. Unless AI knows and fears death, it will not know and independently evolve strategies for how to survive. Some people say we can penalise the systems in the right way to simulate evolution or use reinforcement learning to learn the right survival actions based on rewards. But it is not the same as the machine semantically understanding and avoiding death. Life and intelligence are complex phenomena, and I don’t believe you can compress them into an explicit formula. You can’t make machines ‘intelligent’ by a pure deep learning approach; at best, what you can do is mimicry.

AIM: We have been seeing massive interest in digital twins in machine learning’s context. What is its future?

Ajinkya Bhave: In many cases, these terms are used as buzzwords, mainly because that is how the industry operates. But Siemens invests a lot in digital twins; it’s not a buzzword for Siemens. Digital twins differ from simulation models because of the amount of detail and fidelity that goes into the twin. Yes, you can simulate anything to a certain point, but developing that detail and realism, and keeping the digital twin updated with the current state of the physical system, is what differentiates the two. Typically, at Siemens, a digital twin is a detailed, scaled model of the actual hardware. It is a living, digitised embodiment of your plant. You can do a lot of studies on it that you may not be able to do otherwise because of the time, access, or cost involved. The use of digital twins across the full spectrum of engineering applications has increased dramatically in recent years and continues to grow rapidly.

AIM: What are your comments on AI as snake oil?

Ajinkya Bhave: When AI is used that loosely, it is usually by people who are not conversant with the technology. Practitioners in the field don’t treat AI as the ultimate go-to. At Siemens, when a client comes to us, we don’t propose AI as the first approach. We examine the problem, and only if AI or machine learning is applicable do we suggest it as a solution. Since we are an engineering group, we don’t take an AI-first approach; we take an AI-guided approach. A lot of our preferred approaches come from physical systems dynamics. We use machine learning selectively, applying it only where it lends an efficiency edge that traditional approaches lack.

AIM: What are your tech predictions for 2022?

Ajinkya Bhave: It is difficult to predict technology. However, I think two things might happen. The first is a deeper combination of machine learning with digital twins. Data is hard to get in the real world, which is why you have to augment it via simulation. And how do you use simulation smartly and efficiently? Via machine learning. Using digital twins to train machine learning models will increase as companies scale up. If the simulated data is close to reality, the model will be trained well and eventually be able to predict accurately.

The second will be the scaling down of machine learning models. This is not easy to do. Scaling down a model means gaining more insight into its architecture and semantics. You have to take layers out of the model to understand what it is doing. So, linked with the scaling of models will be explainable AI. As an extension, companies will invest more in the verification and validation of these models, and not just for autonomous vehicles. Models should be trustworthy and, at least, supervisable, so that they do not do something stupid when deployed in real-world scenarios.
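One common family of techniques for scaling models down is pruning. The sketch below (purely illustrative; not a Siemens method and not mentioned in the interview) shows magnitude-based weight pruning, where the smallest-magnitude weights are zeroed out on the assumption that they contribute least to the model’s output:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.8):
    """Zero out the smallest-magnitude weights.

    Keeps roughly the largest (1 - sparsity) fraction of weights and
    returns the pruned array along with the boolean keep-mask.
    """
    # Threshold below which a weight is considered unimportant.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# A toy 4x4 weight matrix standing in for one layer of a network.
rng = np.random.default_rng(42)
w = rng.normal(size=(4, 4))

pruned, mask = prune_by_magnitude(w, sparsity=0.75)
print(f"kept {mask.sum()} of {mask.size} weights")
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and inspecting which weights or layers survive is itself one crude window into what the model is doing.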


Shraddha Goled

I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.