
AI Governance Depends On The Kind Of Society One Comes From: Manojkumar Parmar, Bosch

Technology is like a vehicle; it has both an accelerator (innovation) and a brake (governance/security/regulation). The brake gives the confidence to increase the speed gradually and responsibly.


AI models require large amounts of sensitive training data and are usually computationally expensive to build. They are also exposed to attackers who exploit loopholes to break into the system, which often ends up destroying brand reputation, differentiation, and value proposition.

Recognising this, Robert Bosch has constituted a team that works exclusively on challenges pertaining to AI security. To understand more about this, we caught up with Manojkumar Parmar, Deputy General Manager – Technical at Bosch, who leads the team working on AI and security.

AIM: How did you develop an interest in the security aspect of AI?

Manojkumar Parmar: I joined NVIDIA Graphics soon after my graduation and worked there for a year and a half in VLSI design. Outside of work, I started exploring AI. In 2018, I joined Robert Bosch’s newly minted unit for innovation and incubation, where I was mainly responsible for advanced computing technologies in the capacity of an innovation expert.

Around the same time, I read a very interesting paper on how one can steal a model just by having access to its API. We are dealing with a newer digital asset class, AI, which is very high in value. Traditionally, whenever a new asset class comes into the picture, hackers attack it first, and security teams then build solutions based on those attacks. This got me thinking a lot.
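The attack Parmar describes can be illustrated with a small sketch. This is not Bosch’s code, and `victim_predict` is a made-up stand-in for a prediction API the attacker can only query; it is a minimal, hypothetical illustration of model extraction:

```python
# Hypothetical sketch of model extraction: the attacker only ever calls
# victim_predict(), yet trains a surrogate that mimics the victim model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# The victim: hidden behind an API, never seen directly by the attacker.
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
_secret_model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

def victim_predict(x):
    """Stand-in for a prediction API: inputs in, labels out."""
    return _secret_model.predict(x)

# Step 1: the attacker samples synthetic queries and harvests the labels.
queries = rng.normal(size=(1000, 4))
labels = victim_predict(queries)

# Step 2: a surrogate trained on (query, label) pairs clones the behaviour.
surrogate = LogisticRegression().fit(queries, labels)

test = rng.normal(size=(500, 4))
agreement = (surrogate.predict(test) == victim_predict(test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of test inputs")
```

The point of the sketch is that no weights, code, or training data ever leave the victim; query access alone is enough to clone its behaviour.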

We often hear a lot about data being the new oil, but what about the model? I believe the model is like jet fuel, enriched with insights. Why would hackers go after crude oil (data) when they can steal your jet fuel?

This got me interested in AI security and eventually motivated me to set up a unit within Bosch exclusively to work in this area.

While the work on building a technology team at Bosch for AI security started a few years ago, it was formalised and officially began operations this year.

AIM: How was the idea to build an exclusive team to work on issues pertaining to AI security conceived and finally brought to fruition?

Manojkumar Parmar: I worked with my colleagues for eight months to really understand and get to the crux of what AI security encompasses. We started with a very informal study and realised that the field was very nascent: all the focus was on increasing the accuracy of the model, and security was not a concern at all.

The higher management approved our idea to build something in this space, and we spent the next few years building proofs of concept to demonstrate the security issues with production-grade AI. Once that was done, the team and the company decided it was time to invest in this initiative.

In April this year, the entire program was born. The team consists of 12 members from diverse backgrounds like AI, security, product management, and business development.

AIM: Concepts like trust and ethics are very intangible. How do you think we can measure and supervise them?

Manojkumar Parmar: Trust is an inferred quality, based on the many things that you do. At Bosch, we have developed the AI Codex, an ethical guideline for the use of AI. It is a top-level set of principles that gets operationalised internally and contextualised for each of our business use cases.

This framework has been developed over the last few years and is now operational. Every product or project team is empowered to carry out this assessment and understand it.

AIM: What do you think are the most common security challenges AI models currently face?

Manojkumar Parmar: Today, AI security has become a very common term. But in the industry, it generally refers to using AI to do traditional cybersecurity jobs: threat hunting, detecting fraudulent behaviour and malware, and so on.

Now, what we are saying is a little different from that. AI is opening up newer attack surfaces to attack the AI itself. We are focusing on how to secure AI as an asset class, not on utilising it to secure other asset classes. So in layman’s terms, you can say that cybersecurity is policing the actual assets; we are asking who is policing the police.
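One well-known example of such an attack surface is the adversarial example: a small, deliberately crafted perturbation of the input that flips a model’s decision. Below is a minimal sketch of the fast gradient sign method (FGSM) on a toy logistic classifier; the weights and inputs are invented for illustration and are not drawn from any Bosch system:

```python
# Toy FGSM illustration: nudge an input against the gradient to flip the
# model's decision. All numbers below are made up for demonstration.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # toy model weights
b = 0.1

def predict_proba(x):
    """Sigmoid score of a linear classifier."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.4, -0.3, 0.8])   # a correctly classified input
print(f"clean input score:       {predict_proba(x):.3f}")   # ~0.85

# Gradient of the loss w.r.t. the input shows how to move the score;
# for this linear model it is proportional to the weight vector itself.
y_true = 1.0
grad_x = (predict_proba(x) - y_true) * w

# FGSM step: shift every feature by epsilon in the sign of the gradient.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)
print(f"adversarial input score: {predict_proba(x_adv):.3f}")  # ~0.43
```

Even this toy perturbation drags the score across the 0.5 decision boundary, which is exactly the kind of exposure that securing AI as an asset class has to account for.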

AIM: A lot of people also think that concepts like AI security, governance, and ethics may be a hindrance to rapid innovation. What is your take on this?

Manojkumar Parmar: Take the case of the recently concluded COP26. Such a conference had to be held because threats were not heeded in the early stages. We don’t need to repeat this with every technology in the name of innovation.

Technology is like a vehicle; it has both an accelerator (innovation) and a brake (governance/security/regulation). The brake gives the confidence to increase the speed gradually and responsibly.

We, as a society, have to really take stock of where our values lie and how much of them we are willing to sacrifice. For some societies, it is OK to sacrifice these values in the name of innovation to move faster, but for others, it is not. So, I don’t have a blanket answer for whether it is right or wrong, but I think regulations put the problem in perspective and clearly indicate where to innovate.

I think it is good that we are reacting early. And again, the keyword here is ‘reacting early’. In a lot of cases, we have reacted very late, with dire consequences.

The last five to six years have been spent building the technology and making it relevant for the people and organisations undergoing digital transformation. AI assets are increasing, and the need for their security is rising too. So, for me, this is the right time to talk about AI security.

AIM: With respect to AI security, what do you predict for 2022?

Manojkumar Parmar: For the first time, academia has a lot of catching up to do; in AI security, the industry has taken the lead. As industry players, we have burnt our fingers enough with cybersecurity, with trillions of dollars lost to data breaches and other issues. We have learned our lessons, and we want to be ready for any such unforeseen situations. It’s a selfish viewpoint, but we feel good about being ahead of academia, at least here, because it is generally academia that dictates what to work on in AI.

AIM: Do you think a framework for AI security, governance and trust can be applied to the entire industry?

Manojkumar Parmar: It’s an interesting question because, in AI, the issues regarding governance and trust are related to the culture and society you come from.

So, as an industry, we also factor in the cultural aspects of AI. It isn’t easy to have common regulations for everyone until and unless we factor in the anthropological and social aspects. Regulation may be possible to a certain extent, as a common minimum programme. There are already initiatives like model cards or model sheets, which tell users exactly how a model was built. These are informative rather than enforceable regulations. That is helpful because it puts the choice in the hands of the consumer: whether or not they really want to use the model. This is the normal principle of data privacy extended to AI. You as a consumer decide if you want to use it, and our job as an industry is to give you that information.
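For readers unfamiliar with the model cards Parmar mentions: they are structured disclosure documents shipped alongside a model. A minimal, hypothetical sketch of the kind of fields such a card records follows; the field names and values are illustrative, not a standard schema or a Bosch artefact:

```python
# Hypothetical, minimal model card as plain data. Real model cards
# (e.g. Mitchell et al., 2019) are richer; every value here is invented.
model_card = {
    "model_name": "example-defect-classifier",
    "intended_use": "Flagging surface defects on production-line images.",
    "out_of_scope": "Medical imaging or any safety-critical decision.",
    "training_data": "Internal dataset of 50k labelled factory images.",
    "evaluation": {"accuracy": 0.93, "false_positive_rate": 0.04},
    "known_limitations": "Accuracy drops under low-light conditions.",
    "ethical_considerations": "No personal data used in training.",
}

# Publishing this alongside the model is what makes the disclosure
# informative rather than enforceable: the consumer reads it and decides.
for field, value in model_card.items():
    print(f"{field}: {value}")
```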

Shraddha Goled

I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.