“One should undertake research hypotheses with the understanding that not all the experiments will pan out” – Praneet Dutta
After completing his Bachelor’s degree at Vellore Institute of Technology (VIT), Praneet opted for a Master’s in computer engineering and machine learning at Carnegie Mellon University. Analytics India Magazine caught up with Praneet Dutta, Research Engineer at Google DeepMind, to capture snapshots of his ML journey. “I’ve enjoyed the process so far: starting my journey back home and being able to continue to grow here in the US. I’ve discussed my views with leading industry practitioners, such as the CII India AI Task Force members, and I also recently mentored a university project at my alma mater,” said Praneet.
AIM: Share your ML journey with us, all the way from VIT to Google DeepMind.
Praneet: I started as an Electronics and Communication Engineering undergrad. As a wide-eyed freshman, I was unsure if I wanted to pursue an engineering job after graduation, let alone follow a path in AI research. At the time, the regurgitated coursework on communications engineering wasn’t the most inspiring. I found myself attending the bare minimum number of classes and spending time in my hostel room and workshops on side projects. My lack of interest in “theory” was short-lived, though. During one of our hostel FIFA gaming nights, a friend mentioned an online certification he had completed. Curious, I browsed the catalogue of courses on edX and Coursera. Andrew Ng’s popular Machine Learning course (based on Stanford’s CS229) stood out, and I spent the year “slowly” working through it.
At the midpoint of my engineering life, the workload was becoming overwhelming, and I struggled to balance things. Most of my available time went to course projects, the University’s “Creation Lab”, and the Formula Student team, where I worked as an Electrical Engineer. On the latter, interleaved with my online ML learning, an idea sparked: could we leverage the concept of a neural network that Prof Ng taught? Could it, as a universal function approximator, learn to predict trends, such as output torque, from other known parameters of our race car’s engine?
Working with a close friend specialising in powertrain engineering, we presented this experiment at an internal student conference. We won an award, which seemed a positive sign, and a revised version was accepted at an international IEEE conference. While the contribution was quite trivial, it got me hooked on the field of applied AI and its ability to generate value across domains. I took this a step further at the Institution of Engineering and Technology (IET) India National Scholarship, presenting a multimodal approach that fused satellite imagery and sensor data for urban pollution monitoring and control in India. This was when my confidence grew, and I felt I could make a difference in this field.
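The idea sketched above — a small feed-forward network regressing engine torque from other logged parameters — can be illustrated with a minimal toy example. This is a hypothetical sketch, not the actual project code; the feature names (rpm, throttle, intake temperature) and the synthetic “torque curve” are assumptions for illustration only.

```python
# A tiny one-hidden-layer network used as a universal function approximator,
# fit to a made-up torque mapping. Hypothetical, not the project's code.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for logged engine data: [rpm, throttle, intake_temp]
X = rng.uniform([1000, 0.0, 10.0], [9000, 1.0, 40.0], size=(256, 3))
# A made-up nonlinear "torque curve" playing the role of the true mapping.
y = 80 * np.sin(X[:, 0] / 9000 * np.pi) * X[:, 1] - 0.2 * X[:, 2]

# Standardise inputs and targets so plain gradient descent behaves.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

# One hidden layer of 16 tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
losses = []
for _ in range(2000):
    h = np.tanh(Xs @ W1 + b1)          # hidden activations
    pred = (h @ W2 + b2).ravel()       # network output
    err = pred - ys
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the mean-squared-error loss.
    g_pred = 2 * err[:, None] / len(ys)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = Xs.T @ g_h; gb1 = g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"MSE before: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

Even this toy version shows the appeal of the approach: the network learns the trend directly from data, with no hand-derived model of the engine.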
AIM: Why did you decide to move abroad for a Master’s? How does the process look to get into the top universities?
Praneet: During this time, I saw rapid advances in AI coming from the West. Uber seemed serious about its autonomous driving ambitions, opening its self-driving division (and test track) in Pittsburgh. Google had acquired DeepMind, and in late 2015 the world was introduced to AlphaGo. Ian Goodfellow had introduced GANs, which are ubiquitous today in generative learning. I wanted to contribute to this technology revolution, and applying for graduate school in the United States seemed like a natural next step.
For an MS, apart from the usual prerequisites of the GRE and a good GPA, a crucial aspect was the personal statement and letters of reference. However, what made my application stand out was my projects: evidence of my interest in the field. This was crucial given that, for some of these programs, I was applying for a change of field (technically, ML/CV programs fall under the CS school), made harder by my lack of formal undergraduate computer science coursework and work experience. As a result, I was honoured to receive admits from Carnegie Mellon, the University of Pennsylvania, Columbia University, Cornell University and Georgia Tech.
Finally, I made up my mind and opted for Carnegie Mellon University. One “cost” to think about: my 16 months in Pittsburgh were going to be expensive. I received the JN Tata Merit Scholarship, which funded part of the program (a majority of it was a gift based on academic performance), and served as a Teaching and Research Assistant in the ML department during my time there. My Electrical and Computer Engineering program allowed me to take advanced courses in ML, computer vision and natural language processing. I feel humbled to have worked with some notable researchers in the field, such as Tom Mitchell, serving as his TA for the Graduate Introduction to ML course.
To gain industry exposure, I completed an internship at Unity Technologies in Summer ’17. During my undergraduate years, I was often frustrated by the lack of internship opportunities, having faced rejection multiple times, so it was refreshing to get my first taste of Silicon Valley after hearing so much about it over the years. A semester before graduating, I started interviewing for full-time roles, and Google offered a niche customer-facing machine learning opportunity in its growing cloud organisation. After growing and gaining industry exposure in this role for 18 months, I decided to transfer across Alphabet to DeepMind, applying in January 2020.
AIM: Tell us about your role at DeepMind.
Praneet: I’m a Research Engineer in the Applied ML team, tackling the challenge of deploying machine learning in the real world. This role is interesting – a mix of research and software engineering. It involves developing prototypes and scaling DeepMind’s research to Google products and infrastructure used by millions globally. I’m based in Mountain View, California, but I collaborate with partners worldwide.
My areas of focus have spanned reinforcement learning, video understanding and recommender systems. An example of our team’s work on enhancing data centre efficiency using AI is here. We are now exploring how we can apply this more broadly to industrial facilities. More on this soon!
AIM: What can India do better when it comes to AI/ML and attracting more talent?
Praneet: I feel our country has an abundance of talent, and today there is great research coming out of Indian institutions. A few foundational steps to keep this momentum going might be investments and resources in moonshot projects. Providing researchers with the opportunity and scope for impact can help attract and retain talent.
A mindset shift that could be beneficial: “It’s completely okay to take risks and fail… if you can learn from it.” In my opinion, one should undertake research hypotheses with the understanding that not all the experiments will pan out. Taking educated guesses, learning from those experiences, and communicating them to a broader group that can benefit from them can often be more valuable than success itself. Learning from one’s (and others’!) mistakes is an integral part of the growth mindset.
Finally, in the US, we see meaningful collaborations between academic and industry labs that are mutually beneficial, creating value for all. I am a big fan of this model, and joint research projects, scoped and executed well, can propel the state of AI research in India.
AIM: Can you briefly discuss the most exciting research that you have been part of?
Praneet: Most of my current work is exciting. While some of it is not ready to be shared externally yet, I can speak about my published work:
- My internship at Unity Technologies was my entry point into the world of deep RL in video games. I enjoyed working with Danny Lange and Arthur Juliani, leading applied practitioners in this area. Within a week of moving to the Bay Area for the summer, we decided to tackle an exciting gaming project in the RL domain. The goal was to implement a TensorFlow agent embedded within a Unity multiplayer environment (two tanks pitted against each other). It was an interesting challenge learning about RL and game development at the same time as I ramped up over my three months. My project involved controlling one of the “tanks” via an AI agent whose objective was to match the skill level of the opposing player as they learned the dynamics of the game; the aim was to keep the user (opponent) engaged by keeping the scores relatively even. The final product in VR was a pleasure to play (although I had to calibrate it a bit to get rid of my motion sickness). (A summary of the project can be found here.)
- Almost a year later, I started at Google and hit Imposter Syndrome. My “Noogler projects” were quite challenging at the time, collaborating with experts across cross-functional domains. I was happy to lead and land our paper at a NeurIPS 2019 workshop on applying Generative Adversarial Networks to seismic super-resolution.
- Finally, more recently at DeepMind, we published our work with Liverpool Football Club on AI for football, led by our Game Theory group. One can find the paper here. I specifically worked on aspects related to computer vision: pose estimation and multimodal learning (speech-to-text). I’ve been following the football leagues in England and Spain since my teenage years and was very excited to be part of this. We also organised our AI for Sports Analytics workshop at IJCAI a month ago.
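The skill-matching objective from the Unity tank project can be sketched as a simple reward-shaping term. This is a hypothetical illustration, assuming an inverse-gap shaping function; the interview does not describe the actual reward the agent used. Instead of rewarding the agent for winning, it is rewarded for keeping the score gap small, which is what keeps the human opponent engaged.

```python
# Hypothetical sketch (not the actual agent code): a shaping reward
# that peaks when the two players' scores are even and decays as
# the game becomes one-sided.
def skill_match_reward(agent_score: int, opponent_score: int,
                       scale: float = 1.0) -> float:
    """Higher reward the closer the two scores are."""
    gap = abs(agent_score - opponent_score)
    return scale / (1.0 + gap)

print(skill_match_reward(3, 3))  # an even match earns the maximum: 1.0
print(skill_match_reward(9, 1))  # a one-sided game earns far less: ~0.11
```

The design choice here is that the agent’s incentive is tied to the opponent’s experience rather than to its own win rate, so as the human improves, the agent is pushed to play harder just to keep the scores level.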
AIM: What resources and books did you use in your ML journey?
Praneet: The good news in the Internet age is that we have a plethora of resources at our disposal, and quite a few recommendations to choose from. Back in 2014, I personally started with Andrew Ng’s CS229-based course on Coursera, working through the online lectures and course assignments. There were a few courses on edX (MIT’s The Analytics Edge, for one) that I went over at a high level. In my case, not being from a programming background, the biggest challenge was to teach myself programming. While most of the machine learning courses were in MATLAB and R, I invested some time learning Python.
At CMU, I was lucky to take courses in ML, computer vision and natural language processing from some renowned researchers in the field. Finally, I picked up reinforcement learning through online blogs and some of the resources listed below. I enjoy the process of learning, which never ends. At Google, I started an internal applied machine learning reading group to discuss the latest advances with peers. Even today, my mentors and I review recent advances in reinforcement learning, covering fundamentals and more. Some of these resources include:
For those more interested in RL and meta-learning (which I think are exciting fields to be in), these are some (among many) resources you can have a look at: