“When a team starts solving problems, they think small. Though they may initially get good results, the challenges start coming to the surface with scaling. The real impact shown on the MVP or a small prototype gets diluted, and it does not even make sense to deploy it into production as the scaling aspect is missing,” said Nitin Aggarwal, Head of Cloud AI Industry Solution Services (India) at Google, at the MLDS conference.
Here are some facts to consider:
– About 80 percent of enterprise data is unstructured.
– About 70 percent of it is almost free-form text: documents, emails and comments.
– Less than 1 percent of actual data has been analysed.
– About 50 percent of structured data is hardly used to make any decisions.
Nitin said he sees AI as a team sport, where everybody has their own way of working and their own role in solving large-scale problems. But in large-scale engineering, data scientists play a pivotal role in making an impact.
– In the 70s, engineers applied decision trees to drive machine outcomes.
– In the 90s, faster computers and software paved the way for applying statistics to drive superior outcomes.
– In the 2010s, deep learning ushered in the possibility of solving previously unsolvable problems.
– Now, businesses view AI as an integral part of product development and operational efficiency.
AI is no longer just an opinion service. Businesses embed AI as an integral part of their development to generate knowledge and value.
Thought process across AI/ML project spectrum
There are four verticals every ML project should go through: commodity, procedure, gray hair and rocket science.
1. Commodity is the least technical ML work: you are simply embedding AI/ML by calling pre-built services such as text-to-speech, speech-to-text or OCR.
2. Procedure: you have solved similar problems in the past and now have a systematic, comprehensive approach and a mature methodology, and you want to follow the right process and explore some of the pre-built solutions. You can work on the full data set because the overall deployment risk is very low, few ML skills are required, and the majority of the skill set needed is that of a software developer.
3. Gray hair is the most sought-after category right now, as many organisations want to build custom solutions and use AutoML. Organisations drive a lot of value here, but the risks are high, the costs are high, and strong ML skills are needed. This is where scaling comes in: you can start with a sample, but your end-to-end approach must be scalable. It must work at your business's scale; otherwise, it will not work out.
4. Rocket science: you do high-level research on problems that have never been solved, with the ability to tackle complex challenges through very innovative solutions. It is high risk and requires deep ML skills. You have to start small, and scalability comes last, as you first want to test whether a particular approach will work at all.
Most of the data sits in silos, coming from various systems and in different formats. These things directly impact your data processing pipeline, so it helps to have a very robust system design and database to track data lineage.
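The point above can be sketched in a few lines. The sources and field names below are hypothetical, not from the talk: two silos (a CSV export and a JSON feed) are mapped onto one common schema, with a `source` tag recording each record's lineage.

```python
import csv
import io
import json

# Hypothetical raw data from two silos in different formats.
crm_csv = "customer_id,region\n101,APAC\n102,EMEA\n"
billing_json = '[{"customer_id": 101, "amount": 250.0}]'

def normalise(record, source):
    """Map a raw record onto a common schema and tag its lineage."""
    return {"customer_id": int(record["customer_id"]),
            "source": source,  # lineage: which system the record came from
            **{k: v for k, v in record.items() if k != "customer_id"}}

rows = [normalise(r, "crm_csv") for r in csv.DictReader(io.StringIO(crm_csv))]
rows += [normalise(r, "billing_json") for r in json.loads(billing_json)]
```

In a real pipeline each source would get its own transformation step, but the idea is the same: one schema, with provenance preserved so lineage questions can be answered later.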
Simpson’s paradox is a classic example of what can go wrong when you work on a very small slice of the problem. You will not see its impact while prototyping, but when you scale things up, your feature importances can change drastically: features rejected or neglected during preprocessing may turn out to be very important when solving the problem at scale. Every feature also requires different transformations when the data comes from different systems and procedures.
When you build a sophisticated model and start working on it, the selection of training methods will change how you use that model, when you deploy it, and how you run parallel experiments.
In the majority of data science work, you will see that technical metrics such as precision, recall, F1 score, accuracy and MSE are what matter. But over time, as you scale things up, latency, throughput, deployment time, maintenance, and how the systems behave in sync vs async mode become even more important.
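Both metric families can be computed side by side. The labels and predictions below are toy values for illustration: the first half is the familiar technical view (precision, recall, F1), the second is the operational view (wall-clock latency of scoring a batch).

```python
import time

# Toy ground truth and predictions (illustrative only).
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

# The operational side: wall-clock latency of one scoring pass.
start = time.perf_counter()
_ = list(zip(y_true, y_pred))  # stand-in for a real inference call
latency_ms = (time.perf_counter() - start) * 1000
```

At prototype scale only the first three numbers get reviewed; in production, `latency_ms` and its throughput counterpart are tracked per request and often dominate the conversation.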
Endpoint and batch predictions
When you deploy the model, how you handle the deployment complexities, what kind of deployment setting you choose, and what kind of load balancers you use all factor in. Every decision will directly impact your deployment, model architecture, and solution strategy.
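The endpoint-vs-batch distinction can be sketched as two serving paths around the same model. The `model` below is a hypothetical stand-in for a trained classifier; the function names are illustrative, not from any particular serving framework.

```python
# Hypothetical model: any callable that scores one feature vector.
model = lambda x: sum(x) > 1.0  # stand-in for a trained classifier

def predict_online(x):
    """Endpoint serving: one request in, one low-latency response out."""
    return model(x)

def predict_batch(rows, chunk_size=2):
    """Batch serving: score a large dataset in chunks for throughput."""
    out = []
    for i in range(0, len(rows), chunk_size):
        out.extend(model(x) for x in rows[i:i + chunk_size])
    return out
```

An online endpoint sits behind a load balancer and is judged on per-request latency; a batch job is judged on total throughput and cost, which is why the same model often ends up with two different deployment settings.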