Proofs of concept are often pegged as the training wheels for moving AI into production. Last week, ZDNet reported that ANZ Bank, one of Australia’s leading banks, had shelved its artificial intelligence proof of concept instead of pushing it into production, citing health checks, bias and a lack of explainability as some of the main reasons for putting the project on hold.
Jason Humphrey, Head of Retail Risk at ANZ Bank, was quoted as saying that one of the biggest dangers in deep learning is that it brings in new attributes and new correlations which could create bias. Building enterprise-scale AI is not easy, however, and teams face several obstacles before putting models into production. PoCs are shelved when they do not add any business value. Some of the top reasons for failed PoCs are:
- The vendor fails to prove the concept as originally conceived
- The concept does not deliver the expected outcome in terms of value
- The PoC fails to satisfy the intended stakeholders
- The PoC results do not add tangible value
A 2017 study from Accenture found that AI can increase profitability by 38 percent, generating over $14 trillion of economic impact in the coming decades. But even though service providers with a formalised PoC process spout the benefits of AI applications, they are yet to show tangible business results.
- One of the biggest challenges for companies in the early stages of their AI journey is having sufficient skills in-house. Most service providers run self-directed primers on ML and deep learning, designed to help developers understand how to map business problems onto these techniques.
- AI workflows can get technical, and enterprises have to navigate a series of issues related to hardware, software, data security and the quantities of new data required for training and inference.
- Most small companies must decide whether to build, buy or re-use hardware and software, and whether to make use of cloud services.
Besides infrastructure and deployment issues, model construction is the core AI task. It involves data scientists using training data and managing parameters through iterative test runs. This lets data scientists and tech teams check models for initial accuracy before dispatching them for broader training and tuning. Most AI practitioners cite training and tuning as the most computationally intensive parts of the AI workflow. As part of this process, data scientists determine under what parameters their models converge most efficiently given the available training data, while dealing with traditional IT concerns such as job scheduling and infrastructure management, and spending a considerable amount of time wrangling data.
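The tune-and-check loop described above can be sketched in a few lines. This is a minimal illustration, not a production workflow: the toy model, loss and candidate learning rates are all hypothetical, standing in for the parameter sweeps data scientists actually run at scale.

```python
# Illustrative sketch of an iterative train-and-tune loop.
# The model here is a toy: fit a single weight w to minimise (w - 3)^2
# with plain gradient descent. Real workflows sweep many parameters
# over far larger models and datasets.

def train(learning_rate, steps=100):
    """Run gradient descent on the toy loss and return the fitted weight."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3.0)        # gradient of (w - 3)^2
        w -= learning_rate * grad
    return w

def loss(w):
    return (w - 3.0) ** 2

# Sweep candidate parameters and keep the run that converges best,
# mirroring how teams check initial accuracy before broader tuning.
candidates = [0.001, 0.01, 0.1, 0.5]
results = {lr: loss(train(lr)) for lr in candidates}
best_lr = min(results, key=results.get)
print(f"best learning rate: {best_lr}, final loss: {results[best_lr]:.6f}")
```

Even this toy shows why tuning is computationally expensive: every candidate setting requires a full training run before its quality can be judged.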
Why AI PoCs fail
The AI proof of concept roadmap poses several challenges to tech teams. From bias in AI-enabled applications or products to the criteria around explainability, enterprises have to perform a series of health checks before putting models into production. Organisations also have to navigate legal issues arising from applied inequality or consequential decision making by AI applications. Inequalities in AI applications refer to a set of problems involving the design and deployment of AI.
Bias: For example, the ANZ PoC was held back because the team could not show that the AI system was free of bias arising from shortcomings in the training data, model, or objective function.
Fairness: If decisions are made by an AI system, it is difficult to verify whether those decisions were made fairly. This relates to the black box problem of AI. How can discriminatory bias be minimised in training data, and how can the output be judged fair?
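One simple health check of this kind is demographic parity: comparing a model's approval rates across groups. The sketch below assumes hypothetical loan decisions and a hypothetical gap threshold; it illustrates the idea, not any particular bank's audit process.

```python
# Minimal sketch of a demographic-parity check over hypothetical
# (group, approved) decision records from an imagined loan model.

decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def approval_rate(group):
    """Fraction of approvals the model gave to members of `group`."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# A large gap between group approval rates flags the model for review.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"demographic-parity gap: {gap:.2f}")
```

Demographic parity is only one of several fairness criteria; a real audit would also examine error rates per group and the provenance of the training data.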
Causality: Can the model provide not only correct inferences, but also some explanation of the underlying phenomena? According to a research paper from UC Davis, deep learning is well known for establishing correlation but is not the best technique for articulating causal mechanisms.
Transparency: When it comes to transparency, what are the distinct factors to be explored in understanding the decision-making process? Should AI-based insights be explained in terms the user can understand, and how can the decisions or outcomes be questioned?
Safety: It is of utmost importance to build safe, transparent and accountable AI systems. This will enable users to gain trust and confidence in AI systems and to understand the AI decision-making process. For example, policymakers need to set up a means of verifying safety standards for driverless cars and drone deliveries.
Richa Bhatia is a seasoned journalist with six years’ experience in reportage and news coverage and has had stints at Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old and loves writing about the next-gen technology that is shaping our world.