AI Trust Grows When Hidden

While companies are heavily criticised for being secretive, the approach is working in their favour

“People May Be More Trusting of AI When They Can’t See How It Works,” reads one of the articles in the latest edition of Harvard Business Review, which showed how not knowing the inner workings of a model helped people trust its output more.

A similar pattern can be observed among tech industry leaders. Apple, one of the most image-conscious companies in Silicon Valley, has made sure to stay tight-lipped about its AI/ML work. The same goes for OpenAI, which is trying hard to hide its technology, and yet is struggling to woo enterprise customers.

During this year’s WWDC, CEO Tim Cook conspicuously refrained from using ‘AI,’ opting for the more subdued ‘machine learning.’ The iPhone maker’s aversion to the ‘AI’ label is far from new, as the company has long been wary of hyping techno-magical capabilities. Instead, Apple focuses on the practical functionality of machine learning, highlighting the tangible benefits for its user-centric audience.


As the company’s chief put it in an interview with Good Morning America, “We do integrate it into our products [but] people don’t necessarily think about it as AI.” This gives Apple an upper hand over competitors like Microsoft and Google, who are currently touting their AI-powered products yet struggling to get them adopted across enterprises.

OpenAI is following the same playbook as Apple when it comes to secrecy. The 98-page technical report the company released lacked even basic details about the AI model’s data or architecture. While the paper was heavily criticised for being shallow, the secretive approach seems to be working in the company’s favour.

Trusting the Process

As per HBR, a group of researchers from Georgetown University, Harvard and MIT analysed stocking decisions for 425 products of a US luxury fashion retailer across 186 stores. Half the employees received recommendations from an easily understood algorithm, and the other half from one that could not be deciphered.

A comparative analysis of the decisions made it evident that employees aligned with the recommendations of the opaque AI more frequently. The result suggests that individuals place higher confidence in AI systems when they do not thoroughly understand how they work.

Professor Timothy DeStefano highlighted a well-established phenomenon wherein decision-makers are often reluctant, consciously or unconsciously, to embrace AI-generated guidance, opting instead to override it. This is nothing new: historically, initial resistance to new technologies has been the norm.

DeStefano and his team partnered with Tapestry, the $6.7 billion parent company of Coach, Kate Spade and Stuart Weitzman, to examine the roots of this reluctance and find strategies to mitigate it.

Prior to this initiative, the company had long used rule-based algorithms to help allocators estimate demand. Users understood these algorithms from their daily experience, and they could see the inputs going into them. For better accuracy, the firm then developed a more sophisticated forecasting model that was a black box to users. It turned out that shipments were up to 50% closer to the recommendations generated by the latter, suggesting that users trusted the black-box model much more.
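The contrast between the two systems can be sketched in a few lines of Python. Everything here is hypothetical and purely illustrative, since Tapestry’s actual rules and model are not public; the point is only the difference in what a user can inspect.

```python
def rule_based_forecast(last_week_sales: float, trend_pct: float) -> float:
    """Transparent rule: next week's demand is last week's sales
    adjusted by a trend percentage the allocator can see and question."""
    return last_week_sales * (1 + trend_pct / 100)


class BlackBoxForecaster:
    """Stand-in for a learned forecasting model whose internal
    parameters are hidden from the people acting on its output."""

    def __init__(self, weights):
        self._weights = weights  # hidden, learned parameters

    def predict(self, features) -> float:
        # Opaque weighted combination of many inputs; users see only the number.
        return sum(w * f for w, f in zip(self._weights, features))


# An allocator can easily second-guess the transparent rule...
transparent = rule_based_forecast(last_week_sales=100, trend_pct=5)   # 105.0

# ...but has no visible logic to argue with in the black-box output.
opaque = BlackBoxForecaster([0.6, 0.3, 0.1]).predict([100, 110, 90])  # 102.0
```

The study’s finding is, in effect, that allocators overrode numbers like `transparent` (whose reasoning they could dispute) far more often than numbers like `opaque`.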

One reason allocators overruled the less sophisticated system was ‘overconfident troubleshooting’ – users believe they understand models better than they actually do. Even though employees could not tell how the black-box model worked, the fact that it had been developed and tested with input from some of their colleagues gave them confidence in it, wrote DeStefano.

In conclusion, tech companies should focus on what customers need, rather than selling them the know-how behind the technology.

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
