AI Trust Grows When Hidden

While companies are heavily criticised for being secretive, the approach is working in their favour

“People May Be More Trusting of AI When They Can’t See How It Works,” reads the title of an article in the latest edition of Harvard Business Review, which showed how not knowing the inner workings of a model helped people trust the process more.

A similar pattern can be observed among tech industry leaders. Apple, one of the most image-conscious companies of the Silicon Valley lot, has made sure to stay tight-lipped about its AI/ML work. The same goes for OpenAI, which tries hard to hide its technology, yet is still struggling to woo enterprise customers.

During this year’s WWDC, CEO Tim Cook conspicuously refrained from using ‘AI’, opting for the more subdued ‘machine learning’. The iPhone maker’s aversion to the ‘AI’ label is far from new; the company has long been cautious about hyping techno-magical capabilities. Instead, Apple focuses on the practical functionality of machine learning, highlighting the tangible benefits for its users.

As the company’s chief put it in an interview with Good Morning America, “We do integrate it into our products [but] people don’t necessarily think about it as AI.” This gives Apple an upper hand over competitors like Microsoft and Google, which are currently boasting about their AI-powered products yet struggling to get them adopted across enterprises.

OpenAI is following the same playbook as Apple when it comes to secrecy. The 98-page technical paper the company released for GPT-4 lacked even basic details about the model’s training data or architecture. While the paper was heavily criticised for being shallow, the secretive approach seems to be working in the company’s favour.

Trusting the Process

As per HBR, a group of researchers from Georgetown University, Harvard and MIT analysed stocking decisions for 425 products across 186 stores of a US luxury fashion retailer. Half the employees received recommendations from an easily understood algorithm, and the other half received recommendations from one that could not be deciphered.

A comparative analysis of the decisions made it evident that employees aligned with the recommendations of the opaque AI more frequently. The results suggested that individuals place higher confidence in AI systems when they don’t thoroughly understand how they work.

Professor Timothy DeStefano highlighted a well-established phenomenon wherein decision-makers are often reluctant, whether consciously or unconsciously, to embrace AI-generated guidance, opting instead to override it. This is nothing new; historically, initial resistance to new technologies has been the norm.

DeStefano and his team partnered with Tapestry, the $6.7-billion parent company of Coach, Kate Spade, and Stuart Weitzman, to examine the roots of this reluctance and find strategies to mitigate it.

The company had long used rule-based algorithms to help allocators estimate demand. Users understood these algorithms from daily experience and could see their inputs. For better accuracy, the firm then developed a more sophisticated forecasting model that was a black box to its users. It turned out that shipments were up to 50% closer to the recommendations generated by the black-box model, suggesting that users trusted it far more.

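To make the ‘up to 50% closer’ figure concrete, here is a minimal sketch of how such an adherence gap could be measured, assuming a simple mean-absolute-deviation metric and invented shipment numbers (the study’s actual data and metric are not detailed here):

```python
# Hypothetical illustration of the adherence gap: how far allocators'
# final shipments land from the model's recommendations, under the
# rule-based vs. black-box regimes. All numbers are invented.

recommended = [120, 80, 200, 150]          # model-recommended units per store
shipped_rule_based = [150, 60, 260, 120]   # shipments alongside the rule-based model
shipped_black_box = [135, 70, 230, 135]    # shipments alongside the black-box model

def mean_abs_gap(shipped, recommended):
    """Average absolute deviation between shipments and recommendations."""
    return sum(abs(s - r) for s, r in zip(shipped, recommended)) / len(recommended)

gap_rule = mean_abs_gap(shipped_rule_based, recommended)  # 35.0
gap_bb = mean_abs_gap(shipped_black_box, recommended)     # 17.5
print(f"Rule-based gap: {gap_rule}, black-box gap: {gap_bb}")
print(f"Black-box shipments are {1 - gap_bb / gap_rule:.0%} closer")  # 50% closer
```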

One reason allocators overruled the less sophisticated system was ‘overconfident troubleshooting’ – users believing they understand models better than they actually do. Even though the employees could not tell how the black-box model worked, the fact that it had been developed and tested with input from some of their colleagues gave them confidence in it, wrote DeStefano.

In conclusion, tech companies need to focus on what customers need, not on selling the know-how of the technology to them.


Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.