“The bill for miscommunication always comes due.” – James Clear
Turing awardee Yann LeCun unintentionally sparked a debate on the role of bias in machine learning with a single tweet. A paper called PULSE made the rounds last week, thanks to the CVPR conference. It demonstrated how a machine learning model can generate a high-quality, realistic face from a pixelated input. However, things got out of hand when a few enthusiasts started playing with the model.
When a blurred image of Barack Obama was fed to the model, it produced an image of a Caucasian male. Given the current situation in the US, it didn’t take long for people to come down hard on the lack of diversity in ML results. The issue of racial and gender bias has been around for a while, and no one seems to have found a solution to it so far.
Amidst the ire, LeCun explained in a tweet why such bias occurred in the case of PULSE. His initial response was to draw people’s attention to the biases in the training dataset. This didn’t go down well with the audience, who accused LeCun of dodging the issue by bringing up the age-old excuse of datasets. Even though he tried to explain his stance on bias more eloquently in follow-up tweets, there was no winning back the other side.
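LeCun’s point about dataset bias can be made concrete: a model trained mostly on faces from one demographic will tend to reconstruct ambiguous inputs toward that majority. A minimal sketch of auditing a dataset’s demographic balance follows; the label names and counts are hypothetical and purely illustrative, not taken from the actual PULSE training data:

```python
from collections import Counter

# Hypothetical demographic labels for a face dataset; in practice these
# would come from the dataset's own metadata.
labels = ["white"] * 700 + ["black"] * 100 + ["asian"] * 120 + ["other"] * 80

counts = Counter(labels)
total = sum(counts.values())

# Report each group's share of the data; a heavily skewed distribution is
# one signal that models trained on it may favour the majority group.
for group, n in counts.most_common():
    print(f"{group:>6}: {n:4d} ({n / total:.0%})")
```

A skew like the one above does not by itself prove a deployed system is unfair, but it is the kind of testable, challengeable evidence the practitioners quoted below argue for.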
What The Practitioners Have To Say
Luca Massaron, a data scientist who has been in the field for more than a decade, says that though Yann LeCun is perfectly correct from a technical point of view, the reactions online and the answers he received say a lot about how sensitive the issue has become for the public.
“There is a widespread fear of unfair control and manipulation. People unconditionally, and somehow unreasonably, fear that AI will take away their liberty from them, not just their work.” – Luca Massaron
“Personally I do not fear Face Depixelizer or other experiments of this kind. What I fear are the applications that you cannot test and challenge for bias,” said Luca.
As society adopts more and more ML automation, Luca suggests that the role of legislators can be crucial. The EU, for example, in order to ensure transparency and accountability, requires service providers to facilitate the explainability of algorithms or human control of algorithmic decision-making.
If we want AI to advance, Luca argues, we should demand transparency from AI more than unbiasedness. If an algorithm is biased, that is a problem, but one we can challenge and whose issues we can unveil. If an algorithm is hidden, deeply integrated and unquestionable, that is a bigger problem for all of us.
We Are In This Together
Talking about biases in terms of data collection, and trying to close the argument by saying that since humans are biased, so will the machines be, may sound right coming from amateurs riding on popular opinion. But for someone like LeCun, who has championed AI for over three decades and is one of the pioneers of the field, such arguments can come off as vague and borderline irresponsible.
Biases in models can emerge in the guise of assumed domain expertise, through the ignorance of practitioners, and in a thousand other ways. There is no one-stop solution. For models to be unbiased towards certain groups or communities, representatives of those groups need to be part of building them. The ecosystem has to be incentivised so that people from all walks of life can be heard and can help build algorithms we can all agree upon. Given the rapid adoption of AI, picking sides and politicising the issue might give temporary relief and a few social media points, but it will only postpone the acknowledgement of the important questions.
I have a master's degree in Robotics and I write about machine learning advancements. email:firstname.lastname@example.org