With artificial intelligence penetrating most industries to streamline communication and innovation, using it to fight sexual harassment could be one of its most valuable applications yet. According to one study, 56% of women believe that sexual harassment in the workplace has increased over the years, and 53% have been subjected to sexual comments, gestures, or jokes at work. The same study states that close to 80% of women are aware of their workplace's anti-harassment policies, yet 30% still hesitate to report such incidents to the internal committee.
With the #MeToo movement growing in prominence, organizations have started to pay more attention to sexual harassment, particularly incidents at the workplace and in emails and messages.
That’s why researchers are now working on AI-powered tools called MeTooBots to detect harassment in text communications such as emails and instant messages.
In fact, chatbots developed by tech giants like Apple, Microsoft, and Google are also trained to deflect harassing statements directed at them by users.
The idea of using AI bots to fight sexual harassment is gaining popularity, with companies around the world deploying MeTooBots to flag harmful and harassing communications and restrict offending behaviour.
Nex AI is one such company: it has built a version of this bot and deployed it in 50 different companies. The bot runs an algorithm that examines company documents, chats, and emails, comparing them against its training data of bullying or harassing messages. Messages flagged as harassing are then sent to HR for review.
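Nex AI has not published the details of its algorithm, but the general approach described here — scoring incoming messages against labeled examples of harassing language and escalating matches to HR — can be sketched in a few lines. This is a hypothetical illustration using a simple word-overlap score; a real product would use a trained machine-learning model, and all names and example phrases below are invented:

```python
# Hypothetical sketch: flag messages whose wording resembles known
# harassing examples. Real systems use trained ML classifiers; this
# uses a simple bag-of-words overlap score for illustration only.

HARASSING_EXAMPLES = [
    "you looked so hot in that dress today",
    "come to my hotel room after the meeting",
]

def tokens(text):
    """Lowercase the text and split it into a set of words."""
    return set(text.lower().split())

def similarity(message, example):
    """Jaccard overlap between a message and one labeled example."""
    a, b = tokens(message), tokens(example)
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_for_hr(message, threshold=0.3):
    """True if the message closely resembles any labeled example."""
    return any(similarity(message, ex) >= threshold
               for ex in HARASSING_EXAMPLES)

inbox = [
    "please send the Q3 report by friday",
    "you looked so hot in that dress",
]
flagged = [m for m in inbox if flag_for_hr(m)]  # only the second message
```

The threshold is the sensitivity knob such a system would expose: set too low, ordinary messages get escalated; set too high, reworded harassment slips through.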
Other companies around the world are also developing AI-powered harassment-detection tools. Another notable name in this space is Spot, which has created a chatbot that enables employees to anonymously report allegations of sexual harassment. The bot is programmed to ask questions and offer insights and advice to the employee, and to help with the subsequent investigation of the incident. Spot preserves anonymity because it believes every organization should handle harassment issues sensitively.
Brazil-based Think Eva offers another solution created to fight harassment. Think Eva combines artificial intelligence with a human touch to track harassing emails, texts, and comments, and provides customer-service protocols for handling the situation. Its built-in AI is also trained to detect and analyse abusive language and block offending users.
Supporting The Victims Of Sexual Harassment
Sis Bot is another chatbot, popular in Thailand, which provides a 24/7 information service for survivors of sexual harassment, molestation, or other forms of violence. Victims can reach the bot from a mobile phone, tablet, or computer: sending it an instant message via Facebook Messenger returns immediate guidance on how to handle the situation.
Botler AI, a Montreal-based startup, has launched a system that uses deep learning to provide free guidance to victims of sexual harassment. According to media reports, the system was trained on more than 300,000 U.S. and Canadian criminal court documents, including over 57,000 documents and complaints related to sexual harassment. Drawing on this data, the AI assesses whether a described incident may constitute sexual harassment.
Callisto and AllVoices are two apps created to address the same problem. Callisto is aimed at college students and allows them to file encrypted reports that can later be sent to campus leadership. AllVoices, on the other hand, targets employees, letting them anonymously report harassment or other issues such as insult and defamation.
Although this innovation is gaining momentum, Prof. Brian Subirana, an MIT and Harvard AI professor, believes that using AI to detect harassment has its limitations.
He explained that harassing language, including remarks that are merely flirtatious, can be subtle and debatable, and is therefore hard for an AI algorithm to detect. AI is believed to recognise only the patterns present in its training data and can easily miss new forms of harassment.
In fact, scientists have noted that these AI bots can only pick out trigger words present in their training data; they cannot analyse the broader dialogue or the cultural relationship between two employees. Despite such complexities in AI-based harassment detection, Subirana still believes bots could meaningfully change how organizations deal with sexual harassment.
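The trigger-word limitation is easy to demonstrate. In this hypothetical sketch (the word list and messages are invented for illustration), a filter catches a message containing a listed word but misses a rephrasing that carries the same intent:

```python
# Hypothetical illustration of the trigger-word limitation: a keyword
# filter flags only wording from its list, so paraphrased harassment
# passes through undetected.

TRIGGER_WORDS = {"hot", "sexy", "date"}

def contains_trigger(message):
    """True if any listed trigger word appears in the message."""
    return any(word in message.lower().split() for word in TRIGGER_WORDS)

contains_trigger("you look hot today")          # caught: "hot" is on the list
contains_trigger("that outfit really suits you")  # missed: no listed word
```

The second message may be just as inappropriate in context, but without the trigger word the filter has nothing to match — which is exactly the gap Subirana describes.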
These bots could be used as a benchmark to train people to recognise potentially harassing messages and emails, by building a database of troublesome messages sent by employees. The prospect of being scrutinised could also deter wrongdoers and, in turn, reduce harassment overall.
Although these AI bots show real potential in combating harassment, it could be argued that they come at a cost to employee and data privacy. Protecting the data that gets collected is another major concern for organizations. Beyond the benefits, such technology could also create an environment of distrust and suspicion between organizations and employees.
Sam Smethers, chief executive of the women’s rights NGO the Fawcett Society, also expressed concerns about these AI bots, saying it is important to understand the technology before deploying it in an organization, and that the steps taken after a detection should be transparent to employees.
For such an innovation to endure, methods of detecting sexual harassment must be designed to comply with privacy and security requirements. This would require strong collaboration between organisations, employees, the bots’ developers, and regulators. Anonymity could also be offered as an option to safeguard employees, and a strong HR team remains a necessity to deal with harassers.
Despite its limitations, appropriate use of AI algorithms and bots can detect harassment and, in turn, help make organisations safer places for employees.