When Chatbots Go Rogue: Potential Attacks That Can Be Carried Out Using Chatbots




There is no doubt that since its advent in the technology world, artificial intelligence (AI) has opened up a lot of doors for the betterment of the human race. AI has not only reduced people's workload but has also taken over entire verticals — AI is driving cars, serving food in restaurants, even giving relationship advice.



The chatbot is one such technology that has grown popular over the years. With the help of AI, a chatbot can mimic human conversation, bridging the gap between messaging and application frameworks.

Today, the way people live and work is transforming, and it's not just individuals who are reaping the benefits of chatbots — big companies and brands across the world are leveraging them too. According to a 2016 report, 80% of businesses plan to make use of chatbots by 2020 in order to improve customer experience and cut costs.

However, chatbots have a downside too, especially in terms of security — they present you and your device as an enticing target for hackers. Even though there are not many documented cases of chatbot attacks so far, these AI-equipped bots could be used by hackers to carry out several types of attacks.

Let’s have a look at some of the attacks that can be carried out using chatbots.

Man-in-the-middle (MITM) attacks

If you don’t know what a MITM attack is, it is a type of attack where the attacker positions themselves in the middle of a conversation between a user and an application — either to eavesdrop or to manipulate the conversation. Usually, a MITM attack is carried out to steal information such as login credentials, account details, and credit card numbers, which can then be used for purposes like unapproved fund transfers or an illicit password change.

For example, imagine a scenario where you try to log into your bank account, but an attacker has created a website that looks just like your bank’s and deployed it between you and the real site. When you enter your credentials, you are not logging into your account — you are handing them over to the attacker.

How can a chatbot carry out a MITM attack? A chatbot can be built so that it looks like it belongs to a reputed firm, and when the person on the other side of the screen interacts with the bot, he or she ends up sharing sensitive information. That is not all — it might even tell clients to take certain actions, such as clicking a link that installs a program (malware).
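One partial defence against the malicious-link scenario is for the client application to refuse bot-supplied links that point outside a known set of domains. Below is a minimal Python sketch of that idea; the `examplebank.com` domain and the `is_trusted_link` helper are hypothetical names used only for illustration:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the bot is permitted to link to.
TRUSTED_DOMAINS = {"examplebank.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the URL's host is an allowed domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

Note that the check matches the full hostname, not a substring — so a lookalike such as `examplebank.com.evil.example` is rejected even though it contains the trusted name.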

MITM attacks have always been among the preferred techniques for stealing information, phishing, and social engineering, and with chatbots coming into play, things will only get worse — they could take hacking to a whole new level.

Evil Bot

This is another concern that comes with the arrival of chatbots in hacking. Competition between companies is intensifying every day, and in order to damage a rival’s image in the industry, one might end up deploying a chatbot — call it an “evil bot”.

In March 2016, Microsoft released a chatbot called Tay, designed to mimic and converse with users in real time. Tay was a Twitter bot described as an experiment in conversational understanding; according to Microsoft, the more you chatted with Tay, the smarter it would get. However, the bot couldn’t live up to expectations, as it quickly devolved into spewing racist, anti-Semitic, and otherwise awful invective.


Since chatbots exhibit human-like communication, there is no denying that they could be an excellent proxy for carrying out cyber attacks. If hackers successfully take over a chatbot on a company’s website, they can manipulate and control it — so every time someone interacts with the bot for queries or information, it could return misleading or malicious answers. Another way of carrying out the attack is to create an evil bot from scratch and deploy it on different channels under a company’s name. It can get even worse if several bots are deployed across several channels to spread “fake news”: that would not only spread misinformation en masse but could also wreak havoc in society.

Taking over an open computer

As chatbots continue to grow in popularity, many companies across the world have implemented chatbots for their employees to use. But what if an employee loses a phone or computer, or simply leaves it unlocked with a chatbot window open? A hacker could then dig out sensitive information just by asking the bot questions.

Even though this sounds like human error, the bot’s design is also partly to blame. If the bot were designed to make the user answer a few security questions before revealing anything sensitive, the risk would be reduced.
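A minimal sketch of such a re-authentication gate is shown below. The `SecurityGate` class is a hypothetical illustration; the one real design point it demonstrates is that the bot should store only a salted hash of the answer, never the answer itself:

```python
import hashlib
import hmac
import os

class SecurityGate:
    """Blocks sensitive bot queries until a stored security question is re-answered."""

    def __init__(self, question: str, answer: str):
        self.question = question
        self._salt = os.urandom(16)          # per-user random salt
        self._digest = self._hash(answer)    # store only the hash, not the answer

    def _hash(self, answer: str) -> bytes:
        # Normalise case and whitespace, then apply a slow salted hash (PBKDF2).
        normalised = answer.strip().lower().encode()
        return hashlib.pbkdf2_hmac("sha256", normalised, self._salt, 100_000)

    def verify(self, answer: str) -> bool:
        # Constant-time comparison to avoid leaking information via timing.
        return hmac.compare_digest(self._digest, self._hash(answer))
```

An idle session could then require `verify()` to succeed again before the bot answers anything confidential.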

Outlook

We are at a stage where the chatbot trend is picking up and bots are becoming more human. However, current chatbot solutions are not entirely secure, and a vulnerable chatbot could expose a business to different kinds of cyberattacks.

It is high time that companies take the necessary measures to make these bots more secure — whether it is with two-factor authentication (2FA), behaviour analytics, biometrics, or self-destructing messages.
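Two-factor authentication, for instance, can be layered onto a chatbot login with a standard time-based one-time password (TOTP, RFC 6238) — the same scheme used by common authenticator apps. The stdlib-only sketch below is illustrative, not a production implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of `interval`-second steps since the epoch.
    counter = int((time.time() if for_time is None else for_time) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A chatbot could prompt for the six-digit code after the password step and compare it against `totp()` computed from the user's enrolled secret, accepting a step or two of clock drift.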
