
How Tech Giants Like Amazon, Microsoft, Google Are Using AI Against Hackers


Image source: https://pixabay.com/en/amazon-cellular-tablet-city-3d-3816490/

Amazon, Microsoft and Google are the forerunners among tech giants leveraging artificial intelligence to tackle cybersecurity threats and keep at bay hackers who often pose as real users to gain access to crucial data.

Speaking to an international news agency, the chief security officers of each company unanimously agreed that AI and ML play a crucial role in protecting their companies’ multi-million-dollar infrastructure by crunching large pools of data on a daily basis.



While acknowledging that it is impossible to stop every intruder, Stephen Schmidt, Amazon’s CISO, maintained that the new technology remains largely beneficial for companies like Amazon, which has to ensure the online safety of millions of people across the world. Speaking about the ability of AI and ML to identify hackers, he said, “We will see an improved ability to identify threats earlier in the attack cycle and thereby reduce the total amount of damage and more quickly restore systems to a desirable state.”

Speaking about the large sets of data that need to be processed to monitor unauthorised activity, Mark Risher, product management director at Google, said, “The amount of data we need to look at to make sure whether this is you or an impostor keeps growing at a rate that is too large for humans to write rules one by one.”

To provide more nuanced security protection, Google applies ML to different sets of data to track unauthorised logins and to monitor users’ online behaviour. The company says that if the data suggest even a slight variation in a user’s behaviour, chances are that it is a hacker posing as the real user.
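Google does not describe how these models are built. As a rough, hypothetical illustration of the general idea, the sketch below scores new logins against a user’s historical pattern with an off-the-shelf unsupervised anomaly detector (scikit-learn’s IsolationForest); the features, data and thresholds are all invented for the example.

```python
# Minimal, hypothetical sketch (not Google's actual system): score new logins
# against a user's historical pattern with an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented per-login features: [hour_of_day, km_from_usual_location, known_device]
history = np.array([
    [9, 2, 1], [10, 0, 1], [19, 5, 1], [8, 1, 1], [21, 3, 1],
    [9, 0, 1], [11, 4, 1], [20, 2, 1], [10, 1, 1], [18, 0, 1],
])

detector = IsolationForest(contamination=0.05, random_state=0).fit(history)

new_logins = np.array([
    [10, 1, 1],     # matches the user's usual pattern
    [3, 7500, 0],   # 3 a.m., thousands of km away, unrecognised device
])

# predict() returns 1 for "looks normal" and -1 for "anomalous"; in practice an
# anomalous login would trigger extra verification rather than an outright block.
for features, verdict in zip(new_logins, detector.predict(new_logins)):
    print(features, "anomalous" if verdict == -1 else "normal")
```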


Microsoft, likewise aware of hackers’ ability to pose as legitimate users, uses ML to customise its security checks to each user’s online behaviour and history. Since rolling out the feature, the company has brought its false-positive rate down to 0.001%.

That said, machine-learning-based security is not always accurate, often because of a lack of proper training data sets, so researchers and companies have to stay on constant alert to safeguard existing systems. One of the main setbacks is the possibility that hackers turn these very machine learning algorithms against the companies: by learning how a company trains its system, an attacker can feed it tainted data and corrupt the algorithm. Though such attacks are not yet widely practised and are still being researched, experts say it is crucial for companies to keep their algorithmic criteria secret and change the formulas regularly.
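None of the companies detail how such poisoning attacks work. The toy sketch below, built on an invented one-dimensional dataset, only illustrates the principle: an attacker who knows roughly how a detector is trained injects mislabelled samples so that the retrained model stops flagging malicious inputs it previously caught.

```python
# Toy, hypothetical sketch of training-data poisoning (not any company's real pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented one-dimensional "suspiciousness" feature: benign traffic ~0.2, malicious ~0.8.
benign = rng.normal(0.2, 0.05, size=(100, 1))
malicious = rng.normal(0.8, 0.05, size=(100, 1))
X_clean = np.vstack([benign, malicious])
y_clean = np.array([0] * 100 + [1] * 100)
clean_model = LogisticRegression().fit(X_clean, y_clean)

# An attacker who knows the training setup injects malicious-looking samples
# mislabelled as benign, dragging the decision boundary toward the malicious cluster.
poison = rng.normal(0.8, 0.05, size=(200, 1))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.zeros(200, dtype=int)])
poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

attack = np.array([[0.8]])  # a clearly malicious input
print("clean model flags it as malicious:   ", clean_model.predict(attack)[0])     # typically 1
print("poisoned model flags it as malicious:", poisoned_model.predict(attack)[0])  # typically 0
```

The defences the experts mention, keeping the training criteria secret and rotating them regularly, make it harder for an attacker to craft this kind of poison in the first place.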



