Progress in Artificial Intelligence Has Opened Doors to Dystopian Threats

As the field of application for artificial intelligence grows, so does its threat landscape. Recent improvements in hardware mean that AI algorithms can now surpass human accuracy on several tasks. However, this technological progress also poses unprecedented ethical challenges. Researchers and AI theorists believe that besides the economic benefits and global opportunities created by AI, the technology also poses global risks that could rival those of nuclear technology.

As research in the field progresses, scientific risk analyses suggest that potentially severe harms resulting from AI should be taken very seriously, even if the probability of their occurrence is low. Progress in AI research is making it possible to replace swathes of human jobs with machines, which has sparked fears about automation. Many economists predict that an increase in automation could lead to a massive decrease in employment within the next couple of decades. Research also indicates that while automation may raise the global average living standard, there is no guarantee that all people, or even the majority of people, will benefit from this.


In the wake of recent events such as crashes involving driverless cars and biased algorithms producing skewed results, researchers are mulling the development of AI standards to prevent these risks. Several questions need to be considered here. For example, are current social systems prepared for a future in which the human workforce increasingly gives way to machines?


Let’s take a look at the risks posed by AI:

1) Risks posed by autonomous weapons: Autonomous weapons, which may or may not require a human in the loop to operate, have spawned an AI arms race of sorts. If they fall into the wrong hands, these weapons could easily cause mass casualties. Experts also believe an intense arms race could spark an AI war with fatal results. To avoid being thwarted by an enemy, such weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control in such situations.

2) Digital manipulation and risk to society: In a study published by 25 technical and public policy researchers from Oxford, Yale and Cambridge, the authors sounded an alarm about the potential misuse of the technology, particularly fake videos created with deep learning. The prescient warning has come to pass with deepfake videos in which celebrity faces are realistically superimposed on other bodies. While the technique is so far being used mostly in the realm of celebrity videos, the same concept can be applied to political propaganda, as Jack Clark, head of policy at OpenAI, has pointed out.

3) Lack of standards, as in the case of driverless cars: As autonomous driving technology evolves, policymakers are rushing to develop safety standards to regulate the industry. With regard to driverless cars, researchers published a paper called Ethically Aligned Design: A Vision for Prioritizing Human Well Being with AI and Autonomous Systems to promote transparency standards in the industry, with the aim of making it clear how a given decision was made. In a similar vein, IEEE launched the Global Initiative on Ethical Considerations in AI and Autonomous Systems in 2016 to ensure that stakeholders involved in the design and development of autonomous and intelligent systems are empowered to prioritise ethical considerations, so that these technologies are advanced for the benefit of humanity.

4) Overselling AI systems: There was a time when IBM’s Watson was pitched as an AI system that could surpass the diagnostic skills of doctors. Though positioned as a welcome development, the project was shelved by Texas’s MD Anderson Cancer Centre after the vendor was unable to deliver on the complex pattern recognition tasks required for cancer diagnosis.

5) Social inequality: MIT economics professor Erik Brynjolfsson has warned that social inequality could rise sharply in the face of rapid technological progress. Automating a large swathe of jobs could rob a sizeable section of employees of their livelihoods. To counteract this development, Brynjolfsson has suggested limiting certain jobs to humans only. Automation may also lead to income stagnation, with wages potentially sinking below subsistence level.

Conclusion

According to well-known AI theorist Nick Bostrom, three principles should govern the use and development of AI:

1) The functioning of an AI should be comprehensible

2) The outcome should be predictable, and these criteria should be met within a time frame that leaves enough room to react

3) AI systems should be impervious to manipulation
