With experts worrying about the spread of artificial intelligence onto the battlefield through autonomous weaponry and into intrusive surveillance, the government of the United States is betting hard on advocating AI-related regulation.
In a bid to maintain its leadership in building a robust innovation ecosystem while keeping AI safe for society, the White House has unveiled ten principles that government agencies need to adhere to when proposing new AI regulations for the private sector. The proposal is part of President Trump's American AI Initiative, launched in 2019. However, the ambiguity of the announced principles continues to baffle AI watchdogs.
Here is a summary of those principles:
- The government should promote reliable, robust, and trustworthy AI applications.
- Encourage public participation by involving the public in decision-making and feedback processes.
- Maintain scientific integrity for sound judgements.
- Agencies must conduct thorough risk assessments to keep dangerous technologies, such as deepfakes, from reaching the public.
- Weigh the societal impacts of all proposed regulations.
- All approaches should be flexible enough to adapt to the ever-changing AI domain.
- Make sure there is no illegal discrimination.
- Incorporate transparency, because people are always sceptical of what they don't understand, and inclusion leads to increased adoption.
- All data should be secured to preserve privacy and prevent the exploitation of individuals.
- Too much secrecy will hurt the developer community and could result in policy violations and a lack of transparency. Agencies should talk to one another to keep AI-related policies consistent and predictable.
Putting data ethics into practice requires a union of technical, philosophical, and sociological components.
People are giving up data all the time, and one can only be so aware of where that data ends up being used.
More Emphasis On AI Safety
Imagine a teenager watching a recipe for some popular food item. Because the recommendation algorithm uses salt and sugar content as its matching metric, it will surface recipes similar to the one watched and go on to recommend content that is inherently unhealthy. The major disadvantage of such consumer-exploitation tactics is the formation of unhealthy, improper habits.
The ability of modern systems to draw insights in real time makes users more prone to exploitation. Customers face a constant deluge of recommendation cards, chosen based on how they have interacted or clicked in the past.
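The matching logic in the teenager example above can be sketched as a toy content-based recommender. The recipes, nutritional values, and function names below are invented for illustration; real systems use far richer feature sets, but the failure mode is the same: a sugary starting point yields sugary recommendations.

```python
# Hypothetical sketch of a content-based recommender that matches recipes
# purely on salt and sugar content. All recipe data is invented.
import math

# Each recipe is described only by (salt_g, sugar_g) per serving.
recipes = {
    "fried chicken": (2.1, 0.5),
    "salted caramel cake": (1.8, 45.0),
    "candy bar": (0.2, 50.0),
    "garden salad": (0.3, 3.0),
    "soda float": (0.1, 48.0),
}

def similarity(a, b):
    """Inverse of Euclidean distance in (salt, sugar) space."""
    return 1.0 / (1.0 + math.dist(a, b))

def recommend(watched, k=2):
    """Return the k recipes most similar to the watched one."""
    target = recipes[watched]
    scored = [(name, similarity(target, feats))
              for name, feats in recipes.items() if name != watched]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in scored[:k]]

# Watching one sugary recipe pulls in only more sugary items.
print(recommend("salted caramel cake"))  # → ['soda float', 'candy bar']
```

Because the metric rewards nutritional similarity rather than healthfulness, the salad never surfaces for this user, which is precisely the feedback loop the principles are meant to guard against.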
Using machine learning to boost sales, or even to find the right diagnosis, might look harmless at first, but the lack of transparency requirements in the principles announced by the White House could easily allow malicious players to use this vast amount of data for personal gain.
The US has in the past introduced legislation like the Deceptive Experiences To Online Users Reduction (DETOUR) Act, which would prohibit large online platforms from using deceptive user interfaces; the bill's press release described such practices as "dark patterns".
However, if agencies are to incorporate AI holistically in the future, sustaining profits and honouring their purpose as algorithmic decision-making proliferates, they must search for a balance between regulation, innovation, the effects of AI on the dissemination of information, and questions of individual rights.