In 2023, big decisions were made to ensure AI plays fair and doesn’t mess with people’s privacy. Instead of just talking about it, governments and companies got serious, feeling the pressure to make new rules. Ethical AI, which used to be a fancy idea, got a makeover – it now had some real power behind it.
The people in charge wanted to make sure AI followed strict rules to protect people’s privacy. With concrete plans and checklists rolled out in 2023, the folks in charge didn’t just talk the talk – they walked the walk.
Here are six actions taken in 2023 to make sure AI will be built and used ethically.
UNESCO Forms Business Council for Ethics of AI
The Business Council for the Ethics of AI made its official debut at the Ministerial and High Authorities Summit on the Ethics of Artificial Intelligence in Latin America and the Caribbean, hosted in Santiago de Chile. Microsoft and Telefónica, serving as co-chairs, played an important role in the council’s inauguration.
This collaboration is spearheaded by UNESCO and Latin American companies at the forefront of AI development or application across diverse sectors, and has taken shape as a platform for corporate convergence. The Council will provide a forum for companies to convene, share insights, and champion ethical standards within the AI industry, aligning with UNESCO’s guidelines.
AI Bill of Rights in US
The White House introduced the blueprint for an AI Bill of Rights, a set of rules for responsible AI use. This blueprint involves collaboration with academics, human rights groups, the public, and major companies like Microsoft and Google. The AI Bill of Rights was created by the White House Office of Science and Technology Policy amid an ongoing global push to establish more regulations to govern AI.
It aims to make AI transparent, fair, and safe, focusing on potential civil rights issues in areas like employment, education, healthcare, financial services, and commercial surveillance. By emphasising practical impacts on daily life, the AI Bill of Rights seeks to ensure AI behaves responsibly and respects fundamental rights, ushering in a new era where even artificial intelligence is bound by clear rules and guidelines.
EU AI Act
In the spring of 2021, the European Commission pitched the inaugural EU regulatory game plan for AI. It sorts AI systems into risk categories based on their potential impact, with stricter rules for higher-risk systems. If greenlit, these guidelines would be the global frontrunners in AI regulation.
Fast forward to June 14, 2023 – MEPs gave a nod to Parliament’s negotiating position on the AI Act. Now, the intricate dance begins with EU nations in the Council to hash out the final draft of the law. The goal is to shake hands on an agreement before the year bows out, setting the stage for a new chapter in AI governance.
PM Narendra Modi Proposes Ethical AI
In August, during the B20 Summit, one of the most important business events in India, Prime Minister Narendra Modi spoke about AI. He said it is important to use AI in a good and fair way. Modi also proposed observing a special day each year called ‘International Consumer Care Day’.
Instead of dealing only with carbon credits, he suggested using something called ‘green credit’. In his speech, Modi said that India is a leader in using tech for Industry 4.0. He also mentioned that India is a big deal when it comes to making sure things run smoothly and reliably in the global supply chain.
Pope Warns of Irresponsible Use of AI
In March, Pope Francis took centre stage at the ‘Minerva Dialogues’, an annual brainiac gathering hosted by the Vatican’s Dicastery for Education and Culture to call for the ethical use of AI. While he acknowledged the benefits of AI when used for the common good, he also warned against unethical or irresponsible use of the technology.
This call to ethical arms echoed just a week after mischief-makers had a field day generating AI images of the Pope that left many fooled. A slick coat may be a style statement, but misinformation? That’s a different AI story.
WHO Calls for Safe and Ethical AI
The World Health Organization (WHO) also stepped up to the mic, urging caution over the rapid rollout of LLMs. In its official statement, WHO made it clear that using AI can be risky, be it as a decision-support tool or as a way to upgrade diagnostic capacity in under-resourced settings.
‘Handle with care’ was the main message of WHO. The organisation also threw down the rulebook, stressing the importance of playing nice with ethical principles and solid governance, because in the world of AI and health, a little caution and a lot of ethics can go a long way.