Developing ethical AI has been a serious concern since the advent of the technology, and as it has matured, designing a moral framework has become a primary goal for many researchers. With growing headlines about biased artificial intelligence replicating human prejudices and discrimination, such issues are likely to become a significant problem when AI is applied to critical sectors like law, healthcare and banking.
Policymakers and business leaders are increasingly aware of the opportunities that artificial intelligence can bring, along with its risks. Yet there has been little consensus on a process that can ensure the trustworthiness of AI systems. To address this, the World Economic Forum has come up with twelve-step guidance for organisations to design and follow AI assessment frameworks.
In a recent blog post, Lofred Madzou, Project Lead of AI & Machine Learning, and Kate MacDonald, a New Zealand Government Fellow at the World Economic Forum, spoke about the importance of ensuring that the behaviour of an AI system is consistent with a framework that includes legislation and organisational guidelines.
According to the blog post, the challenges stem largely from how deep learning systems classify patterns using neural networks. These networks may contain a massive number of parameters and thus produce opaque decisions that are difficult to interpret, which in turn makes it harder to detect bugs and inconsistencies in AI systems.
12-Step Considerations For Designing Auditable AI Systems
According to the World Economic Forum, these assessment frameworks will help identify, monitor and mitigate the risks that can arise within AI systems. Traditionally, a system is trained on one dataset and then evaluated on another to measure its performance; this framework, however, takes a broader approach.
Here are the steps:
Justify The AI-Powered Solution: Before initiating the risk mitigation process, organisations need to clearly justify and lay out their objectives for introducing AI-powered services, and explain how the system can benefit end-users, consumers and society at large.
Multi-Stakeholder Approach: Secondly, organisations must identify the internal and external stakeholders of each project, who should then be given relevant information about the AI system under consideration.
Follow Existing Practices: While considering the risks and benefits of AI systems, it is also critical to take account of relevant human and civil rights alongside existing practices.
Application of Ethical Framework Across The AI Lifecycle: As AI software evolves with usage and the data fed into it, the risk assessment framework must be integrated while designing the model as well as while monitoring and managing it. It should also be easily manageable by the multiple stakeholders involved in the project.
Adopt A User-Centric Approach: To design a practical and ethical AI framework, organisations should take up the perspective of project teams and work through specific use cases.
Explain Risk Prioritisation For Stakeholders: With diverse stakeholders involved in a single AI project, organisations need to address their perceptions of risks and benefits and their levels of tolerance. Project leads should explain the risk and benefit prioritisation scheme to stakeholders to keep them engaged.
Define A Framework Of Performance Indicators: Another step critical to mitigating risks in AI systems is defining exact parameters for assessing whether the system serves its intended purposes. These should cover not only the accuracy of the system but also aspects like regulatory compliance, user experience and adoption rates.
Define Operational Roles: Once the metrics of the AI system are defined, organisations should also define the roles of the human workforce involved in developing and operating it, including each person's responsibilities, the competencies required for the role and the associated risks.
Establish The Data Required: To develop any AI system, teams need data, so the data necessary for training, testing and operating the system should be specified beforehand. Mapping the data flow, including acquisition, processing, storage and disposition, along with its security and integrity, is critical to developing a practical AI system.
Map The Lines Of Accountability: Another critical step in making the AI system effective is specifying the lines of accountability for the outcomes it generates, which helps assign responsibility for any unexpected result.
Encourage Experimentation: Experimentation is key to advancement in AI, so organisations should protect the right to experiment around AI-powered services and promote calculated risk-taking. This can be done by establishing feasibility and validation studies and supporting cross-collaboration between departments, which enhances the sharing of learning and knowledge among employees.
Build Educational Resources: Lastly, creating a repository including the risks and benefits of the ethical AI framework, along with ways to develop strong organisational capabilities while deploying AI systems, can be beneficial for organisations.
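For organisations that want to track their progress against this guidance, the twelve considerations above could be recorded as a simple audit checklist. The sketch below is purely illustrative; the World Economic Forum does not prescribe any particular data structure or tooling, and the class and step names here are the author's own paraphrases.

```python
# Hypothetical sketch: the twelve WEF considerations as an audit checklist.
AUDIT_STEPS = [
    "Justify the AI-powered solution",
    "Take a multi-stakeholder approach",
    "Follow existing practices",
    "Apply the ethical framework across the AI lifecycle",
    "Adopt a user-centric approach",
    "Explain risk prioritisation to stakeholders",
    "Define a framework of performance indicators",
    "Define operational roles",
    "Establish the data required",
    "Map the lines of accountability",
    "Encourage experimentation",
    "Build educational resources",
]


class AIAuditChecklist:
    """Tracks which of the twelve considerations a project has addressed."""

    def __init__(self):
        # Every step starts out unaddressed.
        self._done = {step: False for step in AUDIT_STEPS}

    def complete(self, step):
        """Mark a step as addressed; reject names outside the guidance."""
        if step not in self._done:
            raise ValueError(f"Unknown audit step: {step!r}")
        self._done[step] = True

    def outstanding(self):
        """Return the steps that still need attention."""
        return [step for step, done in self._done.items() if not done]


checklist = AIAuditChecklist()
checklist.complete("Justify the AI-powered solution")
print(len(checklist.outstanding()))  # 11 steps remain after completing one
```

A project lead could review `outstanding()` at each stage of the AI lifecycle, in line with the guidance that the framework should apply during design as well as during monitoring and management.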
Sejuti currently works as Senior Technology Journalist at Analytics India Magazine (AIM).