Empowerment without defined responsibility and accountability is meaningless. The potential of data is limitless, but when it comes to making AI (Artificial Intelligence or Augmented Intelligence) responsible, explainable, and trustworthy, data democratization and governance must be discussed in parallel: they are two sides of the same coin. Explainable AI is equally important for understanding and interpreting predictions, improving them to support better decision-making, and balancing accuracy against risk.
Essentially, data democratization is about making digital data and information easily accessible to the average end user. To manage accessibility, usability, and protection, however, data governance procedures must be implemented; they ensure that data is used in the right way, by the right user, and at the right time. Governance also establishes responsibility and accountability in case something goes wrong.
Data democratization and governance also lay the foundation for managing bias, potential risks, trust and transparency, and accuracy issues in AI. It is crucial that access to data or information be accompanied by a supervisory and governance framework, so that the information is used in compliance with operational and regulatory controls and remains reliable and up to date. Individuals also need a simple way to interpret and appreciate the information, so that they can use it to speed up decision-making and uncover growth opportunities.
Social impact, or human augmentation, is one area gaining greater attention as the transformative journey of AI and its implications are analyzed alongside the evolution of, and changes to, the human genome. Business and government social responsibility cannot be shifted to an artificial system merely because it is self-learning and evolves based on the training data it receives from the outside world or generates on its own through reinforcement learning.
No doubt, a model cannot be made robust and bias-free without the required data; but user and business policies for data usage must be established, along with debiasing techniques for both low-frequency and high-frequency decisions. It is equally important to establish evidence confirming the benefits realized, so that the AI or ML (Machine Learning) model can be improved further as it matures.
AI Fairness 360 is one such open-source toolkit of metrics and algorithms for detecting and mitigating unwanted bias in datasets and machine learning models.
It checks for data and model bias at three different stages: in the training data, in the algorithm that generates the classifier, and at testing and deployment time when predictions are made. It also learns continuously from feedback to improve the model further, making the AI system more explainable. Machine learning algorithms make accurate predictions by searching the training data for patterns associated with a specific outcome. An algorithm might, for instance, discover a trend that associates individuals living only on pension income with a savings scheme giving better returns on investment, enabling them to lead a dignified life.
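To make the training-data stage concrete, the sketch below computes two common dataset-level fairness measures, disparate impact and statistical parity difference, in plain Python. This is not the AI Fairness 360 API itself; the toy records, group labels, and thresholds are illustrative assumptions, but the metrics are of the kind the toolkit reports before any classifier is trained.

```python
# Hypothetical toy dataset: each record carries a protected attribute
# ("group": "A" privileged, "B" unprivileged) and a favorable outcome
# flag (1 = loan approved, 0 = denied). All values are made up.
records = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

def favorable_rate(records, group):
    """Fraction of a group that received the favorable outcome."""
    members = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

def disparate_impact(records, privileged="A", unprivileged="B"):
    """Ratio of unprivileged to privileged favorable rates.
    Near 1.0 suggests parity; below ~0.8 is a common red flag."""
    return favorable_rate(records, unprivileged) / favorable_rate(records, privileged)

def statistical_parity_difference(records, privileged="A", unprivileged="B"):
    """Difference in favorable rates; 0.0 means perfect parity."""
    return favorable_rate(records, unprivileged) - favorable_rate(records, privileged)

print(disparate_impact(records))               # 0.25 / 0.75 = 0.333...
print(statistical_parity_difference(records))  # 0.25 - 0.75 = -0.5
```

A result this far from parity would prompt a debiasing step (such as reweighing the training samples) before the classifier is built, which is exactly the pre-processing stage described above.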
That the right understanding of the problem and the data makes the system explainable is undisputed; making the data easy for the user to understand is just as important if data democratization is to be effective. It is true that “a picture is worth a thousand words”, but not if the picture is poorly oriented. Visualization is an intuitive way to understand data, but making it comprehensible, and drilling down to the granular level along with its data lineage, is what lets you derive meaningful insights and learn why a situation or outcome occurred.
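The roll-up and drill-down idea can be sketched in a few lines of plain Python. The sales records, field names, and figures below are purely hypothetical; the point is that two regions can look identical at the aggregate (chart) level while the granular breakdown tells very different stories.

```python
from collections import defaultdict

# Hypothetical sales records; fields and values are illustrative only.
sales = [
    {"region": "North", "product": "Widget", "revenue": 120},
    {"region": "North", "product": "Gadget", "revenue": 80},
    {"region": "South", "product": "Widget", "revenue": 40},
    {"region": "South", "product": "Gadget", "revenue": 160},
]

def roll_up(records, key):
    """Aggregate revenue along one dimension (the chart-level view)."""
    totals = defaultdict(int)
    for r in records:
        totals[r[key]] += r["revenue"]
    return dict(totals)

def drill_down(records, region):
    """Break one region's total back into per-product contributions."""
    return roll_up([r for r in records if r["region"] == region], "product")

print(roll_up(sales, "region"))    # {'North': 200, 'South': 200}
print(drill_down(sales, "North"))  # {'Widget': 120, 'Gadget': 80}
print(drill_down(sales, "South"))  # {'Widget': 40, 'Gadget': 160}
```

The regional totals are identical, yet the South's revenue depends almost entirely on one product: the kind of insight that only drilling down to the granular level reveals.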
The next most important element is an intelligent data governance strategy, aligned with the overarching goal of the desired outcomes and benefits. Data governance not only ensures integrity, security, and compliance with laws and policies concerning data; it also ensures that data is available in the right format and is consistent, and it helps determine which data to keep and which to delete when no longer required. Data governance is an ongoing process, as new data sources from disparate systems continue to evolve, data usage can be repurposed, and regulations about data security and privacy can change.
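Two of the governance duties just named, checking that records are in the right format and deciding which data to delete when no longer required, can be expressed as a simple automated audit. The policy values, field names, and records below are invented for illustration; a real governance framework would draw them from regulation and organizational policy.

```python
from datetime import date

# Hypothetical governance policy; limits and fields are illustrative.
POLICY = {
    "retention_days": 365,                          # what to keep vs. delete
    "required_fields": {"id", "email", "created"},  # right-format check
}

def audit_record(record, today):
    """Return a list of policy violations for one record (empty = compliant)."""
    issues = []
    missing = POLICY["required_fields"] - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    created = record.get("created")
    if created and (today - created).days > POLICY["retention_days"]:
        issues.append("past retention period: schedule for deletion")
    return issues

today = date(2021, 6, 1)
fresh = {"id": 1, "email": "a@example.com", "created": date(2021, 5, 1)}
stale = {"id": 2, "email": "b@example.com", "created": date(2019, 1, 1)}
print(audit_record(fresh, today))  # []  (compliant)
print(audit_record(stale, today))  # flags the retention violation
```

Because governance is an ongoing process, such a check would run continuously as new sources arrive and the policy itself is updated.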
NIST (the National Institute of Standards and Technology, U.S. Department of Commerce) prepared a plan for Federal engagement in developing technical standards and related tools, highlighting the importance of AI to the future of the U.S. economy and national security and guiding Federal agencies to ensure that the nation maintains its leadership position in AI. It also stresses various AI standards-related tools: data sets in standardized formats, ways of capturing knowledge and reasoning in AI systems, benchmarking, testing methodologies, metrics, testbeds, and, finally, accountability and auditing instruments for examining AI systems.
When developers and policymakers determine how to factor in risk management for individuals and society at large, legal, ethical, and societal factors may also need to be addressed. Some standards and standards-related instruments provide risk management guidelines that developers and policymakers can use to decide how to handle such potential risks.
Hence, bringing fairness to and generating trust in an AI system requires explainability, interpretability, reliability, and accountability that are human-centered. The actual onus of making AI more responsible and explainable resides with us, as AI is of the humans, by the humans, and for the humans. Data democratization and governance must go hand in hand for AI’s effective implementation. Without undermining AI’s capability, real intelligence still rests with humans, who can bring more value and acceptability by showcasing the positive social impact of AI.
Acknowledgments and References:
- Introducing AI Fairness 360, A Step Towards Trusted AI – IBM Research
- U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (nist.gov)
Gaurav Dhooper is a strategic thinker, a professional Agile and IT delivery leader, an author, and a speaker. Gaurav writes articles on Digital Transformation, Agile Transformation, Agile Project Management, and Scrum. He also writes articles on Robotic Process Automation, Artificial Intelligence, Machine Learning, and Personal Agility in leading online publications. Gaurav has been a reviewer for PMI’s Standard for Earned Value Management and for a book on Agile Contracts. He is also a webinar and keynote speaker at various global conferences and a Reviewing Committee Member for the PMO Global Awards 2020. Gaurav also holds the voluntary positions of Digital Media Global Director of the PMO Global Alliance and Senior Official of IAPM, Switzerland, for the metropolitan area of Noida, India.