Toronto-based Armilla AI recently launched a first-of-its-kind, all-in-one AI governance platform. The company also announced that it has raised $1.5 million in seed funding. Armilla AI brings stakeholders together with automated validation tools to test machine learning models for accuracy, robustness, fairness, bias, and data drift. The company claims that its services help customers deploy trustworthy AI models.
The company has an interesting lineup of investors, including entrepreneur-investor Naval Ravikant’s Spearhead Fund, Allen and Eva Lau’s Two Small Fish Ventures, and C2 Ventures. A.M. Turing Award winner Yoshua Bengio has also invested in the company, as have Apstat partners Nicolas Chapados and Jean-Francois Gagne. Backing from Bengio, a pioneer in AI and deep learning, lends Armilla AI credibility. But why did Bengio choose to invest in this young start-up?
About Armilla AI
AI-based technology has touched almost every aspect of our lives, from work and entertainment to society and culture. AI has had a positive impact on every sector where it has been and continues to be deployed, including finance, science, transportation, healthcare, and the environment. That said, there are a few glaring problems with this technology, and they are compounded when it is scaled and deployed at massive proportions.
In the past, AI failures have caused monumental losses to individuals and communities. For example, a study conducted in 2019 revealed that a widely used algorithm in US hospitals was systematically discriminating against Black patients. The study found that the algorithm was less likely to refer a Black patient than a white patient with the same condition. Hospitals and insurers use similar algorithms to manage care for about 200 million people in the US each year.
Faulty AI algorithms and models are a product of the field's massive growth and increasing complexity. Traditional testing methods have not kept pace with this explosive growth, resulting in errors such as biased or otherwise harmful outputs.
To remediate this problem, Armilla AI provides an ML testing platform that gives organisations tools to plan, experiment, validate, and archive models. Armilla automates the testing process, which involves more than 50 steps to uncover miscalculations in ML models. The system includes Armilla FingerPrint, a validation framework that learns the sensitivities of any system and allows organisations to monitor their machine learning systems in production.
Armilla's entire process is fully auditable: it logs every test conducted, issue discovered, and problem resolved. This allows previously siloed business stakeholders, such as executives, managers, and data scientists, to view and collaborate directly on results in real time.
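To make the idea concrete, here is a minimal, hypothetical sketch (not Armilla's actual API, whose internals are not public) of two of the kinds of checks such an automated validation pipeline might run: a group-fairness check on model predictions and a simple data-drift check on a live feature. Function names and thresholds are illustrative assumptions.

```python
# Hypothetical illustration of automated ML validation checks.
# Not Armilla's implementation; names and thresholds are assumptions.

def demographic_parity_gap(predictions, groups):
    """Absolute gap in positive-prediction rate between the best- and
    worst-treated groups (0.0 means perfectly equal rates)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

def mean_shift_drift(reference, live, threshold=0.1):
    """Flag drift if the live feature mean moves more than `threshold`
    (as a fraction of the reference mean) from the training-time mean."""
    ref_mean = sum(reference) / len(reference)
    live_mean = sum(live) / len(live)
    return abs(live_mean - ref_mean) / abs(ref_mean) > threshold

# Example: a model that approves 80% of group A but only 40% of group B.
preds  = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# 0.80 vs 0.40 -> gap of 0.40

# Example: a feature whose live mean has shifted well past the threshold.
print("drift detected:", mean_shift_drift([10, 11, 9, 10], [14, 15, 13, 14]))
```

A production platform would run dozens of such checks (robustness, calibration, subgroup performance, and so on), log every result for audit, and alert stakeholders when a threshold is breached.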
Speaking about the company, Yoshua Bengio said, “AI models are making more critical decisions every day, which means they require new oversight protocols that can ensure they are accurate, fair, and curb potential abuse. This growing need for independent validation requires the same attention and investment used to build models themselves. This is how to responsibly build AI.”
Prof Bengio’s Quest for Ethical AI
Bengio is recognised as one of the world's leading AI experts. He completed his postdoctoral studies at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, and has received prestigious awards including the Turing Award in 2018 and the Killam Prize in 2019.
Bengio is a strong proponent of ethical and trustworthy AI. In a 2019 interview with Nature, he spoke at length about irresponsible and unethical AI usage, warning that AI could amplify discrimination and biases. In the same interview, he argued that the need of the hour was to go beyond mere self-regulation and devise government or international guidelines for AI ethics.
In a guest post on The Conversation, Bengio wrote that the objective of technological innovation is to reduce human misery, not increase it; AI, however, is fully capable of doing the latter. He added that such discrimination originates in the inherent biases of the humans developing these machines. In this article and elsewhere, Bengio has called for addressing such discrimination by involving all stakeholders, including governments.
More notably, Bengio founded Mila, an AI research institute born out of a partnership between the Université de Montréal and McGill University. With some 500 researchers, it is one of the largest academic research centres for machine learning in the world. The institute was responsible for establishing an ethical framework through the Montreal Declaration for Responsible Development of AI, which proposed ethical principles based on ten fundamental values: well-being, respect for autonomy, privacy and intimacy, solidarity, democratic participation, equity, diversity inclusion, prudence, responsibility, and sustainable development.
About the declaration, Bengio has said that its goal is to establish principles that would form the basis for adopting new rules and laws on responsible AI. He has also pointed out that current laws are not adequately equipped to deal with the challenges AI poses.