Top Non-Profit Artificial Intelligence & Machine Learning Institutes That Are Working On Making AI Safe


Tech companies across the globe are becoming increasingly aware of the power of artificial intelligence and the prospect of a technological singularity. Not-for-profit institutes are thus conscientiously working on the strategic implications and openness of artificial intelligence. From ethics and policy in AI to the development of machine intelligence, these research institutes are working to make AI safe.

In this article, we list the top not-for-profit AI research institutes across the globe that are doing ground-breaking research on aligning AI with human values:

OpenAI: Co-founded by billionaire technologist Elon Musk of Tesla, OpenAI is a nonprofit research lab doing groundbreaking work on developing safe general intelligence. Headed by CEO and co-founder Sam Altman, OpenAI researches how to build a safe and friendly AI future that benefits humanity. With a primary focus on humanity, the institute is committed to ensuring that AGI, when deployed, is used for the benefit of all, and to avoiding uses of AI or AGI that could harm humanity. With that in mind, the institute has channelled its resources into research that drives broad adoption of AI and makes AGI safe. In its charter, the institute outlines guidelines for long-term safety in late-stage AGI development and commits to cooperating with other research and policy institutions to create a global community.



Machine Intelligence Research Institute: From developing better decision-making systems to making more reliable general-purpose AI systems, MIRI (earlier known as the Singularity Institute for Artificial Intelligence) has been at the forefront of cutting-edge research, developing tools to design and analyse AI systems. At MIRI, research is directed at developing fail-safe AI systems that are well-aligned with human goals. According to Stuart Russell, a MIRI research advisor and co-author of The Long-Term Future of Artificial Intelligence, robustness and safety should be integrated into mainstream AI research. MIRI's technical research agenda focuses on developing formal agent foundations for AI alignment, which would help build the conceptual tools and theory needed for engineering robustly beneficial systems in the future.

The Allen Institute for Artificial Intelligence: AI2, as it is widely known, was started by Microsoft co-founder Dr Paul Allen and is steered by well-known researcher Dr Oren Etzioni, with a focus on high-impact AI research. Pegged as one of the largest non-profit AI organisations in the world, AI2 is committed to building responsible AI that benefits humanity and addresses some of the biggest human challenges. Set up in 2014, AI2 focuses on computer vision, machine reading and natural language understanding (NLU), and has developed over a dozen AI applications and technologies. Last year, AI2 joined the Partnership on AI, a noted consortium that is laying the groundwork for best practices in AI and for advancing public understanding of the technology.

Wadhwani Institute for Artificial Intelligence: India's first independent nonprofit AI research institute, the Wadhwani Institute for AI was set up in Mumbai earlier this year by tech entrepreneur brothers Dr Romesh Wadhwani and Sunil Wadhwani to advance the development of AI for good. It was started with a vision to pursue AI and ML research in domains such as education, healthcare, infrastructure, financial inclusion and agriculture. Besides fostering a research environment, the institute aims to build collaboration between AI scientists from top institutes and the government, create accessible datasets, and foster dialogue on the ethics of and guidelines for AI development.


Future of Humanity Institute: This research centre at the University of Oxford is led by Dr Nick Bostrom, a well-known researcher and author of Superintelligence: Paths, Dangers, Strategies. From game theory and existential risk to the long-term impact of AI, FHI's Governance of AI programme explores the ethical, social and political dimensions of AI and tracks its current applications in areas such as the military and cybersecurity. Bostrom, who has been vocal about the dangers of AI, has often stressed the need to "build up the technology and also understand the science of how to predict and control advanced artificial agents".

Last Word

Besides these institutes, leading tech giants such as Google, Microsoft, Adobe and Facebook have their own research arms, including FAIR, DeepMind, Microsoft Research and Adobe Research, which also carry out groundbreaking work in machine learning and artificial intelligence. Their research and findings are published on open-source platforms as well as in top journals.
