
OpenAI Tests Source Classifier Tool for DALL·E-Generated Images

The company is also collaborating with the National Association of Secretaries of State, a nonpartisan group for public officials, to improve access to authoritative voting information.
OpenAI's Sam Altman (Image by Nikhil Kumar)
Ahead of the 2024 US Presidential elections, OpenAI has announced a suite of updates on how it plans to tackle potential abuses, deepfakes and misinformation created by generative AI, with the aim of supporting a reliable voting process. “Our tools empower people to improve their daily lives and solve complex problems – from using AI to enhance state services to simplifying medical forms for patients. We want to make sure that our AI systems are built, deployed, and used safely,” read the official blog post.

Firstly, the company is developing a new provenance classifier tool for DALL·E-generated images, which has shown promising early results, as per its blog post. The tool is set to become available for feedback from initial testers, including journalists and researchers. This is done to fo
Shritama Saha
Shritama (she/her) is a technology journalist at AIM who is passionate about exploring generative AI, with a special focus on big tech, databases, healthcare, DE&I, hiring in tech and more.