Ahead of the 2024 US Presidential elections, OpenAI has announced a suite of updates on how it plans to tackle potential abuses, deepfakes, and misinformation created with generative AI, in support of a reliable voting process.
“Our tools empower people to improve their daily lives and solve complex problems – from using AI to enhance state services to simplifying medical forms for patients. We want to make sure that our AI systems are built, deployed, and used safely,” read the official blog post.
First, the company is developing a new provenance classifier for DALL·E-generated images, which has shown promising early results, per its blog post. The tool is set to become available for feedback from initial testers, including journalists and researchers.
The goal is to foster transparency around image provenance, allowing voters to check which tools were used to create an image. The team is also incorporating digital credentials from the Coalition for Content Provenance and Authenticity (C2PA), which cryptographically encode details about an image's origin.
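The C2PA manifest format itself is beyond the scope of this article, but the underlying idea of cryptographically binding provenance details to image bytes can be sketched with standard primitives. The sketch below is a conceptual illustration using an HMAC over a provenance claim, not the actual C2PA format (real C2PA credentials use public-key signatures and are embedded in the image file itself); the key and image bytes are hypothetical.

```python
import hashlib
import hmac
import json

def attach_provenance(image_bytes: bytes, tool: str, signing_key: bytes) -> dict:
    """Bind a provenance claim (which tool produced the image) to the image bytes.

    Conceptual stand-in for a C2PA-style credential: the claim records a hash of
    the image, and the whole claim is signed so it cannot be altered unnoticed.
    """
    claim = {
        "tool": tool,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_provenance(image_bytes: bytes, credential: dict, signing_key: bytes) -> bool:
    """Check that the claim is authentic and that the image was not modified."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False  # claim tampered with, or signed with a different key
    return credential["claim"]["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()

key = b"publisher-secret"                  # hypothetical signing key
image = b"\x89PNG...fake image bytes"      # hypothetical image data
cred = attach_provenance(image, "DALL-E 3", key)
print(verify_provenance(image, cred, key))            # True: image intact, claim authentic
print(verify_provenance(image + b"edit", cred, key))  # False: image bytes were altered
```

Because the claim includes a hash of the image, any edit to the pixels invalidates the credential, which is what lets a voter (or a verification tool) detect that an image no longer matches its stated origin.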
Second, for ChatGPT and its API, the platform prohibits applications built for political campaigning and lobbying until their effectiveness at personalized persuasion is better understood. Impersonating real individuals or institutions, discouraging voting, and misrepresenting voting processes are also forbidden, to maintain trust and safeguard democratic processes. Users can report potential violations in the new GPTs, adding accountability and user involvement.
In addition, ChatGPT is being integrated with real-time global news reporting, providing users with attribution and relevant links so voters can independently evaluate and trust the information they receive.
The AI research lab is also collaborating with the National Association of Secretaries of State (NASS), a nonpartisan organization of public officials, to improve access to authoritative voting information. ChatGPT will direct users to CanIVote.org for accurate US voting details, such as polling locations.
Taken together, these measures keep humans in the loop: voters can scrutinize the tools used to create an image, rely on cryptographically encoded provenance credentials, and independently assess the sources behind the news they read. Combined with the prohibitions on political campaigning and impersonation, the user reporting mechanisms, and the NASS collaboration pointing users to CanIVote.org, they underscore the platform's commitment to the responsible use of AI in a democratic context.