OpenAI Introduces Tool to Identify DALL-E 3 Generated Photos

Thu May 09 2024

LONDON: In response to growing concerns about the impact of AI-generated material on this year’s worldwide elections, the Microsoft-backed firm OpenAI said it is releasing a tool that can identify photos produced by its text-to-image generator, DALL-E 3, Reuters reported.

During internal testing, OpenAI found that the tool correctly identified images created by DALL-E 3 roughly 98% of the time. It also handled common alterations such as compression, cropping, and saturation adjustments with minimal to no impact on its detection accuracy.

Furthermore, the creator of ChatGPT, the renowned AI language model, disclosed plans to integrate tamper-resistant watermarking into the identification process. This watermarking technology marks digital files, including audio and images, with a signal that is highly resistant to removal or alteration.

OpenAI has also begun developing a standard protocol to help trace the origins of various forms of media. Additionally, the organization has joined forces with industry giants such as Google, Microsoft, and Adobe in a collaborative effort to address the challenges posed by AI-generated content.

The prevalence of deepfake and AI-generated material in election campaigns extends globally, with instances reported in countries such as Indonesia, Pakistan, and the United States. These technologies present formidable challenges to the integrity of democratic processes, necessitating proactive measures to safeguard against manipulation and misinformation.

In alignment with this objective, OpenAI and Microsoft have announced the establishment of a $2 million “societal resilience” fund aimed at promoting AI education and fostering resilience against the potential misuse of AI technologies in societal contexts.
