20 Tech Giants Agree to Fight AI Election Interference Across Globe

Sat Feb 17 2024

NEW YORK: A group of 20 technology companies announced Friday that they have agreed to work together to prevent deceptive artificial intelligence content from interfering with elections around the world this year.

The rapid growth of generative artificial intelligence (AI), which can generate text, images and video in seconds in response to prompts, has fueled fears that the new technology could be used to sway major elections this year, as more than half the world’s population prepares to go to the polls.

Signatories to the tech accord, which was announced at the Munich Security Conference, include companies that build the generative AI models used to create content, such as OpenAI, Microsoft and Adobe. Other signatories include social media platforms that will be challenged to keep harmful content off their sites, such as Meta Platforms, TikTok and X, formerly known as Twitter.

The agreement includes commitments to work together to develop tools to detect misleading AI-generated images, video and audio, create campaigns to raise public awareness of deceptive content, and take action on such content on their services.

Technology to identify AI-generated content or certify its origin could include watermarking or metadata embedding, the companies said.
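As a rough illustration of the metadata-embedding approach, the sketch below uses Python with the Pillow imaging library to write a plain-text provenance note into a PNG file and read it back. The field names and the `embed_provenance`/`read_provenance` helpers are hypothetical choices for this example; production provenance schemes such as C2PA content credentials rely on cryptographically signed manifests rather than freely editable text chunks.

```python
# Minimal sketch of provenance metadata embedding (illustrative only).
# Requires Pillow; the field names below are hypothetical, not part of any standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image and attach a plain-text provenance note as PNG text chunks."""
    image = Image.open(src_path)
    info = PngInfo()
    info.add_text("ai-generated", "true")   # hypothetical flag
    info.add_text("generator", generator)   # e.g. the tool or model that made the image
    image.save(dst_path, pnginfo=info)

def read_provenance(path: str) -> dict:
    """Return any text chunks stored in the PNG (empty dict if none)."""
    return getattr(Image.open(path), "text", {}) or {}

# Usage:
# embed_provenance("synthetic.png", "synthetic_tagged.png", "example-image-model")
# print(read_provenance("synthetic_tagged.png"))
```

Because metadata like this can be stripped by a simple re-save, it is typically paired with watermarks embedded in the pixel or audio data itself, the other technique the companies cite.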

The agreement did not specify a timetable for meeting the obligations or how each company would implement them.

“I think the usefulness of this (agreement) is the breadth of companies that have signed up to it,” said Nick Clegg, president of global affairs for Meta Platforms.

“It’s all well and good if individual platforms develop new policies for detection, provenance, tagging, watermarking and so on, but if there isn’t a broader commitment to do this in a shared, interoperable way, we’ll be stuck with a patchwork of different commitments,” Clegg said.

Generative AI is already being used to influence politics and even convince people not to vote.

In January, a robocall using a fake voice of US President Joe Biden was sent to New Hampshire voters urging them to stay home during the state’s presidential primary.

Despite the popularity of text-generating tools like OpenAI’s ChatGPT, the tech companies will focus on preventing the harmful effects of AI-generated photos, video and audio, in part because people tend to be more skeptical of text, said Dana Rao, Adobe’s chief trust officer.

“There’s an emotional connection with sound, video and images,” he said. “Your brain is wired to believe that kind of media.”
