Oversight Board asks Meta to label AI manipulated content to curb fake videos

"Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.

Update: 2024-02-05 17:00 GMT


SAN FRANCISCO: The Oversight Board on Monday urged Meta to begin labeling manipulated content, such as videos altered by artificial intelligence (AI) or other means, when such content may cause harm.

This comes after the Board, the external advisory group that Meta created, reviewed a misleading seven-second clip of US President Joe Biden that made the rounds on social media last year.

It said that the "cheap fake" video of Biden reveals major inconsistencies in how Meta treats altered content, and that its ruling could reshape the company's misinformation policies ahead of the 2024 election.

The Board said Meta should stop removing manipulated media when no other policy violation is present and instead apply a label indicating the content is significantly altered and may mislead.

"The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing," said Michael McConnell, Co-Chair, Oversight Board, in a statement.

"Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.

"Manipulated media, however, present special challenges," McConnell said.

Raising concerns about the manipulated media policy in its current form, the Board said it is "incoherent", lacks "persuasive justification", and can specifically disrupt electoral processes.

This was seen in the case of the Biden video, which was uploaded to Facebook in May 2023 with a caption describing Biden as a "sick pedophile".

The original video showed the President accompanying his granddaughter Natalie Biden to cast her ballot in 2022, placing an "I Voted" sticker on her and kissing her on the cheek. The edited version, however, removed visual evidence of the sticker, set the clip to a song with sexual lyrics, and looped it to depict Biden inappropriately touching the young woman.

The video does not violate Meta's Manipulated Media policy as currently written, because the policy applies only to video created through AI and only to content showing people saying things they did not say. The clip in this post was not altered using AI, it shows President Biden doing something he did not do rather than saying something he did not say, and the alteration is obvious, so it falls outside the existing policy.

"As it stands, the policy makes little sense," said McConnell.

"It bans altered videos that show people saying things they do not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook.

"Perhaps most worryingly it does not cover audio fakes, which are one of the most potent forms of electoral disinformation we’re seeing around the world. Meta must urgently work to close these gaps."

While Meta was right to leave the content up, it should act promptly to amend the policy so it aligns with its stated purposes, and it should label such content as manipulated in the future, the Board noted.
