In May, a manipulated video of President Joe Biden appeared on Facebook. The original footage showed Biden during the 2022 midterm elections, placing an “I voted” sticker on his granddaughter’s chest and kissing her on the cheek. The doctored version looped the footage to make it appear as if he was repeatedly touching the girl, with a caption that labeled him a “pedophile.”
Meta left the video up. Today, the company’s Oversight Board, an independent body that reviews the platform’s content moderation, announced that it will review that decision, in an attempt to push Meta to address how it will handle manipulated media and election disinformation ahead of the 2024 US presidential election and more than 50 other votes to be held around the world next year.
“Elections are the underpinning of democracy and it’s important that platforms are equipped to protect the integrity of that process,” says Oversight Board spokesperson Dan Chaison. “Exploring how Meta can better address altered content, including videos intended to deceive the public ahead of elections, is all the more important given advances in artificial intelligence.”
Meta said in a blog post that it had determined the video didn’t violate Facebook’s hate speech, harassment, or manipulated media policies. Under its manipulated media policy, Meta says it will remove a video if it “has been edited or synthesized … in ways that aren’t apparent to an average person, and would likely mislead an average person to believe a subject of the video said words that they did not say.” Meta noted that the Biden video didn’t use AI or machine learning to manipulate the footage.
Experts have been warning for months that the 2024 elections will be made more complicated and dangerous thanks to generative AI, which allows more realistic faked audio, video, and imagery. Though Meta has joined other tech companies in committing to trying to curb the harms of generative AI, the most common methods, such as watermarking content, have proven only somewhat effective at best. In Slovakia last week, a fake audio recording circulated on Facebook in which one of the country’s leading politicians appeared to discuss rigging the elections. Its creators were able to exploit a loophole in Meta’s manipulated media policies, which don’t cover faked audio.
While the Biden video itself is not AI-generated or AI-manipulated, the Oversight Board has solicited public comments on this case with an eye toward AI, and is using the case as a way to more deeply examine Meta’s policies around manipulated videos.