Facebook late on Monday published a statement outlining how it will handle “manipulated media,” also known as deepfakes. It’s the latest social media giant to tackle what is seen as a looming political problem ahead of the 2020 election — but its policy leaves a loophole, experts say.

In a blog post, the company said that it would essentially ban most deepfakes, “investigate A.I.-generated content and deceptive behaviors like fake accounts,” and “partner with academia, government, and industry to expose people behind these efforts.”

However, there is that loophole: “This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words,” the company wrote.

The rollout of the policy also seemed confusing: CNN reporter Hadas Gold initially wrote on Twitter that Facebook would make an exception to its deepfake rule for politicians, meaning that politicians who posted deepfake content would not be penalized and the content would not be removed. Facebook later retracted this statement.

Deepfakes aren’t the real problem

While most experts Digital Trends spoke with said this policy (minus the politician confusion) was a good first step, ultimately they feel it will be ineffective. If Facebook’s goal is to combat misinformation, it needs to focus on more than just deepfakes, which aren’t even the biggest problem yet.

Areeq Chowdhury, the head of Think Tank at Future Advocacy, which in November produced two viral deepfake videos of U.K. Prime Minister Boris Johnson and opposition leader Jeremy Corbyn, said in an email to Digital Trends that “whilst a welcome announcement, it’s strange that Facebook are applying this policy only to deepfakes and not other forms of manipulated content designed to mislead.”

While deepfakes are certainly a problem on the platform, they’re still difficult for an ordinary person to make. And while they may proliferate as the tools become more accessible, at the moment this is not the biggest issue Facebook needs to focus on, said Britt Paris, associate professor of information science at Rutgers University and co-author of the paper “Deep Fakes and Cheap Fakes.” “The way most deepfakes work at present, they have an incredibly high barrier to entry and require a lot of expertise to make,” Paris told DT. “This will likely change, but whether this is something that’s pressing at the moment is another question.”

Right now, Paris said, things like fake news and misrepresentative or decontextualized videos are a more pressing concern. “They’re literally doing the easiest thing possible,” Paris said. “They’re always under fire for misinformation and political problems. This is their PR attempt to say ‘we’re doing something about this.’ It appears to me to be a symbolic effort.”

“There’s a question over whether a ban will make a difference,” wrote Chowdhury. “Lots of activity is technically banned on Facebook but still takes place.”

Facebook did not respond to a request for comment.

What counts as satire?

Shamir Allibhai, the founder and CEO of Amber Video, a video verification platform, applauded the exclusion, pointing out that while many people will try to exploit the loophole, there are many places in the world, such as countries under authoritarian rule, where satire may be the only vehicle to critique and seek change. “They made the right call here,” he wrote in an email to Digital Trends.

This policy, however, is going to be difficult to enforce. Definitions of satire and parody vary the world over, as Paris pointed out, and Facebook’s A.I.-based approach to rooting out misinformation or manipulated content doesn’t (yet) have a sense of humor.

“A.I. is yet to develop the nuance for comedy and context, meaning that real people will be required to filter out deep fake content that falls under the banner of satire and parody,” wrote Jo O’Reilly, digital privacy advocate at the U.K.-based digital privacy information source ProPrivacy. “This, of course, presents its own problems, including but not limited to Facebook’s attempt to hide behind neutrality and avoid political allegiance.”

David Greene, civil liberties director at the Electronic Frontier Foundation, also said he supported the satire exception, but was concerned at how hard it would be to enforce these rules. “This is a good first step, but there are a hundred difficult steps to follow after,” he told Digital Trends. “This type of content moderation is impossible to do perfectly and really, really, really hard to do well. I don’t see them solving things this way. Not that they shouldn’t try.”
