
Meta is expanding its policies to include more deepfakes and add context to “high-risk” manipulated media.


Meta has announced changes to its rules around AI-generated content and manipulated media, following criticism from its Oversight Board. It said that starting next month it will label a wider range of such content, including by applying a “Made with AI” badge to deepfakes (also known as synthetic media). Additional contextual information may be displayed when content has been manipulated in other ways that pose a significant risk of deceiving the public about an important issue.

The move could lead to the social networking giant flagging more pieces of potentially misleading content, which could be important in a year that sees multiple elections taking place around the world. For deepfakes, however, Meta will only apply labels where the content in question contains “industry standard AI image indicators,” or where the uploader has disclosed that the content is AI-generated.

Presumably, AI-generated content that falls outside those boundaries will go unlabeled.

The policy change is also likely to mean more AI-generated content and manipulated media remaining on Meta’s platforms, as the company shifts toward an approach focused on “providing transparency and additional context” as, in its words, “the better way to address this content” (rather than removing manipulated media, given the associated risks to freedom of expression). So for AI-generated or manipulated media on Meta platforms like Facebook and Instagram, the revised playbook come summer appears to be more labels and fewer removals.

Meta said it will stop removing content solely on the basis of its current manipulated video policy in July, adding in a blog post published on Friday: “This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media.”

The change in approach may be intended to respond to rising legal demands on Meta around content moderation and systemic risk, such as the European Union’s Digital Services Act. Since last August, EU law has applied a set of rules to Meta’s two major social networks, requiring the company to walk a fine line between purging illegal content, mitigating systemic risks, and protecting freedom of expression. The bloc is also applying extra pressure on platforms ahead of the European Parliament elections this June, including urging tech giants to watermark deepfakes where technically feasible.

The upcoming US presidential election in November is also likely on Meta’s mind, since that high-profile political event raises the stakes around the dangers of misleading content on its home turf.

Criticism from the Oversight Board

Meta’s advisory board, which is funded by the tech giant but permitted to operate at arm’s length, reviews a small percentage of its content moderation decisions but can also make policy recommendations. Meta is not obligated to accept the Board’s suggestions, but in this instance it has agreed to amend its approach.

In a blog post published on Friday and attributed to Monika Bickert, Meta’s vice president of content policy, the company said it is amending its policies on AI-generated content and manipulated media based on the Board’s feedback. “We agree with the Oversight Board’s argument that our current approach is too narrow because it only covers videos created or edited by artificial intelligence to make a person appear to be saying something they didn’t say,” she wrote.

In February, the Oversight Board urged Meta to rethink its approach to AI-generated content after issuing a content moderation review decision concerning a doctored video of President Biden, which had been edited to imply a sexual motive behind a platonic kiss he gave his granddaughter.

While the Board agreed with Meta’s decision to leave the specific content up, it attacked the company’s policy on manipulated media as “incoherent,” noting, for example, that it only applies to video created with AI, letting other forms of fake content (such as more crudely doctored video or audio) off the hook.

Meta appears to have taken the critical feedback on board.

“In the past four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content such as audio and photos, and this technology is quickly evolving,” Bickert wrote. “As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.

“The Board also argued that we risk unnecessarily restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommended a ‘less restrictive’ approach to manipulated media, such as labels with context.”

Earlier this year, Meta announced it was working with others in the industry to develop common technical standards for identifying AI content, including video and audio. It is building on that effort to expand its labeling of synthetic media now.

“Our ‘Made with AI’ labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content,” said Bickert, noting that the company already applies “Imagined with AI” labels to photorealistic images created using its Meta AI feature.
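For context, the “industry-shared signals” in question are provenance markers that standards bodies have defined for AI media, such as the IPTC DigitalSourceType value “trainedAlgorithmicMedia” and C2PA Content Credentials embedded in a file’s metadata. Meta has not published its detection code, but a minimal sketch of this kind of check might look like the following (the marker strings come from those public standards; the function itself is illustrative only):

```python
# Illustrative sketch only; Meta's real detection pipeline is not public.
# Provenance standards (IPTC, C2PA) embed markers in image metadata,
# often inside an XMP packet stored as plain UTF-8 XML in the file,
# so a crude byte search can surface them. A production system would
# parse and cryptographically verify the metadata instead.
def looks_ai_generated(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    markers = (
        b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType for AI-generated media
        b"c2pa",                     # label used by C2PA manifest boxes
    )
    return any(marker in data for marker in markers)
```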

The expanded policy will cover “a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling,” according to Bickert.

“If we determine that digitally created or altered images, video, or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” she wrote. “This overall approach gives people more information about the content so they can better assess it, and so they will have context if they see the same content elsewhere.”

Meta said it will not remove manipulated content, whether AI-based or otherwise doctored, unless it violates other policies (such as those on voter interference, bullying and harassment, violence and incitement, or other Community Standards issues). Instead, as noted above, it may add “informational labels and context” in certain scenarios of high public interest.
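Put together, the enforcement flow the post describes could be summarized roughly as follows. This is a paraphrase of the policy as reported, not Meta’s actual enforcement logic, and every field name here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    # Hypothetical stand-ins for the signals described in the post.
    violates_other_policies: bool   # e.g. voter interference, bullying, incitement
    high_risk_of_deception: bool    # "high-risk" manipulated media
    has_industry_ai_signals: bool   # industry-standard AI indicators detected
    self_disclosed_ai: bool         # uploader disclosed the content is AI-generated

def enforcement_action(item: MediaItem) -> str:
    """Rough paraphrase of the revised playbook: remove only on other
    policy violations, otherwise label rather than take down."""
    if item.violates_other_policies:
        return "remove"
    if item.high_risk_of_deception:
        return "add prominent label and context"
    if item.has_industry_ai_signals or item.self_disclosed_ai:
        return "apply 'Made with AI' label"
    return "leave up"
```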

Meta’s blog post highlights a network of nearly 100 independent fact-checkers that it says it engages with to help identify risks related to manipulated content.

These third-party entities will continue to review false and misleading AI-generated content, according to Meta. When they rate content as “False or Altered,” Meta said it will respond by applying algorithm changes that reduce the content’s reach, meaning it will appear lower in feeds so fewer people see it. Meta will also place an overlay label with additional information on the content for those whose eyeballs do land on it.
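As a rough sketch of what that kind of demotion amounts to in ranking terms (the multiplier values below are invented; Meta does not disclose its ranking internals):

```python
# Toy model with invented numbers; Meta's actual demotions are not public.
# A fact-check rating scales a post's feed-ranking score down so it
# surfaces lower and reaches fewer people; rated items also get an
# informational overlay.
DEMOTION = {"False": 0.05, "Altered": 0.05}

def feed_score(base_score: float, rating: str | None) -> float:
    # Unrated content keeps its full score; rated content is demoted.
    return base_score * DEMOTION.get(rating, 1.0)

def needs_overlay(rating: str | None) -> bool:
    return rating in DEMOTION
```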

Third-party fact-checkers look set to face a growing workload as synthetic content proliferates, driven by the boom in generative AI tools, not least because this policy shift means more of that content will remain on Meta’s platforms.
