Meta's Oversight Board Calls for New Rules Governing AI-Generated Content


The Oversight Board is once again pushing Meta to revise its policies regarding AI-generated content. In a recent statement, the board emphasized that Meta should establish a distinct set of guidelines for AI content, separate from its existing misinformation protocols. It also urged the company to enhance its detection capabilities and improve the utilization of digital watermarks, among other necessary modifications.

These recommendations stem from a case involving an AI-generated video, shared in 2025, that purported to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict. The clip, which garnered over 700,000 views, was posted by an account that claimed to represent a news organization but was actually managed by someone based in the Philippines.

When the video was reported to Meta, the company opted neither to remove it nor to apply a "high risk" AI label. The Oversight Board overturned that decision, asserting that the incident exposes critical shortcomings in Meta's current AI rules. The board further noted that the case underscores the need for more robust measures against deceptive AI content, particularly when it is disseminated through inauthentic or abusive networks of accounts and pages.

"Meta must take greater steps to combat the spread of misleading AI-generated content on its platforms, especially through inauthentic or abusive networks, particularly on matters of public interest, so users can clearly discern between authentic and fabricated material," the board stated in its ruling. Following the board's intervention, Meta eventually suspended three accounts linked to the page after identifying "clear signs of deception."

A primary recommendation from the board is for Meta to implement a specific rule for AI-generated content, independent of its misinformation policy. According to the board, this new regulation should outline clear directives on when and how users must label AI content, as well as details on how Meta will enforce penalties for non-compliance.

The board also expressed significant concerns about the effectiveness of Meta's current "AI Info" labels, noting that their application lacks both robustness and comprehensiveness, especially in high-stakes situations such as conflicts or crises. "A system relying heavily on self-disclosure of AI usage and infrequent escalated reviews cannot adequately address the challenges posed by today’s rapidly evolving AI content landscape," the board remarked.

Furthermore, the Oversight Board highlighted the need for Meta to invest in advanced detection technologies capable of accurately labeling AI media, including audio and video. The group also voiced concerns about reports indicating that the company inconsistently applies digital watermarks to AI content generated by its own tools.

In response to the board's decision, Meta stated that it welcomed the outcome and would take action "on content that is identical and in the same context" when technically and operationally feasible. The company has 60 days to formally respond to the board's recommendations.

This isn't the first time the Oversight Board has criticized Meta's handling of AI content. The group has previously described the company's manipulated media policies as "incoherent" on multiple occasions and has raised concerns about its reliance on third-party fact-checkers and other trusted partners to flag problematic content. In this instance, the board noted that these organizations have indicated that Meta has become less responsive to their outreach, partly due to reduced internal team capacity. The board emphasized that Meta should be capable of conducting harm assessments independently rather than relying solely on external partners during times of armed conflict.

While the Oversight Board's decision pertains to a post from last year, the issue of AI-generated content during armed conflicts has gained heightened importance amid the ongoing Middle East crisis. Since the start of U.S. and Israeli strikes on Iran earlier this month, there has been a noticeable surge in viral AI-generated misinformation across social media platforms. The board, which has previously indicated its intent to collaborate with generative AI companies, included a suggestion that appears to extend beyond Meta's scope.

"The industry requires consistency in helping users identify deceptive AI-generated content, and platforms must address abusive accounts and pages sharing such material," the board wrote.

Update, March 10, 10:53AM ET: This story was updated to reflect Meta’s response to the Oversight Board.

This article originally appeared on Engadget at https://www.engadget.com/social-media/the-oversight-board-says-meta-needs-new-rules-for-ai-generated-content-100000268.html