Meta’s Oversight Board on Tuesday, March 10, called for an overhaul of the social media giant’s policies to address AI-generated deepfakes and misinformation, especially during armed conflicts and crises.

The Oversight Board, which is funded by Meta but operates independently, said that the company needs to roll out a new set of rules to ensure users can recognise AI-generated content. These rules must require details about the origins of AI-manipulated media, with content provenance standards implemented more consistently at scale.

Meta should also invest in stronger AI detection tools and develop better methods for appropriately labelling AI-generated content, the Board said. “Additionally, it should amend its current policies to ensure a timely and adequate response to deceptive AI-generated output,” it added.
The Oversight Board laid out its recommendations as part of a ruling that was issued after reviewing an AI-generated video posted on a Facebook page during the 12-day Israel-Iran war in June 2025. The AI-generated video, debunked by fact-checkers, falsely depicted extensive damage to buildings with overlaid text in English reading “Live now – Haifa Towards Down” [sic] with the posting date.
The Board overturned Meta’s decision to leave the post up, stating that the company should have affixed a ‘High Risk AI’ label to it as the content “posed a material risk of misleading the public on an important matter at a critical time.”
The latest ruling by Meta’s Oversight Board comes in the midst of the US-Israel war on Iran, which began on February 28 and has since plunged West Asia into heightened tensions even as it pushes up energy and commodity prices worldwide. The conflict has also unleashed a wave of viral war footage online, much of it gaining millions of views before being exposed by third-party fact-checkers as entirely AI-generated or clipped from video games.
“As the quantity and quality of AI-generated content increase, its impact on people and societies will be profound. The risks are heightened when deepfake output designed to deceive, manipulate or increase engagement is shared during conflicts and crises, such as in Iran and Venezuela in 2026, and spreads rapidly on different companies’ platforms,” the Oversight Board said.
However, the Board also cautioned that misleading AI-generated outputs should not become a reason to restrict freedom of expression. “The industry needs coherence in helping users distinguish deceptive AI-generated content and platforms should address abusive accounts and pages sharing such output,” it said.
Labelling recommendations
Meta typically applies an ‘AI Info’ label to content that is modified or created using AI. However, the Board sharply criticised Meta’s current labelling mechanisms as “neither robust nor comprehensive enough” to address the scale and velocity of AI-generated content, particularly during a crisis or conflict.
It also pointed out that Meta’s implementation of the Coalition for Content Provenance and Authenticity (C2PA) standards is inconsistent, even when it comes to content generated by its own AI tools.
In its recommendations, the Oversight Board echoed certain provisions of India’s recently notified rules governing the creation and dissemination of AI-generated content online. For instance, the Board recommended that Meta attach provenance information and invisible watermarks to content created by its AI tools.
The IT Rules amendments related to synthetically generated information (SGI), which went into effect on February 20, 2026, require AI platforms and tool providers to ensure that AI-generated content is prominently labelled or embedded with a permanent, unique metadata identifier that is clearly visible and tamper-proof.
The Oversight Board has gone one step further, recommending that Meta attach new ‘High Risk’ and ‘High Risk AI’ labels to appropriate content, along with clearer escalation channels and automated review for labelling at scale. Additionally, some Board members recommended that posts carrying High Risk AI labels be demoted or taken down to curb the spread of deceptive AI content.
The Board further warned that labelling systems dependent on self-disclosure of AI usage and escalated review may be inadequate in the current environment. This is in contrast to the SGI rules, which require big tech companies like Meta to ensure that users declare when information is SGI, deploy appropriate technical measures to verify the accuracy of such a declaration, and once verified, ensure that the same is clearly and prominently displayed with an appropriate label or notice.
Other recommendations by Oversight Board
The Oversight Board has suggested that Meta invest in stronger tools capable of detecting multi-format AI-generated content, including audio, audio-visual and images. The company has also been urged to publish a clear list of penalties that users may face for failing to self-disclose digitally created or altered content.
Finally, the Board has recommended that Meta create a separate Community Standard for AI-generated content with comprehensive rules on provenance preservation, AI labelling protocols and self-disclosure. It has also suggested amending the existing Misinformation Community Standard to ensure that “swift review of misinformation that directly risks imminent violence or physical harm does not depend solely on signals from external partners.”
While the Oversight Board’s decisions related to content on Meta’s platforms are binding, the company is not required to accept the Board’s recommendations in the same manner. However, Meta is required to publicly respond to each recommendation within 30 days, as per its website.
