An independent oversight board that reviews Meta’s content moderation decisions recommended that the company revise its cross-check program, and the company agreed — sort of.
The Oversight Board, the “independent body” in question, issued 32 recommendations for changes to the program, which places content from “high-profile” users in a moderation queue separate from the automated one the company uses for normies. Rather than being removed, flagged content from public figures such as politicians, celebrities, and athletes is kept up “pending further human review.”

The Board’s investigation was prompted by a 2021 Wall Street Journal article that examined how the program shielded high-profile users from the company’s normal enforcement. The Board acknowledged the inherent challenges of moderating content at scale in its decision, saying that while “a content review system should treat all users fairly,” the program faces “broader challenges in moderating immense volumes of content.”
For example, the Board says that at the time of the request, Meta was making roughly 100 million moderation attempts per day, a volume at which even “99% accuracy would result in one million mistakes per day.”
Nonetheless, the Board claims that the cross-check program was “more directly structured to satisfy business concerns” than to advance “Meta’s human rights commitments.”
Meta agreed to fully implement 11 of the Board’s 32 recommendations for amending the cross-check program, partially implement 15, continue to assess the feasibility of one, and take no further action on the remaining five. In an updated blog post published on Friday, the company stated that it would make the program “more transparent through regular reporting,” as well as fine-tune the program’s eligibility criteria to “better account for human rights interests and equity.” In addition, the company will update operational systems to reduce the backlog of review requests, which should mean that harmful content is reviewed and removed more quickly.
Meta’s pledges to address the serious structural challenges inherent to its cross-check program are a landmark moment. https://t.co/j0uEisW7Ot
— Oversight Board (@OversightBoard) March 3, 2023
In a Twitter thread, the Board stated that the changes “could render Meta’s approach to mistake prevention more fair, credible, and legitimate,” but that “several aspects of Meta’s response haven’t gone as far as we recommended to achieve a more transparent and equitable system.”

