Until recently, Meta relied almost entirely on human reviewers to carry out what are internally referred to as privacy and integrity assessments.
However, internal documents obtained by NPR reveal that up to 90% of these evaluations will soon be handled through automation.
In effect, this means that key updates — such as changes to Meta’s algorithms, new safety tools, and modifications to how content can be shared across platforms — will largely be approved by AI-driven systems. These decisions will bypass the usual human oversight that once involved deliberations about potential unintended consequences or risks of misuse.
Within Meta, the shift is being welcomed by product teams, who see it as a way to speed up the launch of new features and updates. But both current and former employees are sounding alarms, warning that the reliance on automation transfers complex judgment calls to AI — decisions that could have significant real-world impacts.
“If this new system essentially allows more things to ship faster with fewer checks and objections, then you're increasing the likelihood of risk,” said a former Meta executive, speaking anonymously due to fear of retaliation. “You're less likely to catch the harmful side effects of product changes before they happen.”
In a statement, Meta said it has invested billions of dollars in protecting user privacy.
Since 2012, Meta has operated under the scrutiny of the Federal Trade Commission following a settlement over its handling of user data. According to current and former staff, that agreement led to mandatory privacy reviews for new products.
In its recent statement, Meta said the changes aim to simplify the decision-making process. The company emphasized that "human expertise" will still play a role in handling "novel and complex issues," and only "low-risk decisions" are subject to automation.
Yet, internal documents reviewed by NPR show Meta is considering automating review processes even in sensitive areas, such as AI safety, youth risk, and integrity — a category that includes issues like violent content and misinformation.
A former employee cautioned: “Engineers are not privacy experts.”
One presentation slide outlines how the new process works: product teams will usually receive an "instant decision" after completing an AI-analyzed questionnaire about their project. The system flags potential risks and outlines any requirements. Before launching, teams must confirm those requirements have been met.
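The slide's description amounts to a rules-driven triage pipeline: answers to a questionnaire map to risk flags, requirements, and an approval or escalation decision. As a purely illustrative sketch, and not a description of Meta's actual system, the Python below shows how such a flow might be structured. Every rule name, risk category, and threshold here is invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical questionnaire rules: question -> (risk category, requirement).
# These names are invented; Meta's actual taxonomy is not public.
RISK_RULES = {
    "collects_new_user_data": ("privacy", "Document data retention and consent flow"),
    "changes_sharing_defaults": ("privacy", "Complete a privacy impact assessment"),
    "affects_minors": ("youth_risk", "Escalate to human review"),
    "modifies_ranking": ("integrity", "Complete misinformation risk checklist"),
}

# Assumed: some categories always force human review rather than auto-approval.
HIGH_RISK_CATEGORIES = {"youth_risk"}

@dataclass
class Decision:
    approved: bool
    flagged_risks: list = field(default_factory=list)
    requirements: list = field(default_factory=list)
    needs_human_review: bool = False

def instant_decision(questionnaire: dict) -> Decision:
    """Return an 'instant decision' for a product team's questionnaire answers."""
    decision = Decision(approved=True)
    for question, answered_yes in questionnaire.items():
        if answered_yes and question in RISK_RULES:
            category, requirement = RISK_RULES[question]
            decision.flagged_risks.append(category)
            decision.requirements.append(requirement)
            if category in HIGH_RISK_CATEGORIES:
                # High-risk answers drop out of the automated path entirely.
                decision.needs_human_review = True
                decision.approved = False
    return decision

# Example: a change that alters sharing defaults but does not touch minors.
result = instant_decision({
    "collects_new_user_data": False,
    "changes_sharing_defaults": True,
    "affects_minors": False,
    "modifies_ranking": False,
})
print(result)
# Decision(approved=True, flagged_risks=['privacy'],
#          requirements=['Complete a privacy impact assessment'],
#          needs_human_review=False)
```

The key design question the critics raise lives in that conditional: whichever answers the rules fail to anticipate are approved by default, with the product team itself attesting that the flagged requirements were met.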