Meta Policy Changes
Yesterday, Meta CEO and founder Mark Zuckerberg announced sweeping rollbacks to critical integrity programs that safeguard users across three of the most popular applications in the U.S.: Facebook, Instagram, and Threads. The announcement shocked many, but for those familiar with Meta’s internal workings—myself included, having served on both the Operational (Regulatory Escalations) and Governance (Global Governance of Meta Applications) teams—the news was less surprising. The timing, however, is striking: the rollback was revealed less than a day after the certification of the 2024 U.S. election results and the appointment of a Republican Washington insider and Meta policy veteran as President of Global Public Policy.
Years in the Making
While these changes were announced in rapid succession, the policies themselves have been years in the making. This shift highlights a fundamental truth about companies: they are not democracies. Having worked in the U.S. Senate, I’ve seen how the priorities of governments—focused on the well-being of their constituents—differ vastly from those of corporations, which prioritize revenue. For companies like Meta, the “constituents” are the consumers of their products, and decisions about content moderation often prioritize business interests over societal well-being. Policies targeting hate speech, bullying, and harassment can be treated as optional unless they clearly mitigate real-world harm.
The rollback impacts only Facebook, Instagram, and Threads for now, but there is potential for further policy changes affecting WhatsApp, Meta AI, and Meta’s hardware products like the Ray-Ban smart glasses and Oculus headsets. Global regulations, including the EU’s AI Act and the U.S. Executive Order on AI, provide some safeguards, but vigilance is required to monitor how Meta adapts these policies globally.
Ripple Effects Across the Tech Industry
The signal these rollbacks send to other technology companies like Alphabet, TikTok, Bluesky, Snapchat, Discord, and OpenAI cannot be ignored. How civil society, academia, and liberal democracies—and even authoritarian regimes with strict privacy laws—respond will be critical in ensuring that global communities are protected from online harms. These changes also underscore the urgent need for comprehensive consumer protection laws in the U.S., where social media platforms currently operate with minimal regulatory oversight.
Spotlight on the Third-Party Fact-Checking Program (3PFC)
Perhaps the most significant and controversial change is the dissolution of Meta’s U.S.-based Third-Party Fact-Checking Program (3PFC), which will be replaced by Community Notes. Here’s a breakdown of what this means:
What Is Happening
The U.S. arm of the 3PFC program is being wound down this week. Fact-checking organizations, including PolitiFact, have indicated they will continue their vital work independently, but their partnerships with Meta will end. For now, this change affects only U.S. partners, not global ones.
What’s at Stake
This move signals a turning point in Meta’s approach to tackling misinformation. While the 3PFC program was flawed, it provided a critical layer of verification that flagged and contextualized misleading content. Community Notes, Meta’s replacement, is a fundamentally weaker tool that relies on crowdsourced feedback—a method prone to manipulation by bad actors and lacking the rigor of professional fact-checking.
Without robust fact-checking mechanisms, misinformation will proliferate unchecked. High-risk scenarios include elections, public health crises, and issues affecting marginalized groups. The absence of transparent classifiers and probability scoring systems further exacerbates the challenge of combating misinformation at scale. This is not just a freedom of speech issue; it’s a modern warfare issue that requires multifaceted interventions.
Global Implications
The dissolution of the 3PFC in the U.S. raises concerns about the future of the program globally. Countries without built-in safeguards like the EU’s Digital Services Act (DSA) are particularly vulnerable. A world without effective fact-checking allows false narratives to spread unchecked, posing threats to public safety and democratic processes worldwide.
Looking Ahead
The role of the Oversight Board will be pivotal in the coming months. Will they advocate for reinstating or strengthening fact-checking programs in high-risk regions? Will they push for better solutions to address misinformation in areas like elections, healthcare, terrorism, and hate speech? These are critical questions that demand answers.
Next Steps
Next week, I’ll explore how the Trump Administration’s lobbying efforts for companies like Meta intersect with global trade policies and the broader push to curtail “government overreach on censorship.” This complex dynamic extends beyond social media to include AI companies, highlighting the intricate web of influence shaping technology policy today.
The battlefield of misinformation warfare is evolving, and we must remain vigilant. The stakes are too high to leave this fight to chance.