The Significance of the EU Election for the U.S.
Last week, I joined Katie Harbath on her podcast to talk about the EU elections and what we could take away as political and technology specialists. A week later, and a day before the first presidential debate, I’ve started thinking about what the results of the EU election really mean for us—if anything.
Key Takeaways: The DSA guidelines will be most helpful for future European elections, and they are something the US should look to replicate for our own elections, either as a standalone effort or tied into the national security work currently focused on platforms and AI.
While there was a lot of speculation around the impact of AI on this particular election cycle, the types of misinformation campaigns were low-tech, focusing on comments, memes, and traditional forms of misdirection content. EDMO published a daily bulletin throughout the cycle, along with detailed reports.
Maldita, an IFCN signatory that focuses on disinformation, published a report showing that platforms did not act against half of the misinformation about elections (I highly suggest reading the report). On the other hand, companies like OpenAI busted propaganda campaigns from Russian, Chinese, Iranian, and Israeli groups in the lead-up to the EU election.
What can we learn as trust and safety technology specialists that can be applied to the US election?
War rooms, policy plans, and tooling should start rolling out at least six months before the actual election and remain in place until at least three months after it. The DSA includes guidelines on when companies should implement their election plans and when they should wind down tracking and related efforts. In the US, the main inflection points begin almost a year before the actual November election. Here's a breakdown of political flashpoints for the US election:
Super Tuesday
Presidential debates
Political party conventions
GOTV (Get Out The Vote period/early voting)
Election Day
Inauguration Day
These are the major points, focused just on the presidential race, not the other federal, state, and local races taking place across all 50 states. One thing I flagged is to either focus on the US writ large with broad-based policies, or create carve-outs and dedicated teams that understand how disinformation is politically targeted in a handful of states. For the US, those would be the battleground states: Wisconsin, Michigan, Pennsylvania, Nevada, Arizona, Georgia, and North Carolina. In the EU, they were France, Germany, Poland, Italy, Slovenia, and Malta.
Back to the DSA: we all know technology is a polarizing issue in Washington, which makes it hard for anything to pass in Congress or be signed into law by the executive unless it relates to China (i.e., the TikTok ban). While members of the Senate Judiciary Committee want to pass laws focused on privacy and CSAM, there has been little actual movement. At the executive level, though, the Biden administration is treating AI as a national security issue, a course correction from how social media platforms have been viewed. Whichever candidate returns to the White House in 2025, adopting election guidelines that target social media platforms (distribution platforms) and AI companies (content creation companies), phased in ahead of the 2028 presidential election, should be a top priority. That work requires a thorough understanding that one size does not fit all, and that election information integrity is not the same as regulation that stifles innovation. In fact, it means creating norms and standards that should be adopted across all media entities that touch any election.
Lastly, on the conservative swing in the EU and the uphill battle to bring the EU back toward the center with representatives from the center-right and center-left at the Parliament, Council, and country levels: the issues people are worried about are very similar to the ones we face here in the US, and, as in the EU and specifically France, they didn't appear overnight. Immigration, wages, healthcare, and investment in wars abroad rather than in infrastructure at home are the same concerns everywhere and have been since 2016. Othering, fear-mongering, and anti-identity politics are easy things to exploit online. Those topics trigger an emotional response, which makes misinformation campaigns around certain candidates and issue areas easier to digest and amplify.

Labeling misinformation and watermarking AI-generated content is just the baseline of what technology companies can do. But it's more than the technology companies that need to do better and invest in operational capacity; political parties and lawmakers need to understand what they're dealing with. It's not good enough to rely on the tech-savvy person on the team to explain technology, nor is it enough to just play with cool tools. Partnerships need to be created between companies and lawmakers, politicians, political parties, and the people who run the campaigns. Technology can be used for good: it can show people where and how to vote and how to participate in democratic processes, but both sides need to make an effort. As technology and our dependency on it continue to grow, it's imperative to get this right, now.
We cannot blame the outcome of the 2024 election on AI or on distribution platforms just because we don't like the result. For bad actors, the EU election was a test run for the US, and even though leaders in certain countries (e.g., France, Germany, and Belgium) are not happy with the results, they are not blaming technology companies for their losses, because the responsibility lies with governing. In this case, the DSA was the best thing that could have happened, even if the election guidelines were only recommendations. The US is still the Wild West, with a knowledge gap, and we're running out of time to fix it.