Misinformation: Who’s Regulating the Wicked Problem of Now? Anyone?
In 2025, the World Economic Forum named mis- and disinformation the most severe short-term global risk. Yet national governments and corporations lack a shared definition of these terms in the digital sphere, let alone a coherent strategy to combat their spread. What counts as mis- or disinformation? Ask any CEO or head of state, and you are likely to get a different answer. As globalisation continues to erode informational borders, unified regulation and legal principles governing exposure to mis- and disinformation are becoming both harder for leaders to agree upon and less appealing to technologically powerful states.
Part of some states' disinterest in regulating AI and the informational output of digital systems is the 'race' dynamic of AI innovation. Much as the nuclear arms race shaped the Cold War, states are unwilling to slow their tech industries with regulatory measures for fear that an adversary state's companies will reach 'AGI' first, a dynamic most visible between the U.S. and China. In the U.S., moreover, the stock market's current success leans heavily on the continued growth of the AI industry, a dependence that many analysts describe as an AI economic bubble.
The EU has emerged as the global leader in disinformation regulation, most notably through the 2025 European Democracy Shield and the integration of the voluntary Code of Practice on Disinformation into the Digital Services Act as a co-regulatory benchmark. These measures mark a critical evolution in digital law by formalising fact-checkers, media literacy experts, and researchers as partners within the regulatory framework for responding to informational crises. The formalised Disinformation Code also requires platforms to disclose detailed information about the content circulating on their services, increasing transparency and accountability for their operations: a significant shift, given that platforms have so far operated with minimal disclosure about the traffic on their sites. The EU approach thus serves as a case study in regional legal progress toward democratic governance of a sphere otherwise dominated by globalised private industry.
However, substantial challenges remain. The trust gap between governments and technology companies makes cooperation no small feat, to say nothing of the investment required to operationalise measures against each informational hazard. With hundreds of millions of posts published every day, any moderation effort would require a deep understanding of how users perceive online speech, symbols, context, and evolving slang, as well as the capacity to scale alongside the constant evolution of these forms of expression. Even with the EU's protective legislation, the tools needed for long-term disinformation monitoring are either inadequate in practice or absent from serious discussion between regulators and platform leaders. For these regulations to achieve lasting efficacy, policy coordination across regions would be needed to compel tech platforms to cooperate with these mechanisms, something the EU has recognised but not yet achieved.
Writing in the Harvard Law Review, Evelyn Douek proposes that a systems-thinking approach, built on proactive and continuous governance of speech administration, would be the most effective regulatory mechanism states could apply to tech platforms' operations. On this view, content moderation is a complex, nuanced system to be designed, rather than a matter of post-by-post adjudication. Such an approach would not only reduce the time moderators spend sifting through individual posts; it would also place philosophy and ethics, as specialties, at the forefront of digital operations, of the next phase of AI companies, and of global digital governance.
In this hypothetical future, the people responsible for the ethics and philosophy of AI systems would need to collaborate continuously with legislators, not just with private companies during initial development, relaying the transparency their own companies grant them about how systems operate so that regulators can regulate effectively. These roles would be the crux of protecting democracy, serving as the ethical communicators and mediators between private industry and regional governance. In an increasingly digitised world, the most central role in securing our protection within digital operations will belong to the humans philosophising about them.
As I see it, the most effective way forward, a step beyond current EU legislation, is regional and/or national legislation mandating the employment of ethics and transparency professionals as 'ethics envoys' within each tech giant, building upon the stakeholder model established in the Digital Services Act. This specialty in ethics and transparency could become the next generation of essential innovation and focus in the tech industry.
Governments will continue to legislate, and companies will continue to operate. Unless governments are content to let tech companies dominate the world order through this obvious power asymmetry, they must be willing to collaborate with other countries on regulation, with ethics at the forefront of any legislation designed to check private industry. A specialised field of digital governance, underinvested today in its direct state-to-private role, will be the foundation of operating in a digital world. Much as governments established treasury departments to regulate financial systems and negotiated trade agreements to govern cross-border commerce, states must now build digital governance departments and international coordination mechanisms to regulate information systems and protect democratic institutions.
The task that should be inherent to each of our individual interests, reinventing 19th-century governance structures for 21st-century information systems, is diluted by technology companies' accumulation of power. We must build agility and flexibility into our policies, both geographically and institutionally, so they can evolve in parallel with the technology itself. While centralised state rulings have their place, long-term collective interests are best served by flexible, systems-thinking, transparent platform operations, with ethical supervisors enshrined in regulatory protections. Whatever regions or states decide, regulatory models with these characteristics will be essential to the integrity of global information systems.
As we enter a period in which state and private relationships have never been more entangled, and corporations continue to accumulate influence, we must remember that private actors have never before held informational and monetary power on this scale. If we remain committed to maintaining the integrity of democratic operations and institutions in each state and region, we cannot keep operating as we did under very different circumstances. We must rise to this specific moment, together. Otherwise we risk state information being wholly subverted by the influence of private platforms.
References
1. World Economic Forum. (2025). Global risks report 2025. https://www.weforum.org/publications/global-risks-report-2025/digest/
2. Cassar, D. (2023). The misinformation threat: A techno-governance approach for curbing the fake news of tomorrow. Digital Government: Research and Practice, 4(4), Article 24. https://doi.org/10.1145/3631615
3. Higashi, H. (2023, January 20). Right capacity and will in content moderation: A case for user empowerment. Tech Policy Press. https://www.techpolicy.press/right-capacity-and-will-in-content-moderation-a-case-for-user-empowerment/
4. Douek, E. (2022). Content moderation as systems thinking. Harvard Law Review, 136. https://harvardlawreview.org/print/vol-136/content-moderation-as-systems-thinking/