Hey, you dropped this! (Prompt and robust AI regulation)

Tech companies consistently oppose AI regulation, citing disagreement among experts over AI's risks as grounds to resist sweeping rules. Optimists like Dr. Yann LeCun, Meta's chief AI scientist, predict AI will bring unprecedented prosperity and progress.¹ Pessimists like Geoffrey Hinton, a Nobel laureate who left Google over AI safety concerns, warn it poses an "existential threat to humanity."² National security concerns over the ability to counter foreign use of AI further complicate regulatory efforts. More innovation, the reasoning goes, could mean more protection from other AI systems.

You could think of it as a new Cold War between China and the U.S. Except instead of the technological rivalries of the 20th century, the two states are racing to build the first Artificial General Intelligence (AGI) system.

While both countries' leaders may be concerned about the risk AGI poses to humanity, to each the prospect of the first system emerging in rival hands is far more alarming than the consequences of developing it themselves.

Winning the AI race means setting the standards the technology will follow, as well as claiming centuries' worth of bragging rights. It also swings the balance of power in favour of whoever wins. Getting the two in a room and agreeing to halt development for the time being would be something like negotiating mutual nuclear disarmament – I’ll throw out mine when you throw out yours. An endless flow of capital has poured into AI development in both countries, and stopping it would ultimately damage economic growth, a prospect practically unimaginable to the two sparring economic superpowers.

Regulation is also challenging because most AI companies operate in the U.S., where individual states enact their own AI laws, creating inconsistent regulatory requirements. This patchwork is partly the result of Trump’s rescission early this year of Biden's 2023 Executive Order, which mandated safety standards for AI development.³ With limited federal oversight, state-level regulation has become increasingly important.

California Governor Gavin Newsom recently addressed some of these gaps by signing the Transparency in Frontier Artificial Intelligence Act (TFAIA) on September 29, 2025.⁴ The law strengthens protections for whistle-blowers at AI companies and requires companies earning over $500 million USD in annual revenue to report how they incorporate federal and international safety standards into their practices. These regulations are critical because top AI companies such as xAI, Meta, OpenAI, Anthropic, and Perplexity are all headquartered in California, making the state's policies directly consequential.

Newsom’s bill is more specific than President Biden’s executive order, focusing on sector-specific regulation rather than long-term, rights-based risk. California's regulatory specificity reflects AI's evolution since 2023: concrete threats, including sexual deepfakes and election disinformation, have emerged, prompting targeted state-level responses. This contrasts with Biden's 2023 Executive Order, which took a broad approach in response to AI's rapid industry growth and America's dominance in AI investment and start-ups.⁵

The bill signed by Newsom, however, lacks specific provisions for AGI regulation, even though AGI poses the most significant risk to human security. The legislation omits several critical safeguards: emergency response plans in the event an AGI system achieves unprecedented autonomous control; mandatory pre-deployment risk assessments for AGI systems; and third-party auditors to verify that companies comply with regulations in practice.⁶ All three measures have been identified as essential safety protocols by experts and academics in the AI community.⁷

Despite these gaps, the bill advances AI regulation further than any other U.S. state. Most notably, it requires companies to publicly disclose catastrophic risks and how they plan to mitigate them, the most comprehensive transparency requirement in the country.⁸ It also mandates the reporting of AI models that employ deceptive techniques, protecting users who may rely on AI for critical tasks.

In contrast, Europe's AI Act, which came into force in 2024, is far more comprehensive and imposes stricter requirements. Unlike California's law, which applies only to companies earning over $500 million annually, the EU AI Act regulates AI systems across companies of all sizes based on risk classification.⁹ More significantly, the EU requires high-risk AI systems to undergo third-party conformity assessments before deployment, while California merely requires companies to disclose their risk mitigation strategies. These measures protect European consumers from AI systems developed outside the standards set by the EU. In China, foreign AI services are blocked by the Great Firewall, while domestic AI systems must pass government security assessments and comply with strict content regulations before deployment.¹⁰ However, the government's stranglehold on the process raises concerns that domestic models carry strong political bias.

Inconsistent and inadequate regulation is becoming increasingly dangerous. The threat AI poses to employment is already mounting: the World Economic Forum’s Future of Jobs Report 2025 claims that for every job AI creates over the next decade, it is set to threaten just as many, notably “white-collar, entry-level roles”.¹¹ Real people are absorbing the destabilising consequences of AI. Policymakers urgently need to establish legal frameworks that stop AI companies from developing and releasing AI systems without ample oversight. The 'AI race' encourages companies to prioritise rapid deployment over addressing legitimate safety concerns. This rush is especially dangerous as the industry races toward AGI, a technology which could emerge within the next few years and pose unprecedented risks to humanity.

The lack of ample AI regulation highlights a broader global regulatory gap: the difficulty of regulating multinational corporations (MNCs). For decades, MNCs have operated across borders with the power and capital of small countries. Their political influence alone rivals that of states, particularly in the U.S., where much of the country's global leverage depends on its tech companies. Holding MNCs accountable will only grow more critical as the race to AGI accelerates. Closing the gap between accountability and enforcement is fundamental to maintaining democracy and advancing human agency. Today, it serves as the basis for resisting a techno-feudalist society.

References

1. "Opinion: AI Destruction, Technology, and the Future." The New York Times, October 10, 2025. https://www.nytimes.com/2025/10/10/opinion/ai-destruction-technology-future.html.

2. Lauritzen, Pia. "The Biggest Existential Threat Calls for Philosophers, Not AI Experts." Forbes, June 22, 2025. https://www.forbes.com/sites/pialauritzen/2025/06/22/the-biggest-existential-threat-calls-for-philosophers-not-ai-experts/.

3. Executive Order 14110. "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." Federal Register 88, no. 210 (November 1, 2023): 75191–226. https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.

4, 6, 8. "Governor Newsom Signs SB 53, Advancing California's World-Leading Artificial Intelligence Industry." Governor of California, September 29, 2025. https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/.

5. Haag, Alex. "The State of AI Competition in Advanced Economies." FEDS Notes. Washington: Board of Governors of the Federal Reserve System, October 6, 2025. https://doi.org/10.17016/2380-7172.3930.

7. Schuett, Jonas, Noemi Dreksler, Markus Anderljung, David McCaffary, Lennart Heim, Emma Bluemke, and Ben Garfinkel. "New Survey: Broad Expert Consensus for Many AGI Safety and Governance Practices." Centre for the Governance of AI, June 5, 2023. https://www.governance.ai/analysis/broad-expert-consensus-for-many-agi-safety-and-governance-best-practices.

9. "The EU Artificial Intelligence Act." Accessed October 12, 2025. https://artificialintelligenceact.eu/.

10. Webster, Graham, ed. "How Will China's Generative AI Regulations Shape the Future? A DigiChina Forum." DigiChina, Stanford University. April 26, 2023. https://digichina.stanford.edu/work/how-will-chinas-generative-ai-regulations-shape-the-future-a-digichina-forum/.

11. "Is AI Closing the Door on Entry-Level Job Opportunities?" World Economic Forum, April 2025. https://www.weforum.org/stories/2025/04/ai-jobs-international-workers-day/.
