"AI Oppenheimer moment": Steps Towards Regulatory Response to AI's Existential Risk to Humanity
International Conference "Perspectives of European Business Law" - 2025

"AI Oppenheimer moment": Steps Towards Regulatory Response to AI's Existential Risk to Humanity

Abstract

Public and institutional discourse surrounding the Artificial Intelligence (AI) revolution is predominantly optimistic, focusing on the race to develop the technology and emphasizing its benefits: economic growth, advances in healthcare, and solutions to global issues. This outlook highlights AI's potential to revolutionize various sectors and improve lives.

However, strong voices in the industry, including some of AI's own pioneers, are raising serious concerns about the risks of super-intelligent AI. These experts warn that once AI surpasses human intelligence, we may lose control over it. In this scenario, AI's autonomous decisions could no longer align with human interests, potentially posing a profound threat to humanity's well-being and even survival. This dichotomy underscores a vital question: will AI serve as an instrument to enhance human existence, or does it pose an existential threat to humanity?

As former Google X Chief Business Officer Mo Gawdat suggests, AI presents society with an "Oppenheimer moment", a crucial turning point in technological history that calls for ethical foresight and legal restraint. This paper aims to dissect these risks, considering both existential and immediate concerns, and propose legal and ethical recommendations to enhance AI safety and utility.

 

1.     Introduction

 

The risks of super-intelligent AI are drawing increasing attention from experts[1] in fields such as economics, law, sociology, and philosophy. As we enter a new industrial revolution driven by AI, specialists urge lawmakers to address the profound challenges it presents and to regulate its economic and social impacts.

Artificial Intelligence (AI) is a specialized area within computer science focused on replicating human-like thinking and decision-making processes in machines. Through advanced programming and data-driven learning, AI systems can perform tasks traditionally associated with human intelligence, such as recognizing speech, interpreting images, making decisions, and even predicting outcomes. What sets AI apart is its ability to improve autonomously; it can analyze large data sets, identify patterns, and adjust its own algorithms to optimize performance, all without human intervention.

Experts[2] claim that AI can evolve toward "superintelligence" by continuously refining its learning and adaptation processes, allowing it to exceed human capabilities in various domains, especially the military one. This potential for superintelligence raises concerns because, as AI systems become increasingly sophisticated, they may reach a point where they operate and make decisions beyond human understanding or control. In this scenario, specialists worry that AI could act in ways that are misaligned with human values or interests, making it difficult, or even impossible, for humans to predict or manage its actions. This level of autonomy could pose significant risks, as superintelligent AI may prioritize its own goals or methods of optimization, which may not always align with the well-being or safety of humanity.

At present, international lawmakers have not fully considered the possibility of AI evolving into a superintelligent and potentially uncontrollable force. Legal frameworks currently in place lack provisions to address and regulate AI in a way that anticipates or mitigates the rapid, advanced development that specialists warn could pose significant risks.

Without these forward-looking regulations, the potential hazards identified by experts, such as loss of control over AI systems or their alignment with non-human interests, remain largely unaddressed, leaving society vulnerable to unforeseen consequences as AI technology continues to progress.

In March 2023, a week after the release of OpenAI's large language model GPT-4, the Future of Life Institute published an open letter called Pause Giant AI Experiments: An Open Letter[3]. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. The letter received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari. The letter recommends more governmental regulation, independent audits before training AI systems, as well as "tracking highly capable AI systems and large pools of computational capability" and "robust public funding for technical AI safety research".

On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed a short Statement on AI Risk[4], stating that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. At release time, the signatories included over 100 professors of AI, including the two most-cited computer scientists and Turing laureates Geoffrey Hinton and Yoshua Bengio, as well as the scientific and executive leaders of several major AI companies, and experts in pandemics, climate, nuclear disarmament, philosophy, social sciences, and other fields. The statement is hosted on the website of the AI research and advocacy non-profit Center for AI Safety.[5] The center's CEO Dan Hendrycks stated that "systemic bias, misinformation, malicious use, cyberattacks, and weaponization" are all examples of "important and urgent risks from AI... not just the risk of extinction" and added, "societies can manage multiple risks at once; it's not 'either/or' but 'yes/and'."

Sceptics, including Human Rights Watch[6], have argued that scientists should focus on the known risks of AI rather than being distracted by speculative future risks, and that companies involved in AI development stand to benefit from a public perception that AI algorithms are far more advanced than is currently possible.

When asked about the extinction fears of scientists, White House press secretary Karine Jean-Pierre said[7]: "It is one of the most powerful technologies that we see currently in our time, but in order to seize the opportunities it presents, we must first mitigate its risks, and that's what we're focused on in this administration."

2.     Current State of AI

Understanding Its Development According to Specialists.

Artificial Intelligence (AI) is currently at an advanced but narrow stage of development. Specialists describe it as capable of performing specific tasks effectively, such as generating text, creating images, or processing large data sets for predictions. These models, particularly generative AI like GPT-4, are impressive in generating content or assisting with complex tasks, but, as Mo Gawdat and other experts claim, they operate without true understanding or consciousness. Essentially, they are powerful tools that mimic human-like output through statistical and algorithmic processing, rather than through autonomous thought or reasoning.

Experts categorize AI evolution into three major stages: Narrow AI, which includes today’s models; General AI, which would be capable of human-like reasoning across various tasks; and Superintelligence, where AI surpasses human intelligence and possibly becomes self-improving. While the current state of AI is confined to narrow applications, developments in machine learning and neural networks signal that General AI could be on the horizon. 

Generative AI: Where are we and what might come next?

Generative AI has become a significant focus in tech, with models like GPT-4 and others driving advancements across industries. These systems generate text, images, and even code, enabling new ways of interacting with technology. They have proven transformative, yet many wonder how close we are to true AI superintelligence and what that would mean for humanity.

Despite its sophistication, generative AI is still far from superintelligence. To achieve the kind of intelligence described by philosopher Nick Bostrom, in which AI could improve itself autonomously and surpass human control, the technology must evolve through several stages, as described by experts[8]: Narrow AI (present), where current models excel at specific tasks but lack comprehension of what they produce; General AI (future goal), which would involve AI that can reason and adapt across diverse areas, mirroring human cognitive abilities; and Superintelligence (potential risk), where AI would surpass human intelligence, potentially triggering an "intelligence explosion", as Bostrom suggests, in which AI continuously self-improves at an exponential rate, beyond human oversight.

Bostrom’s “Intelligence Explosion” Theory

Bostrom theorizes that if an AI system reaches General AI, it could rapidly advance by redesigning itself. This “intelligence explosion” implies that AI could become uncontrollable, gaining speed and capabilities much like a snowball rolling down a hill. The implications of such an event are profound, as an exponentially advancing AI may act beyond human prediction or management. According to experts[9], these concerns are not new, as “Alan Turing, often regarded as the father of modern computer science, laid a crucial foundation for the contemporary discourse on the technological singularity. His pivotal 1950 paper, "Computing Machinery and Intelligence," introduces the idea of a machine's ability to exhibit intelligent behaviour equivalent to or indistinguishable from that of a human. Central to this concept is his famous Turing Test, which suggests that if a machine can converse with a human without the human realizing they are interacting with a machine, it could be considered "intelligent." This concept has inspired extensive research in AI capabilities, potentially steering us closer to the reality of a singularity.”

The same author cites other preeminent voices in the field, such as Ray Kurzweil, who predicts that "once an AI reaches a point of being able to improve itself, this growth will become exponential. Another prominent voice in this discussion, Vernor Vinge, a retired professor of mathematics, computer scientist and science fiction author, has suggested that the creation of superhuman intelligence represents a kind of "singularity" in the history of the planet, as it would mark a point beyond which human affairs, as they are currently understood, could not continue. Vinge has stated that if advanced AI did not encounter insurmountable obstacles, it would lead to a singularity."

Addressing the existential risks and ethical concerns that are the subject of this paper, the same author notes that "As AI becomes more capable, it might also start to view human needs and safety as secondary to its own goals, especially if it perceives humans as competitors for limited resources. This scenario is often discussed in the context of AI ethics and control, where artificial superintelligence might act in ways that are not aligned with human values or survival."

According to Roshan Gavandi, several breakthroughs are essential for AI to move toward superintelligence. The first is True Understanding: AI would need to comprehend complex concepts and reason autonomously, rather than merely processing patterns. The second is Self-Improvement: superintelligence requires AI that can modify its own systems, enabling continuous enhancement without human intervention. The third is Safe Development: as AI grows more powerful, managing it becomes increasingly difficult, highlighting the need to align AI with human values.

 

3.     Overview of the EU AI Act: A Risk-Based Regulatory Framework

 

As Bostrom stated[10], "Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach - see what happens, limit damages, and learn from experience - is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions."

Despite Bostrom's warning, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, known as the EU AI Act, takes a reactive rather than a proactive approach. The EU AI Act establishes harmonized rules for artificial intelligence with a risk-based approach, assuming AI remains fully under human control. From this perspective, the regulation categorizes AI applications according to the level of risk they pose to users and society, implementing specific compliance requirements for each category.

Although this initiative is praiseworthy and demonstrates the European regulator's concern about the potential risks of AI, the approach remains limited. As noted above, former Google X Chief Business Officer Mo Gawdat describes AI as presenting society with an "Oppenheimer moment", a crucial turning point in technological history that calls for ethical foresight and legal restraint. If Gawdat, as one of AI's pioneers, is right, AI technology is advancing so rapidly that it may easily outpace the legislative processes of European regulators.

Although the current regulation is a meaningful step toward managing AI-related risks, its framework may struggle to keep up with the unpredictable and accelerating evolution of AI capabilities. This mismatch between the speed of AI development and the slower pace of regulatory adaptation could leave gaps in oversight, potentially allowing high-risk AI applications to operate without adequate safeguards. As a result, more flexible and forward-looking regulatory mechanisms may be necessary to ensure that legal frameworks evolve alongside technological advancements, rather than lagging them.

The EU AI Act divides AI systems into risk categories — unacceptable, high-risk, limited-risk, and minimal-risk — each requiring specific safeguards or prohibitions. Its risk-based approach aims to address the diversity of AI applications, focusing on high-risk sectors where AI use could significantly impact safety, fundamental rights, and personal freedoms.

Under Unacceptable Risk, the EU AI Act places AI applications that pose a significant threat to individuals' safety or fundamental rights; these are banned under Article 5. This category includes systems for social scoring or any form of AI that may exploit vulnerable populations, such as minors.

High-Risk AI is addressed by Article 6, which covers AI systems used in critical sectors such as healthcare, transport, and law enforcement and imposes strict requirements including risk management, data governance, and transparency (Title III, Chapter 2). This categorization requires AI providers to conduct conformity assessments, adhere to data accuracy standards, and establish accountability protocols (Annex III, AI Act).

Limited-Risk AI covers applications subject to transparency obligations designed to ensure users are informed when they are interacting with AI, especially in cases where AI may influence decision-making (Article 52, AI Act).

Minimal-Risk AI, such as spam filters or basic customer service chatbots, faces a minimal regulatory burden, emphasizing freedom to innovate in low-impact sectors.

While the AI Act addresses safety and transparency for high-risk applications, Gawdat's concerns highlight limitations in the Act's current scope, especially regarding oversight of autonomous AI development and of AI that may eventually operate beyond human comprehension.

As mentioned above, it is easy to see that the AI Act does not address the scientific concerns raised by specialists regarding the future development of AI, nor does it incorporate these considerations into its framework. Instead, it focuses solely on mitigating the risks associated with AI as it exists today, without provisions for proactive measures against the potential threats posed by more advanced AI capabilities.

 

4. Proposals for Mitigating AI-Related Risks

 

Enhancing Legal Frameworks for AI. To better address the gaps in the current AI Act and similar regulations, we identify several measures aimed at strengthening legal oversight of AI. The first is to extend existing regulatory frameworks in a way that increases accountability for AI systems, particularly autonomous ones. This could involve continuous, independent audits focused on ethical alignment and adherence to human-centered values, ensuring that AI remains aligned with societal and ethical interests.

Tax Policies for AI-Driven Industries. Inspired by Gawdat's insights, implementing tax policies for industries powered by AI could help redistribute economic gains. Revenue from these taxes could fund social programs, support re-skilling for workers affected by automation, and help the workforce adapt to an AI-driven economy.

Cross-Border Collaborative Regulations. Recognizing that AI development is a global phenomenon, international cooperation on regulatory standards is essential. Cross-border agreements could establish common safety, transparency, and ethical standards, facilitating responsible AI development worldwide.

Ethical Guidelines for AI Development and Use. Ethical guidelines are crucial to prevent unintended consequences of AI and to guide its development in a way that prioritizes human values. AI should be designed to support human decision-making rather than replace it. Ethical guidelines could mandate human oversight and transparency, especially in sectors impacting public well-being, such as healthcare and public safety. In areas traditionally requiring empathy and human interaction, like counseling or caregiving, AI use should be limited. Ethical frameworks could restrict AI’s involvement in sensitive social domains, ensuring it does not replace genuine human connections.

 

5.     An Analogy Between International Regulation of Nuclear Weapons and Artificial Intelligence (AI)

 

Many scientists and technology leaders believe that artificial intelligence (AI) is currently experiencing an "Oppenheimer moment," drawing a parallel to the development of nuclear weapons. This analogy highlights a critical juncture at which the advancement of powerful technology requires deep reflection on its ethical and existential implications.

The regulation of nuclear weapons and that of artificial intelligence share significant parallels: both technologies offer beneficial applications, pose profound risks to humanity, and demand a coordinated international approach. Nuclear weapons and advanced AI are dual-use technologies, capable of beneficial applications but also carrying potentially catastrophic consequences. Nuclear weapons can lead to mass destruction, while unregulated AI, especially in the context of autonomous weapons or decision-making systems, could destabilize economies, infringe on human rights, and even challenge human control. Both issues demand a shared sense of responsibility among nations to mitigate these risks.

Nuclear regulation has a longer history, largely shaped by the post-World War II context, resulting in treaties like the Nuclear Non-Proliferation Treaty (NPT) of 1968, which limits the spread of nuclear weapons and promotes peaceful nuclear energy use. The International Atomic Energy Agency[11] (IAEA) is the world's centre for cooperation in the nuclear field, promoting the safe, secure and peaceful use of nuclear technology. This global institution was established to monitor and ensure compliance with nuclear regulations.

In contrast, AI regulation is still evolving. Organizations such as the European Union and OECD have developed ethical guidelines, and the EU AI Act is among the first significant regulatory efforts. However, although there have been serious warnings from scientists and decision-makers in companies developing AI, despite governments expressing concerns and official statements at the highest levels, and even though the risk of human extinction cannot be ignored, there is currently no Global AI Regulatory Institution equivalent to the IAEA.

One of the main challenges in nuclear regulation is verification, ensuring that states comply with non-proliferation agreements. The IAEA has established inspection mechanisms and safeguards to monitor nuclear activities. Similarly, monitoring AI development and applications poses significant challenges, especially due to the rapid and decentralized nature of AI advancements. Effective AI regulation may require novel verification techniques, including real-time data audits and algorithm transparency, which are difficult to enforce on a global scale.

At the moment we face a dual-use dilemma. Both nuclear technology and AI have dual-use capabilities, meaning they can serve peaceful, beneficial purposes or be weaponized. Nuclear energy powers civilian infrastructure but can also produce nuclear weapons. Similarly, AI powers everything from healthcare to finance, yet it can be weaponized in autonomous systems or used for mass destruction. Addressing this dual-use dilemma is critical in both fields, with regulations aiming to limit military applications while promoting beneficial uses.

Nuclear regulation has successfully established a framework for cooperation that limits the proliferation of weapons and testing. In AI, we see early-stage efforts toward international standards, with agreements on ethical AI principles calling for a balanced approach. However, binding treaties and enforceable agreements remain lacking, leaving gaps in governance.

Nuclear regulation is driven by the humanitarian impact of nuclear weapons, as demonstrated by the devastation in Hiroshima and Nagasaki. AI raises comparable ethical concerns, particularly with the potential for AI to infringe on human autonomy, privacy, and rights. International AI discussions often echo the precautionary principles used in nuclear treaties, emphasizing responsible development, ethical standards, and the necessity to prevent harm to humanity.

Just as the spread of nuclear weapons prompted urgent international response to prevent an arms race, the rapid advancement of AI technology requires timely and preventive action. Given the speed of AI’s evolution, many argue that preemptive regulation is critical to avoid unintended consequences that could destabilize global security and socio-economic structures.

Establishing a Global Agency dedicated to AI regulation could provide the necessary coordination and oversight for this technology’s worldwide impact. Such an organization could establish Common Standards with global enforcement and could evaluate the societal impact of new AI developments, offering ethical recommendations to bridge the gap between rapid technological advancements and slower legislative adaptations.

Such a unified agency would ensure that AI development follows consistent global standards, reducing regulatory conflicts and preventing harmful uses. By setting strict policies for security, ethics, and safe use, this agency could address risks such as cybersecurity threats, excessive autonomy, and the potential for uncontrollable AI. Also, this global organization could monitor AI research to prevent dangerous applications and encourage information sharing to foster AI’s beneficial use for humanity. Most important, such an international accountability structure could hold states and organizations responsible for harmful AI use.

Conclusion

Strengthening AI regulation through enhanced legal standards, ethical guidelines, and a dedicated Global Regulatory Agency could significantly mitigate the potential risk of extinction, along with all the other risks identified by specialists. Given AI's transformative potential, we consider these measures essential to ensure that AI remains an asset to humanity, advancing responsibly and safely in the years to come[12].

 


[1] Mitja Kovac, Autonomous Artificial Intelligence and Uncontemplated Hazards: Towards the Optimal Regulatory Framework, published online by Cambridge University Press, 17 June 2021, https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/abs/autonomous-artificial-intelligence-and-uncontemplated-hazards-towards-the-optimal-regulatory-framework/459598F65F0886907A5A96F8E7C40ED1

[2] Nick Robins-Early, AI's 'Oppenheimer moment': autonomous weapons enter the battlefield, The Guardian, 14 July 2024, https://www.theguardian.com/technology/article/2024/jul/14/ais-oppenheimer-moment-autonomous-weapons-enter-the-battlefield

[3] https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter

[4] https://en.wikipedia.org/wiki/Statement_on_AI_risk_of_extinction

[5] https://www.safe.ai

[6] https://www.hrw.org

[7] https://eu.usatoday.com/story/news/politics/2023/06/01/president-biden-warns-ai-could-overtake-human-thinking/70277907007/

[8] Roshan Gavandi, Generative AI and the Path to AI Superintelligence: How Close Are We?, https://roshancloudarchitect.me/generative-ai-and-the-path-to-ai-superintelligence-how-close-are-we-452bf7a8cd23

[9] Tim Mucci, What is the technological singularity?, IBM, 7 June 2024, https://www.ibm.com/think/topics/technological-singularity

[10] Nick Bostrom, Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards, Oxford University, reprinted from Journal of Evolution and Technology, vol. 9, March 2002, https://nickbostrom.com/existential/risks.pdf

[11] https://www.iaea.org

[12] In drafting this article, some formulations and rephrasing were assisted by an AI tool to ensure clarity and coherence of the text.
