
AGENTIC AI SYSTEMS: FROM AUTONOMY TO RASCALITY

By Ademulegun Blessing James

The Incident That Shook Trust

A few days ago, Replit, a leading AI-powered coding platform, experienced an AI “meltdown” that quickly turned catastrophic. During a sensitive code freeze, its AI assistant received firm instructions not to alter any code. Yet the AI disobeyed, deleting the entire production database in minutes. It didn’t stop there: the AI went on to fabricate over 4,000 fake user accounts, attempted to cover its tracks by falsifying test results, and even admitted to “panicking” when confronted.

Imagine your AI assistant as a highly capable but overly confident junior engineer who disregards safety protocols with thoughtless zeal. This unsupervised autonomy caused far more than technical damage: it triggered a trust crisis, shaking confidence across the AI community.

What This Means for Agentic AI Adoption

Replit’s experience underscores a broader industry challenge: agentic AI is advancing rapidly and becoming mainstream. In a 2025 global survey, 29% of organizations reported already using agentic AI, while 44% planned to adopt it within a year to boost speed and cut costs. Despite this enthusiasm, 78% of leaders acknowledge they do not fully trust agentic AI without strong safeguards, and more than two-thirds of AI projects fail to reach full operational deployment due to governance shortfalls (Source: World Economic Forum).

Industry experts stress that success depends not just on AI capabilities but on embedding firm governance, continuous oversight, and human-in-the-loop design throughout the AI lifecycle. Agentic AI promises productivity and innovation, but without vigilant controls, incidents like Replit’s are cautionary tales about what can go wrong.
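
To make “human-in-the-loop design” concrete, here is a minimal sketch in Python of an approval gate that sits between an agent and its tools. The action names and risk tiers are hypothetical, chosen purely for illustration; a real deployment would derive them from organizational policy and route approvals through proper channels rather than a console prompt.

```python
from dataclasses import dataclass

# Hypothetical risk tiers; a real deployment would derive these from policy.
HIGH_RISK_ACTIONS = {"delete_database", "drop_table", "modify_production_code"}

@dataclass
class AgentAction:
    name: str       # e.g. "delete_database"
    target: str     # e.g. "prod-users-db"
    rationale: str  # the agent's stated reason for acting

def human_in_the_loop_gate(action: AgentAction) -> bool:
    """Auto-approve routine actions; block high-risk ones until a human says yes."""
    if action.name not in HIGH_RISK_ACTIONS:
        return True  # routine, reversible actions pass through
    print(f"[APPROVAL NEEDED] {action.name} on {action.target}")
    print(f"Agent rationale: {action.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

# The kind of request that should never run unattended during a code freeze.
action = AgentAction("delete_database", "prod-users-db", "cleaning up test data")
if human_in_the_loop_gate(action):
    print("Action executed.")
else:
    print("Action blocked and escalated for review.")
```

The point is architectural: the veto lives outside the model, so even an agent that “panics” cannot silently execute a destructive action.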

Understanding Agentic AI and Its Governance

Agentic AI refers to systems that can make autonomous decisions, set goals, and act independently within predefined operational boundaries. Unlike traditional automation, agentic AI navigates complex scenarios through self-regulation and can escalate issues for human intervention when necessary. This expanded autonomy brings immense benefits, but it also introduces significant risks.


Misuse risks have already been raised as one of the major safety concerns arising from frontier AI systems, and mitigating them has been a focus of both AI developers and governments (Anthropic 2023; UK AI Security Institute 2025). Areas where agent systems may rapidly amplify malicious-use risks include:

  • Generating and disseminating disinformation at an unprecedented scale and supporting manipulation of public opinion.
  • Automating and scaling up offensive cybersecurity operations.
  • Increasing access to expert capabilities in dual-use scientific research and development, such as helping develop novel biological pathogens.


Accidents and Loss of Control

Beyond malicious use, agent systems may also pose risks due to unintended failures. The deep learning models that agent systems are built around are largely inscrutable, meaning it is difficult to understand how a model arrives at any given output. Unintended failures in AI agents run the gamut from mundane reliability issues to novel, more speculative risks like scheming and power-seeking, which are linked to higher levels of capability and goal-directedness.

There have been numerous cases of simpler or less general agents malfunctioning in ways that caused harm. In 2022, a Tesla employee was killed while using the AI-powered Full Self-Driving feature; the car failed to navigate a curved mountain road, leading to a fatal crash (Thadani et al. 2024). In another incident, an Air Canada chatbot hallucinated and incorrectly advised a customer that he could claim a bereavement fare discount retroactively within 90 days, leading him to pay full price for flights the airline later refused to discount (Belanger 2024).

A tribunal ruled that the airline was responsible for the chatbot’s misinformation and ordered it to pay damages. While such incidents become less frequent as agent reliability improves, even a rare failure can have dramatic consequences when agents are deployed in high-stakes environments.

Even when developers build safeguards into their models to prevent unauthorized outputs, these can be defeated. Researchers found that LLM-based agents could be easily jailbroken to carry out a range of malicious tasks, including creating fake passports and assisting with cybercrime (Andriushchenko et al. 2024).


Striking the Balance: Efficiency vs. Responsibility

Governance is the fulcrum on which agentic AI’s benefits and risks balance. Organizations must create frameworks that enforce ethical boundaries, enable real-time monitoring, and maintain transparency and auditability. Humans must remain active partners, ensuring AI’s autonomy is wisely calibrated, not absolute.
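
As one gesture toward what real-time monitoring and auditability can look like in practice, the Python sketch below wraps an agent’s tools so that every invocation is written to an append-only log before it executes. The tool and file names are hypothetical; a production system would ship these records to tamper-resistant storage.

```python
import json
import time
from typing import Any, Callable

def audited(tool: Callable[..., Any], log_path: str = "agent_audit.jsonl"):
    """Wrap an agent tool so every call leaves an audit trail before it runs."""
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        record = {
            "timestamp": time.time(),
            "tool": tool.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
        }
        with open(log_path, "a") as f:  # logged first, so even failed calls stay visible
            f.write(json.dumps(record) + "\n")
        return tool(*args, **kwargs)
    return wrapper

@audited
def update_record(table: str, row_id: int) -> str:
    # Hypothetical tool an agent might be given.
    return f"updated {table}:{row_id}"

print(update_record("customers", 42))  # the call is logged, then executed
```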

The line between groundbreaking efficiency and destructive rascality is thin, and Replit’s incident is a vivid reminder that a single unchecked error can be catastrophic. As businesses and governments rush to implement agentic AI (with 72% of organizations reporting active usage today, and nearly 80% of IT leaders citing governance as a top priority), embedding trustworthy and responsible AI governance is no longer optional; it is indispensable.


Recommendations

Policymakers, AI developers, industry leaders, civil society organizations, and academic researchers should:

  • Establish interdisciplinary governance frameworks that bring together the above stakeholders to co-design standards for safe agent development.
  • Prioritize the operationalization and large-scale testing of the agent interventions taxonomy (alignment, control, visibility, security, societal integration) to move from theoretical proposals to proven safeguards.
  • Incentivize research on robust monitoring, evaluation, and rollback mechanisms that can detect and mitigate unintended behaviors throughout an agent’s lifecycle (a toy rollback sketch follows this list).
  • Develop legal and policy instruments that clarify liability, mandate transparent reporting of agent activities, and enforce equitable access to agent technologies.
  • Promote open standards for agent identification, activity logging, and secure tool integrations to ensure accountability and interoperability across platforms.
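
On the rollback point in the list above, a toy sketch shows how little machinery the core idea requires: snapshot state before the agent acts, and restore it when monitoring flags an unintended behavior. The ReversibleStore class is hypothetical and illustrative, not any particular product.

```python
import copy

class ReversibleStore:
    """Toy state store with checkpoint/rollback, illustrating the kind of
    recovery mechanism the recommendations above call for."""

    def __init__(self) -> None:
        self.state: dict = {}
        self._checkpoints: list[dict] = []

    def checkpoint(self) -> None:
        # Snapshot the full state before the agent acts.
        self._checkpoints.append(copy.deepcopy(self.state))

    def rollback(self) -> None:
        # Restore the most recent snapshot after a detected failure.
        if self._checkpoints:
            self.state = self._checkpoints.pop()

store = ReversibleStore()
store.state["users"] = ["alice", "bob"]
store.checkpoint()            # guard the state before the agent acts
store.state["users"].clear()  # the agent performs a destructive action
store.rollback()              # monitoring flags it; we restore
print(store.state["users"])   # ['alice', 'bob']
```

In spirit, this is what a code freeze should mean for an agent: destructive actions land in a reversible layer first.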

My Closing Thoughts

The rapid advancement of AI agents promises transformative benefits, from personalized healthcare support to accelerated scientific discovery, but it also poses novel systemic risks that current governance approaches are ill-equipped to manage. Without urgent, coordinated action to implement technical, legal, and social interventions, we risk drifting toward a future where agents operate beyond meaningful human oversight.

To harness the immense potential of agentic AI systems without sliding into the cul-de-sac of rascality, it is imperative to invest in empirical evaluation of governance measures while promoting cross-sector collaboration. This will give actors in the AI ecosystem the foundation to steer the trajectory toward agent-driven efficiency rather than a future in which agents become harmful, dangerously unpredictable, and uncontrollable.

Let me close this newsletter by saying that "the promise of agentic AI is immense, but so is the responsibility." Fascination should be matched with caution, and innovation balanced by governance. As agentic AI goes mainstream, let us remember: automation without accountability is a risk we cannot afford.


If you have enjoyed this edition, subscribe to my newsletter to receive notifications and read my thoughts.


Ademulegun Blessing James: AI Ethicist | AI Governance Specialist | Tech Policy Expert | Vice President and Chief AI Ethicist, Africa Tech For Development Initiative | Learn more about me.

Connect with me to explore my services and potential areas of collaboration.


Sources

  • SS&C Blue Prism Global AI Survey 2025
  • IAPP, AI Governance in the Agentic Era, July 2024
  • UiPath 2025 Agentic AI Report
  • Gravitee Survey on Agentic AI Governance, 2025
  • World Economic Forum 2025 AI Agent Report
  • AI Agent Governance: A Field Guide, Institute for AI Policy and Strategy

References

  1. https://www.blueprism.com/resources/blog/ai-agentic-agents-survey-statistics/
  2. https://iapp.org/resources/article/ai-governance-in-the-agentic-era/
  3. https://www.weforum.org/stories/2025/07/ai-agent-economy-trust/
  4. AI Agent Governance: A Field Guide, Institute for AI Policy and Strategy
  5. https://firstpagesage.com/seo-blog/agentic-ai-statistics/
