Securing America's Artificial Intelligence (AI) Future: Navigating the Two Sides of the AI Security Coin

Securing America's Artificial Intelligence (AI) Future: Navigating the Two Sides of the AI Security Coin

Let's Talk About America's AI Ambitions 

You've probably heard about America's new AI Action Plan by now. It's a big deal, and for good reason. As we race to lead the world in artificial intelligence (AI), we're facing a fascinating challenge that keeps security folks like me up at night: AI is both our strongest shield and potentially our adversaries' sharpest sword. As we highlighted in our 2023 "Darker Side of AI" perspective, understanding both the transformative benefits and inherent vulnerabilities within the AI ecosystem remains crucial to responsible implementation. 

At Accenture Federal Services, we see this dual-use nature of AI playing out every day across government. The same AI that can spot a cyberattack in milliseconds can also be turned around to probe for weaknesses in our systems. It's like discovering fire – incredibly useful but demands respect and careful handling. 

This reality creates a whole new security landscape. Winning the global AI race isn't just about who builds the most powerful models or the biggest data centers. It's about who masters both the protective capabilities and innovation potential of this technology. The Action Plan gets this right – we need "robust defensive posture" while leveraging AI's power as an "excellent defensive tool."

What We're Up Against 

Let me walk you through what we're seeing on the frontlines of AI security. It's complex, but understanding these challenges is the first step toward addressing them: 

  1. Our AI infrastructure is vulnerable in new ways. The Plan's "Build, Baby, Build" push for new data centers and computing power is essential, but each new facility becomes a potential target. Adversaries are weaponizing AI to find weak spots and exploit our critical systems in minutes, rather than days. Meanwhile, our defensive AI systems are processing trillions of security signals daily – volumes no human team could possibly handle. It's truly an AI-versus-AI battle brewing. 
  2. Supply chains have become security nightmares. A typical AI chip has components from over 20 countries. Think about that for a second – 20 different opportunities for something to go wrong. Foreign entities are increasing investment in U.S. AI companies, raising new supply chain and foreign influence concerns. The tools we're building to map and mitigate these risks? They're AI-powered too – showing again how this technology cuts both ways.  
  3. Nation-state threats have gone algorithmic. Adversarial AI algorithms can be trained to recognize what is considered "normal" behavior by anti-virus products, so that malicious code can be deployed and co-exist, undetected. There have been successful attempts to "poison" foundational AI models, creating biased outputs that persisted through multiple updates. Our defensive tools are advancing too, with AI systems that can predict and identify attacker infrastructure, including domains and command and control servers, before it's even activated – but it's a constant arms race.  
  4. We don't have nearly enough AI security talent. There are over 750,000 empty cybersecurity chairs across the country, with AI security specialists being the unicorns everyone's hunting. Only about one in ten security pros feel confident securing AI systems. The good news? Organizations that successfully deploy AI security tools are seeing individual analysts handle over four times more security events than before – the technology is helping us multiply our human expertise. 
  5. The attack playbook is evolving daily. Researchers have shown they can manipulate large language models through carefully crafted prompts, bypassing security in over 80% of tests. There have been attacks that extracted private health data from models that were never supposed to reveal their training information. The silver lining? These same techniques are being used by defenders to build better "red teams" that continuously test and improve our defenses.  

What We Need to Do About It 

Based on our experience supporting government agencies through these challenges, here's what we believe needs to happen next: 

  1. Make Security Part of the Foundation. The Action Plan talks about "build, baby, build" for AI infrastructure."  I'd add: "secure, baby, secure." We need security architectures specifically designed for AI environments, recognizing that the same AI capabilities powering these systems can be used to attack them. The OWASP AI Security and Privacy Guide provides valuable insights into the AI security landscape, offering a comprehensive framework for identifying and mitigating risks associated with AI systems. Following OWASP Top 10 for Large Language Model applications, we should embrace secure-by-design principles when architecting AI systems and implement continuous Red Teaming, along with supply chain verification that uses AI to detect potentially compromised components. Let's create testing environments that essentially use AI to secure AI – fighting fire with fire, but in a good way.
  2. Create an AI Security Nerve Center. We need to reinvent the Security Operations Center where experts use AI to defend other AI systems – because the most effective defense against AI-powered attacks is AI-powered defense. This should include forensic capabilities specifically designed to analyze compromised AI models and data.Cross-functional teams with both cybersecurity and AI expertise need to continuously adapt to evolving attack techniques. 
  3. Team Up Across Government and Industry. The proposed AI Information Sharing Center is a great start, but let's structure it with focused working groups for different parts of the AI ecosystem: computing infrastructure, data sources, and models. We should implement automated threat sharing that works at machine speed. And let's run joint exercises that use cutting-edge offensive AI to test our defenses – we need to know our weaknesses before our adversaries do. 
  4. Set Standards That Make Sense for AI. Traditional security standards just don't fully address AI's unique challenges. We need certification programs that incorporate all security principles. Let's develop specific standards for machine learning systems that protect against the very techniques AI can be used to deploy. The NIST Risk Management Framework (RMF) for AI provides an excellent foundation that Accenture has helped enhance with our RMF AI Playbook recommendations. And procurement guidelines should ensure government systems leverage AI's defensive capabilities while managing its risks.
  5. Grow the Right Talent. We need specialized AI security career tracks in government agencies that develop expertise in both data science and cybersecurity. Cross-training programs should help traditional security folks understand AI, and AI experts understand security. Partnerships with universities should focus specifically on the dual-use nature of these technologies – not just the technical skills but the strategic thinking required.

Why Accenture Is Your Best Partner for This Journey 

I don't want to brag, but there's a reason so many federal agencies trust us with their most sensitive AI security challenges. We've built something special here:  

  1. Our AI security practice is the world's largest, with thousands of specialists who've secured some of the most sensitive AI deployments across defense, intelligence, and civilian agencies.
  2. We're not just theorizing about AI implementation – we've delivered over 350 AI projects across more than 50 federal agencies. This hands-on experience spans everything from securing language models for classified environments to designing secure-by-default AI infrastructure for mission-critical systems.
  3. Our global supply chain security capabilities give us visibility into the complex networks supporting AI infrastructure. Our platform monitors millions of suppliers worldwide, using AI to spot potential security risks, foreign ownership concerns, and component vulnerabilities before they become problems.
  4. Our research team includes 1,400+ AI specialists working at the cutting edge of technology, with seven dedicated AI security centers worldwide. This investment helps us anticipate emerging threats and develop countermeasures before they can be exploited.
  5. We've built partnerships with every major AI technology provider that matters to the Action Plan, giving us unique insights into securing the end-to-end AI stack. And we've got the operational chops to back it up, currently managing security for over 30 federal agencies.
  6. To address the talent gap, we're investing over $1 billion annually in developing our people's skills, directly addressing the shortage highlighted in the Plan. 

We don't just advise on AI security—we live it every day across hundreds of federal missions. When you partner with us, you're getting a team that understands both the promise and the peril of AI's dual-use nature.  

The Bottom Line 

The future of AI in government is bright, but it must be built on a foundation of uncompromising and adaptive cybersecurity. The global race for AI dominance is ultimately about secure AI dominance. As the Action Plan says, "Winning the AI race will usher in a new golden age" for America – but only if we build security into every layer of our AI journey. 

We're ready to roll up our sleeves and help make this vision a reality. By understanding and addressing the dual-use nature of AI in the cyber domain, we can ensure America not only wins this race but stays in the lead for generations to come. 

Stay Connected 

Contact us today for a personalized demo of our AI SOC capabilities that empower cyber defenders in this evolving threat landscape and follow us on LinkedIn for continuous updates on AI and cybersecurity innovations. Join us at the Billington Cybersecurity Summit in September to learn more about the Pace of Innovation or attend our session at the National Cyber Summit in Huntsville where we'll explore the Dual Nature of AI within the cyber domain.  

Stay innovative and secure.

Elaine Turville

Connector. Mission Problem Solver. Gratitude Spreader

1w

Great thoughts all around anchored to the change imperative and how skills play into our future!

Like
Reply

Sounds really insightful, Drew. Thank you for sharing!!!

Like
Reply
Joe Kim

EVP Engineering & CTO Squadra Solutions

1w

Great point about supply chain security which should also include c-scrm for both hardware components and software components

Dan Matlick

Principal Director at Accenture Federal Services, Managed Security Services Lead, MxDR for Government Lead

1w

Spot on, Drew.

Like
Reply

To view or add a comment, sign in

Others also viewed

Explore topics