The Next Decade of Cyber Sovereignty
Introduction
Cyber sovereignty – the ability of a nation or organization to control and secure its digital destiny – is emerging as a defining strategic priority. As we look to the next decade, five fault lines threaten to undermine this sovereignty: The Illusion of Coverage, Security Staff Cuts, AI and Vassal States, Citizen Discontent, and Warfighting Governance. Addressing these fault lines will require decisive action from CISOs, policymakers, strategists, and cyber defense leaders. Each fault line carries global implications, with particular resonance in the United States, and each demands a blend of tactical insight and strategic foresight to navigate. We explore each in turn, followed by a bonus section on the State-Level Vanguard leading many of these efforts. The tone is direct and sovereignty-focused – because in cyberspace, independence must be earned, not assumed.
1. The Illusion of Coverage
Despite rising investment in cybersecurity and risk management, organizations and governments alike often suffer from an illusion of coverage. This is the false confidence that existing measures – insurance policies, security tools, compliance frameworks – fully protect them from cyber threats. In reality, gaps and exclusions lurk beneath the surface, leaving critical blind spots.
Cyber Insurance Gaps: Many businesses have turned to cyber insurance as a safety net, but coverage is far from comprehensive. Modern cyber policies are typically modular and riddled with exclusions. A striking example is the industry’s move to exclude state-backed “cyber war” incidents: Lloyd’s of London mandated in 2023 that insurers exclude nation-state attacks from standard cyber policies, causing confusion and prompting buyers to question the value of their coverage. In effect, companies learned that what they assumed was covered might never have been covered at all unless tested in court. Furthermore, insurers are raising the bar for coverage – the average self-insured retention (deductible) for small and midsize enterprises has surged by nearly 400%, sharply increasing out-of-pocket costs in any incident. Insurers also explicitly exclude intangible losses such as reputational damage. The result is a financial safety net full of holes: organizations may feel insured but remain exposed to catastrophic risk. Notably, even the cyber insurance market leader reports that 87% of global managers say their company is not adequately protected against cyber threats, underscoring a broad recognition that insurance alone is insufficient.
Tooling and Compliance Blind Spots: A similar illusion comes from an over-reliance on security tools and compliance checklists. Enterprises deploy an array of point solutions – firewalls, endpoint monitors, scanners – and often assume their attack surface is covered. In practice, fragmented tools can create a false sense of security, with siloed operations leaving “blind spots that bad actors can quickly exploit”. The gaps between tools, where no one has full visibility, are where attackers thrive. A network may pass a compliance audit yet still fall victim to a novel ransomware technique the checklist never contemplated. Leadership may take comfort in ticking boxes for frameworks (ISO, NIST, GDPR, etc.), but compliance is not security. In high-profile breaches, it’s common to find the victim had been certified or “secure” on paper, only to discover that security controls were illusory in practice. As one industry expert put it, many organizations are victims of security theater – they have the appearance of protection without the substance.
Case in Point – NotPetya and the Fallout: The 2017 NotPetya malware attack offers a cautionary tale. It caused billions in damage globally, and companies like Merck and Mondelez initially saw their insurance claims denied under “act of war” clauses. Years of legal battles ensued to clarify whether a cyberattack by a nation-state (Russia, in this case) constituted war in the traditional sense. Courts eventually sided with the insured in some cases, forcing payouts, but the saga revealed a painful truth: businesses that believed they were covered discovered their coverage was an illusion subject to interpretation. The Lloyd’s exclusion mentioned above was a direct response to this ambiguity – going forward, insurers want no doubt that state-sponsored attacks are not covered unless a special policy is in place. For executives and policymakers, the lesson is clear: read the fine print and assume nothing. True cyber sovereignty means preparing for worst-case scenarios on your own terms, not assuming an insurer or vendor will save the day.
To pierce the illusion of coverage, organizations must adopt a more sober, granular view of their risk posture. This means conducting regular reality checks, such as independent penetration tests and war-gaming exercises, to reveal what insurance, tools, and policies don’t cover. It means demanding clear terms from insurers and being ready to invest in preventive controls if certain risks (like state-sponsored sabotage or extended business outages) are effectively uninsurable. It also means streamlining and integrating security architectures for unified visibility, rather than assuming a patchwork of products automatically equates to protection. In short, sovereignty in cyberspace begins with brutal honesty about one’s exposure. Only by dispelling false confidence can leaders shore up the defenses that truly matter, ensuring no critical risk is blindly ignored.
2. Security Staff Cuts
Another fault line threatening cyber sovereignty is the ongoing reduction of skilled security personnel. Around the world – and notably in the U.S. – economic pressures and automation trends have led to hiring freezes, budget cuts, and layoffs that are thinning the ranks of cyber defenders. This comes at a time when threats are escalating, creating a dangerous mismatch between offense and defense.
Global Workforce Contraction: After years of expansion, the cybersecurity job market is experiencing disruptions. In the U.S., demand for certain cyber roles has dropped significantly; for instance, job postings for Security Engineers and Analysts have declined steadily over the past three years. Cloud Security Engineer positions have fallen 43% since 2022 as companies lean on streamlined cloud services and automation. Overall hiring of technical security staff is slowing, attributable in part to AI-driven security tools and increased outsourcing of security functions. Globally, the picture is mixed: while there remain an estimated 3.4 to 3.5 million unfilled cybersecurity jobs (a long-standing workforce gap), many organizations are simultaneously imposing budget cuts on security departments. According to the 2024 (ISC)² Cybersecurity Workforce Study, 25% of organizations reported cybersecurity layoffs in 2024 (up from 22% the year prior) and 37% faced budget cuts in their security teams (up from 30%). Nearly half of security teams worldwide have experienced some form of cutback or hiring freeze in the past year as economic conditions tightened. This erosion of human capital is occurring even as 60% of organizations say their cybersecurity skill gaps are exposing them to significant risk. In short, many teams are being asked to do more with less, just as the threat landscape becomes more complex.
The Risks of “Defense on a Diet”: Cutting cybersecurity staff can have immediate and long-term consequences that directly undermine a nation or enterprise’s security sovereignty. Fewer analysts and engineers on hand means slower detection and response to incidents, less proactivity in hunting threats, and weaker oversight of critical systems. The risks extend beyond simply “having fewer eyes on glass.” Mass layoffs themselves create insider threats and knowledge loss. Studies show that 80% of employees take valuable intellectual property or data with them when they depart, especially during sudden layoffs. Disgruntled or desperate ex-employees may walk out the door with sensitive files or passwords, intentionally or not, which then become liabilities. As Mimecast’s chief product officer warned, periods of rapid staff transition are rife with distractions and lapses – it’s easier for mistakes or malicious acts to go unnoticed. Meanwhile, the institutional memory and skill of experienced defenders cannot be easily replaced; when senior security staff are cut, organizations may lose the only people who know how to prevent certain attacks. One SANS Institute expert described the situation starkly: laying off cybersecurity teams now is like “cutting the fire department during wildfire season”. The analogy is apt – threats are at an all-time high (ransomware “wildfires” raging across industries), yet some companies are impairing their first responders.
Beyond internal risks, cutting security staff can also invite external aggression. Cyber criminals and state-sponsored hackers pay attention to their targets’ posture. A company or government agency that announces budget cuts or layoffs in its IT security division may inadvertently signal that it’s an easier target. In the United States, even critical national cyber agencies are not immune. In April 2025, reports emerged of massive proposed cuts at the Cybersecurity and Infrastructure Security Agency (CISA) – with sources indicating up to 1,300 positions (nearly 40% of the workforce) on the chopping block. This development sparked widespread alarm; critics warned that such cuts could cripple U.S. cyber defenses and set back years of progress in protecting critical infrastructure. While the final outcome of that proposal is uncertain, the mere prospect illustrates how political shifts could drastically alter the security manpower landscape. If the nation’s lead cyber defense agency were downsized by almost half, the ripple effects on federal, state, and private-sector security posture would be profound.
Automation vs. Augmentation: Many organizations justify staff cuts with the promise of automation – deploying AI and machine learning to replace certain analyst functions. Indeed, artificial intelligence can handle routine threat detection, and managed security service providers can offload some work. But this is a double-edged sword. Automated tools can themselves introduce new vulnerabilities or blind spots, and they require skilled humans to tune and supervise them. The (ISC)² study found that while teams plan to leverage AI, most do not see it significantly reducing the need for human talent in the immediate term. Forward-leaning security leaders are therefore approaching AI as an augmentation, not a wholesale replacement. Nonetheless, some boards may overestimate what AI can do and cut staff prematurely – only to find out in the next crisis that no algorithm can replace intuitive human judgment in the heat of an incident.
For CISOs and strategists, the mandate is clear: resist the temptation to gut cyber defenses for short-term savings. The cost of a breach or sabotage far outweighs a security salary line item. If cuts are unavoidable, they must be surgical – preserving core incident response and engineering talent. Additionally, investing in cross-training and upskilling remaining staff can mitigate the loss of specialized roles that have been eliminated. Another key mitigation is to formalize knowledge management; ensure that playbooks, network insights, and threat intel gathered by departing employees are documented for those who remain. From a policy perspective, the U.S. and other countries need to address the workforce gap as a matter of national security – through education incentives, public-private partnerships for talent development, and perhaps even “cyber reservist” programs (more on that in the State-Level Vanguard section). A strong cyber workforce is the front line of digital sovereignty. Letting that workforce dwindle is effectively disarming in the middle of an ongoing cyber conflict. As threats mount, leaders should be adding firefighters, not removing them.
3. AI and Vassal States
In the realm of artificial intelligence, a new geopolitical fault line is forming: the divide between AI superpowers and those who risk becoming their digital vassals. As we enter an era where advanced AI capabilities are a cornerstone of economic and military strength, nations that lag in AI development face the prospect of dependency on those who lead – a direct challenge to their cyber sovereignty.
The term “vassal state” is provocative but increasingly apt. AI pioneer Kai-Fu Lee forecast a scenario in which less technologically advanced countries might have “no choice but to become a vassal state to the U.S. or China” in AI, trading their data and digital allegiance for the benefits of AI systems built by those powers. In his vision, the world could devolve into a neo-imperial AI order, where a few dominant AI nations hold sway and all others must align under one of them. Five years ago this was hypothetical; today, large language models and AI services from the U.S. and China are proliferating globally, and the warning feels prescient. Without intervention, the gap between AI haves and have-nots could cement a form of digital colonization.
Global AI Competition and the Autonomy Dilemma: The United States and China currently command an outsized share of AI resources – they possess the most formidable compute power, the most advanced AI models, and the highest private AI investment flows. Both nations are in an explicit race for AI supremacy, investing heavily in research and military applications of AI. For them, “cyber sovereignty” extends to dominating the AI supply chain: from chips to algorithms to cloud infrastructure. By contrast, many other countries simply aim to avoid falling irretrievably behind. More than 60 countries have now published national AI strategies, recognizing that AI sovereignty – keeping AI development and data control in domestic hands – is critical for autonomy. The motivation is clear: if your critical industries, governance, and defense rely on foreign AI, you are at the mercy of those foreign powers’ interests. Digital sovereignty requires AI sovereignty. States want to ensure they can deploy AI on their own terms and are not cut off from progress or exploited via AI systems owned by others.
However, achieving AI independence is easier said than done. The resource asymmetry is stark. Many nations lack the supercomputing facilities, advanced semiconductor fabs, or massive datasets needed to train competitive AI models. For example, most of Africa (barring South Africa) is described as a “compute desert,” lacking local AI infrastructure for even basic needs. These nations may aspire to AI development, but in the near term they must import AI services (e.g. using American or Chinese cloud AI APIs) – a dependency that can translate into strategic weakness. Brain drain is another dilemma: skilled AI researchers from developing countries are often poached by global tech companies or move to AI hubs like Silicon Valley, London, or Beijing, leaving their home countries with less talent to build an AI ecosystem. Moreover, big tech firms engage in what some call “digital feudalism” – they extract data (the new oil) from users worldwide and funnel back AI-driven products, with local economies paying rent to those platforms. In this model, the tech giants (mostly headquartered in a few powerful states) are the lords, and everyone else risks living in their fiefdoms.
A Bid for Self-Determination: There is a growing countermovement of nations determined not to be consigned to vassal status. In Europe, “digital strategic autonomy” has become a rallying cry. French President Emmanuel Macron bluntly stated that the future of AI is a political issue “centered on sovereignty and strategic autonomy.” Europe, he insists, should not be a follower or “vassal” in an AI order dominated by Washington or Beijing. The EU is pursuing a “third way” – an alliance-based approach to AI sovereignty. Notably, at the 2025 Paris AI Summit, France and India co-hosted discussions on an international AI coalition. The vision is to pool resources among like-minded countries (European states, India, Japan, South Korea, etc.) to create a third major pole of AI innovation. Concretely, the EU has announced support for projects like OpenEuroLLM, an open-source large language model trained on European supercomputers. They have also implemented strict data residency and cloud sovereignty rules – for instance, France’s SecNumCloud framework requires that cloud data centers in France used for sensitive data be immune to extraterritorial foreign control. These efforts aim to ensure Europe isn’t forced to rely wholly on U.S. or Chinese AI clouds for critical functions. While Europe on its own still trails in the AI race, collaboration with other middle powers could bolster its position. Countries like Japan, South Korea, Canada, and Israel are also investing heavily to stay near the innovation frontier and not become dependent. A middle power bloc leveraging their combined human capital and markets may carve out some independence in AI development.
Middle-income countries in the Global South – such as India, Brazil, Indonesia, and South Africa – have a particularly delicate balancing act. They see AI as vital for development (e.g. improving agriculture, healthcare, education through AI solutions) but also fear falling into “digital vassalage” where their data fuels foreign AI with little local benefit. These countries are asserting agency by investing in domestic AI talent and startups, and by pushing for seats at the table in global AI governance forums. For example, India has framed its AI strategy around being an “AI garage” for solutions that benefit the developing world, and it has emphasized ethics and equity in AI at the UN. Such nations are also leveraging their bargaining power – many have large populations of internet users (hence valuable data) and they can use market access as leverage to demand technology transfer or local research partnerships from big tech firms. As one analysis noted, “they hold significant leverage… They have powerful markets that tech companies covet”. By banding together (e.g., through alliances like the G20 or specialized groups), these countries seek to avoid a future where AI is something done to them rather than created with them.
U.S. Strategy: Guarding Against a Bifurcated World: The United States, for its part, recognizes the high stakes of the global AI competition not just in terms of power, but in terms of the kind of world that will result. A 2023 Lawfare analysis urged the U.S. to take the lead in promoting an “open, rule-bound, and balanced global AI ecosystem” – essentially to prevent the AI landscape from splitting entirely into closed spheres of influence. U.S. policymakers have begun discussing “AI alliances” with democratic partners, mirroring how alliances work in defense. Initiatives like the Global Partnership on AI (GPAI), which includes the U.S., EU, UK, India, Japan, and others, are early steps toward setting norms and sharing AI expertise across friendly nations. The U.S. has also imposed export controls on advanced semiconductors and AI chip technology to China, aiming to slow China’s march toward dominance. This can be seen as an effort to maintain a favorable balance of power – though it also risks accelerating a tech decoupling. Washington’s ideal endgame is a world where AI tech is broadly available among allies and guided by shared values (like privacy, human rights, and security), rather than a world where every smaller nation must pledge fealty to either a Silicon Valley or a Beijing tech stack.
From a sovereignty perspective, the implications for the next decade are profound. Nations that successfully cultivate their AI industries (or align with consortia that do) will retain far more control over their economic future and defense capabilities. Those that fail to do so may find their critical infrastructures – from power grids to healthcare – running on black-box algorithms from abroad, with all the dependency and vulnerability that entails. Policymakers should treat AI capability as on par with energy security or food security – a domain requiring self-reliance or trusted partnerships. Tactically, this means investing in education (to produce AI talent), in compute infrastructure (national AI labs or cloud compute subsidies), and in data governance (so that local datasets can be harnessed for local innovation, not just vacuumed up by foreign platforms). It also means crafting regulations that balance openness with the protection of strategic assets – for example, allowing open data flows for commerce but shielding certain datasets (like citizens’ biometric information or national security data) from foreign exploitation.
Finally, leaders must be mindful of the AI vassal trap in dealing with private tech giants as well. Even within a country, if a handful of companies control AI development, the rest of the society can become dependent on them – a sovereignty issue in a domestic sense. Antitrust actions, support for open-source AI projects, and public-private collaboration can help ensure no single entity (state or corporate) unilaterally dictates the AI future. The next decade will likely determine whether AI becomes a force for digital empowerment or a new vector of digital imperialism. The fault line is drawn: those who act decisively to build or ally for AI capacity will stand on the sovereign side of it, and those who remain passive may wake up to find themselves tenants in a world someone else’s AI built.
4. Citizen Discontent
Cyber sovereignty isn’t just a contest among states and corporations – it’s also a social contract with citizens. When citizens lose faith in their government’s ability to manage digital issues (security, privacy, access to information), discontent brews. That discontent can manifest as political backlash, non-compliance with government initiatives, or even civil unrest. In the past few years, we’ve seen rising public concern over how data and cyberspace are governed, particularly in the U.S. Public trust is eroding, and this presents a fault line that leaders must address or risk undermining their own cyber agendas.
Privacy and Surveillance Worries: Across democracies, citizens are increasingly anxious about their personal data – who has it, how it’s used, and whether it’s secure. In the United States, a large majority (71%) of adults now say they are worried about how the government uses their personal data, up from 64% a few years ago. They fear mass surveillance, improper data sharing, or breaches of sensitive information. Simultaneously, about 77% of Americans have little or no trust that social media companies (and by extension Big Tech) will handle their data responsibly. Scandals like Facebook’s Cambridge Analytica incident (where millions of profiles were mined to influence elections) and repeated big-box retailer breaches have left a scar. People feel exposed and exploited in the digital realm. Tellingly, 87% of U.S. adults feel they have little to no control over how their information is collected and used by companies or the government. This sense of powerlessness feeds discontent: if citizens believe their privacy is violated with impunity, they will view authorities and companies as adversaries, not protectors.
Distrust in Governance and Accountability: The public’s skepticism extends to government’s role as a regulator and defender in cyberspace. A recent Pew survey found that 71% of Americans lack confidence that tech leaders will be held accountable by the government for data missteps or abuses. In other words, most people think government is either too inept or too captured to rein in Big Tech when it trespasses on privacy or other rights. This is a striking vote of no-confidence in regulators and lawmakers. Additionally, as artificial intelligence becomes more prevalent, about 70% of those aware of AI say they trust companies little or not at all to use AI ethically and responsibly. From predictive algorithms to facial recognition, citizens worry that AI could exacerbate surveillance or biases, and they doubt the government will step in effectively. In the U.S., trust in government overall remains near historic lows (only ~20% of Americans trust Washington to do the right thing most of the time). Cyber-related governance – whether securing elections from hackers or passing basic privacy laws – often becomes another front in the partisan divide, which further alienates the public.
This distrust has tangible effects. For example, when Apple announced plans in 2021 to scan personal devices for illegal content (to curb child abuse material), there was swift public backlash on privacy grounds, and the proposal was shelved. In democracies, citizen sentiment can directly veto or stall digital policies that overreach, whether they come from governments or platforms. Meanwhile, in more authoritarian contexts, citizen discontent over digital life can fuel unrest in different ways. Consider that in countries like Iran or Russia, when authorities impose internet blackouts or censorship to control narratives, it often emboldens portions of the populace to find workarounds (VPNs, satellite internet) and can add an anti-regime narrative to protest movements (“they fear the truth”). Even in China, where the social contract has long traded some personal freedoms for security and growth, younger citizens show frustration with pervasive surveillance and lack of digital freedoms – though such discontent is largely expressed privately or anonymously for fear of reprisal.
Demands for Action and Change: On the flip side of discontent is a growing demand for solutions. Citizens are not simply throwing up their hands; many are calling for stronger rules and protections to restore trust. In the U.S., there is notable bipartisan public support for tougher tech regulation. As of early 2024, 51% of Americans say they want more government regulation of major technology companies, up from the mid-40s a couple of years prior. This is a majority view, crossing political lines – indicating that people want their elected officials to step in and set boundaries on data use, AI deployment, and content governance. We see this pressure reflected in various legislative efforts: for instance, numerous U.S. states (as discussed in the next section) have passed their own privacy laws due to federal inaction, partly because constituents made clear they care about these issues. In Europe, citizen outcry over data privacy led to the GDPR (General Data Protection Regulation) coming into force in 2018, which is one of the strictest data protection laws in the world and has since been used as a model by other nations. Likewise, concerns about online misinformation and harms to children have spurred regulatory moves – from the EU’s Digital Services Act (aimed at transparent content moderation) to U.S. proposals for kids’ online safety acts. While not all these initiatives succeed or are well-crafted, they signal that democratic governments feel pressure to do something to address citizens’ digital grievances.
In Florida, the enactment of the Digital Bill of Rights (DBR) in 2023 modified Section 501.171, Florida Statutes, giving residents explicit rights to access, correct, and delete personal data collected by certain technology companies. It also restricts the collection of biometric and geolocation data without consent and grants parents control over their children’s data profiles. The DBR exemplifies a state-level assertion of data sovereignty, shifting power back to the citizen and setting enforceable expectations on how private entities must handle digital identities. It echoes GDPR-style protections but aligns them with Florida’s broader emphasis on individual liberty and operational transparency. This codification of visibility and deletion rights marks a tactical evolution in the citizen-government-tech relationship: privacy is no longer assumed; it is demanded and legislated.
It’s crucial to recognize that citizen discontent can undermine cyber sovereignty if left unaddressed. For example, if a significant portion of the population refuses to adopt a government-backed digital ID or contact-tracing app due to privacy fears, that tool fails and potentially weakens national cyber readiness (as seen when trust issues hampered some COVID-19 digital tracing efforts in Western countries). Or, if the public believes election systems are vulnerable to hacking and that nothing is being done, confidence in democracy falters – a security issue in itself. In extreme cases, public anger at cyber incidents can trigger political change. The breach of India’s Aadhaar biometric ID database, for instance, led to court battles and more oversight after public concern about privacy. In Ukraine, years of citizens suffering from Russian cyberattacks (like power grid hacks) galvanized political will to bolster cyber defenses and align with Western security standards – essentially, citizen demand helped drive sovereignty-strengthening measures.
Rebuilding Trust and Engagement: To heal this fault line, leaders must treat citizens as active stakeholders in cyber policy, not passive subjects. Transparency is key: governments should openly communicate about cyber threats and incidents, and about what they are doing to protect the public’s data. For example, when a breach occurs that affects citizens (be it a credit bureau leak or a government database hack), the response should include clear notification, support (such as credit monitoring, identity protection services), and evidence of lessons learned to prevent a repeat. Regulators and lawmakers need to deliver visible results – whether it’s actually enforcing penalties on a company that violates privacy, or passing that long-promised comprehensive privacy law at the federal level. Each concrete action can start to mend the trust deficit.
Engaging the public through education is also vital. Governments can promote digital literacy campaigns so people understand, for instance, how their data is collected and the trade-offs involved in digital services. A more informed citizenry can make nuanced choices and contribute constructively to policy debates (rather than being guided by fear or misinformation). Some countries hold public consultations on digital policies (the EU often solicits citizen and expert input on tech regulation drafts) – a practice worth expanding. Ultimately, cyber sovereignty is strongest when it has the consent and confidence of the governed. If citizens feel their rights are protected and their voices heard in cyberspace, they are more likely to support national cyber initiatives, whether it’s reporting cyber incidents, embracing new technologies, or adhering to security best practices that help the whole society. Conversely, if discontent festers, even the best-laid cyber strategies can fail due to lack of public cooperation or legitimacy. The next decade demands a recalibration: putting the “civil” back in cybersecurity, and treating trust as a strategic asset.
5. Warfighting Governance
Cyberspace has emerged as a domain of conflict where nations engage in constant low-level battles and occasional high-impact strikes – all without the formal declaration of war. This blurring of war and peace in the digital realm challenges traditional governance. “Warfighting governance” refers to the evolving frameworks and doctrines for managing cyber operations (offense, defense, and deterrence) at state and international levels. Over the next decade, the stability of cyber sovereignty will hinge on how well we govern cyber warfighting: Can we establish rules of engagement, accountability, and resilience amid continuous cyber skirmishes and full-blown cyber warfare campaigns?
The Battlefield Everywhere, All the Time: In cyberspace, “contested at all times” is the new normal. Unlike conventional war, cyber conflict doesn’t observe clear boundaries or truces. State-sponsored hackers and military cyber units are active even during ostensible peacetime – stealing data, probing critical infrastructure, pre-positioning malware on adversary systems. According to an ongoing tracker by the Council on Foreign Relations, 34 countries have been publicly suspected of conducting state-sponsored cyber operations since 2005. Four adversaries – China, Russia, Iran, and North Korea – account for the lion’s share (roughly 77%) of these operations, ranging from espionage and intellectual property theft to disruptive attacks on companies and government agencies worldwide. The reality is that many nations now have dedicated cyber command units (or even separate cyber forces) integrated into their military structure. NATO recognized cyberspace as a domain of military operations back in 2016, putting it on par with land, sea, air, and space. This was a signal that the Alliance views cyberattacks as potential triggers for collective defense. In 2023, NATO allies further agreed to enhance their Cyber Defense Pledge with ambitious new goals to strengthen national cyber defenses – especially protection of critical infrastructure – as a priority. Cyber warfighting has thus moved from the shadows to an open topic of strategic planning.
One consequence of this militarization is that the line between civilian and military targets has blurred. A hacker targeting a power grid might be a uniformed officer, or they might be a criminal proxy hired by a nation-state’s intelligence service – the victim can’t readily tell. Meanwhile, that power grid might be privately owned but is also critical national infrastructure. For instance, Russian state hackers have routinely targeted Ukraine’s civilian power and telecom networks as part of their war strategy, an approach they could extend to other countries in a conflict scenario. Western governments worry that malware planted in peacetime on, say, U.S. energy or water systems could be activated to cause chaos ahead of or during a geopolitical crisis. This gray zone of constant hostile cyber activity poses a governance quandary: traditional international law (e.g. the Geneva Conventions) prohibits attacking civilian infrastructure in war, but what about in “peacetime” cyber campaigns? What about when states use criminal ransomware gangs to do their dirty work, as has been observed with Russian and North Korean actors? Determining attribution (who is behind an attack) and intent in cyberspace is tricky, which complicates proportional response and accountability. All of this threatens sovereign stability – a nation under persistent cyber siege can suffer economic and social damage without a single shot being fired, and yet the slow drip of damage falls below the threshold of formal warfare that existing treaties cover.
From Reactive to Proactive Defense: To govern this domain, many countries are shifting from a purely defensive mindset to a proactive or even offensive one. The United States, for example, has adopted a doctrine of “Defend Forward” and “Persistent Engagement” in cyberspace. Rather than waiting to be hit, U.S. Cyber Command actively deploys to foreign networks (with the permission of allies, or covertly in adversary infrastructure) to “disrupt and degrade” malicious cyber actors’ capabilities before they can harm the U.S. In 2023 alone, U.S. Cyber Command conducted hunt-forward missions in 22 countries, exposing and neutralizing threats and yielding 90 new malware samples for industry and allies to shore up defenses. These operations – often done in partnership with host nations – have had real successes, for instance helping block attempted ransomware and election interference campaigns at their source. They also set a precedent: state cyber forces operating outside their borders as a routine matter. The U.S. Department of Defense’s 2023 Cyber Strategy explicitly names China and Russia as top adversaries and vows to “go after cybercriminals or other groups that threaten U.S. interests” even if they reside in third countries. This is a muscular interpretation of sovereignty – it asserts a kind of self-defense right to take action in cyberspace anywhere, anytime, to preempt attacks (with the justification that cyberspace has no borders in a traditional sense). Other nations, including U.S. allies, are watching and in some cases following suit with more aggressive postures. For example, Australia has talked about developing offensive cyber capabilities to “punch back” at attackers, and several European countries have quietly built malware disruption teams in their intelligence agencies.
Importantly, these proactive moves are as much about governance as about technology. Governments are crafting rules and oversight mechanisms for offensive cyber operations. In the U.S., there is now a process where high-level interagency review and presidential authorization are required for significant cyber operations, guided by a framework (previously classified) known as National Security Presidential Memorandum 13 (NSPM-13). This was refined to give Cyber Command more standing authority to act quickly, but Congress still demands briefings on what actions are being taken to ensure they align with law and strategy. Other democracies are grappling with similar questions: How to ensure cyber weapons are used responsibly and under proper civilian control? How to avoid escalation or collateral damage? For example, when the U.S. or Israel deploys a cyber weapon like Stuxnet (which was used to disable Iranian nuclear centrifuges), it must consider that the worm might spread globally (as Russia’s NotPetya attack later did, causing billions of dollars in unintended damage). Thus, part of warfighting governance is developing norms of restraint and clarity. In international forums, the U.S. and allies advocate that international law and voluntary norms apply in cyberspace – such as not targeting hospitals or not attacking critical services in peacetime. In 2021, nearly all UN members agreed to a set of cyber norms (non-binding) along these lines. The challenge will be enforcement and verification, especially as authoritarian regimes may pay lip service to norms while violating them in practice.
Coalitions and Collective Defense in Cyber: Cyber defense is also prompting new forms of international cooperation. NATO, as mentioned, treats a major cyberattack on a member as potentially on par with an armed attack, meaning it could trigger Article 5 collective defense. While that threshold is intentionally kept high (only truly catastrophic cyberattacks might qualify), NATO has been very active below that level. In 2023, at the Vilnius Summit, NATO allies endorsed a new strategy to integrate cyber defense into overall deterrence, and launched a Virtual Cyber Incident Support Capability (VCISC) to help member states under significant cyber attack. This means if, say, a smaller ally is hit with a massive cyber assault on its banking system, NATO’s cyber experts can remotely assist in real time – an acknowledgment that not all members have equal capacity and that a hit to one is a concern for all. Furthermore, in 2024, NATO agreed to establish a first-of-its-kind Integrated Cyber Defense Centre to improve joint situational awareness and network protection across the alliance. We can think of this as a cyber command center for NATO, paralleling its traditional military commands, aimed at coordinating defensive measures and sharing intelligence rapidly. These developments point to a future where international defense isn’t just fighter jets and battalions, but also threat hunting teams and malware analysts working together across borders.
U.S. alliances in Asia are also adapting – for example, the U.S., Japan, and Australia have increasingly included cyber exercises in their joint military drills. Even the AUKUS pact (Australia-UK-U.S.), known for submarine technology sharing, has a significant cyber cooperation component, focusing on joint cyber capabilities and AI. This all reinforces sovereignty by collective means: an attack on one’s digital infrastructure might be thwarted or answered by a network of allied cyber responders.
Warfighting with Private Sector Involvement: A distinctive feature of cyber governance is the prominent role of private companies. Unlike traditional war where governments and their militaries handle critical assets, in cyberspace much of the terrain (telecom networks, undersea cables, data centers, software platforms) is owned by the private sector. This reality has made private tech companies de facto participants in cyber conflicts. A vivid example was during Russia’s invasion of Ukraine: Microsoft’s Threat Intelligence Center and Google’s cybersecurity teams actively helped Ukraine fend off Russian cyberattacks on government networks. Elon Musk’s SpaceX provided Starlink satellite internet to keep Ukraine online, but later there was controversy when Musk reportedly declined to enable Starlink for a Ukrainian drone operation – essentially a private actor influencing a military action. Such scenarios raise complex questions of governance: should companies be making decisions with strategic impact, and under what guidance or obligation? Western governments have generally pulled companies closer via information sharing and by clarifying legal avenues for cooperation. For instance, in the U.S., big tech firms sit alongside federal agencies in CISA’s Joint Cyber Defense Collaborative (JCDC). But there’s also talk of needing clearer expectations – perhaps even requirements – for companies to assist in national cyber defense (akin to how civilian ships were conscripted for logistics in past wars).
On the international law front, efforts are underway to update norms to this new reality. The International Committee of the Red Cross in 2023 published guidelines urging hackers (including civilian volunteer hackers) to respect humanitarian law principles – essentially a plea that those engaging in cyber operations during conflict avoid targeting civilians, similar to armed combatants. They also reminded states that they must refrain from encouraging civilian hackers to violate these laws. Whether such guidance will be heeded is uncertain, but it represents an attempt to civilize cyber war.
Resilience and Continuity of Governance: Lastly, warfighting governance isn’t just about offensive and defensive operations – it’s about ensuring that governance continues even under attack. A sovereign nation must be able to maintain essential services and command and control in the face of a cyber onslaught. This has led to strategies focusing on resilience: backup systems, network segmentation, cyber incident response drills at the national level, and crisis coordination plans. The concept of “cyber continuity of government” has gained traction. For example, some countries have set up secure bunkers and alternative communication channels for leaders to use if primary networks go down (akin to Cold War era nuclear command bunkers, but for digital comms). Regular large-scale exercises (like the annual Locked Shields exercise run by the NATO CCDCOE, or the U.S. Cyber Storm drills) stress test how governments and industry would work together during a massive cyber incident. These efforts bolster governance by ensuring that an adversary cannot achieve strategic victory simply by hacking and causing chaos – the targeted society can absorb a hit and still function and fight back.
In summary, warfighting governance is about bringing order, alliances, and resilience into a domain that could otherwise be a Wild West. The next decade will likely see more formalization: clearer red lines (e.g., declaring that certain critical civilian systems are off-limits for cyberattack, with potential retaliation if crossed), maybe treaties on specific issues (there’s talk of an accord against targeting financial systems, for instance), and faster collective response mechanisms. For cyber sovereignty to endure, countries must avoid both anarchy (no rules, constant unchecked aggression) and paralysis (fear of acting or defending, leaving the field to adversaries). Striking that balance – being agile and assertive in cyber defense while working internationally to prevent spiraling escalation – will be one of the great strategic tasks of our time.
Bonus: State-Level Vanguard
While national governments often grab headlines in cyber strategy, in practice many of the most impactful cybersecurity advancements are happening at the state and local levels. In the United States especially, state governments have become a vanguard for cyber initiatives – pioneering laws, response teams, and public-private partnerships that the federal level has been slow to match. This “state-level vanguard” is a critical element of cyber sovereignty: it represents a distributed, bottom-up strengthening of resilience and governance, closer to where impacts are felt. Globally, we see similar decentralization in federal countries and alliances, but the U.S. offers a prime case study of empowered sub-national actors driving progress.
Legislative Trailblazers: U.S. state legislatures have been extraordinarily active in cybersecurity and data privacy lawmaking in recent years. In 2024 alone, lawmakers in at least 45 states introduced over 350 bills or resolutions on cybersecurity – and 33 states plus DC and territories enacted at least 75 of those bills into law. This surge of activity ranges from offensive measures to protective regulations. Notably, states have led the charge on data privacy: by late 2024, 19 states had enacted comprehensive privacy laws (with a 20th if you count Florida’s Digital Bill of Rights) to give residents rights over their personal data. California was first (with the CCPA/CPRA), but others like Colorado, Virginia, Connecticut, and Texas followed with their own variants. Absent a federal privacy law, these state statutes fill the void, often inspired by Europe’s GDPR but tailored to American contexts. They require businesses to disclose data practices, honor consumer opt-outs (like “do not sell my data”), and implement security controls, thereby elevating privacy and security standards nationwide (companies must adapt or face state-level fines).
States are also innovating in niche but important areas. For example, Florida’s Digital Bill of Rights (modifying §501.171, F.S.) exemplifies how states are not only reacting to citizen discontent, but operationalizing cyber sovereignty, codifying transparency, deletion rights, and biometric consent into enforceable law. Minnesota passed a law requiring all counties and municipalities that run elections to use .gov domains for their websites. This seems small, but it’s a big boost to election security – a .gov domain is harder to spoof, and voters can trust they’re on a legitimate site. Ohio enacted a law banning TikTok and certain foreign-owned apps on state government devices, citing security concerns about data going to rival nations. Dozens of states issued similar executive orders or laws on this issue, essentially leap-frogging any slow federal deliberation and directly reducing the attack surface on government networks. And in a bold stance against ransomware, Tennessee now prohibits state agencies from paying ransoms to cybercriminals and requires prompt reporting of any cyber incidents to state authorities. The logic is to remove the incentive for criminals to target Tennessee government entities (if attackers know the state by law won’t pay, they might think twice). This no-pay principle is debated, but Tennessee’s move may guide others in taking a firm position that public funds won’t fund crime.
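The value of the .gov requirement is easy to demonstrate: verifying that an election site sits on the restricted .gov top-level domain is a trivial, mechanical check. A minimal sketch of that check follows, assuming a hypothetical county URL for illustration; a production validator would also consult the official .gov registry rather than inspect the hostname alone.

```python
from urllib.parse import urlparse

def is_gov_domain(url: str) -> bool:
    """Return True if the URL's host is on the .gov top-level domain.

    The .gov TLD is restricted to verified U.S. government entities,
    which is why a hostname check like this carries real signal.
    """
    host = urlparse(url).hostname or ""
    # endswith(".gov") accepts "ramseycounty.gov" and any subdomain of it,
    # while correctly rejecting look-alikes such as "example.gov.evil.com",
    # whose hostname actually ends in ".com".
    return host.endswith(".gov")

# Hypothetical examples (illustrative domains, not real sites):
print(is_gov_domain("https://vote.examplecounty.gov/ballots"))   # True
print(is_gov_domain("https://examplecounty-elections.com"))      # False
print(is_gov_domain("https://examplecounty.gov.evil.com"))       # False
```

The spoof-resistance the law relies on comes from the registry, not the string: anyone can buy a `.com` look-alike, but only a verified government entity can register under `.gov`, so the simple suffix test above maps onto a genuinely hard-to-forge property.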
Another trend is states setting cybersecurity requirements for specific sectors under their jurisdiction. For instance, several states require insurance companies to implement robust data security programs (modeled on an NAIC standard law). Alaska in 2024 enacted a law compelling insurance firms to undergo risk assessments and notify regulators of cyber incidents. States like New York, through its Department of Financial Services, have stringent cyber rules for financial institutions operating in the state (including banks and crypto companies), with annual certification of compliance. All these create a patchwork that, while sometimes inconsistent, often pushes higher standards. The bottom line: U.S. states are not waiting for Washington – they’re exercising their sovereignty to protect their citizens and infrastructure in cyberspace, often serving as laboratories of innovation.
Incident Response and the Cyber “National Guard”: When digital crises hit, state-level resources are frequently the first on scene. State governments have been building response muscles through both the National Guard and civilian volunteer programs. The National Guard, a reserve military force under dual state-federal control, has become a key cyber responder. As of 2021, 40 states’ National Guards fielded a combined 59 dedicated cyber units ready to be mobilized for cyber defense missions. These units consist of IT-savvy Guard members who train to assist in everything from hardening election systems to responding to ransomware at a city agency. And they have been busy: Governors activated National Guard cyber teams at least 41 times between 2018 and mid-2021 to tackle ransomware attacks and shore up election networks. Those activations spanned 27 states, showing how widespread cyber incidents have become. Colorado’s governor was the first to declare a state cyber-emergency in 2018 (after a ransomware attack on the transportation department) and call in the Guard; many have since followed that model, including Louisiana and Texas in 2019 during waves of local government ransomware hits. The Guard brings disciplined, trained personnel who can reinforce overwhelmed local IT staff, help with forensic analysis, and coordinate with federal experts. They operate under the same emergency management frameworks that states use for hurricanes or wildfires, now applied to digital disasters. In effect, the National Guard has become a bridge between federal cyber defense (like DHS/CISA) and the on-the-ground needs of counties, cities, and utilities. This greatly enhances overall resilience – attackers hitting an American town might find they quickly face not just a small local IT team, but a whole cyber task force drawn from state and national expertise.
Beyond the Guard, some states have created volunteer cyber reserve corps. Ohio is a pioneer here: in 2019 it legislated the creation of the Ohio Cyber Reserve, a team of vetted civilian cybersecurity professionals who can be called up by the state for emergencies. These volunteers (think of them as akin to volunteer firefighters, but for cyber) train and stand ready to help municipalities with things like incident response or cybersecurity audits. They even assist with training exercises to raise cyber awareness at the local level. Maryland and Michigan have explored similar concepts. These efforts recognize that in a large-scale incident, private-sector expertise in the state (from tech companies, universities, etc.) can be harnessed for the public good, if a structure exists to do so. It’s a smart way to multiply resources without permanently expanding government headcount.
Interstate Collaboration and Information Sharing: States are also banding together to share knowledge and capabilities. The Multi-State Information Sharing and Analysis Center (MS-ISAC), supported by CISA, is a major hub where state and local entities receive threat intelligence, alerts, and best practices. All 50 states participate, and MS-ISAC has a 24/7 watch center that often detects and disseminates information on threats targeting state/local governments. Additionally, regional coalitions are forming – for example, Mid-Atlantic states have held joint cyber exercises, and the Western states share strategies on protecting the energy grid. Such horizontal collaboration means a clever attack that hits one state can quickly be communicated to others so they can preempt or prepare (for instance, if a specific type of phishing campaign is hitting multiple school districts across different states, an alert can go out broadly).
It’s worth noting that U.S. states also liaise internationally: state governors and National Guard units have “partnership” programs with other countries (often as part of Defense Department initiatives). Through these, they sometimes conduct cyber training with foreign military or civil agencies. For example, the Maryland National Guard has a partnership with Estonia – a country famed for its cybersecurity – and they’ve exchanged knowledge on cyber defense tactics. At a sub-national level, this is a unique form of diplomacy strengthening global cyber resilience, one state at a time.
Driving Innovation Upwards: The cumulative effect of state-level action is that it often spurs federal action or at least sets a de facto national standard. We saw this with data breach notification laws: by 2018, every state had one, which put pressure on Congress to consider a uniform law (though one hasn’t passed yet, companies essentially operate as if a national rule exists because they must comply everywhere). Now with privacy, as 19+ states create different rules, industry is actually lobbying for a federal law to simplify compliance – ironically, state leadership might finally force Washington’s hand to legislate baseline privacy protections nationwide. In election security, states have implemented paper ballot requirements and post-election audits in response to 2016-2020 concerns; these measures have improved integrity to the point that federal bodies are recommending them as best practices across the board. And when states ban something like TikTok on government devices due to espionage fears, it amplifies the scrutiny that federal policymakers apply to foreign tech threats.
Furthermore, some states aren’t just consumers of tech, but producers – California’s Silicon Valley and Washington’s Seattle area, for example, are where many cybersecurity companies and innovations originate. State policies that encourage tech growth (like tax incentives for cyber startups or robust computer science education in schools) contribute to the overall cyber capability of the nation. One could argue that states like California, Texas, and Virginia (with their tech hubs and government contractors) are critical to America’s cyber power, and their local policies on workforce and innovation directly feed the national strength.
In the global context, other countries also see regional or provincial governments stepping up. In Germany, for instance, each state (Land) has its own cybersecurity agency in addition to the federal office, sometimes launching initiatives tailored to local industries (like the auto industry in Bavaria). In federal nations like India, state governments have begun rolling out cybersecurity policies for their jurisdictions, especially to protect rapidly digitizing services like state-run power utilities or public health systems. They coordinate with national CERTs but have autonomy to innovate.
Sovereignty Strengthened Through Federalism: The state-level vanguard demonstrates a broader point: distributing cyber responsibilities and empowering local authorities actually fortifies overall sovereignty. It creates multiple layers of defense and governance. If one layer fails or is slow (say, the federal government is tied up in partisan gridlock), another layer (states) can fill the gap. It also means policies can be tested on a smaller scale before scaling up – reducing risk of a one-size-fits-all failure. Of course, coordination is key; fragmentation can be a downside if not managed (companies don’t love having 50 different regulations to follow). But so far, the U.S. approach has shown that a dynamic equilibrium can be achieved where state initiatives complement federal efforts.
For leaders, supporting this vanguard means investing in state and local cyber capabilities – grants for local government security upgrades, training programs to address talent shortages outside of coastal tech centers, and clear communication channels between federal and state cyber responders. It also means listening to states’ experiences. Often, the people in a state CIO’s office or a city’s IT department have a ground truth perspective on what threats are hitting Americans day-to-day (from library ransomware to attempted hacks of smart traffic systems). Feeding that back into national risk assessments yields more accurate priorities.
In sum, the state-level vanguard is not just a “bonus” story – it’s a pillar of resilience. Sovereignty in cyberspace doesn’t reside only in the capital; it lives in every statehouse, city hall, and community that takes ownership of its cyber readiness. By retaining this local initiative and amplifying it, the United States and countries with similar structures harness their federal nature as a strength in the cyber domain. The next decade will likely see this trend continue: more state-driven standards, closer state-federal incident collaboration, and a more cyber-savvy public sector workforce across all levels of government. It’s a welcome development, because in cybersecurity, distributed defense is effective defense.
Conclusion
The coming decade will test our collective resolve to secure the foundations of cyber sovereignty. The fault lines identified – illusory protections, dwindling human defenders, AI-fueled dependency, public distrust, and the fog of cyber war – are challenges we can no longer afford to downplay or address in silos. Yet across each fault line, we also see sparks of progress and proactive strategy: insurance policies being rewritten and security postures reassessed; workforces being realigned and augmented with new skills; nations banding together to avoid AI subjugation; citizens pushing for accountability and rights; and alliances forging norms to tame the chaos of conflict. The State-Level Vanguard exemplifies how empowerment at every level fortifies the whole. If there is a unifying lesson, it is that cyber sovereignty in the 2020s will demand constant, deliberate action – a fusion of tactical agility and strategic vision.
In practical terms, that means CISOs instituting continuous validation of security (never trusting that you’re “covered” without proof), and policymakers crafting adaptive regulations that actually bite when needed. It means treating cybersecurity talent as mission-critical and not an expendable cost center. It means investing in innovation and ethics in AI, so technology can advance without chaining us to foreign powers or eroding public values. It means engaging the public with transparency and building a digital society where security and privacy are not at odds. And it means preparing for conflict by setting rules and reinforcing shields now, rather than scrambling after the fact.
Sovereignty has always been about control over one’s fate. In cyberspace, that translates to control over one’s networks, data, and technological trajectory. The next decade offers a choice: either grasp the nettle and take bold, coordinated steps to secure that control, or continue with piecemeal measures and wishful thinking until the fault lines erupt into crises. The call to action is therefore clear. As a concluding thought, consider this adaptation of a timeless principle of vigilance:
“In the digital realm, the price of sovereignty is eternal vigilance.”
Cyber sovereignty will not be won or lost in a single moment; it will be determined by the steady commitment of leaders at every level to anticipate threats, shore up weaknesses, and champion the values that make security worthwhile. The next decade awaits – let us navigate it with foresight and fortitude, so that the promise of a secure and sovereign cyber future can be realized.