Can the GRC function generate risk?
In any well-governed organization, managing risk isn’t the responsibility of just one team—it’s a layered effort involving everyone, from frontline staff to independent auditors. This is where the Three Lines of Defense model comes in. It provides a structured way to ensure that risks are effectively identified, managed, and independently reviewed.
The Three Lines of Defense Model
The first line of defense lies with the people who are closest to the business activities—the operational teams. These are the employees and managers who interact with systems, processes, and customers every day. They're responsible for owning and managing risks in real-time, embedding controls into daily operations. For instance, an IT administrator who ensures software is regularly patched and user access is properly restricted is acting as a vital part of this first line.
The second line of defense supports and oversees the first. This includes risk management, compliance, and governance functions—often including cybersecurity Governance, Risk, and Compliance (GRC) teams. Their role is to set the rules of the game: developing risk frameworks, policies, and controls, while also monitoring whether the first line is operating within acceptable risk limits. They don't directly manage risk on the ground, but they guide and monitor those who do.
Finally, the third line of defense provides independent assurance. This is the domain of internal audit. Their job is to evaluate how well the first and second lines are functioning, ensuring that risks are being properly managed and controls are effective. They operate independently from the rest of the organization and often report directly to senior leadership or the board, ensuring objectivity.
Together, these three lines form a comprehensive system of checks and balances, helping organizations not just comply with regulations, but also stay resilient in the face of evolving risks—especially in areas like cybersecurity where the landscape can change overnight.
In that model, the GRC function is associated with effective risk management. Its goal is to create the highway, the on-ramps and off-ramps, and the rules of the road that everyone else will travel on. The road has to be effective (it takes you to your desired destination) and efficient (minimum disturbance and energy loss). Ideally, in fulfilling that goal, the GRC function should not create additional uncertainties; that is, it should not substantially change the risk profile of the organization. In practice, the GRC function can exacerbate existing risks, introduce new risks, or transform existing ones. The actual appetite for such changes in the company’s risk profile is rarely stated explicitly. Very often we receive only one-sided indications in the form of comments or feedback (“Do not slow down deployment”, “This process is very cumbersome”). Therefore, we should proactively engage the stakeholders to arrive at the common understanding and balanced view that effective risk management requires.
But before we go there, let’s examine the most common ways a GRC organization can change the risk profile of an organization.
The GRC Function Creates Risk: Complexity
As the business grows, the risk profile becomes more complex and the applicable frameworks more numerous. The GRC team might respond by layering on policies, procedures, checklists, and documentation requirements. The goal, of course, is to ensure that the organization is audit-ready and in full alignment with every standard. But this flood of requirements can overwhelm the very people it's meant to support.
Employees start to experience policy fatigue. They’re swamped by documentation, forced to navigate overlapping rules, and often left wondering which policies actually matter. As a result, compliance becomes a box to tick rather than a tool for managing real risks. Teams may start focusing more on looking good on paper than being secure in practice.
This compliance-heavy approach also tends to slow down operations. Innovation grinds to a halt while teams wait for multiple levels of GRC approvals. Even cybersecurity incident responses can be delayed by rigid procedures that prioritize process over speed. In these moments, the organization becomes less agile—and more vulnerable.
Worse still, this overload can breed risk blindness. With so much attention placed on meeting compliance obligations, the organization may miss emerging threats or prioritize low-risk issues simply because they’re tied to audit findings. The result is a false sense of security.
And when compliance becomes too cumbersome, people start looking for shortcuts. Business units adopt unauthorized tools to stay productive. Developers push changes without the full approval chain. Shadow IT grows. Ironically, these workarounds—triggered by an overbearing GRC function—create the very risks the system was meant to prevent.
Finally, there’s the issue of wasted resources. By focusing too heavily on achieving perfect compliance, organizations often misallocate time and money—investing in controls that check a box, rather than those that actually reduce meaningful risk.
As Oscar Wilde said, “The bureaucracy is expanding to meet the needs of the expanding bureaucracy.” More mature teams are aware of the increased complexity of the environment and make conscious efforts to reduce it: simplify policies, optimize processes, reduce the number of controls, exploit overlaps and redundancies, and so on. Unfortunately, there is a critical mass of complexity beyond which those efforts become largely ineffective: the team that understands all the requirements and dependencies becomes too large, and by the time it comes up with a solution, the environment has already changed. It is highly unlikely that an organization would spend that much effort and, effectively, freeze its operations just to optimize its compliance.
However, this is a perfect opportunity to consider using AI. While no single person knows everything about the environment or the applicable risks and regulations, it is quite feasible to contextualize an AI assistant and even give it the ability to keep itself up to date. Interacting with such an assistant, stakeholders can define optimization criteria, play “What If?” scenarios, generate project plans, and more.
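To make the idea concrete, here is a minimal sketch of one optimization such an assistant could automate: flagging controls whose framework coverage is strictly contained in another control's, making them candidates for consolidation. The control IDs and framework mappings below are entirely hypothetical illustrations, not drawn from any real catalogue.

```python
# Sketch: flag redundant controls by comparing framework-requirement coverage.
# All control IDs and requirement mappings are hypothetical examples.

controls = {
    "AC-01": {"ISO27001:A.9.2", "SOC2:CC6.1"},
    "AC-02": {"ISO27001:A.9.2"},              # coverage is a subset of AC-01
    "LOG-01": {"ISO27001:A.12.4", "SOC2:CC7.2"},
    "LOG-02": {"SOC2:CC7.2"},                 # coverage is a subset of LOG-01
}

def redundant_controls(controls):
    """Return (control, superseding control) pairs where the first control's
    requirement coverage is strictly contained in the second's."""
    redundant = []
    for cid, reqs in controls.items():
        for other, other_reqs in controls.items():
            if cid != other and reqs < other_reqs:   # strict subset test
                redundant.append((cid, other))
    return sorted(redundant)

print(redundant_controls(controls))
# → [('AC-02', 'AC-01'), ('LOG-02', 'LOG-01')]
```

In practice the mapping would come from the organization's control inventory, and a human would still decide whether a flagged control is truly redundant or covers a nuance the mapping misses.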
The GRC Function Creates Risk: Change Management
In today’s fast-moving digital world, the threats facing an organization evolve daily—new attack vectors, emerging technologies, shifting regulations, and changing business models. Yet sometimes, the GRC function—designed to manage risk—falls behind. And when it does, it can quietly become a source of risk itself.
Imagine a GRC team still operating from a playbook written three years ago. Their risk registers list threats that made headlines back then but say little about today’s realities. Their policies focus on data centers, while the organization has long since moved most of its infrastructure to the cloud. Their frameworks rely on quarterly reviews, while attacks now unfold in minutes.
As the external environment shifts, the GRC function begins to drift out of sync. Threats emerge faster than the team can assess them. New business initiatives launch without adequate oversight, not because teams are hiding them—but because the GRC process simply isn’t built to keep up.
The result? Blind spots open across the organization. High-risk areas—like third-party SaaS tools, AI integrations, or remote work infrastructure—may not be covered by existing controls. Or the environment changes quickly, the existing controls become inadequate, and the GRC team lacks the time or awareness to adjust them. Compliance reports still show "green," but only because the controls being tested are no longer relevant.
This disconnect also causes friction between GRC and the rest of the business. Operational teams grow frustrated with outdated policies that don’t reflect how they actually work. Security teams may bypass formal risk assessments because they take too long, choosing speed over procedure. The GRC team, once a partner in enabling safe business, starts to be seen as a blocker.
And perhaps most dangerously, leadership might continue making decisions based on the assumption that risks are being managed—when in reality, the framework used to evaluate those risks is obsolete.
In failing to evolve with the environment, the GRC team doesn’t just fall behind—they leave the door open. Open to cyberattacks the framework doesn’t account for. Open to regulatory violations from new privacy laws they haven’t tracked. Open to strategic missteps, because no one is connecting emerging risks to business goals. Because in a world where change is the only constant, staying static is its own kind of risk.
So, how can we mitigate that risk? Here are some suggestions:
On the people side, create structures that allow GRC to be tightly integrated with the rest of the organization. One example is the “advisory” model, where GRC people are embedded throughout the organization. They participate in operational meetings and strategic planning. They help deploy company-wide GRC initiatives but also act as the voice of the function when changes are considered. The advisors are closely attuned to changes in the environment and can serve as an “out of bounds” check on the “standard” change management process. The challenge with this approach is that it buys visibility and effectiveness at the cost of a larger team. In some cases, the “champions” model is used instead: people within the operational teams are “recruited” by the GRC function. It requires fewer resources than the advisory model, but it is also less effective.
On the technical side, we need a system that adjusts itself dynamically, with minimal human involvement. Such a system must be well integrated into the individual departments, processes, and tools. For example, the Product team creates functional requirements that are passed to Engineering to start development. If our system becomes aware at this point that something in the environment is about to change, it could start assessing the change and, potentially, generate recommendations (the proverbial “shift left”). To do that effectively, the system must be integrated into the Product process, not attached to it.
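As a sketch of what such integration might look like, assume (hypothetically) that functional requirements arrive as plain text and that the GRC team maintains a keyword-to-assessment map. A lightweight hook in the Product workflow could then queue GRC tasks the moment a requirement is written, rather than after development starts. The trigger keywords and requirement text here are invented for illustration.

```python
# Sketch: a "shift left" hook that scans a new product requirement for
# risk triggers and queues the matching GRC assessments.
# The trigger map and the sample requirement are hypothetical examples.

TRIGGERS = {
    "personal data": "Privacy impact assessment",
    "third-party": "Vendor risk review",
    "payment": "PCI DSS scope check",
    "ai model": "AI governance review",
}

def grc_tasks(requirement_text):
    """Return the GRC assessments suggested by keywords in the requirement."""
    text = requirement_text.lower()
    return sorted({task for keyword, task in TRIGGERS.items() if keyword in text})

req = "Export personal data to a third-party analytics service."
print(grc_tasks(req))
# → ['Privacy impact assessment', 'Vendor risk review']
```

A real implementation would hook into the requirements tool itself (a ticketing webhook, for instance) and would use something smarter than keyword matching, but the principle is the same: the assessment is triggered by the Product process, not bolted on after it.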
Again, we have a very good case for considering an AI GRC assistant. It addresses the resource constraints of the “advisory” model: a chat-style assistant could be exposed to the business function while a human advisor supervises it. It could also be a helpful aid to the champions.
An AI GRC assistant could relatively easily integrate with the tools and processes an organization uses, monitor for changes, and assess and provide recommendations right on the spot.
Those are just a couple of examples where the function tasked with creating and managing a system for risk management actually changes the risk profile of the organization. There are many more, and I would recommend that a risk statement be part of the GRC mission statement and even the OKRs. As an organization grows, complexity will grow too, and we must make conscious efforts to reduce it. AI could be a great assistant in this effort.