Data Mesh Q&A with Accenture’s Leading Experts💬


Organizations today face constant pressure to unlock value from data at scale and boost productivity while keeping costs under control. One of the most talked-about frameworks tackling this challenge is Data Mesh.

To unpack what it really means and how companies can implement it, my colleague Thomas Fey and I sat down for a Q&A with two of Accenture’s top data experts, Matthias Kopp and Nicolas Mori.


🔍Data Mesh is a concept that has sparked much discussion in recent years. Can you tell us briefly what your definition of a Data Mesh is?

Data Mesh is a sociotechnical framework focused on organizational change, accountability, and governance designed to address the growing complexity of data management in established businesses. It emerged in response to the challenges many organizations face when trying to scale data practices within traditional, centralized models.

Data Mesh offers a different way of thinking. Instead of centralizing responsibility for data, it distributes ownership across domain teams, the groups closest to the data and the business processes that generate it. These teams treat their data as a product, responsible for its quality, discoverability, security, and usability. A central team remains to maintain and evolve the infrastructure, but not the data itself.


🌍What is the current state of Data Mesh adoption and practice?

Today, adoption is still maturing. Most companies apply core principles like domain ownership and federated governance selectively, often within hybrid architectures. In practice, we see organizations experimenting within individual domains, layering new responsibilities on top of existing systems.

What makes the difference is less about tooling and more about cultural readiness and alignment between business and data goals. To truly scale Data Mesh, organizations need translators - people who actively communicate and promote the concept internally, making it understandable and actionable across domains. Without this, benefits remain isolated.

Domain teams bear full responsibility for creating and maintaining data products. This shift in accountability requires clear incentives and sustainable funding models to avoid burnout or misalignment over time.

Finally, central coordination mechanisms remain critical. Structures like a shared data catalog and cross-functional forums - involving data stewards, architects, producers, platform teams, and consumers - help ensure alignment, quality, and reuse across the mesh.

🚫 What is the biggest misconception you experienced while talking to clients?

A common misconception about Data Mesh is that it’s just a technical solution. In reality, it represents a fundamental mindset shift. Adopting Data Mesh doesn’t mean replacing existing architectures; it’s about rethinking how data is managed, shared, and evolved.

One effective pattern we’ve seen is to maintain a shared infrastructure, like a central data lake, but slice it into domain-specific areas managed by decentralized teams. Rather than giving everyone unrestricted write access, it's more effective to define common infrastructure and communication standards, which domains then implement autonomously.

Another key aspect often overlooked is the evolution of data products. Managing versioning, release notes, and lifecycle rules should be part of the central governance model to ensure that data products remain usable, discoverable, and reliable over time.
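To make the versioning idea concrete, here is a minimal sketch of one possible lifecycle rule a central governance model might enforce: a schema change that removes or retypes a field is breaking and must bump the data product's major version. The function names, schema representation, and semantic-versioning convention are illustrative assumptions, not part of any specific tool.

```python
def requires_major_bump(old_schema: dict, new_schema: dict) -> bool:
    """Illustrative governance rule: removing a field or changing its
    type breaks downstream consumers, so it requires a major version bump."""
    for column, dtype in old_schema.items():
        if column not in new_schema or new_schema[column] != dtype:
            return True
    return False


def validate_release(old_version: str, new_version: str,
                     old_schema: dict, new_schema: dict) -> bool:
    """Check that a proposed release respects the breaking-change rule.

    Versions are assumed to follow semantic versioning ("MAJOR.MINOR.PATCH").
    """
    old_major = int(old_version.split(".")[0])
    new_major = int(new_version.split(".")[0])
    if requires_major_bump(old_schema, new_schema):
        return new_major > old_major
    return new_major >= old_major


# Adding a column is non-breaking; dropping one is breaking.
old = {"order_id": "string", "amount": "double"}
print(validate_release("1.2.0", "1.3.0", old, {**old, "currency": "string"}))  # True
print(validate_release("1.2.0", "1.3.0", old, {"order_id": "string"}))         # False
```

In a real implementation this check would typically run in the data product's release pipeline, blocking publication to the catalog until the version number matches the nature of the change.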

In practice, many organizations continue to use centralized platforms like data lakes or warehouses while applying Data Mesh principles to decentralize ownership, governance, and delivery. This hybrid approach allows teams to retain the strengths of central infrastructure - such as security, consistency, and compliance - while reducing bottlenecks and giving domains greater autonomy.

Some centralized systems remain necessary, especially for sensitive data like customer identifiable information, which must stay within a governed, compliant environment. The key is not choosing between centralization and decentralization but designing for flexibility where it makes sense.

🧭For companies exploring Data Mesh, how can they determine if it’s the right approach for them?

Data Mesh is best suited for larger, complex organizations where data is generated across many domains and the central data team has become a bottleneck. If business users are waiting weeks for reports or insights, or if your data engineers are overwhelmed trying to serve too many teams at once, that’s a strong signal that Data Mesh might help.

It’s also a good fit when domain teams have deep subject matter expertise and can take ownership of their data, treating it like a product - that is, making it well-documented, reliable, and tailored to meet real business needs. However, successful implementation also requires IT and data engineering expertise within each domain team. This often means building up capabilities locally or providing targeted support from a central team that “infuses” technical know-how where needed.

Smaller organizations with leaner teams and simpler data needs may not see the same benefits. For them, the overhead of decentralized ownership can outweigh the advantages, making a centralized model more efficient.

A good starting point is to identify specific pain points in your current data processes, such as long lead times or inconsistent data quality. Then, apply Data Mesh principles in one domain to test whether it improves delivery speed, ownership, and data value.

💡How do businesses profit from Data Mesh?

The core value lies in agility, autonomy, and alignment. When data ownership is distributed across business domains, teams can act faster and build trust in their data. Data Mesh empowers teams to take control of their own data and treat it as a product. This shift enhances accountability, speeds up access to actionable insights, and fosters stronger cross-functional collaboration.

Unlike centralized data architectures, where knowledge must often be transferred to IT or data engineering teams, Data Mesh allows domain experts to publish data without relying on a central intermediary. Because they understand the business context, they tend to include only the most relevant elements, attributes, and explanations, resulting in focused, self-describing data products that are easier to understand and use. This improves data quality and usability while reducing unnecessary processing and infrastructure strain.

The decentralized approach supports incremental rollouts and quick wins, avoiding the pitfalls of costly, high-risk “big bang” integrations. Data Mesh allows integration and reconciliation only when necessary, making it both cost-effective and adaptable. By bridging the gap between technical infrastructure and business needs, it transforms how organizations use data to drive decisions, innovation, and measurable business outcomes.

That said, businesses must also weigh the trade-offs. Data Mesh shifts the cost of ownership, skills, and maintenance to the domains. It introduces complexity in cross-domain integration and reporting, and requires strong governance, incentives, and change management to ensure interoperability and consistent data delivery across the mesh.

🏗️Implementing Data Mesh significantly impacts the organization’s operating model across several key dimensions. What changes to the organization are necessary to implement Data Mesh?

Implementing Data Mesh requires a pretty significant shift in both structure and mindset. Ownership of data moves to domain teams, which means they need to embed data experts and take full responsibility for their own data products - from quality and documentation to how the data is published, accessed, and maintained. New roles such as Data Product Owners typically emerge, working closely with a central platform team that provides self-service infrastructure, tooling, and shared capabilities.

From a process standpoint, domains are expected to treat data as a product - versioned, documented, discoverable, and self-describing - including metadata, usage context, and service level agreements. They must expose technical interfaces like APIs or query endpoints to make their data consumable for others in the organization.
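The self-describing data product described above can be sketched as a simple descriptor record that a domain would publish to a shared catalog. All field names, the example endpoint, and the completeness check below are hypothetical assumptions chosen for illustration; real platforms define their own contract formats.

```python
from dataclasses import dataclass, field

@dataclass
class DataProductDescriptor:
    """Hypothetical self-describing metadata record for a data product."""
    name: str
    domain: str
    version: str               # version of the product's schema/contract
    description: str
    endpoint: str              # API or query endpoint consumers use
    freshness_sla_hours: int   # SLA: data is no older than this many hours
    owner: str                 # contact for the Data Product Owner
    tags: list = field(default_factory=list)

    def is_discoverable(self) -> bool:
        # A product is only publishable to the catalog once its
        # core metadata is complete.
        return all([self.name, self.domain, self.version,
                    self.description, self.endpoint, self.owner])


orders = DataProductDescriptor(
    name="orders_daily",
    domain="sales",
    version="1.2.0",
    description="Daily order facts, one row per order line.",
    endpoint="https://api.example.internal/sales/orders_daily",
    freshness_sla_hours=24,
    owner="sales-data-team@example.internal",
    tags=["orders", "daily"],
)
print(orders.is_discoverable())  # prints True
```

A central catalog could reject descriptors that fail `is_discoverable()`, which is one lightweight way federated governance can enforce a shared standard without owning the data itself.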

Security and access control remain critical, but rather than being managed entirely by each domain, they should adhere to centralized security standards defined within a broader governance model. This ensures consistency, compliance, and reduced risk across the organization.

Culturally, this model encourages teams to adopt automation like CI/CD pipelines for data and to think of data as a strategic asset, not just an operational byproduct. But it’s not without challenges: it takes real investment in change management, training, and incentives to help teams adapt to the new responsibilities and ways of working.

👥What roles and responsibilities are essential in a functioning Data Mesh operating model?

Implementing Data Mesh requires not only a shift in mindset and governance but also a rethinking of roles and financial responsibilities across the organization. Traditional centralized models often obscure the true cost of data management, with expenses and effort bundled in central IT budgets. Data Mesh could make these costs more transparent by distributing ownership and accountability to domain teams.

The data mesh concept introduces new roles such as Data Product Owner, Data Product Developer, and Data Steward within each domain. These roles are responsible for the lifecycle of data products - from development to maintenance, documentation, and continuous improvement. At the same time, the central platform team evolves to become a provider of standardized self-service infrastructure, responsible for enablement, governance tooling, and cross-domain coordination.

With greater autonomy comes the need for clear cost attribution models. Domain teams not only own their data products but also become accountable for the resources they consume, including storage, processing, and platform services. To avoid friction or underfunding, organizations must define funding mechanisms that balance flexibility with fairness. This might include internal chargeback models, capacity-based budgeting, or platform-as-a-service approaches.
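One of the funding mechanisms mentioned above, an internal chargeback model, can be sketched in a few lines: meter each domain's consumption of platform resources and attribute cost at agreed internal rates. The rate card, usage figures, and domain names below are invented purely for illustration.

```python
# Hypothetical internal rate card (currency units per unit of usage).
RATES = {
    "storage_gb_month": 0.02,   # per GB stored per month
    "compute_hours": 0.35,      # per compute hour consumed
    "platform_seat": 50.0,      # per user seat on the self-service platform
}

def monthly_chargeback(usage: dict) -> float:
    """Attribute platform cost to a domain from its metered usage."""
    return round(sum(RATES[kind] * amount for kind, amount in usage.items()), 2)

# Example: a fictional sales domain's metered usage for one month.
sales_usage = {"storage_gb_month": 5000, "compute_hours": 1200, "platform_seat": 8}
print(monthly_chargeback(sales_usage))  # prints 920.0
```

Even a simple model like this makes the cost of a data product visible to the domain that owns it, which is the transparency the answer above argues Data Mesh can bring.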

Ultimately, successful cost allocation depends on transparency and shared accountability. When done right, this model incentivizes responsible data ownership, discourages overproduction, and aligns investment with business value. But without clear ownership and sustainable funding structures, Data Mesh initiatives risk collapsing under the weight of uneven responsibility and budgetary misalignment.

🔧What are the key steps or dependencies for successful implementation of a Data Mesh infrastructure?

The changes companies tend to focus on first are those to IT infrastructure. Integrating Data Mesh with existing systems is one of the key challenges organizations face when adopting this framework. Data Mesh implementations are rarely greenfield developments. The issues Data Mesh addresses are typically a result of the growth of an organization and the increasing demand for business data across multiple, disparate use cases. As such, when building a Data Mesh, organizations need to contend with their existing data infrastructure, tools, and platforms.

The process of implementing Data Mesh is iterative, requiring organizations to gradually improve how data is created, shared, and used across different departments. Rather than completely replacing current systems, Data Mesh integrates into the existing landscape by introducing principles such as decentralized data ownership, federated governance, and treating data as a product. This allows organizations to leverage their current centralized systems while empowering domain teams to manage and control the data they generate.

For example, organizations may start by applying Data Mesh principles to certain domains or use cases, experimenting with decentralized ownership and management, while continuing to use existing data warehouses or platforms for broader data storage and analysis. As these principles prove successful, they can then be scaled and integrated further into the organization, refining both the technical and cultural aspects of data management over time.

Ultimately, integrating Data Mesh with existing systems requires a gradual, thoughtful approach, where new practices are introduced step-by-step while existing systems continue to operate. This ensures that the transition to Data Mesh is smooth and sustainable, without disrupting the data flows that are critical to business operations.


🤖How do AI advancements influence the relevance or adoption of Data Mesh?

AI and Data Mesh are highly complementary. As AI advances, it demands more high-quality, domain-specific data, which is exactly where Data Mesh excels. By decentralizing data ownership and bringing it closer to domain teams, Data Mesh enables AI use cases to be developed with better business context and faster iteration.

AI doesn’t just benefit from Data Mesh, it can also enhance it. Capabilities like automated metadata generation, data quality scoring, and intelligent documentation can strengthen Data Mesh implementations. This mutual reinforcement creates a virtuous cycle: Data Mesh improves data availability for AI, and AI helps scale and optimize Data Mesh practices.

🚀What role does Data Mesh play in helping organizations scale AI use cases successfully?

Many organizations can develop promising AI proof-of-concepts (PoCs), but scaling them across the business is where they often fall short. This is rarely just a technical challenge. Traditional setups often silo data and delay delivery.

Data Mesh addresses these issues directly. By aligning data ownership with domain teams and treating data as a product, it ensures data is well-contextualized, higher quality, and more accessible. This decentralized ownership removes friction and makes it easier to deliver production-grade data for AI at scale.

The self-service infrastructure built into a Data Mesh also empowers AI teams to access and experiment with data independently, without waiting on central teams. This autonomy accelerates both model development and deployment, making AI initiatives more scalable, repeatable, and impactful across the organization.

🔮If you project yourself ahead 3 to 5 years, what role do you see Data Mesh playing in the broader data and AI landscape?

From a technological perspective, data product and contract definition approaches will converge, and consolidated SaaS solutions will fill gaps that exist today, such as easy-to-use access patterns across hyperscalers, vendors, and corporate networks. From a data management perspective, ontologies and shared (and perhaps customized) definitions of data objects, with common and sector-specific vocabulary, will ease data-based collaboration. Instead of fading away, Data Mesh may become the default operating model for organizations managing complex data at scale in the AI era. In short: AI and automation will be embedded more deeply, governance will become smarter, and tooling will be more standardized.


Feel free to reach out to us - we’d love to hear your thoughts.

Dena Karimi , Thomas Fey , Matthias Kopp , Nicolas Mori
