Enterprise Architecture in the AI Era: A 7-Layer Blueprint for Modern EA Practices

Artificial Intelligence is no longer an experimental add-on; it’s becoming a core part of enterprise strategy. For enterprise architects (EAs), this means evolving traditional architecture practices to ensure AI systems are planned, integrated, and governed effectively across the business. Scaling AI in an enterprise requires a well-defined architecture and strong governance – frameworks like TOGAF provide structure to align AI initiatives with business goals and avoid siloed, ad-hoc projects. In this article, we use a 7-layer AI architecture model as a guiding framework (from Physical infrastructure up to Applications) to explore how each layer impacts enterprise architecture practices. We’ll examine the enterprise relevance of each layer, implications for architecture methods, governance needs, and how to integrate these considerations into traditional EA frameworks (e.g. TOGAF, BIAN). The goal is to provide EAs with a strategic blueprint – complete with examples and actionable insights – for making their organizations AI-ready across technology and business dimensions.

AI Architecture in seven layers, providing a structured view from hardware to end-user applications (Physical, Data Link, Computation, Knowledge, Learning, Representation, Application). Each layer has distinct roles – from handling data and processing to training and deployment – that enterprise architects must account for in an AI-ready architecture.

Layer 1: Physical Layer – The Hardware Foundation

Enterprise relevance: The Physical Layer represents the hardware and infrastructure where AI models run. It’s the bedrock that provides the computational power and storage needed for AI workloads. In practical terms, this includes GPUs, TPUs, high-performance servers, cloud infrastructure, edge devices, and even emerging hardware like quantum computers. For enterprises, a robust physical layer is essential to handle the intensive processing demands of modern AI (e.g. training large neural networks or running complex models in real time). This layer ensures AI models have the raw horsepower they need to function effectively. In an AI-enabled enterprise, infrastructure considerations expand beyond typical CPU-centric servers to specialized accelerators and scalable cloud services.

Implications for EA practices: Enterprise architects must update technology architecture plans to accommodate AI hardware needs. This is a shift from traditional capacity planning – EAs now need to plan for GPU clusters, distributed computing frameworks, and possibly edge computing deployments if AI inferencing is happening closer to data sources (for example, AI on factory floor cameras or IoT devices). Infrastructure alignment is critical: architects should ensure data center networks, storage systems, and cloud configurations can handle high data throughput and parallel processing for AI. This might involve adopting a hybrid cloud strategy (to leverage cloud GPU elasticity while retaining some on-prem capability for sensitive data), or investing in on-premise AI acceleration hardware for low-latency needs. It also means working closely with IT operations on platform strategy – for instance, providing an internal “AI platform” or cloud environment that data science teams can use on demand, rather than isolated hardware silos.

Governance needs: At the physical layer, governance translates to capacity management, cost control, and reliability. Running clusters of GPUs or using expensive cloud AI services can quickly inflate costs, so EA governance should establish policies for resource usage (e.g. when to use on-prem GPUs vs. cloud instances) and monitoring of utilization. There’s also a sustainability angle: high-performance AI hardware consumes significant power, so architects might include energy efficiency and cooling requirements in standards. Additionally, vendor management and risk come into play – for example, if relying heavily on a cloud provider’s specialized AI chips, ensure contingency plans or multi-cloud options to avoid lock-in. EAs should incorporate these considerations into their Technology Architecture documentation (as per TOGAF’s Technology Architecture domain) and architecture principles (for example, adding a principle on “Efficient Utilization of AI Hardware”).

Integration into EA frameworks: Traditional frameworks already cover infrastructure, but now must explicitly include AI. In TOGAF terms, the Physical layer aligns with the Technology Architecture phase, requiring updates to reflect AI hardware components. TOGAF 10, for instance, has introduced more agile, modular guidance which can accommodate new tech like AI. EAs should ensure that their architecture definitions list AI hardware (GPU farms, etc.) as building blocks and that roadmaps account for scaling this layer as AI demand grows. Industry-specific frameworks like BIAN (Banking Industry Architecture Network) focus mainly on business capabilities and service domains, but even there the physical infrastructure underpins all services. In banking, an AI-driven service (e.g. fraud detection) will rely on robust infrastructure; EAs using BIAN should verify that their service deployment environments have the needed computational resources. For example, if a bank plans AI-based fraud detection, the EA might map that capability to an infrastructure architecture that includes GPU-accelerated servers or cloud services specialized for AI – ensuring the Physical layer is aligned to business needs.

Example: Consider a global insurance company that wants to deploy AI models for real-time risk assessment. Traditional servers proved too slow for the complex models. The EA team introduced GPU-accelerated computing in its data centers and allowed burst capacity on public cloud AI services for peak times. They updated their infrastructure architecture diagrams and TOGAF work products to include “AI Compute Farms” as a component. Governance policies were set so that any new AI project must estimate GPU hours needed and get approval if it would exceed a threshold, ensuring cost and capacity are monitored. This foundation enabled the company’s data scientists to train models 10× faster and deploy advanced analytics that were previously infeasible, illustrating how a well-planned Physical layer supports AI innovation.
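A governance rule like the one in this example can be made concrete in a few lines. The sketch below shows a minimal, illustrative version of the approval check described above; the threshold, function names, and estimation formula are assumptions, not the insurer's actual policy.

```python
# Hypothetical governance check: flag AI projects whose estimated GPU usage
# exceeds an approval threshold. Threshold and formula are illustrative.

GPU_HOURS_APPROVAL_THRESHOLD = 5_000  # assumed policy limit per project

def estimate_gpu_hours(num_training_runs: int, hours_per_run: float,
                       gpus_per_run: int) -> float:
    """Rough GPU-hour estimate: runs x wall-clock hours x GPUs per run."""
    return num_training_runs * hours_per_run * gpus_per_run

def requires_approval(estimated_gpu_hours: float,
                      threshold: float = GPU_HOURS_APPROVAL_THRESHOLD) -> bool:
    """Return True if the project must go to the architecture board."""
    return estimated_gpu_hours > threshold

# Example: 40 experiments, 6 hours each, on 8 GPUs = 1,920 GPU-hours
hours = estimate_gpu_hours(40, 6, 8)
print(hours, requires_approval(hours))
```

Even a simple gate like this makes cost and capacity visible to governance before a project consumes shared infrastructure.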

Layer 2: Data Link Layer – Integration and Orchestration

Enterprise relevance: The Data Link Layer connects AI models to real-world applications and data pipelines – essentially the integration fabric for AI. It covers data ingestion, API interfaces, messaging systems, and MLOps orchestration that link model outputs to business workflows. In an enterprise context, this layer ensures that AI is not an isolated “black box” but is wired into the IT ecosystem. For example, an AI model might need to pull data from a customer database and send predictions to a CRM system; the Data Link layer handles these connections (often via APIs, ETL processes, or streaming pipelines). It’s analogous to the integration layer in traditional IT, but geared toward the data flows and model serving that AI requires. As one author explains, this layer manages data pipelines and API integrations in real time, and from an EA perspective it demands emphasis on data governance, data quality, and standard integration best practices. In short, the Data Link layer is what bridges the AI logic with the rest of the enterprise architecture.

Implications for EA practices: EAs need to incorporate AI integration points into their Application and Data Architecture designs. This is a shift in capability planning: where earlier an architect focused on connecting transaction systems, now they must design for pipelines that feed data to AI models (for training or inference) and deliver AI results back to applications. That might mean establishing new integration patterns such as “model-as-a-service” APIs – internal endpoints where applications can request an AI prediction. It also involves adopting MLOps (Machine Learning Operations) practices: automated pipelines for data preparation, model training, model deployment, and monitoring. Enterprise architects should work with data engineering teams to standardize how data moves from source systems to AI platforms (e.g. using message queues, streaming platforms like Kafka, or ETL jobs) and how models are exposed (e.g. via REST APIs or microservices). By standardizing data flows across the organization, you build a strong, reliable pipeline for machine learning models – this standardization is key to scaling AI usage consistently.
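The "model-as-a-service" pattern mentioned above can be sketched as a stable, versioned interface that applications depend on instead of loading model artifacts directly. The scoring logic below is a stand-in for a real trained model; names and fields are illustrative.

```python
# Minimal sketch of the model-as-a-service pattern: consumers call a
# versioned predict() interface, so the model behind it can be swapped
# without changing the calling application.

from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class ModelService:
    name: str
    version: str

    def predict(self, features: Dict[str, Any]) -> Dict[str, Any]:
        # Stand-in inference logic; a real service would call the model runtime.
        score = 0.9 if features.get("amount", 0) > 10_000 else 0.1
        return {"model": self.name, "version": self.version, "score": score}

svc = ModelService(name="fraud-scorer", version="1.2.0")
result = svc.predict({"amount": 25_000})
print(result["score"])  # 0.9
```

In production this interface would typically sit behind a REST endpoint and an API gateway, but the architectural point is the contract: applications integrate with the interface, not the model file.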

Additionally, platform strategy comes into play at this layer. Many organizations are creating centralized AI/ML platforms that provide data pipelines, feature stores, and model serving infrastructure as a shared service. Enterprise architects should guide the development of such platforms to avoid each AI team reinventing integration flows. This aligns with the broader EA principle of reuse and shared capabilities – treating the AI integration layer as a common utility for the enterprise (much like an ESB or API gateway in traditional architecture).

Governance needs: The Data Link layer requires data governance and interface governance at a high level. Since this layer deals with moving and transforming data, EAs must ensure proper data quality checks, metadata management, and lineage tracking are in place (so that any model consuming data knows its source and quality). Governance should also cover API management – making sure that any APIs exposing AI functionality are secure, version-controlled, and monitored. For example, if multiple business units start exposing AI models via APIs, architects should enforce an API design standard and use of an API management gateway to track usage. Data privacy and compliance are crucial here: personal or sensitive data flowing into AI models must be handled according to regulations (GDPR, HIPAA, etc.), so the integration pipelines should include anonymization or encryption as needed. In terms of organizational governance, some enterprises establish an AI Integration Review as part of their architecture governance boards – i.e., any new AI system is reviewed for how it interfaces with existing systems and data, ensuring it follows architecture guidelines.
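The anonymization step mentioned above can be illustrated with a small pipeline transform. This is a sketch only: the field list, salt handling, and truncated hash are assumptions for illustration, not a compliance recommendation.

```python
# Illustrative pipeline step that pseudonymizes direct identifiers before
# records reach an AI training or inference pipeline.

import hashlib

PII_FIELDS = {"customer_id", "email"}  # assumed list of direct identifiers
SALT = "rotate-me-per-environment"     # in practice, held in a secrets store

def pseudonymize(record: dict) -> dict:
    """Replace PII fields with a salted hash so downstream pipelines stay
    joinable on the pseudonym without exposing the raw identifier."""
    out = dict(record)
    for field in PII_FIELDS & record.keys():
        digest = hashlib.sha256((SALT + str(record[field])).encode()).hexdigest()
        out[field] = digest[:16]
    return out

row = {"customer_id": "C-1001", "email": "a@example.com", "spend": 420.0}
safe = pseudonymize(row)
assert safe["spend"] == 420.0 and safe["email"] != row["email"]
```

Embedding a transform like this as a standard pipeline stage, rather than leaving it to each project, is exactly the kind of integration-layer governance EAs can enforce.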

Integration into EA frameworks: In TOGAF’s ADM, the Data Link layer would be addressed mainly in the Data Architecture and Application Architecture phases. EAs should update data architecture artifacts to include AI data pipelines and feature stores as components of the information architecture. The application architecture should include the MLOps and integration services that connect AI into applications (for instance, model serving APIs or orchestration engines). Notably, TOGAF 10 has increased focus on agile integration and even mentions API-driven approaches, which aligns well with what this layer demands. In industry-specific frameworks like BIAN, which provides standardized service domains for banks, the Data Link layer corresponds to ensuring AI functions are built into those services via well-defined interfaces. BIAN emphasizes standard APIs and modular services, enabling seamless integration of new capabilities like AI-driven automation. So a bank using BIAN could, for example, integrate a credit scoring AI model into the “Loan Origination” service domain through a defined API. The structured approach of frameworks like BIAN can facilitate AI adoption by ensuring clean, interoperable data and interfaces, which are exactly what the Data Link layer needs to function.

Example: A retail enterprise implemented a product recommendation AI model to personalize its e-commerce site. The EA team ensured this model was wrapped in an API service and integrated with the e-commerce application via the existing API gateway (instead of the application calling a model file directly). They designed a data pipeline that nightly exports updated product catalog and customer behavior data into a feature store, which the model uses for training and real-time scoring. All API calls to the recommendation service are logged and monitored. By treating the recommender as a modular service in the architecture, it became easier to swap in an improved model later without changing the core app. This example shows the Data Link layer in action – orchestrating data and model integration – and how EAs can standardize it to make AI components plug-and-play across the enterprise.

Layer 3: Computation Layer – AI Execution Engines

Enterprise relevance: The Computation Layer (Execution Layer) is where AI logic actually runs in real time. It provides the runtime environment for inference and other AI computations, whether that’s on a cloud server, an edge device, or a user’s smartphone. In essence, this layer is the “brain” performing the calculations that the AI model requires, leveraging the hardware from the physical layer. In enterprise scenarios, this might manifest as a Kubernetes cluster running containers of AI microservices, a specialized inference server for running large language models, or edge computing devices doing on-site analysis (like an AI camera doing image recognition on-premises). The key enterprise concern here is performance and scalability: ensuring that AI models can execute within the required time frames (e.g. low-latency responses for customer-facing applications) and scale to handle the workload (number of requests, volume of data) as demand grows. This layer makes AI-driven processes possible in day-to-day operations – for instance, an AI model scoring transactions for fraud in real time relies on a well-tuned computation layer.

Implications for EA practices: EAs need to architect the deployment and execution environment for AI with careful attention to throughput, latency, and reliability. This is where infrastructure alignment meets application performance. Architects might have to introduce new components like distributed computing frameworks or inference platforms. For example, if the business plans to use deep learning models widely, an EA might propose an internal GPU-enabled Kubernetes cluster specifically for model serving. If models need to run in multiple regions or at edge locations, the architecture may include content delivery networks or edge servers. In designing the computation layer, scalability and flexibility are paramount: AI workloads can be spiky or grow rapidly as adoption increases. Thus, architects often favor containerization and orchestration (Docker, Kubernetes) to allow flexible scaling, or serverless architectures for AI (where a function can scale out automatically on cloud when triggered by events). One LinkedIn article on the 7-layer model notes that architects should define scalable compute clusters or containerized environments that can handle diverse AI workloads – whether deep learning tasks or simpler ML – and enable resource elasticity for faster time-to-market. This may also involve leveraging cloud services like AWS SageMaker endpoints or Azure Machine Learning endpoints for model hosting as part of the enterprise architecture, especially for organizations that want to avoid managing infrastructure directly.

Furthermore, this layer overlaps with DevOps and IT operations. EAs should ensure that the monitoring and management of AI services are integrated into the enterprise’s IT operations toolchain. That means including AI services in logging, monitoring, alerting systems (for example, using APM tools to watch model latency and failure rates) and in deployment automation (CI/CD pipelines that push new model versions into production). The result is that AI execution isn’t a “black box” but is treated with the same rigor as any mission-critical application component. In terms of capability planning, organizations may need to develop new capabilities such as ModelOps or AIOps – teams or tools specialized in managing the runtime aspects of models (scaling them, optimizing performance, etc.). Enterprise architects might drive the creation of an “AI Operations” function that ensures this computation layer is always optimal and aligned with business SLAs.

Governance needs: Governance at the computation layer is about performance, continuity, and compliance. EAs should set architecture standards for AI deployments – for instance, requiring redundancy for critical AI services (so no single point of failure in inference servers), or specifying latency targets that architectures must meet for given use cases. If an AI service slows down or crashes, it could disrupt business processes, so architects need to plan for failover strategies (like fallback to a simpler rule-based system if the AI service is unavailable). Security is also crucial: AI services running in production must be secured against unauthorized access or misuse (especially if they are accessible via network). This layer should follow the enterprise security architecture guidelines: e.g., ensure that only authorized applications can call the model service, and that data passed in/out is encrypted if needed.
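The fallback strategy described above can be sketched as a simple degradation pattern: if the AI service is unreachable or too slow, a conservative rule-based check makes the decision instead. The service call and rule thresholds below are placeholders.

```python
# Sketch of graceful degradation: fall back to rule-based logic when the
# AI inference service fails. Rules and fields are illustrative.

def rule_based_fraud_check(txn: dict) -> bool:
    # Conservative fallback rules (illustrative thresholds)
    return (txn.get("amount", 0) > 5_000
            or txn.get("country") != txn.get("home_country"))

def score_transaction(txn: dict, ai_service=None) -> dict:
    try:
        if ai_service is None:
            raise ConnectionError("AI service unavailable")
        score = ai_service(txn)           # a real call would carry a timeout
        return {"flagged": score > 0.8, "source": "model"}
    except (ConnectionError, TimeoutError):
        return {"flagged": rule_based_fraud_check(txn), "source": "rules"}

# With the AI service down, the rules still produce a decision:
decision = score_transaction({"amount": 9_000, "country": "DE",
                              "home_country": "DE"})
print(decision)
```

Recording the `source` field also supports the audit-logging requirement: downstream systems can see whether a decision came from the model or the fallback.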

From a compliance perspective, if the AI computations are involved in regulated decisions (like credit scoring, medical diagnoses), governance might require audit logging of each inference and the ability to reproduce results. The EA should include such requirements in the design (for example, logging input and output of models or versioning the models used for each request). Some organizations implement a Model Registry as part of governance – a system that tracks which model version is deployed where, ensuring traceability. This Model Registry ties into the computation layer by feeding the correct models into the runtime environment and keeping a history (useful for governance and debugging). Ensuring transparency and accountability in automated decisions is an emerging challenge, and architects must bake in the necessary oversight mechanisms (like monitoring for model drift or bias at runtime).
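The Model Registry concept above can be reduced to its essentials: a mapping of which version is live in which environment, plus an append-only history for traceability. Real deployments would use a registry product (MLflow, for example) or a database; this in-memory sketch shows only the core idea.

```python
# Minimal in-memory model registry: current deployments plus an
# append-only audit trail of every promotion.

from datetime import datetime, timezone
from typing import Optional

class ModelRegistry:
    def __init__(self):
        self.deployments = {}   # (model, environment) -> current version
        self.history = []       # append-only audit trail

    def deploy(self, model: str, version: str, environment: str) -> None:
        self.deployments[(model, environment)] = version
        self.history.append({
            "model": model, "version": version, "environment": environment,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def current(self, model: str, environment: str) -> Optional[str]:
        return self.deployments.get((model, environment))

registry = ModelRegistry()
registry.deploy("credit-scorer", "2.0.1", "prod")
registry.deploy("credit-scorer", "2.1.0", "prod")
assert registry.current("credit-scorer", "prod") == "2.1.0"
assert len(registry.history) == 2  # full history retained for audits
```

The history makes it possible to answer the governance question "which model version produced this decision on that date," which is often a regulatory requirement.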

Integration into EA frameworks: In TOGAF terms, the computation layer spans Technology and Application Architecture. It deals with technology infrastructure (servers, containers, etc.) but also with the application behavior (the running AI services). EAs should document the design patterns for AI deployment in their architecture repository – e.g., a standard pattern for “Real-time inference service” which details how an AI microservice should be built, deployed, monitored in the enterprise. This becomes part of the EA standards and building blocks library. Frameworks like TOGAF encourage defining such platform services in the technology architecture – here, the platform service would be an “AI model execution environment” that other solutions can reuse. When aligning with something like BIAN, this layer would be an underlying platform that supports various service domains. BIAN doesn’t dictate runtime, but a BIAN-aligned bank could specify that all AI components (fraud detection, customer insight, etc.) use a common execution platform for consistency. This follows the BIAN philosophy of not duplicating capabilities: one scalable AI execution platform rather than each department running their own. In summary, architectural alignment means EAs explicitly plan and include the AI execution layer in their enterprise tech landscape, rather than leaving it as a by-product of individual projects.

Example: A fintech company needed to deploy an AI model that predicts transaction fraud within milliseconds during payment processing. The EA team helped design a high-throughput, low-latency inference architecture: they containerized the model and deployed it on an autoscaling Kubernetes cluster with GPU nodes. They placed instances in multiple geographic regions to be close to users (reducing latency) and used a global load balancer. They also integrated model inference logging into the existing monitoring dashboard. When the model was first rolled out, a surge of usage occurred; thanks to the elastic design, the system scaled out and met the demand without performance degradation. The EA’s foresight in the Computation layer – preparing a flexible, robust execution environment – was critical to meeting business requirements for speed and reliability.

Layer 4: Knowledge Layer – Reasoning and Context

Enterprise relevance: The Knowledge Layer (Reasoning Layer) adds intelligence to AI systems by incorporating contextual knowledge and reasoning capabilities beyond the core model’s training. In practice, this often means connecting AI models with external information sources, knowledge bases, or rules so that the AI can retrieve facts and make informed decisions using more than just its learned parameters. Modern examples include retrieval-augmented generation (RAG) for large language models (where the AI fetches relevant documents from a knowledge base to ground its answers) and the use of knowledge graphs or ontology databases to provide structured context. In an enterprise, this layer is immensely relevant because organizational knowledge is a strategic asset – think of all the databases, documents, and expertise that AI can leverage. A customer service AI, for instance, might use a knowledge layer to look up current inventory or a customer’s purchase history when answering an inquiry. This layer transforms raw data and model outputs into meaningful, context-aware intelligence by retrieving facts, checking against policies, or applying logical rules. It essentially embeds the enterprise’s domain knowledge into AI operations, which can vastly improve accuracy and relevance of AI outputs.

Implications for EA practices: EAs should incorporate knowledge management and retrieval systems as first-class components of their AI architecture. Traditionally, many enterprises have data warehouses, business intelligence, and content management systems – now, architects must consider how these can interface with AI or perhaps be extended into new forms like enterprise knowledge graphs or vector databases for semantic search. For example, an EA might propose building an enterprise knowledge graph that links customer data, product data, and support documents; AI systems (like an AI assistant for employees) could query this graph to answer complex questions. Implementing a knowledge layer often requires interdisciplinary collaboration: architects will work with knowledge management teams or data governance teams to design ontologies and data schemas that AI can easily tap into. As one EA expert noted, teams must ensure the right structures and ontologies are in place to make knowledge accessible and reusable across different business functions. This might involve adopting semantic standards (RDF, OWL for knowledge graphs) or deploying new infrastructure like a vector search engine (e.g. Elasticsearch with vector capabilities, Pinecone, Weaviate) to enable AI to find relevant unstructured information.

Another aspect is business rules and logic. In some cases, the knowledge layer could include rule engines or reasoning engines that apply company policies or constraints on the AI’s output. For instance, if an AI model recommends a financial trade, a rule-based system might check compliance rules before final approval. EAs should ensure that such rule engines or knowledge repositories are part of the architecture blueprint so AI decisions remain within accepted boundaries.
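The trade-approval scenario above can be sketched as a post-inference policy gate: the AI's recommendation only proceeds if it passes explicit business rules. The rules and limits below are illustrative, not real compliance policy.

```python
# Sketch of a rule-based gate applied after AI inference: a recommendation
# is approved only if it violates no explicit policy rules.

def compliance_rules(trade: dict) -> list:
    """Return a list of rule violations; an empty list means approved."""
    violations = []
    if trade["notional"] > 1_000_000:
        violations.append("exceeds single-trade notional limit")
    if trade["instrument"] in {"restricted-bond-x"}:
        violations.append("instrument on restricted list")
    return violations

def approve_ai_recommendation(trade: dict) -> dict:
    violations = compliance_rules(trade)
    return {"approved": not violations, "violations": violations}

decision = approve_ai_recommendation(
    {"notional": 2_500_000, "instrument": "equity-abc"})
print(decision)
```

Keeping the rules in a separate, human-auditable component, rather than inside the model, is what keeps AI decisions within accepted boundaries even as models change.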

Governance needs: The knowledge layer raises significant governance questions around content management, accuracy, and ethics. First, data governance is crucial: any knowledge base feeding AI must be maintained – information should be up-to-date, accurate, and validated. If the AI is pulling from a knowledge source with outdated or incorrect data, it could automate poor decisions. EAs should advocate for ownership of each knowledge repository (who curates the knowledge graph? who updates the FAQs used by the chatbot?) and set governance processes for regular updates and quality checks. Second, access control is important: not all knowledge should be available to all AI or users. For example, an AI might have access to sensitive HR data when serving an internal HR assistant but not when serving a customer-facing chatbot. Architects should design the knowledge layer with appropriate security and filtering mechanisms, ensuring compliance with data privacy (certain personal data might never be allowed into the AI’s knowledge retrieval due to regulations).

Another governance aspect is traceability and trust. When AI uses external knowledge to make a decision or give an answer, it’s often important to trace where that information came from – especially in regulated fields. The architecture might include audit logs or even AI explanations that reference the source (e.g., “According to policy document XYZ, the answer is…”). This overlaps with the emerging field of XAI (explainable AI). EAs don’t implement AI algorithms, but they can ensure the architecture captures and surfaces the contextual info that led to an AI decision. Additionally, architects should be mindful of bias or gaps in knowledge: if the knowledge base itself is biased or incomplete, the AI’s outputs will be too. A governance committee for AI (which many organizations are forming) should include oversight of knowledge sources – deciding what data is authorized for AI to use and vetting new sources.

Integration into EA frameworks: In traditional EA, knowledge management might have been considered part of the data architecture or even business architecture (as corporate knowledge, process knowledge, etc.). Now, it’s tightly coupled with technology implementation for AI. Within TOGAF’s architecture domains, the Knowledge layer is somewhat cross-cutting: it involves Information Systems Architecture (data architecture, for structuring the knowledge, and application architecture, for the services that retrieve and reason) as well as tying into Business Architecture (ensuring AI aligns with business knowledge and rules). EAs should incorporate elements like knowledge graphs, enterprise search indexes, or AI reasoning services into their target architectures. For example, during the TOGAF ADM phases, when developing Data Architecture, one should include models for how unstructured data and documents are linked for AI consumption. When developing Application Architecture, one might include a “Knowledge API” or a cognitive search service as a component that other applications (and AI models) can call.

If using a framework like BIAN in banking: BIAN doesn’t explicitly define a “knowledge layer,” but it does define service domains for things like Analytics or reference data management. EAs can map AI knowledge components into those existing structures (e.g., a “Customer Insights” service domain might be partly realized by a knowledge graph that connects customer data across accounts). BIAN’s emphasis on data consistency across service domains complements the knowledge layer concept – consistent, well-governed data is easier to turn into a useful knowledge source for AI reasoning. In summary, architects should ensure that their enterprise architecture documentation explicitly shows how knowledge and context are made available to AI solutions, so it’s not left to individual projects to figure out.

Example: A global consulting firm built an internal AI assistant to help consultants quickly find information (like past project documents, expert contacts, research). The EA team realized that for the assistant to be effective, they needed a robust knowledge layer. They designed an Enterprise Knowledge Hub – a combination of a document repository, a metadata catalog, and a vector search engine that could semantically search through all internal documents. They also integrated a knowledge graph that linked experts to topics and past projects. With this architecture, when a consultant asks the AI assistant a question, the system retrieves relevant past proposals or reports and uses that to formulate an answer. The EA team established governance where content owners for each practice area must regularly tag and update documents in the repository. The outcome was that the AI assistant could provide context-rich answers (citing the specific document it used), greatly improving consultants’ access to the firm’s collective knowledge. This illustrates how a well-architected Knowledge layer can turn corporate data into a powerful asset via AI.

Layer 5: Learning Layer – Model Training and Evolution

Enterprise relevance: The Learning Layer is where AI models are trained, tested, and refined. It encompasses the processes and platforms for machine learning model development – from initial training on historical data to continuous learning with new data. In enterprises, this layer represents the engine of innovation for AI capabilities. It’s how new predictive models, classification systems, or recommendation engines are created and improved over time. While many organizations might start by using pre-trained models (from vendors or open source), eventually they often need to fine-tune those models or train their own to address proprietary business problems. This layer includes activities like data exploration, feature selection, model training (using techniques like neural network backpropagation, decision tree splitting, etc.), hyperparameter tuning, and validation. Modern AI systems (especially ones based on machine learning) are not static – they require an ongoing learning process to stay accurate as conditions change. For example, a fraud detection model needs retraining as new fraud patterns emerge. Thus, the Learning layer is crucial for keeping AI applications relevant and effective. It’s responsible for developing and refining AI models, ensuring they learn from data effectively before those models are deployed into production.

Implications for EA practices: Enterprise architects traditionally haven’t been deeply involved in R&D or software development processes – their focus was on integrating off-the-shelf systems or custom-developed software into a cohesive architecture. But with AI, architecture and development are closely intertwined because the performance of an AI system heavily depends on how it’s trained and updated. EAs should therefore extend their scope to include the AI model development lifecycle as part of the enterprise architecture. This means planning for infrastructure and tools that support data scientists and ML engineers: such as scalable training environments (on-prem GPU clusters or cloud ML platforms), data lakes or warehouses for training data, and software frameworks (TensorFlow, PyTorch, scikit-learn, etc.). Many organizations set up a dedicated AI/ML Platform – EAs might drive the creation of this as a shared environment where models can be developed safely and efficiently.

A significant shift in practice here is embracing MLOps/AIOps principles: integrating model training and deployment into an automated pipeline much like DevOps revolutionized application deployment. An architect’s job is to ensure the learning processes fit into the larger IT development lifecycle. For example, architects should help establish processes where model code and configurations are version-controlled (perhaps integrated with Git), data for training is accessible in a governed way, and once a model is trained and validated, it can automatically proceed to deployment (with appropriate reviews). Continuous integration/continuous deployment (CI/CD) is now joined by continuous training (CT) for AI. This also implies that EAs need to coordinate with data science teams, ensuring that what they build can be reliably moved into production environments (the computation layer) without massive re-engineering.
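The "validated, then automatically promoted" step can be made concrete as a promotion gate in the training pipeline: a retrained candidate replaces the production model only if it beats it by a margin and clears an absolute quality floor. Metric names and thresholds below are assumptions for illustration.

```python
# Sketch of a continuous-training promotion gate: promote a candidate
# model only if it clearly improves on production. Thresholds illustrative.

MIN_IMPROVEMENT = 0.01   # require at least +0.01 AUC over production
MIN_ABSOLUTE_AUC = 0.80  # and an absolute quality floor

def should_promote(candidate_auc: float, production_auc: float) -> bool:
    return (candidate_auc >= MIN_ABSOLUTE_AUC
            and candidate_auc - production_auc >= MIN_IMPROVEMENT)

# A retrained model at 0.86 AUC replaces a 0.84 production model...
assert should_promote(candidate_auc=0.86, production_auc=0.84)
# ...but a marginal or low-quality candidate is held back for review.
assert not should_promote(candidate_auc=0.845, production_auc=0.84)
assert not should_promote(candidate_auc=0.79, production_auc=0.70)
```

In practice this check runs as a pipeline stage after validation, and a failed gate routes the candidate to human review rather than silently discarding it.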

From a capability planning perspective, the Learning layer often necessitates new roles and capabilities in the enterprise: data scientists, ML engineers, and AI model evaluators. Enterprise architects might contribute by defining these capabilities and ensuring they are recognized in the organization’s operating model. For instance, an EA could highlight the need for a “Model Ops” team as part of the target operating model in a transformation initiative. Additionally, EAs may facilitate knowledge transfer between traditional IT and data science units – helping both sides understand each other’s requirements for successful AI projects (e.g., IT needs models to be packaged in containers; data scientists need flexible data access and compute).

Governance needs: Model training must be governed just as much as model usage. Key governance aspects include: data governance in training (are we using the right data? is it representative and unbiased? do we have consent to use personal data for training?), experiment tracking (what experiments were run, with what parameters, and what results – to ensure reproducibility and accountability), and model validation/approval processes. Many enterprises establish an AI model governance board or incorporate AI into an existing architecture review board. Such a board would review new models for ethical considerations, bias, regulatory compliance, and alignment with business objectives before they move from the lab to production. For example, a bank’s governance might require that any credit decision model is tested for disparate impact on different demographic groups, aligning with fair lending laws.

Documentation and transparency are also a part of governance. EAs can encourage the use of a Model Inventory: a catalog of all models in development and production, including details like the purpose of the model, algorithms used, training data, last training date, metrics, and the owner. This inventory helps track the proliferation of models and ensures oversight. It also ties into risk management – critical models (say those impacting financial reporting or human safety) might need more rigorous checks and periodic retraining schedules mandated by governance.
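A Model Inventory entry like the one described above can be as simple as a structured record per model. This is a sketch of the idea only; the field names and the risk-tier convention are assumptions, not a standard schema:

```python
# Sketch of a Model Inventory: one catalog record per model, capturing
# purpose, training data, metrics, and ownership. Field names are
# illustrative, not a standard schema.

from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelRecord:
    name: str
    purpose: str
    algorithm: str
    training_data: str
    last_trained: date
    owner: str
    metrics: dict = field(default_factory=dict)
    risk_tier: str = "low"   # higher tiers get stricter review cycles


inventory: list[ModelRecord] = []

inventory.append(ModelRecord(
    name="churn-predictor",
    purpose="Flag customers at risk of leaving",
    algorithm="gradient-boosted trees",
    training_data="customer_360 snapshot 2024-01",
    last_trained=date(2024, 1, 15),
    owner="retention-analytics-team",
    metrics={"auc": 0.87},
    risk_tier="medium",
))

# Governance query: list higher-risk models for periodic review
for_review = [m.name for m in inventory if m.risk_tier != "low"]
print(for_review)  # ['churn-predictor']
```

The value is less in the data structure than in the queries it enables: oversight boards can ask "which critical models haven't been retrained this quarter?" against one source of truth.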

Another governance facet is resource governance during training. Training large models can use vast compute resources. Policies might be needed to control when and how long large training jobs can run, prioritization of computing resources (so one team doesn’t hog all GPUs), and cost monitoring (especially if using cloud – training can incur hefty bills). EAs, in collaboration with IT and finance, might set up cost allocation models for AI projects, to make business units aware of the expense of model training and encourage efficient practices (like using smaller subsets of data for initial experiments, etc.).

Integration into EA frameworks: Within EA frameworks, the Learning layer corresponds to the development and innovation segment of the architecture. In TOGAF’s Architecture Capability or in the Technology Architecture, one might include the “AI Development Platform” as an architecture component. The enterprise’s Technology Reference Model might be extended with ML-specific components (e.g., a “Data Science Workbench” service, a “Model Training Cluster” service). During implementation phases, TOGAF’s ADM would consider how new projects set up or use these shared services. Moreover, the Architecture Governance part of TOGAF would encompass the model governance processes described above – EAs can extend the governance model to include checkpoints for AI ethics and bias, aligning with the organization’s risk frameworks.

Frameworks like BIAN, being industry-specific, don’t delve into how models are trained, but they expect banks to have capabilities for analytics and innovation. A BIAN-aligned enterprise could map its AI training environment to, say, an “Analytics Platform” that supports various service domains (fraud, customer insight, etc.). The key is that even if frameworks don’t explicitly call out “machine learning training,” the EA should layer it in as a necessary technical capability that supports multiple business capabilities.

Example: A healthcare provider network wanted to create AI models to predict patient readmissions and optimize care. The EA team spearheaded the setup of a Machine Learning Center of Excellence (CoE) environment. They deployed a cloud-based data science platform where clinicians and data scientists could collaborate with de-identified patient data. This platform had governed data access (to comply with HIPAA regulations), pre-configured GPU compute instances for training models, and integrated experiment tracking tools. The EA also defined a workflow: any model reaching a certain accuracy would go through a peer review and an ethics review (to check for biases) before deployment. This approach sped up model development (since teams had a ready-made environment) and kept the process accountable. Over time, the hospital system’s EA repository included this ML platform as a core piece of the tech architecture, and the practice of model governance became part of its TOGAF-aligned governance model. The result was not just one-off models, but a sustainable, governed pipeline for developing many AI solutions in the future.

Layer 6: Representation Layer – Data Preparation and Feature Engineering

Enterprise relevance: The Representation Layer (sometimes called the “Feature” layer) is about preparing and transforming data into the representations that AI models can understand. In other words, it’s the data preprocessing stage that converts raw data (numbers, text, images, etc.) into features, vectors, or other encoded forms for input into models. This layer is hugely important in enterprise AI because “garbage in, garbage out” applies – the quality and format of data fed into AI will directly affect the outcomes. Enterprise data is often messy, siloed, or not immediately suitable for machine learning. The representation layer addresses this by applying processes like data cleaning, normalization, encoding categorical variables, tokenizing text, generating embeddings for words or images, scaling values, etc. For instance, converting a customer’s transaction history into a set of features (total spend last month, number of transactions, categories of purchase) so a churn-prediction model can use it. Or taking unstructured text from support tickets and turning it into numerical vectors via an embedding model so that similarity can be computed. This layer ensures AI models can interpret and analyze input data efficiently.
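The churn-prediction example above can be sketched concretely: raw transaction rows become a fixed feature vector the model can consume. The feature names and input fields are illustrative assumptions:

```python
# Sketch of the transaction-history example: raw transaction records are
# transformed into model-ready features. Field and feature names are
# illustrative.

def engineer_features(transactions: list[dict]) -> dict:
    """Turn a customer's raw transactions into a feature vector."""
    total_spend = sum(t["amount"] for t in transactions)
    categories = {t["category"] for t in transactions}
    return {
        "total_spend_last_month": round(total_spend, 2),
        "transaction_count": len(transactions),
        "distinct_categories": len(categories),
    }


raw = [
    {"amount": 120.50, "category": "groceries"},
    {"amount": 40.00, "category": "fuel"},
    {"amount": 15.25, "category": "groceries"},
]
features = engineer_features(raw)
print(features)
# {'total_spend_last_month': 175.75, 'transaction_count': 3, 'distinct_categories': 2}
```

Even this tiny transform illustrates the governance stakes: if two teams compute "total spend last month" over different windows or currencies, their models silently disagree.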

In many modern AI stacks, this includes the use of feature stores – centralized repositories of features that different models can reuse. It may also involve real-time data transformation pipelines for online inference (e.g., when a live data point comes in, such as a sensor reading, it’s normalized and transformed on the fly before feeding into a model). For enterprise architects, the representation layer underscores the point that having AI-ready data is a prerequisite for successful AI deployment. Organizations need to invest in data readiness or else their fancy AI algorithms will be starved of quality inputs.

Implications for EA practices: The Representation layer is where data architecture meets data science. EAs have always cared about data quality, data integration, and master data management – those concerns now extend to supporting analytics and AI. A shift here is that architects must plan not just for storing data, but for moving and transforming data at scale for analytical purposes. This might mean incorporating big data platforms, stream processing frameworks, and feature engineering pipelines into the enterprise architecture. For example, an EA might include a component for “Streaming Data Processor” (using tools like Apache Spark or Kafka Streams) to handle continuous data feeds that update features for AI models in real time (like updating a customer’s latest website click activity in their feature profile for a recommendation engine).

Another implication is around data lifecycle and lineage. With so many transformations from raw data to features, architects need to ensure there’s traceability (which source data contributed to this feature? how is it calculated?). This is critical for debugging models and for trust – if a model’s output is questioned, being able to trace back and explain the input features is part of an accountable AI practice. EAs should advocate for strong metadata management and perhaps the use of tools that automatically record data lineage through pipelines.

Data readiness is a theme to emphasize: enterprise architects should evaluate the maturity of the organization’s data for AI. Are data silos breaking down? Do we have a unified view of key entities (customer, product) that analytics can draw from? If not, initiatives like data warehouse modernization or data lake implementation might be prerequisites before ambitious AI projects. In many EA roadmaps, enabling AI is an explicit goal that drives data initiatives like creating a data lakehouse or adopting a data mesh approach to make data more accessible. These are architectural decisions at the representation layer level – they determine how easily data can be transformed into features.

Additionally, architects should plan for tools and platforms for feature engineering. This could mean recommending a feature store solution (e.g., Feast, Tecton, or cloud-native ones) to avoid duplication of feature work across teams. A feature store becomes part of the architecture, serving as a bridge between data engineering and model training/serving. It allows teams to consistently use the same definitions for features in training and inference, which is important for correctness. By including such a component in the enterprise architecture, EAs help institutionalize best practices in AI data prep (so each team isn’t cobbling together their own pipeline in isolation).
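The "same definitions in training and inference" property a feature store provides can be illustrated with a simple registry pattern. A real feature store (Feast, Tecton, or a cloud-native one) adds storage, serving, and point-in-time correctness; this sketch only shows the single-definition idea, and all names in it are hypothetical:

```python
# Sketch of the feature-store principle: each feature's logic is
# registered once, and both training and inference call the same
# definition. Names are illustrative; real feature stores add storage
# and serving on top of this idea.

FEATURE_REGISTRY: dict = {}


def register_feature(name: str):
    def wrap(fn):
        FEATURE_REGISTRY[name] = fn
        return fn
    return wrap


@register_feature("customer_lifetime_value")
def customer_lifetime_value(customer: dict) -> float:
    # The single, governed definition of CLV for this enterprise
    return (customer["avg_order_value"]
            * customer["orders_per_year"]
            * customer["expected_years"])


def compute(name: str, entity: dict) -> float:
    """Both the training pipeline and the online service call this."""
    return FEATURE_REGISTRY[name](entity)


cust = {"avg_order_value": 50.0, "orders_per_year": 12, "expected_years": 3}
print(compute("customer_lifetime_value", cust))  # 1800.0
```

Because every consumer goes through `compute`, a change to the CLV definition is made once, reviewed once, and propagated everywhere, which is exactly the governance property the text calls for.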

Governance needs: The representation layer’s governance is essentially data governance evolved. All the classic aspects – data quality, metadata, stewardship, security – are relevant, with additional nuance for AI. Data used for modeling must be accurate, relevant, and free of inappropriate bias. EAs, in collaboration with Chief Data Officers or data governance leads, should ensure there are policies for validating data before it’s used in model training. For example, if using demographic data, ensure compliance with anti-discrimination policies depending on the use case. Consistency is another governance point: if multiple models use a “customer lifetime value” feature, governance should ensure there is one agreed definition of how that is calculated, rather than each team defining it differently. That is precisely the kind of problem a governed feature store can solve by acting as the source of truth for approved features.

Data lineage governance means that any derived data (features) should be traceable back to source systems. This is important for audits and for understanding the impact of source data changes on AI outputs. If a source system field changes definition or quality, one needs to know which downstream features and models are affected. EAs can promote adoption of lineage tracking tools or require that all data pipelines produce lineage metadata.
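The impact-analysis question above ("if this source field changes, which features are affected?") is answerable only if pipelines emit lineage records as they run. This sketch shows the pattern under assumed names; real lineage tools capture this metadata automatically:

```python
# Sketch of pipeline steps emitting lineage metadata so derived features
# can be traced back to source fields. The record format is an
# illustrative assumption, not a specific lineage tool's schema.

LINEAGE: list[dict] = []


def transform(step: str, inputs: list[str], output: str, fn, data):
    """Run a transformation and record where its output came from."""
    LINEAGE.append({"step": step, "inputs": inputs, "output": output})
    return fn(data)


raw_amounts = [100.0, 250.0, 50.0]
total = transform(
    step="sum_spend",
    inputs=["billing_db.transactions.amount"],
    output="features.total_spend",
    fn=sum,
    data=raw_amounts,
)


def affected_by(source_field: str) -> list[str]:
    """Impact analysis: which features depend on a changed source field?"""
    return [r["output"] for r in LINEAGE if source_field in r["inputs"]]


print(affected_by("billing_db.transactions.amount"))  # ['features.total_spend']
```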

Security and privacy are crucial at this layer as well. Often, raw data includes personally identifiable information (PII) or sensitive fields that may not all be necessary for modeling. Governance might dictate pseudonymization or aggregation at the representation stage to protect privacy. For instance, an AI model might not need exact usernames or IDs, just some aggregate behavior metrics – the pipeline can strip out direct identifiers in the feature creation step. This aligns with privacy-by-design principles that architects should incorporate.
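Stripping direct identifiers in the feature-creation step, as described above, can be as simple as replacing them with a salted hash while keeping the behavioral metrics the model actually needs. The salt handling here is deliberately simplified; in practice the salt lives in a secrets manager and rotation is governed:

```python
# Sketch of privacy-by-design in the feature pipeline: direct
# identifiers are replaced with a salted hash before features leave the
# pipeline. Salt handling is simplified for illustration.

import hashlib

SALT = "rotate-me"  # illustrative; keep in a secrets manager in practice


def pseudonymize(record: dict) -> dict:
    """Strip PII, keep a stable pseudonymous key plus behavior metrics."""
    token = hashlib.sha256((SALT + record["username"]).encode()).hexdigest()[:16]
    return {
        "customer_token": token,
        "monthly_logins": record["monthly_logins"],
        "avg_session_minutes": record["avg_session_minutes"],
    }


raw = {"username": "jane.doe@example.com",
       "monthly_logins": 22,
       "avg_session_minutes": 7.5}
safe = pseudonymize(raw)
print("username" in safe)  # False: direct identifier removed
```

The stable token still lets models join records for the same customer over time, so the pipeline preserves analytical utility while removing the identifier itself.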

Finally, consider governance of data changes – models trained on historical data may behave poorly if data distributions shift. Who monitors for data drift or feature drift? It could be part of model governance, but it starts with data. EAs might help define the feedback loop where if significant drift is detected in input data, the Learning layer is triggered to retrain or the model owner is alerted.
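The drift feedback loop above can be sketched as a simple baseline comparison. Production monitoring would use proper distribution tests (population stability index, Kolmogorov–Smirnov), not just means; this shows only the shape of the loop, with illustrative data:

```python
# Sketch of a drift check: compare the mean of a live feature against
# its training-time baseline and flag when it shifts beyond a relative
# tolerance. Real monitoring uses distribution tests (PSI, KS); this
# only illustrates the feedback loop.

def drift_detected(baseline: list[float],
                   live: list[float],
                   tolerance: float = 0.2) -> bool:
    """Flag when the live mean deviates more than `tolerance` (relative)."""
    base_mean = sum(baseline) / len(baseline)
    live_mean = sum(live) / len(live)
    return abs(live_mean - base_mean) / abs(base_mean) > tolerance


training_spend = [100.0, 110.0, 90.0, 100.0]   # baseline mean: 100
current_spend = [150.0, 160.0, 140.0, 150.0]   # live mean: 150, a 50% shift

if drift_detected(training_spend, current_spend):
    print("alert: feature drift detected - trigger retraining review")
```

The alert is the hand-off point the text describes: it notifies the model owner and can open a retraining task in the Learning layer.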

Integration into EA frameworks: In TOGAF, the representation layer is squarely in the Data Architecture realm. This is where EAs define data entities, transformations, and flows. To integrate this into EA practice, architects should extend data models to include not just transactional entities but also analytical ones (for example, not just “Customer” table but also “Customer Profile Vector” or aggregated features that are stored for analytics). The architecture definition documents should describe data pipelines alongside data stores. Often, EA repositories now include diagrams of data flows for analytics, showing sources, staging, feature computation, and consumption by models – this would be a deliverable in the Data Architecture phase of ADM.

Frameworks like BIAN implicitly require strong data consistency across the business (as we saw, BIAN promotes a structured data architecture for consistency and accuracy). For a bank following BIAN, ensuring that data about customers or accounts is uniform and well-governed across service domains means AI models (fraud detection, marketing, etc.) will have reliable data to work with. BIAN doesn’t detail feature engineering, but an EA can map BIAN service domain data into a common analytics repository. For instance, all service domains that record customer interactions feed into a unified customer analytics dataset – from which features are engineered. The EA’s role is to design that integration so that data from siloed banking products comes together in the representation layer.

Example: A telecom company undertook an AI project to predict network equipment failures. Initially, the data for training came from various logs and monitoring systems, each with different formats and quality issues. The predictions were poor. The EA team intervened to improve the Representation layer: they oversaw the creation of a unified data pipeline that pulled equipment log data into a central data lake, cleaned and standardized the timestamps and error codes, and engineered features like “error rate last 24h” and “temperature variance”. They also set up a small feature store to serve these features to both the training process and the live monitoring system that would use the model. They enforced data quality checks (e.g., if a device’s data hasn’t been updated in an hour, flag it) to ensure the model wasn’t getting stale or missing data. As a result, the refined data significantly improved the model’s accuracy. This real-world case shows how much the Representation layer – often unseen by end users – can make or break an AI initiative, and why architects need to embed data preparation and governance into the overall enterprise architecture for AI.

Layer 7: Application Layer – AI Deployment and User Interaction

Enterprise relevance: The Application Layer is the interface where AI meets the end-user or business process. It’s the collection of applications, services, or user touchpoints that deliver AI-powered functionality to stakeholders (customers, employees, etc.). This could be a chatbot interface, a recommendation widget on an e-commerce site, an AI-augmented analytics dashboard, or even an entire product that is AI-driven (like a voice assistant). In essence, this layer is what makes AI tangible to the business – it’s where the outcomes of AI are embedded into operations, decision-making, or customer experiences. Without this layer, all the work in previous layers wouldn’t create value; with this layer, we translate model predictions or insights into actions and interactions. For example, an AI model might predict equipment failure (from our previous layer), but it’s the maintenance scheduling application that uses that prediction to alert a technician – thereby delivering business value. The application layer is often what aligns most closely with business capabilities and processes: e.g., “customer support” capability might be realized by an AI-driven chatbot application plus human agents.

From the enterprise perspective, this is also where change management and user adoption come into play. Introducing AI in applications might change how employees do their work or how customers engage. EAs and other leaders must design these applications to be intuitive, trustworthy, and integrated into existing workflows. A poorly integrated AI feature can cause confusion or rejection by users; a well-integrated one can enhance productivity or experience seamlessly.

Implications for EA practices: Enterprise architects need to ensure that AI capabilities are strategically woven into the application portfolio and business architecture. This is a shift from thinking of AI as a separate pilot or tool, to treating it as an integral part of the enterprise’s application landscape. Concretely, when defining target architectures or capability models, EAs should identify where AI can play a role and plan accordingly. For example, if the business has a capability for “Customer Service,” the target architecture might include an AI assistant as one of the enabling components of that capability. This means when projects are initiated to improve customer service systems, building or integrating the chatbot is part of the plan, not an afterthought.

Another implication is that architects should champion user-centric design and integration for AI in applications. The nuances of deploying AI to end-users include considerations like: How will the AI output be presented? Does the application explain or give confidence in AI suggestions (especially important for decision support systems)? How do users provide feedback or override AI decisions? These are design questions but should be informed by architectural thinking – ensuring, for instance, that the application has the ability to capture feedback data which can loop back to model improvement (closing the data flywheel). In planning an AI-enhanced application, EAs should involve stakeholders early (business users, UX designers, risk managers) to cover these aspects.

The Application layer also raises the need for cross-functional collaboration: architects working with product managers and process owners. A LinkedIn AI architecture article advises that enterprise architects collaborate with product teams to embed AI seamlessly into existing business processes, focusing on user experience, security, and alignment with organizational goals. This highlights that deploying AI isn’t purely a tech insertion; it often requires redesigning processes (business process architecture may change when AI takes on some tasks) and ensuring the AI application aligns with business objectives and policies (e.g., an AI advice tool for finance advisors must still follow compliance rules in the advice it presents).

Governance needs: At the application layer, governance is about ensuring AI-driven applications are reliable, ethical, and delivering value. First, there is the aspect of monitoring and performance: just like any application, an AI application needs SLAs, support, and life-cycle management. If an AI chatbot is now handling customer queries, who “owns” it from a business perspective and who supports it from IT? EAs should help define ownership clearly (perhaps the digital customer experience team owns it, with IT operations supporting). There should be metrics to track its performance (response time, accuracy, customer satisfaction, etc.) and processes to update or retrain it as needed.
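The performance metrics mentioned above only become governable once interaction data is rolled up into tracked KPIs. A sketch of that roll-up for a chatbot, with illustrative field and metric names:

```python
# Sketch of application-layer monitoring: roll chatbot interactions up
# into the governance KPIs discussed in the text (resolution rate,
# human-escalation rate). Field and metric names are illustrative.

def summarize(interactions: list[dict]) -> dict:
    """Aggregate an interaction log into reviewable KPIs."""
    total = len(interactions)
    resolved = sum(1 for i in interactions if i["resolved"])
    escalated = sum(1 for i in interactions if i["escalated_to_human"])
    return {
        "resolution_rate": resolved / total,
        "escalation_rate": escalated / total,
    }


log = [
    {"resolved": True,  "escalated_to_human": False},
    {"resolved": True,  "escalated_to_human": False},
    {"resolved": False, "escalated_to_human": True},
    {"resolved": True,  "escalated_to_human": False},
]
kpis = summarize(log)
print(kpis)  # {'resolution_rate': 0.75, 'escalation_rate': 0.25}
```

These aggregates are what the governance review actually consumes: a falling resolution rate or rising escalation rate is the signal to retrain the model or rework the workflow.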

Second, ethical and compliance governance is very pertinent here because this layer is where AI impacts people directly. Organizations might set guidelines for AI usage, such as requiring AI systems to disclose that they are AI to users, or ensuring there’s a human escalation path if the AI cannot handle something. If the AI application is making decisions or recommendations, governance might require transparency (for example, showing a summary of why a recommendation was made, drawn from the knowledge layer data). In regulated industries, any customer-facing AI might need approval from compliance/legal teams (e.g., a bank’s AI advisory tool might need to be vetted to ensure it doesn’t inadvertently violate financial advice regulations).

Security is another governance aspect: AI applications must be secure from threats just like any other app. Moreover, if they gather new types of data (like voice recordings in a voice assistant), the architecture must secure and manage that data properly under governance policies.

Finally, alignment with business outcomes should be governed. AI applications should have clear KPIs that tie to business value (e.g., reduced call center volume by X%, increased upsell rate by Y%). EAs and strategy teams can set up periodic reviews to ensure the AI apps are actually contributing to their intended outcomes and not drifting into merely “cool tech” with no ROI. If an AI feature is not proving its value or causes user issues, the governance process should trigger a rethink or improvement.

Integration into EA frameworks: This layer maps to the Application Architecture and Business Architecture domains in EA frameworks. In TOGAF, for instance, the Application Architecture phase is where you define the major applications needed and how they interact – AI-enabled apps should be included here, with all their integration to other systems (which we covered in Data Link). Business Architecture is where you map capabilities and processes – AI might change those maps (perhaps automating certain process steps or enabling new capabilities altogether). EAs should update business process models to show where AI is involved (e.g., a human task replaced or augmented by an AI service), and ensure organizational structures (roles, responsibilities) are adjusted accordingly.

Frameworks like BIAN would treat an AI application as a component realizing a service domain. For example, BIAN has a service domain for “Customer Dialogue” which could be realized by an AI chatbot application in part. The key integration is that AI apps still must connect correctly to core systems – which BIAN’s services architecture facilitates. By following BIAN, a bank could make sure its AI chatbot uses standardized service calls to retrieve account info or execute transactions, rather than bypassing architecture. This ensures the AI application fits in the modular architecture without introducing new silos.

Example: A regional bank rolled out an AI-driven mobile app feature that gives customers personalized financial health tips. The EA team worked closely with the retail banking product team to integrate this AI feature. They made sure the app pulls data through existing APIs (customer transaction data via the core banking API, credit score via a credit bureau API) and that the advice logic was approved by the compliance team. They also updated the customer support process: if a user has a question about the advice, the app provides a button to chat with a human advisor, with context from the AI recommendation passed along. Post-launch, the EA governance board reviewed metrics: the feature was used by 60% of customers and correlated with higher engagement, supporting the business case. Because the architects had embedded the AI into an existing app and workflow thoughtfully, it was adopted smoothly. This demonstrates how application-layer integration and governance by EA ensures AI actually delivers business value in a harmonious way, rather than being a tech demo.

Aligning the 7 Layers with EA Frameworks and Governance

Each of the seven layers above introduces new considerations into the enterprise architecture, but they shouldn’t exist in a vacuum. A key role of enterprise architects is to integrate these AI-focused layers into the broader EA frameworks that guide technology and business strategy. How do these layers map onto something like TOGAF or an industry framework, and how do we govern them collectively?

  • Mapping to TOGAF domains: Roughly, Layers 1 (Physical), 2 (Data Link), and 3 (Computation) correspond to what TOGAF would consider Technology Architecture – they are about infrastructure and technical services that enable systems. Layers 4 (Knowledge), 5 (Learning), and 6 (Representation) deal largely with data, analytics, and logic, aligning with Data Architecture (for data/knowledge) and parts of Application Architecture (for the AI/ML applications and tools used in training). Layer 7 (Application) fits into Application Architecture and Business Architecture, since it’s about delivering capabilities to users and may reshape business processes. Recognizing this mapping helps EAs slot AI considerations into the existing EA process. For example, during Business Architecture development, one should ask: “How will AI capabilities enhance or change this business process or service?” During Data Architecture: “Do we have the data pipelines and governance for the representation and knowledge layers?” and so on.

  • Capability Planning and Business Architecture: AI is both an enabler of existing capabilities and a new capability in itself. EAs might update the organization’s business capability model by adding an “AI Capability” (e.g. a capability for “Advanced Analytics & AI” under IT or shared services), by embedding AI into each relevant business capability (e.g. Sales capability now includes AI-driven lead scoring, Customer Service includes chatbot support), or both. The shift in capability planning is to explicitly plan for AI readiness in each major business area: what new competencies and platforms will they need? The enterprise may decide to build a central AI Center of Excellence that offers services to all units – that itself is a new capability that the EA should reflect in the target operating model. According to industry observations, companies that embed AI into their strategy and capabilities are pulling ahead of those that don’t. Therefore, architects should use capability-based planning to ensure AI initiatives are not siloed experiments but are tied to building long-term enterprise capabilities.

  • Platform Strategy: Many enterprises are adopting a platform approach to AI – creating common services and infrastructure that can be used across projects (as noted in several layer discussions: shared data pipelines, model platforms, knowledge hubs, etc.). This mirrors what EA has done historically with things like an Enterprise Service Bus or a common data warehouse, but now extended to AI needs. A well-defined platform strategy for AI accelerates projects and prevents duplicate efforts. EAs should outline this platform in architecture roadmaps: for instance, establishing an “AI Platform” as a strategic program. This might involve selecting standard tools (a preferred cloud AI service, a standard data science toolkit), integrating them, and governing their use. The result is an “AI layer” in the enterprise’s tech stack that is as well-managed as its ERP or CRM platforms. By doing so, new AI applications can be onboarded faster and with fewer one-off decisions.

  • Data Readiness and Quality: Across the layers, one of the biggest determinants of success is data. EAs need to work closely with data governance bodies (or help establish them) to raise the organization’s data maturity. This includes cataloguing data assets, improving data quality at the source, breaking down data silos (possibly via data integration projects or adopting data lake architectures), and ensuring metadata is captured. As Gartner has pointed out, “AI-ready data” often means having representative, comprehensive, and well-understood datasets. Enterprise architects may advocate for investing in a data foundation (like a customer 360 database, or IoT data platform) before expecting AI to yield results. This might be a tough sell when there’s excitement about quick AI wins, but it’s necessary for sustainable success. Essentially, the representation and knowledge layers will only be as good as the enterprise data underneath – a point EAs should continuously make in planning meetings.

  • Governance and Ethical AI: The introduction of AI demands augmenting the existing IT governance framework with AI governance. This doesn’t only mean model review boards, but also updating architecture principles and standards to cover AI. For example, an organization might add a principle like “AI solutions must be transparent and accountable,” which EAs then interpret into architecture requirements (audit logs, explainability features, etc.). Responsible AI guidelines from sources like government or industry consortia can be translated into architecture checkpoints. Many companies form an AI Ethics committee – EAs should interface with such groups to make sure ethical considerations (bias, fairness, transparency) are addressed by design in the architecture. A well-governed AI architecture will have oversight across all layers: data is governed (representation layer), models are validated (learning layer), knowledge sources are vetted (knowledge layer), and outcomes are monitored (application layer). By ensuring governance at each layer, the enterprise can avoid pitfalls like biased algorithms causing reputational damage or compliance violations.

Importantly, frameworks like TOGAF 10 have evolved to support AI-centric enhancements, integrating agile methods and acknowledging the use of AI tools in modeling and code generation. This means EA practitioners have more guidance than before on incorporating AI. Meanwhile, industry frameworks like BIAN explicitly list AI adoption and data-driven decision-making as key drivers for modern architecture in banking, pointing to the need for structured data architectures and governance to enable AI. EAs should leverage such guidance to justify and shape their AI architecture strategies.

Conclusion: Forging an AI-Ready Enterprise Architecture

Modern AI systems demand a holistic transformation of enterprise architecture practices. By examining the seven layers – from Physical hardware up to the end-user Application – it’s clear that supporting AI is not just about adding a tool here or there, but about shifting how we plan, design, and govern our IT landscape end-to-end. Enterprise architects are in a unique position to bridge the gap between cutting-edge AI technology and structured, strategic implementation within the business.

To summarize the key shifts and actionable steps for EAs:

  • Invest in New Capabilities: Update capability maps and IT strategy to include AI competencies. This could mean establishing a dedicated AI/ML team (or Center of Excellence) and treating data science and MLOps as core capabilities of the IT organization, not peripheral activities. For each business domain, identify how AI could enhance its capabilities, and plan for those enhancements.

  • Develop a Unified AI Platform: Formulate a platform strategy that provides common infrastructure for AI (data pipelines, model training environments, deployment frameworks). This platform approach avoids fragmented efforts. For example, create a centralized feature store or model registry that all teams use, rather than each team handling raw data and models differently.

  • Align Infrastructure and Architecture: Ensure your Technology Architecture is aligned with AI needs – include GPU clusters, high-speed networks, and scalable cloud resources in your plans. Coordinate with infrastructure and operations teams to incorporate these into IT roadmaps. Also, architect for flexibility; AI tech evolves rapidly, so design patterns (like containerization and modular pipelines) that can accommodate new tools or algorithms with minimal disruption.

  • Prioritize Data Readiness: Work with data governance to improve data quality and availability before AI deployment. This might involve consolidating data sources, cleaning datasets, and instituting data stewardship programs. Encourage the adoption of data catalogs and lineage tracking. Essentially, treat data as a strategic asset – because it is the fuel of AI. For instance, if planning a customer analytics AI, first ensure there’s a unified, cleaned customer dataset accessible to the AI.

  • Embed Governance Throughout: Expand governance processes to cover AI-specific aspects. This includes ethical guidelines (no AI without an ethics review for high-impact use cases), compliance checks (e.g. regulatory sign-off for AI in finance or healthcare contexts), and continuous monitoring of AI outcomes. Consider establishing an AI governance board that works alongside the architecture governance board. At the same time, update EA standards to include things like model documentation, bias testing results, and so on as required artifacts in solution designs.

  • Integrate with EA Frameworks: Use familiar EA frameworks (TOGAF, BIAN, etc.) as scaffolding to incorporate AI considerations. Don’t reinvent governance and modeling approaches from scratch – instead, infuse AI into existing processes. For example, during TOGAF’s ADM cycles, include AI SMEs in architecture definition workshops, and ensure AI elements are part of architecture descriptions. Leverage BIAN’s standardization in data and services to make AI integration in banking smoother (as BIAN has shown, standardized data models help in quickly plugging in AI for things like fraud detection or personalized offers).

  • Foster Collaboration: Perhaps the most practical advice is for EAs to collaborate broadly – with data scientists, IT engineers, business stakeholders, and risk managers. Enterprise architecture, at its best, is a translation layer between business strategy and technology execution. In the age of AI, architects must speak the language of data science (understand concepts of training, features, etc.) and the language of business value (identify where AI can reduce cost, increase revenue, improve customer experience). By doing so, they ensure AI initiatives are not siloed in labs but are cohesively integrated into the enterprise’s fabric.
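Two of the ideas above – a shared model registry (the platform bullet) and governance artifacts as promotion gates (the governance bullet) – can be combined in code. The following is a minimal, hypothetical sketch, not a reference to any real MLOps product; the class names, fields, and the `catalog://` data reference are illustrative assumptions. It shows the design choice of making governance checks (bias testing, ethics review) a hard precondition for production deployment rather than a parallel paperwork process.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional, Dict, Tuple

# Hypothetical registry entry: the model is registered together with the
# governance artifacts the article recommends (lineage pointer into the
# data catalog, bias-test result, ethics review sign-off).
@dataclass
class ModelRecord:
    name: str
    version: str
    owner_team: str
    training_data_ref: str                      # pointer into a data catalog / lineage system
    bias_test_passed: bool = False              # must be True before production
    ethics_review_date: Optional[date] = None   # must be set before production

class ModelRegistry:
    """Central registry so every team registers models the same way,
    instead of each team handling models differently."""

    def __init__(self) -> None:
        self._models: Dict[Tuple[str, str], ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        self._models[(record.name, record.version)] = record

    def promote_to_production(self, name: str, version: str) -> bool:
        # Governance gate: promotion is blocked until the required
        # artifacts exist, which is how "embed governance throughout"
        # becomes enforceable rather than advisory.
        rec = self._models[(name, version)]
        return rec.bias_test_passed and rec.ethics_review_date is not None

registry = ModelRegistry()
registry.register(ModelRecord(
    name="fraud-detector",
    version="1.0",
    owner_team="risk-analytics",
    training_data_ref="catalog://transactions/2024-cleaned",
))
# Promotion fails until bias testing and ethics review are recorded.
print(registry.promote_to_production("fraud-detector", "1.0"))
```

In practice the registry would be a service backed by persistent storage, and the gate would run inside a CI/CD pipeline; the point of the sketch is only that governance metadata lives with the model, not in a separate spreadsheet.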

Adapting enterprise architecture for AI is a journey, not a one-time project. It requires agility, continuous learning, and often iterative refinements of both models and architectures. However, by using a structured 7-layer perspective, EAs can methodically address each aspect of AI systems and avoid blind spots. The payoff for doing this right is significant: an AI-ready architecture can propel an organization into new realms of efficiency, insight, and innovation, while a lack of architecture can lead to disjointed efforts and high failure rates.

In conclusion, enterprise architects should embrace their evolving role – becoming enablers of AI-powered transformation. By ensuring alignment between AI capabilities and enterprise architecture, they champion a future where AI is delivered responsibly, effectively, and in lockstep with business strategy. The 7-layer AI architecture model provides a roadmap for this evolution, helping architects to systematically cover all bases from hardware to human interaction. With thoughtful planning and governance across these layers, enterprise architecture will indeed become the blueprint for AI success in the modern organization.
