“Shouldn’t I model my entire app and data layer?” Short answer: no.
• Need integration flows? Use Lucid or IcePanel.
• Want AI-enabled data lineage? Try Architecture In Motion.
• Looking for process flows? Whip out Miro or FigJam.
Your EA tool should stay high enough to:
• Track application portfolios
• Align capability investment
• Surface risks, gaps, and opportunities
Don’t model what your teams already know. Model what helps you decide.
Why you shouldn't model your entire app and data layer
CTOs, Engineering VPs, and CIOs: if you optimize the wrong metrics, how can you measure improvement in developer productivity? You might think, “we measure commits, PRs, and story points.” But hundreds of micro-tasks never appear in your sprint boards:
- Email and Slack threads that derail afternoon focus blocks
- Manual workflows and context switching between 15+ daily applications
- Quick syncs that stretch into hour-long architecture calls
Your velocity reports may look stable, but you’re losing 5+ hours per developer, every week, to these engineering blind spots. You need a platform that captures complete developer workflow intelligence across every tool your teams touch. With Skan AI, get end-to-end visibility into work patterns that traditional metrics miss entirely. Unlock 360° visibility into developer work in weeks: https://hubs.ly/Q03GGzGT0
Are enterprises bottlenecked by their engineering? Or are engineering teams not close enough to the business to influence priorities? If AI can help business teams, working in their existing enterprise tools, to:
1. Build dashboards
2. Experiment with user journeys
3. Change schemas
...that's a massive win!
Last week, I ran into a bug in Power Automate: the “Run a Prompt” step in AI Builder kept adding an unexpected input item, breaking my workflow. The fix? Switching back to the Old Designer instantly resolved the issue, while the New Designer still has this bug. This got me thinking about how companies set priorities and make product decisions, especially when persistent bugs impact real users. I shared my thoughts in a quick article here - https://lnkd.in/gaeaRKE5
Want Better Conversions? Go See for Yourself. One of the most powerful principles I’ve applied to CRO comes from the Toyota Production System: Genchi Genbutsu - “go and see for yourself.” In CRO, that means stepping into your users’ shoes. Don’t just look at dashboards - watch real sessions, analyze heatmaps, and talk to users. That’s how you uncover the why behind the what. We explored how this mindset helps identify the root causes of user frustration - and how it can lead to smarter, more human-centered optimizations. 🔗 Read the full article here: https://lnkd.in/dCPUf5xv
Find the reason for user frustration on your website or online store. Read the article by my SQLI colleague Marinus Ames on #CRO strategy: https://lnkd.in/dCPUf5xv
My entire automation process in 10 steps:
1. Open a whiteboard tool
2. Map the customer journey step by step
3. Highlight where humans waste the most time
4. Circle decisions that repeat again and again
5. Sketch triggers (signup, payment, demo request)
6. Define outputs that actually matter (SQLs, calls, revenue events)
7. Draw it into a flow diagram
8. Only then open n8n
9. Connect tools + AI logic
10. Test & run
That’s how I do it. Most people open the automation tool first, then “figure it out.” That’s why it breaks. Automation is simple once you master flow diagrams first. What about you?
P.S. If you struggle with this, I built a Notion template for mapping flows. DM me “map” and I’ll share.
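One way to make steps 2-6 concrete before touching n8n is to write the map down as plain data and sanity-check it. This is only a minimal sketch of that idea; the step, trigger, and output names are illustrative placeholders, not taken from the post or the Notion template.

```python
# Hypothetical sketch: capture the journey map as plain data before
# opening an automation tool. All step/trigger/output names are
# illustrative, not the author's actual template.
from dataclasses import dataclass

@dataclass
class FlowStep:
    name: str
    trigger: str               # e.g. "signup", "payment", "demo_request"
    output: str                # e.g. "SQL", "booked_call", "revenue_event"
    repeated_decision: bool = False   # circle decisions that repeat

journey = [
    FlowStep("Lead captured", trigger="signup", output="SQL"),
    FlowStep("Demo booked", trigger="demo_request", output="booked_call",
             repeated_decision=True),
    FlowStep("Deal closed", trigger="payment", output="revenue_event"),
]

# Sanity check: every step needs a trigger and an output that matters
# before it is worth wiring up in n8n.
for step in journey:
    assert step.trigger and step.output, f"{step.name} is not automatable yet"

# Repeated decisions are the first candidates for automation.
print("Automate first:", [s.name for s in journey if s.repeated_decision])
```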
Karim Wanny from Nasdaq is such a great explainer of automating KPI dashboards. But more importantly, of making meta-prompts with best practices to automate... anything. The case study used super-scrambled data: analyze it with NotebookLM, generate scripts, then run them in Google Apps Script. Search for the errors. Feed them back in until there are no errors, across multiple platforms. Then visualize the data. Use make.com to connect Google dashboards with Google Sheets and with Source, so the charts update themselves... nice.
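The core of that workflow is a generate-run-feed-errors-back loop. Here is a rough, generic sketch of that loop, assuming a `generate_script` placeholder where the model (NotebookLM or otherwise) would go; it is not a real NotebookLM or Apps Script API.

```python
# Hypothetical sketch of the "generate, run, feed errors back" loop.
# generate_script() is a placeholder for whatever model produces the
# cleanup code; it is not a real NotebookLM API.
import subprocess
import tempfile
from typing import Optional

def generate_script(task: str, last_error: Optional[str]) -> str:
    # Placeholder: in practice the task plus the previous traceback go
    # back to the model so it can correct its own script.
    raise NotImplementedError("call your model of choice here")

def run_until_clean(task: str, max_rounds: int = 5) -> str:
    last_error: Optional[str] = None
    for _ in range(max_rounds):
        code = generate_script(task, last_error)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(["python", path], capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout        # clean run: hand output to the dashboard
        last_error = result.stderr      # feed the error back into the next round
    raise RuntimeError("still failing after retries")
```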
Question: Why are agentic systems so disruptive?
Simple answer: they turn single-shot model calls into end-to-end, tool-using pipelines that plan, execute, and adapt - so work flows without constant human handoffs.
- Orchestration: chain steps (search → read → analyze → act)
- Tool use: call APIs, browse, run code, update data
- State + feedback: keep context, retry, and self-correct
- Autonomy: trigger on events and run continuously
- UX shift: users set goals, not clicks and forms
- Efficiency: parallelize tasks, cut latency and cost
- Integrations: glue across services unlocks new workflows
All you need are the tools, and if you don't have the tool, simply ask the system to write you one and run it. If you don't have this ability, you are falling behind.
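To make the orchestration / tool use / state-and-feedback points concrete, here is a minimal agent-loop sketch. The tool registry and the `pick_action` policy are illustrative stand-ins, not any specific framework's API.

```python
# Minimal sketch of what separates an agentic loop from a single-shot
# call: keep state, pick a tool, observe the result, repeat until done.
from typing import Callable, Optional

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for {q!r}",   # stand-in for a real API call
    "calculate": lambda expr: str(eval(expr)),  # toy example only
}

def pick_action(goal: str, history: list[str]) -> Optional[tuple[str, str]]:
    # Placeholder policy: a real agent asks a model to choose the next
    # tool and arguments from the goal plus what it has observed so far.
    if not history:
        return ("search", goal)
    return None  # stop once there is at least one observation

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        action = pick_action(goal, history)
        if action is None:
            break                        # the agent decides it is done
        tool, arg = action
        observation = TOOLS[tool](arg)   # tool use: act, don't just generate text
        history.append(observation)      # state + feedback for the next step
    return history

print(run_agent("latest churn numbers"))
```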
“Did our new feature actually move the needle?” It's one of the toughest - and most important - questions in product development. Simple before-and-after comparisons don’t cut it. Metrics shift due to seasonality, market noise, or broader trends. That’s why I built an Interactive Causal Impact Dashboard.
✅ Simulate realistic A/B tests
📊 Run models like Difference-in-Differences & Bayesian Structural Time Series
📈 Visualize treatment effects and segment impact
🧠 Test assumptions automatically
This tool helps product and data teams prove causality, not just correlation - so you can make confident, data-driven decisions.
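As a worked illustration of why before-and-after comparisons mislead, here is a tiny difference-in-differences example on simulated data. The numbers are made up, and this is only one of the methods the dashboard covers; it is not the dashboard's implementation.

```python
# Minimal difference-in-differences on simulated data: a market-wide
# lift inflates the naive before/after number, while DiD recovers the
# true feature effect.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = saw the new feature
    "post": rng.integers(0, 2, n),      # 1 = after launch
})
# Everyone drifts up by 2 after launch (seasonality / market noise);
# the feature itself adds a true effect of 1.5 on top of that.
df["metric"] = (
    5 + 2 * df["post"] + 1.5 * df["treated"] * df["post"] + rng.normal(0, 1, n)
)

means = df.groupby(["treated", "post"])["metric"].mean()
naive = means.loc[(1, 1)] - means.loc[(1, 0)]                 # ~3.5, inflated
did = naive - (means.loc[(0, 1)] - means.loc[(0, 0)])         # ~1.5, the causal effect
print(f"naive before/after: {naive:.2f}, diff-in-diff: {did:.2f}")
```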
🔎 In our last video, we showed how the policy creation process can be automated by AI-driven anomaly detection. Today, let’s look at another approach to creating policies: semantic metadata discovery.
Most teams still manage policies the hard way:
➡️ Manually writing rules
➡️ Rebuilding logic when schemas change
➡️ Struggling to keep governance aligned as data evolves
Semantic metadata changes that. By interpreting schema and field-level meaning in real time, it enables:
✅ Automated rule suggestions with business context
✅ Continuous updates as data changes
✅ Enforcement without manual intervention
The result is a shift from static, one-off rules to adaptive, self-sustaining policies that are metadata-driven, context-aware, and always evolving.
Stay tuned for our next video on automating policy creation using business context.
👉 Explore product demos (https://lnkd.in/gR-NtQHr)
👉 Sign up for a free trial (https://lnkd.in/gDgakaVv)
#AgenticAI #DataManagement #DataQuality #DataObservability #AIReadyData #semanticmetadata #ruleautomation
Automated Policy Creation Driven by Semantic Metadata Discovery
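Purely as an illustration of the general idea (not the product's implementation), this sketch derives policy suggestions from field-level semantics, so that a schema change automatically picks up rules without anyone rewriting them. The patterns, rule names, and business-context strings are invented for the example.

```python
# Illustrative sketch only: suggest governance rules from column-name
# semantics so suggestions follow schema changes. Patterns and rule
# names are made up for this example.
import re

SEMANTIC_RULES = [
    (re.compile(r"email"), "mask_email", "PII: mask before sharing"),
    (re.compile(r"(ssn|social_security)"), "restrict_access", "PII: restrict to need-to-know"),
    (re.compile(r"(amount|revenue|price)"), "non_negative_check", "Quality: value must be >= 0"),
]

def suggest_policies(schema: dict[str, str]) -> list[dict]:
    """Re-run whenever the schema changes; suggestions update with it."""
    suggestions = []
    for column, dtype in schema.items():
        for pattern, rule, context in SEMANTIC_RULES:
            if pattern.search(column.lower()):
                suggestions.append(
                    {"column": column, "dtype": dtype, "rule": rule, "business_context": context}
                )
    return suggestions

# A new column appearing in the schema picks up a rule with no manual edit.
print(suggest_policies({"customer_email": "string", "order_amount": "decimal"}))
```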