Rewriting the Rules: How AI is Transforming Research, Data Science, and Beyond

AI is rewriting the rules for most organizations, including Research and Data Science. Earlier this year, I spent two days with my team at Intercom—our RAD crew (Research, Analytics, Data Science)—digging into what’s working, what’s not, and where we’re headed. Our job? Cut through noise, uncover what matters, and help Intercom build the right things for customers. The big shift: AI isn’t just changing what we do—it’s reshaping how we think, collaborate, and deliver impact. Here’s what we’ve learned and what it might mean for teams like ours.

1. Product expertise isn’t optional

If you want to make an impact in a function like RAD, you need to know the product inside out. Not a surface-level grasp. A deep, granular understanding. For us, that means knowing how our AI agent Fin works across all of its core capabilities, how Actions, Guidance, and Workflows fit into Intercom’s ecosystem, and how customers actually use these features. It’s not enough to skim the docs or sit through a demo. You need to understand the product’s purpose, how it works, how it’s instrumented, how data flows through it, and where the gaps are. Without that, you cannot move the needle on the things customers care about. For us, that’s resolution rates.

How?

  • Workflow mapping workshops with customers to map out their top use cases and rebuild their processes with AI in mind.

  • Customer Support Days where we join our CS team to answer real queries.

  • Dogfooding our product, e.g. deploying Fin live to our own website and seeing it in action.

  • Deep dives on conversation data using LLM-powered tools like topic classification and issue detection.

Takeaway: Get hands-on with your product. Know the thing you’re trying to improve better than anyone else.

Ask yourself: Do you truly understand the products you’re trying to improve?

2. LLMs are table stakes

Deeply understanding Large Language Models (LLMs) isn’t just for Data Scientists—it’s for everyone. We're upskilling fast. Data Scientists are experimenting with frontier models, testing how different LLMs perform in real-world customer support scenarios, and building reusable tools. Researchers are leaning into AI to scale their work—analyzing trends at speed, digging into diverse data sources like Gong call transcripts, and mining secondary research. This isn’t about chasing shiny new tech. It’s about staying ahead of the curve and solving problems smarter.

Real example: We built a tool to assess how ‘ready’ a customer’s content is for use with Fin. It uses LLMs and embeddings to check whether a customer’s content aligns with the questions their users are asking. Gaps between questions and available content are flagged, then summarized into a content readiness score for each customer.
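To make that concrete, here is a minimal sketch of the underlying idea, not our production tool: embed the questions end users ask and the customer's help content, then score how well the content covers those questions. The model name and the coverage threshold below are illustrative placeholders.

```python
# Minimal sketch of a content readiness score, assuming two lists of strings:
# the questions users ask, and the customer's help content. Model and threshold
# are illustrative placeholders, not the real tool's settings.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def content_readiness(questions: list[str], articles: list[str], threshold: float = 0.6) -> dict:
    """Score how well existing content covers incoming questions (0 to 1)."""
    q_emb = model.encode(questions, normalize_embeddings=True)
    a_emb = model.encode(articles, normalize_embeddings=True)
    sims = q_emb @ a_emb.T                 # cosine similarity (embeddings are normalized)
    best_match = sims.max(axis=1)          # closest article per question
    covered = best_match >= threshold
    gaps = [q for q, ok in zip(questions, covered) if not ok]
    return {
        "readiness_score": float(covered.mean()),  # share of questions with a good content match
        "content_gaps": gaps,                       # questions with no matching article
    }
```

The score is simply the share of real user questions that have a close match in the existing content; the unmatched questions double as a ready-made gap list for the customer.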

Takeaway: Start small but start now. Experiment, build, and share what you learn. 

Ask yourself: How can you experiment with LLMs to solve real problems today?

3. Customer obsession is everyone’s job

Researchers used to own the customer lens—digging into their pain points, goals, and behaviors. In an AI world, that’s everyone’s job. Data Scientists on our team are joining customer calls, presenting insights directly to customers, and feeding learnings back to the broader org. Researchers are finding new ways to bring customers closer to the company—setting up product advisory boards, hosting cross-functional insight share-outs, and experimenting with AI-generated podcasts to amplify their findings.

Why? Because AI doesn’t exist in a vacuum. It’s solving real problems for real businesses. If we don’t deeply understand their support headaches, their goals, or how their end users feel about AI, we’re building in the dark.

Takeaway: Get out of your silo. Talk to customers. Collaborate with engineers and other less typical stakeholders. The more you (and folks across your company) know about the people buying and using your product, the more impactful your work gets.

Ask yourself: When was the last time you spoke to a customer?

4. Humanizing AI is our edge

AI is great at crunching data, spotting patterns, and automating tasks. But it’s weak at subjectivity, storytelling, and empathy. That’s where we shine. Behind all AI are humans. In our case, it’s buyers evaluating our AI agent Fin and deciding whether to adopt it. It’s CS leaders watching Fin handle conversations and judging whether those responses are good enough. It’s support reps deciding when to loop in AI. These folks have needs, motivations, and feelings. Our job is to understand their perceptions and connect the dots—grounding AI’s outputs in what customers actually need and feel, while humanizing the systems we build. We’ve been working on how Fin can deliver responses that don’t just solve problems but feel right to the end user. That’s not just a technical problem—it’s a human one.

Real example: Our AI group has been running a series of A/B tests on simplified versions of Fin’s UI. The data revealed positive signals in our core metrics, but we lacked insight into why users were behaving differently. To bridge this gap, we ran observational research to explore whether these design decisions were meaningfully impacting the end-user experience.
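For context, here is roughly how a signal like that might be sanity-checked on a metric such as resolution rate. The counts below are made up and this is a generic two-proportion test rather than our actual analysis; the point is that it tells you something changed, not why.

```python
# Illustrative only: checking whether a shift in a resolution-style metric is
# statistically meaningful. All numbers are invented for the example.
from statsmodels.stats.proportion import proportions_ztest

resolved = [4120, 4390]   # resolved conversations: control vs simplified UI
total    = [9800, 9850]   # total conversations per variant

z_stat, p_value = proportions_ztest(count=resolved, nobs=total)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A low p-value says behaviour changed; the observational research is what
# explains why it changed.
```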

Takeaway: Don’t let AI’s capabilities blind you to its limits. Ensure you’re understanding the human side. Empathy and context are your edge.

Ask yourself: Are you measuring both the technical performance and human perception of your AI systems?

5. Synthesis is a superpower

Data is everywhere. Meaning isn’t. We’re doubling down on synthesis: tying together insights from multiple studies, projects, and customers to see the full picture. Instead of always running net-new research, we’re building on what we already know. It’s helping us cut through noise and spot patterns faster. For example, we recently conducted a meta-analysis of value drivers for AI agents, using transcripts from 50 customer and prospect interviews conducted in the past 6 months. This dataset included new and existing Fin customers as well as prospects actively using or searching for AI solutions. We used Google’s Notebook LM to explore themes.

This isn’t just for insights, either. On the data foundations side, we’re connecting disparate sources to build smarter systems: tools that triangulate customer feedback with usage, firmographic, and revenue data to help us decide what to build.
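As an illustration of what that triangulation can look like, here is a small pandas sketch. The tables and column names are hypothetical stand-ins, not our actual schema.

```python
# Hedged sketch: join feedback themes with usage and revenue so prioritisation
# reflects all three signals, not just mention counts. Data is illustrative.
import pandas as pd

feedback = pd.DataFrame({"customer_id": [1, 2, 3, 3],
                         "theme": ["setup friction", "pricing clarity",
                                   "setup friction", "reporting gaps"]})
usage = pd.DataFrame({"customer_id": [1, 2, 3],
                      "weekly_conversations": [120, 45, 900]})
revenue = pd.DataFrame({"customer_id": [1, 2, 3],
                        "arr": [12000, 8000, 95000]})

merged = (feedback
          .merge(usage, on="customer_id", how="left")
          .merge(revenue, on="customer_id", how="left"))

# Rank feedback themes by the revenue and usage they touch.
priorities = (merged.groupby("theme")
              .agg(customers=("customer_id", "nunique"),
                   arr_impacted=("arr", "sum"),
                   conversations=("weekly_conversations", "sum"))
              .sort_values("arr_impacted", ascending=False))
print(priorities)
```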

The common thread here is connection. Taking raw material—scattered, messy, and incomplete—and forging it into something actionable. In a world drowning in information, the ability to distill meaning from the chaos isn’t just a nice-to-have—it’s a competitive edge.

Takeaway: Focus on connection. Stop reinventing the wheel. Pull together what you already know, find the gaps, and build from there. 

Ask yourself: Are you collecting more data when you should be connecting and synthesizing what you already have?

6. Scale smart through tools & enablement

We’re not just analyzing data—we’re building with it. We’re creating internal data products and scaled tooling to amplify impact. Some of this is making its way into Intercom’s product, but most of it empowers our teams through systems like deep conversation analysis tools, content and action recommendation systems, forecasting models, and AI performance dashboards. These aren’t one-off analyses—they’re reusable systems designed to help customer-facing teams drive better outcomes for our customers, and in turn, our business.

Here’s the catch: building the tools is only 30% of the game. The real work—70% of it—lies in enablement. Success hinges on getting teams to adopt, understand, and wield these tools effectively. 

Real example: We've developed a tool that classifies messy conversations into a clean taxonomy, helping us identify gaps and opportunities in how customers use Fin. These insights power auto-generated decks for sales, saving time and enabling more strategic, AI-driven conversations with customers. To ensure our sales team is equipped to use these tools, we delivered training as part of a Fin university program for sales and set up a dedicated Slack channel with an on-call data scientist who rotates each week, so we can keep helping sales navigate these decks and insights.
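For a flavour of the classification step, here is a simplified sketch rather than the internal tool itself. The taxonomy labels and model name are placeholders, and it assumes an OpenAI API key is available in the environment.

```python
# Simplified sketch of mapping a messy transcript onto one clean taxonomy label.
# Taxonomy and model are placeholders; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
TAXONOMY = ["billing", "account access", "bug report", "how-to question", "feature request"]

def classify_conversation(transcript: str) -> str:
    """Return a single taxonomy label for a support conversation."""
    prompt = (
        "Classify the support conversation below into exactly one of these topics: "
        f"{', '.join(TAXONOMY)}. Reply with the topic only.\n\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    # Fall back to a catch-all if the model drifts outside the taxonomy.
    return label if label in TAXONOMY else "other"
```

The clean labels are what make the downstream pieces possible: aggregations by topic, gap analysis per customer, and the auto-generated decks.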

Takeaway: Building is the easy part. Spend most of your effort on enabling adoption—scale your insights by empowering others to act.

Ask yourself: How will you drive adoption after you build it?

7. Collaboration goes beyond product

The days of Research and Data Science focusing mostly on Product are over. We’re working more with Finance, Sales, CSMs, Marketing—everyone. We’re helping shape the entire customer journey, from first pitch to long-term success. Researchers and Data Scientists who lean into this cross-functional collaboration learn the most, grow the fastest, and drive the biggest impact.

Takeaway: Step out of your lane. The more you collaborate across the org, the more you’ll see—and solve—the real problems.

Ask yourself: Who outside your usual circle could you work with next?

8. Flexibility keeps us nimble

We run a centralized-dedicated partnership model at RAD: a central team with Researchers and Data Scientists dedicated to strategic areas for extended periods. It gives us deep context while letting us focus on what matters most. We ‘follow the work’. We’ve also experimented with fully embedding folks in cross-functional, customer-facing teams. There are trade-offs—less time to push forward horizontal work, but tighter alignment with day-to-day needs. We’re still finding the sweet spot, and that’s okay. Flexibility is the goal.

Takeaway: Test and adapt your operating model and ways of working as needs shift.

Ask yourself: Is your current structure holding you back or pushing you forward?

9. Foundations first 

If I could go back and invest more in one area when building data functions, it’d be foundations. Solid data roots unlock everything else—especially in an AI world. We’re doubling down here in the coming months: cleaning pipelines, standardizing taxonomies, and improving instrumentation. It’s the bedrock of impact. The potential of AI only gets realized if the data feeding it is accurate and trustworthy.
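To give a sense of what foundations work looks like in practice, here is a minimal example of the kind of automated checks that keep a dataset trustworthy. The table and column names are illustrative, not our actual warehouse schema.

```python
# Illustrative data-quality checks on a conversations extract; schema is hypothetical.
import pandas as pd

KNOWN_TOPICS = {"billing", "account access", "bug report", "how-to question", "feature request"}

def check_conversations_table(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in the extract."""
    problems = []
    if df["conversation_id"].duplicated().any():
        problems.append("duplicate conversation_id values")
    if df["resolved"].isna().any():
        problems.append("missing values in the 'resolved' flag")
    if not df["topic"].isin(KNOWN_TOPICS).all():
        problems.append("topics outside the standard taxonomy")
    if not pd.api.types.is_datetime64_any_dtype(df["created_at"]):
        problems.append("created_at is not a proper timestamp column")
    return problems
```

Checks like these are boring on purpose: run on a schedule, they catch taxonomy drift and broken instrumentation before they quietly poison everything built on top.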

Takeaway: Invest in foundations now—it pays off later.

Ask yourself: Are your data foundations strong enough to support your AI goals?

10. Blend voices to stay ahead

AI moves fast. Too fast to guess what people want or need. At Intercom, we’ve adapted our approach to Research in the last 6-12 months—historically, we invested more in understanding our existing customers. Over the past year, we’ve ramped up our understanding of prospective buyers too. The takeaway? You need both. 

We’ve learned important things from taking this dual view:

  • Prospects and existing customers care about different things.

  • How prospective customers and existing customers think differs.

  • Our customers are often at the forefront of what’s to come next with AI. 

Those differences fuel our thinking and shape what we build, as well as how we go to market. Combined, these views hint at what’s possible while keeping us grounded.

The AI landscape isn’t slowing down—new players, better models, the cusp of an S-curve. Research can’t be a one-off. It should be continuous—an always-on pulse.

Takeaway: Listen to who’s buying now and who might tomorrow. That’s how you lead, not follow.

Ask yourself: Are you hearing from both who’s buying now and who might tomorrow?

What’s next?

The honest truth is we’re still figuring it out. If we’re doing this right, today’s insights become tomorrow’s features. The RAD team is doing some of the best work of their careers—learning, evolving, increasing our speed of delivery, and blurring the lines between Research and Data Science.

One thing is clear. The teams that adapt—experimenting, upskilling, driving efficiencies with AI, and leaning into what AI can’t do—will own the future.

So, how are you evolving? What does an AI-assisted version of your org look like?


Shoutout to the RAD team — this is our collective story. Thanks to everyone who helped sharpen this. Your ideas and feedback made it stronger.
