As we move into the second half of 2025, we’re seeing more enterprise teams claiming to be “AI-ready.” But being “ready for AI” and being “AI-ready” are not the same thing.
I might be “ready” for a holiday in Italy, but that doesn’t mean my bank account is. Likewise, teams might be “ready” to deploy their AI to production, but that doesn’t mean their data will be reliable enough to support it.
“Ready for AI” is a vibe. “AI-ready” is an operational model—complete with stakeholder alignment, governance, end-to-end observability, and a framework for continuous and programmatic improvement.
A data quality strategy is an AI quality strategy. Read on to learn about implementing your own AI-readiness framework, and find out what’s heating up the data + AI space this summer.
- 📚 Take a deep dive into AI-ready tactics
- 🔥 Read this month’s hot, medium, and mild takes
- 👯 Get info on upcoming events, like Big Data London
- 💡 And check out our meme of the month (of course)
What’s new?
What we’re writing about:
- Redefining AI-Ready Data for Production. What does baseline AI-readiness look like? Shane Murray, Head of AI Product Strategy at Monte Carlo, explores why we need to redefine AI-ready data as an ongoing operational model and how a reliability loop framework can help teams get there.
- Just Launched: Unstructured Data Monitoring. We just launched unstructured data monitoring, enabling data + AI teams to apply intelligent monitors to text and image fields like reviews, support tickets, descriptions, chat logs, and more—directly within our no-code monitor builder. Check it out and let us know what you think!
- Will GenAI Replace Data Engineers? And a follow up question: Do people still say “GenAI”? The short answer to both questions is no… but AI is reshaping how data engineering might look in the future. Barr Moses, CEO & Co-founder of Monte Carlo, shares her thoughts.
What we’re talking about:
What’s hot? 🌶️
We share one hot take, one medium take, and one mild take on what’s happening in the data space. Can you handle the heat?
- Hot. Grok 4 is “smarter than almost all graduate students in all disciplines." At least that’s what Elon Musk said in his chaotic announcement of the latest model… So far, it’s boasted some impressive results, but it’s not immune to bad training data. Grant Harvey at The Neuron shares everything you need to know about Grok 4, including the good, bad, and really bad.
- Medium. There are only 6 ways to evaluate a RAG system. Jason Liu published an interesting article on the unnecessary complexity of RAG evaluation. He posits that you can simply break evaluation down into its three components – question, context, answer – and examine their conditional relationships to find exactly six possibilities. What do you think?
- Mild. Context engineering is what actually makes AI magical. Grok 4 might be a hot topic right now, but it doesn’t matter how good the latest model is if the context isn’t up to par. As Boris Tane shares, two products could be doing the exact same thing, but one feels magical and the other feels like a cheap demo. The difference? Context. A good reminder!
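On that medium take: the "exactly six" falls out of simple counting. If every evaluation checks one component conditioned on another, you're enumerating ordered pairs of three items, and P(3, 2) = 6. A minimal sketch (the question/context/answer framing comes from the article; the code itself is just illustrative):

```python
from itertools import permutations

# The three components of a RAG interaction.
components = ["question", "context", "answer"]

# Each evaluation checks one component conditioned on another,
# e.g. "is the answer faithful to the context?" -> (answer | context).
# Ordered pairs of 3 items: P(3, 2) = 6 possibilities.
evals = [f"{a} given {b}" for a, b in permutations(components, 2)]

for e in evals:
    print(e)
```

Running it prints all six conditional relationships, from "question given context" down to "answer given context" – one line per evaluation type.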
Let’s meet up!
Every season is data events season – and that’s just the way we like it. Catch us here next:
- Serving Data + AI: Austin. Join us in Austin on July 22nd for dinner, drinks, and a panel with Phil Warner, Director of Data Engineering at PandaDoc. RSVP here!
- Big Data London. We’re crossing the pond for Big Data London on September 24-25. Will you be there? Stay tuned for the latest sessions and happy hours!
- AWS re:Invent. It’ll be here before we know it! Mark your calendars for AWS re:Invent, happening December 1-5 in Las Vegas.
- Data + AI Observability Technical Live Demo: Interested in data + AI observability but not sure where to start? Join us July 31st or August 14th for an inside look at how to operationalize data quality with automated monitoring, root cause analysis, real-time alerting, and more.
What we’re reading:
Here are a few articles from across the industry that piqued our interest this month.
- The 2025 AI Engineering Report. This report includes several interesting findings, including that evaluations, accuracy, and reliability are three of the top six pain points for AI engineers.
- Introduction to Model Context Protocol. Anthropic published a couple of new courses, including this one for learning how to build modular AI applications using MCP to connect Claude with external tools and data sources. A good starting place for anyone interested in developing an MCP server.
- AI Assisted Coding with Cursor AI and Opik. Ready to move beyond vibe coding? This article shares how to apply traditional software development best practices to get the most out of AI-assisted coding tools when building LLM applications.
Just for Fun
Rest in peace… but can you fix this pipeline first?
Until next month, stay reliable!