INBOX INSIGHTS: Strengthening Your Foundation, Ethics in AI (2025-05-14)
Watch This Newsletter
Strengthening Your Foundation
I was taking a strength class over the weekend where the instructor kept emphasizing one key point: your foundation is always supporting you. For example, even when you think you’re just working your arms, your core and legs are quietly doing the heavy lifting. Neglect that foundation (aka skipping leg day) and you’re setting yourself up for injury. As she explained this, I couldn’t help but see the perfect parallel to what I’m always telling you about business.
It hit me that this is EXACTLY what happens in organizations when they rush to implement new technology without checking if their foundations can support it.
Yikes. What a setup for a post about the 5Ps. You’re welcome.
The Foundation Isn’t Sexy - But It’s Essential
Let’s be honest. Working on your company’s foundation isn’t the glamorous part of business. Nobody’s rushing to LinkedIn to post about how they spent six months documenting processes or organizing their data architecture. Besides me, that is. We all want to jump to the exciting stuff—implementing AI, launching new products, and announcing big wins.
I get it. I’ve been there too. At a previous company, we pushed to implement a fancy new analytics platform because our competitors were using it. We were so focused on the shiny capabilities that we completely ignored our foundation. The result? We spent a fortune on a system nobody could use properly because our data was disorganized, our team lacked training, and our processes were inconsistent.
It was an expensive lesson.
How can we do better?
Using the 5Ps to Audit Your Foundation
Before you integrate any new technology (especially something as transformative as generative AI), you need to know what you’re working with. This is where the 5Ps framework comes in:
You can get your copy of the 5P Framework here
The Ongoing Work of Foundation Building
Here’s something I’ve learned the hard way: your foundation isn’t something you build once and forget about. It requires ongoing maintenance, especially as you integrate new technologies.
When we first started exploring AI at Trust Insights, we made the mistake of assuming our existing data governance would be sufficient. It wasn’t. We quickly realized we needed to revisit our data quality standards, privacy protocols, and documentation practices.
So what does ongoing foundation work look like when you’re integrating something like generative AI?
The Cost of Skipping Foundation Work
I recently spoke with a marketing director who deployed an AI content generation tool across her team without doing any foundational work. Six months later, they had inconsistent outputs, duplicate content issues, and serious brand voice problems. The technology worked exactly as designed - but without the foundation to support it, the results were chaotic.
The cost wasn’t just financial. Team morale suffered, client deliverables were delayed, and they ultimately had to pause the entire initiative to go back and do the foundational work they should have started with.
Your Action Plan
If you’re considering implementing generative AI (or any new technology), here’s a practical way to approach your foundational work:
Remember that strength instructor I mentioned? She reminded us that even professional athletes still do foundational exercises. The foundational work never stops—it just becomes more integrated into how you operate.
What foundational elements is your organization neglecting? Reply to this email to tell me, or join the conversation in our free Slack group, Analytics for Marketers.
- Katie Robbert, CEO
Share With A Colleague
Do you have a colleague or friend who needs this newsletter? Send them this link to help them get their own copy:
Binge Watch and Listen
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the crucial difference between ‘no-code’ and ‘no work’ when using AI tools.
You’ll grasp why seeking easy no-code solutions often leads to mediocre AI outcomes. You’ll learn the vital role critical thinking plays in getting powerful results from generative AI. You’ll discover actionable techniques, like using frameworks and better questions, to guide AI. You’ll understand how investing thought upfront transforms AI from a simple tool into a strategic partner. Watch the full episode to elevate your AI strategy!
Last time on So What? The Marketing Analytics and Insights Livestream, we listened to the state of the art in voice generation. Catch the episode replay here!
In Case You Missed It
Here’s some of our content from recent days that you might have missed. If you read something and enjoy it, please share it with a friend or colleague!
Paid Training Classes
Take your skills to the next level with our premium courses.
Free Training Classes
Get skilled up with an assortment of our free, on-demand classes.
Want to Sponsor This Newsletter?
You could be reaching 32k+ marketers, analysts, data scientists, and executives directly with your ad. Want to learn more? Reach out and contact us.
Data Diaries: Interesting Data We Found
In this week’s Data Diaries, let’s talk about AI ethics. I was teaching at Harvard Business School last week, and one of the students in my guest lecture asked what I thought about the ethics of AI models.
To start, we have to define what ethics even means, generally, then as applied to AI. Broadly speaking (and VASTLY oversimplified), there are three branches of ethics: deontology, which judges right and wrong by rules; consequentialism, which judges right and wrong by outcomes; and virtue ethics, which judges right and wrong by character.
The huge challenge with ethics is that right and wrong are mostly moral judgements, which in turn means they are determined by the culture you live in.
These differing philosophical leanings show up in how cultures approach complex ethical brambles like AI. A culture prioritizing consequentialism might find it ethical or at least defensible for an AI company to use vast amounts of data without permission if the societal benefit is large, even if individual rules about consent (a deontological concern) are bypassed.
Conversely, a culture strong on individual rights might lean on deontological principles to restrict such data use, irrespective of potential collective gains.
Let’s take two AI companies as examples of this challenge, DeepSeek and OpenAI. OpenAI is a Western company based in San Francisco, founded on mostly Western values, such as the individual being more important than the collective.
DeepSeek is a Chinese company based in Hangzhou, founded on Chinese and East Asian values, such as the collective being more important than the individual.
Consider the ethical question of whether an AI company has the right to use individuals’ content without permission to create a model that could cause economic harm to those same individuals. In Western cultures, this would largely be seen as unethical. Collective harm is frequently subordinated to the rights of the individual, especially in countries like the USA.
In Eastern cultures, the opposite is often true. The expectation is often that the individual subordinates their rights for the good of society, of the collective, especially in countries like Japan, Korea, and China. An AI company taking individual works to produce a product that benefits the society as a whole would be ethical in this situation.
Where this comes to a head is in AI model performance. The best models are trained on the best data (garbage in, garbage out). For AI model makers, whoever has access to the best, highest quality data will win the AI race, all other factors being equal.
Which means that the ethics of how AI models are made (from one perspective, an infringement of individual rights; from another, individual rights being less important than the collective) will be driven in part by each company and the culture that company is embedded in - and that culture will in turn be a determinant of those models’ capabilities.
So what? What does this mean for you? It means that practically speaking, until legislation exists in Western nations that prohibits the use of intellectual property for AI training without licensing or consent, there are strong incentives for all AI companies to infringe on IP rights.
It also means that economies and cultures where such legislation exists will eventually be at a technological disadvantage; the EU, for example, has access to fewer AI tools because of the EU AI Act, which places EU-based companies at a disadvantage compared to their peers in other markets.
Is there an ethical path forward? Again, the answer depends on your culture.
From a collectivist perspective, there are fewer ethical issues with AI models using your data without your express consent because in those cultures, individuals are expected to contribute to the collective good, sometimes at their own expense.
From an individualistic perspective, the ethical approach would be for AI companies (particularly those in Western cultures) to license and compensate intellectual property owners for use of their data in some fashion.
How this all plays out is less clear, and again is based firmly in our respective countries and cultures. However, one thing is clear: the best models will come from the best data, which in turn means that cultures which favor collective benefit over individual rights might have a greater advantage in the AI race - and that could very well determine, down the road, who the big winners in AI are and whose models you use to get your work done.
Trust Insights In Action
Job Openings
Here’s a roundup of who’s hiring, based on positions shared in the Analytics for Marketers Slack group and other communities.
Join the Slack Group
Are you a member of our free Slack group, Analytics for Marketers? Join 3,000+ like-minded marketers who care about data and measuring their success. Members also receive sneak peeks of upcoming data, credible third-party studies we find and like, and much more. Membership is free - join today!
Blatant Advertisement
Imagine a world where your marketing strategies are supercharged by the most cutting-edge technology available – Generative AI. Generative AI has the potential to save you incredible amounts of time and money, and you have the opportunity to be at the forefront. Get up to speed on using generative AI in your business in a thoughtful way with our workshop offering, Generative AI for Marketers.
Workshops: Bring the Generative AI for Marketers half- and full-day workshops to your company. These hands-on sessions are packed with exercises, resources, and practical tips that you can implement immediately.
Upcoming Events
Where can you find Trust Insights face-to-face?
Stay In Touch, Okay?
First and most obvious - if you want to talk to us about something specific, especially something we can help with, hit up our contact form.
Where do you spend your time online? Chances are, we’re there too, and would enjoy sharing with you. Here’s where we are - see you there?
Read our disclosures statement for more details, but we’re also compensated by our partners if you buy something through us.
Blatant Advertisement
Check out our new AI news roundup newsletter over on Substack! Get the AI news that matters to you - completely free.
Legal Disclosures And Such
Some events and partners have purchased sponsorships in this newsletter and as a result, Trust Insights receives financial compensation for promoting them. Read our full disclosures statement on our website.
Conclusion: Thanks for Reading
Thanks for subscribing and supporting us. Let us know if you want to see something different or have any feedback for us!