It Just Dawned On Me Series No. 4
Empathy Caught It Early -- And Saved Us Later
It just dawned on me that empathy isn’t soft. It’s smart. It’s strategic.
In AI-powered workplaces, like law firms, it might be the only thing standing between a small workaround and a massive risk event.
Here’s how one quiet pause in a training session changed everything.
Empathy Caught It Early
I was advising a midsize professional services firm exploring ChatGPT Team to help employees draft client reports more efficiently. A smart move, until a small moment shifted everything.
During a training session, I noticed a junior employee looking uneasy. So I paused and asked:
“What’s been working for you so far?”
That single moment of empathy, just giving space, led to a big reveal:
Team members had already been using their own, personal paid ChatGPT accounts to speed things up.
They copied and pasted sensitive client data -- names, dates, financials -- into unsecured tools.
Situational Empathy
Empathy in that moment meant no blame, no shame, only foresight.
It caught the risk early, earned trust, and built stronger guardrails before something broke.
Here’s the Problem
Those accounts weren’t part of the company’s security system:
🚫 No audit trail.
🚫 No control.
🚫 No governance.
The secure ChatGPT Team platform hadn’t even gone live, but the risks were already live.
Imagine one employee clicking a phishing link or having malware on their laptop.
The breach wouldn’t come from hackers. It would come from habit.
Not malicious, just human behavior. A well-intentioned workaround under pressure.
What We Did Next
Empathy did not lead to blame or shame; it led to better governance.
We paused the rollout and built the right foundation:
✅ Clear rules on what can and can’t go into AI tools.
✅ Pre-approved prompt templates.
✅ Systems that track who accesses what, and when.
✅ Secure storage for AI-related documents (not in inboxes or desktops).
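To make the first rule concrete: a simple pre-submission screen can flag obviously sensitive strings before a prompt ever leaves the firm. This is a minimal Python sketch, not the firm's actual tooling; the `screen_prompt` helper and the patterns are hypothetical, and a real deployment would rely on a vetted DLP product.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted DLP tool.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a draft prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Client John Doe, SSN 123-45-6789, reported a loss in Q3."
findings = screen_prompt(draft)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}")
```

Even a lightweight check like this turns "what can and can't go into AI tools" from a policy memo into an enforceable step.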
Here’s the Truth
If you can’t trace it, you can’t contain it.
Lesson Learned
When something goes wrong -- and it will -- you need to show your work:
✅ Who touched the data?
✅ What tool was used?
✅ When did it happen?
That’s your safety net.
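Those three questions map directly onto a structured audit record. Here is a minimal Python sketch of what such logging could look like; the `log_ai_use` helper, its field names, and the JSONL file are illustrative assumptions, not a description of any specific product.

```python
import json
from datetime import datetime, timezone

def log_ai_use(user: str, tool: str, action: str,
               path: str = "ai_audit.jsonl") -> dict:
    """Append one structured audit record: who, what tool, and when."""
    record = {
        "user": user,
        "tool": tool,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only log: each line is one self-contained JSON record.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_ai_use("j.doe", "ChatGPT Team", "drafted client report summary")
```

When something goes wrong, an append-only record like this is what lets you show your work instead of guessing.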
Even Paid GPT Accounts & Settings Pose Risks
1. Device-Level Risk
If your team member is using a personal device (laptop, tablet, or phone):
You cannot verify whether that device is encrypted.
You do not control whether it is infected with malware.
There is no audit trail for data input (what, when, or by whom).
🚨 Risk: Even if OpenAI doesn’t store the data, a keylogger or screen capture malware could compromise it locally.
2. No Enterprise Logging or Oversight
Paid accounts are individual-facing, not enterprise-managed.
No visibility into prompts or outputs.
No version control or documentation.
Can’t fulfill discovery, compliance, or audit requirements.
🚨 Risk: In legal or regulatory contexts, this becomes a “rogue tool use” problem: AI activity happening entirely outside company controls.
3. Data Leakage via Copy-Paste Habits
If an employee:
Pastes sensitive data into ChatGPT.
Copies the AI response.
Saves it into an unsecured location (local drive, desktop, email).
🚨 Risk: You now have data fragments outside any protected system, vulnerable to mishandling or breach.
4. Misunderstanding of the “Do Not Train or Share” Setting
Even if you turn on the setting that says “Don’t use my content to train the model”:
It doesn't mean the information is fully protected.
It doesn’t follow your company’s privacy rules.
It doesn’t give you the same legal protection as using ChatGPT Team or Enterprise.
🚨 Risk: That setting only stops the AI from learning from your input. It doesn’t protect the data you put in, or how it’s handled afterward.
Bottom Line:
Paid is not secure. Personal is not controlled.
Even well-meaning employees using ChatGPT Plus with privacy settings still introduce risk if:
✅ They’re handling sensitive or regulated data.
✅ They’re working outside company-governed platforms.
✅ The organization hasn’t issued guidance, governance, or secure alternatives.
The Impact of Empathy
Empathy sharpens our response.
It brings people in without fear or shame.
It’s not just kindness; it’s diligence, trust, and smart leadership in action.
Empathy influences company culture.
Empathy isn’t a nice-to-have. It’s a signal of serious leadership in the age of AI.
It’s how resilient cultures are built -- and how risks get caught before they become headlines.
"Empathy isn’t soft. It’s strong. It brings people in, builds trust, and protects what matters before it breaks."
What’s one AI risk or blind spot you’ve caught early, or wish you had?
Let’s discover them together.
#AILeadership #Empathy #Governance #AIRisk #PrivacyRisk #ItJustDawnedOnMe #TheCyberDawn
What’s your go-to question when you're evaluating #AI risk or adoption?
AI Governance & Cybersecurity Advisor | Premium Ghostwriter for LinkedIn & Thought Leadership | JD | Award-Winning Author | Top AI Voice
What’s another question every leadership team should be asking right now? I’ll start: “If this AI tool makes a mistake -- who’s accountable?” Accountability isn’t always obvious in automated systems. But boards and teams need to make it crystal clear before something goes wrong.
I’m also seeing a big shift from simple AI chatbots to more powerful AI agents -- tools that can automate tasks, access files, and act on behalf of users. That evolution raises the stakes: more access + complexity = more risk. If your team is exploring AI agents, now is the time to build in governance, role clarity, and safeguards. I’m helping teams do just that -- designing agent systems with security, trust, and strategy at the core. Let’s talk if that’s on your radar. 🎯 #AI 🎯 #AIchatbots 🎯 #AIAgents 🎯 #Governance
If you’re leading AI initiatives and want to discuss risk-proofing your strategy, let’s connect. You can book an initial consultation through my profile link.
What’s one question you’re adding to your next AI or cyber meeting?