The Bias Is Coming From Inside the AI
AI has quickly embedded itself into the way HR works—from resume screening to performance evaluations, policy writing to employee communications. For HR teams, the allure is obvious: faster processes, scalable insights, more consistent decision-making. There's also the promise, often uncritically repeated, that AI might reduce bias by removing human subjectivity from the equation.
But as many HR leaders are now realizing, that promise comes with a critical caveat:
AI doesn’t erase bias. It learns it.
It learns from the data we feed it, from the systems we've built, from the decisions we've made over time, and from the training and conversations we continue to have. If that data reflects a pattern of exclusion, the AI will quietly replicate it. If past hiring decisions favored a narrow profile of leadership (do I hear white and male?), the machine will begin to associate that profile with competence, promotion, and potential. And it will apply that logic consistently and efficiently, without pausing to ask why.
So the challenge in front of us isn’t whether we should adopt AI in HR. We already have. The challenge is how we do it well—and who we’re holding it accountable to.
AI Doesn’t Create Bias. It Reflects It.
What makes AI so effective—its ability to detect patterns at scale—is also what makes it risky in the context of equity. Bias in AI doesn't usually announce itself through offensive language or overt discrimination. Instead, it surfaces in subtler ways: in who gets scored higher, in how language is framed, in what images reflect success, in what kinds of candidates are surfaced, and what kinds are quietly filtered out.
In some cases, the bias stems from the data itself. Hiring models trained on a decade of internal decisions may learn to associate leadership potential with certain schools, certain locations, or certain names. Generative tools might adopt the biases of the text they were trained on, skewing job descriptions or feedback in ways that reinforce outdated stereotypes. Even performance review assistants can subtly reproduce the same patterns we've been trying to correct for years—praising women for being “collaborative” while calling men “strategic.”
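To make that mechanism concrete, here's a minimal, hypothetical sketch. Everything in it is invented for illustration (the synthetic data, the "zip code" proxy, the model choice); it simply shows how a screening model trained on skewed historical decisions can penalize a group even when the protected attribute is never handed to it:

```python
# Illustrative only: synthetic data showing how a screening model
# can inherit bias through a proxy feature (here, a fictional zip code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two equally qualified groups; "group" stands in for a protected attribute.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)            # true qualification, identical across groups
zip_code = group                       # the proxy: neighborhood tracks group here

# Historical decisions favored group 0 regardless of skill.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train WITHOUT the protected attribute: only skill and the proxy.
X = np.column_stack([skill, zip_code])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different zip codes:
candidates = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(candidates)[:, 1])  # markedly higher score for zip 0
```

Notice that dropping the protected attribute didn't help; the proxy carried it in anyway. "We don't collect demographic data" is not, by itself, a safeguard.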
These aren’t bugs in the system. They’re reflections of the system.
The risk isn’t just that bias shows up—it’s that it scales. Once embedded, it can replicate faster than any one human ever could.
This Is Our Lane
I know there are many HR leaders out there who may be thinking of AI as someone else's responsibility. It's technical. It's new. It feels like something for the IT team, or maybe legal, to figure out. But make no mistake: this is a people issue. And people issues are HR's domain.
AI is already shaping how we hire, develop, reward, and communicate with employees. That means HR leaders must be at the table—not just during rollout, but from the start, and especially during oversight. The tools we adopt today are shaping employee experience, opportunity, and equity tomorrow.
The good news? AI can support fairness when built and governed with care. It can remove identifying information from applications to reduce unconscious bias. It can identify patterns—like who’s receiving vague feedback or who’s being consistently passed over—and bring them to light. It can help scale fairer, more consistent practices if we teach it what fairness actually looks like.
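As a sketch of that first idea, here's what a naive redaction pass might look like. The field names and patterns below are illustrative assumptions, not how any particular vendor does it; production blind-screening tools go much further (names in free text, school names, graduation years):

```python
# Illustrative only: a naive "blind screening" pass that drops fields
# commonly tied to identity before a human or a model sees the application.
import re

IDENTIFYING_FIELDS = {"name", "email", "phone", "address", "photo_url"}

def redact_application(application: dict) -> dict:
    """Return a copy with identifying fields removed and
    email/phone-like strings scrubbed from free text."""
    redacted = {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}
    for key, value in redacted.items():
        if isinstance(value, str):
            value = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email removed]", value)
            value = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[phone removed]", value)
            redacted[key] = value
    return redacted

app = {
    "name": "Jordan Smith",
    "email": "jordan@example.com",
    "summary": "Reach me at jordan@example.com or +1 (555) 123-4567.",
    "experience": "8 years leading people operations teams.",
}
print(redact_application(app))
```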
But that doesn’t happen automatically. That happens through governance. Through training. Through ongoing evaluation and hard conversations. In other words—it happens through leadership.
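To give the "ongoing evaluation" piece some shape: one long-standing check HR can run on any screening tool's outcomes, AI-driven or not, is the EEOC's four-fifths rule for adverse impact. The numbers below are hypothetical; the point is that this is a spreadsheet-simple calculation any HR team can own:

```python
# Illustrative only: a basic adverse-impact check on screening outcomes,
# using the EEOC "four-fifths rule" as a rough screening heuristic.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratios(outcomes: dict) -> dict:
    """Each group's selection rate relative to the highest group's rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {g: r / highest for g, r in rates.items()}

# Hypothetical numbers from one quarter of AI-assisted screening.
outcomes = {"group_a": (120, 400), "group_b": (60, 400)}
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 doesn't prove discrimination, but it's exactly the kind of signal that should trigger the hard conversations described above.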
Building Systems That Reflect Our Values
Organizations that do this well won't be chasing the myth of bias-free AI. They’ll be building processes that center accountability. They’ll treat AI tools not as infallible decision-makers, but as systems that must be shaped, guided, and regularly questioned.
That work includes:

- Establishing clear governance for how AI tools are selected, deployed, and monitored
- Training the people who use these tools to question and interpret their outputs
- Evaluating outcomes on an ongoing basis for patterns of disparate impact
- Having the hard conversations when the data surfaces a problem
These are not technical tasks. They’re leadership practices.
And they’re exactly the kinds of practices HR is already equipped to lead—if we choose to.
Reframing the Real Question
There’s a statement I hear often: “AI still discriminates against certain groups of people.” And it’s true—when left unchecked, AI systems will replicate and even amplify the very same inequities we’ve spent decades trying to root out. But that fact shouldn’t lead us to reject the tools entirely. It should push us to ask better, deeper questions.
Too often, the conversation stops at “Is AI more or less biased than humans?”—as if that comparison settles the matter. It doesn’t.
The more urgent question is this: Can we build systems that hold us accountable to our best intentions, not our worst assumptions? Can we design tools that help us catch what we’ve historically overlooked—tools that don’t just mirror our past, but challenge us to create a different future?
Because if we approach this right, AI won’t simply be about speed or efficiency. It can become a mechanism for reflection. For redirection. For scaling equity with the same energy we’ve used to scale operations.
But that’s not automatic. Again... that’s leadership. That’s choice.
Will We Write the Future of Work, or Just React?
AI is here. And it’s already reshaping the workplace. The question isn’t whether HR will be affected. It’s whether we’ll lead... or just react.
HR has long been tasked with balancing operational needs with human impact. This is no different. We don’t need to become engineers or data scientists to lead responsibly. But we need to insert ourselves into the discussion. We need to stay curious, ask better questions, and stay close to the people and systems these tools are reshaping.
Because the future of AI in HR won’t be written by the tools themselves. It will be written by the people who decide how those tools are used, who they serve, and who they leave behind.
And for HR leaders who believe in building workplaces rooted in fairness, trust, and belonging, that’s not a risk.
It’s an invitation.
Theresa Fesinstine is the Founder of peoplepower.ai and a 25+ year Executive Leader in People and Culture. She is a LinkedIn Top Voice in Artificial Intelligence, received her Certificate in AI for Business Strategy from MIT in early 2023, and is a proud Adjunct Professor of AI in Business and HR Management at CUNY's City College of New York.
She works with companies of all sizes and industries; in fact, being industry-agnostic allows her to broaden the reach of her mission.
Theresa's first book, People, Powered by AI: A Playbook for HR Leaders Ready to Shape the New World of Work, is available on Amazon for pre-order and will be released in April 2025.
Join our FREE monthly peoplepower.ai Learning Clinics. Register to stay in the loop here: https://tally.so/r/3EdpWr
Interested in working with me directly? Looking for conference speakers that can speak AI to HR without the jargon? Let's talk now about 2025!
Email - theresa@peoplepower.ai