Recognizing and Overcoming Risks of AI for HR

AI’s readily accessed, screened, curated, and analyzed information informs decisions that transform where, when, what, and how work is done. GenAI (e.g., ChatGPT, Google Gemini, Microsoft Copilot, Perplexity AI) pervades student papers, work applications, management presentations and reports, articles, books, and even LinkedIn posts and comments. Agentic AI affects every human capability investment in talent + leadership + organization + HR function as algorithms and bots turn HR processes into products.

The positive outcomes of AI-enabled work have been well articulated: managing costs through more efficient operations and increasing revenue through more targeted customer engagement.

As the AI (r)evolution continues through waves of change, remembering an old adage might be helpful: “There is no such thing as a free lunch.” AI use comes with costs, but rather than proclaim the downsides of AI, let me discuss eight risks that, once identified, can be managed.

Eight Risks of AI

1. Information parity. Relying on AI information reduces variance. When an organization wants to invest in an HR initiative (e.g., criteria for talent acquisition, leadership development), AI can be and often is used to share what has been done by others. To upgrade plant managers, a manufacturing company used AI to quickly define seven key competencies of effective plant managers in its industry. My simple question was, “How many of your competitors have done the same exercise and identified the same seven competencies?” All of them!

AI reduces variance by sharing information (often with clever prompts) that anyone, anywhere, anytime can access. Benchmarking that might have taken a team weeks or months can now be done by an individual in minutes. But remember that a primary lesson of benchmarking is not to do what others have done or are doing. The point of benchmarking is to go beyond, seeking “next practices.” If everyone had the same competencies, for example, why should customers choose one organization over another for the product or service they want?

The risk of information parity can be overcome by using AI as a starting point and foundation, then focusing on differentiation that leads to advantage. The manufacturing company identified additional, unique plant manager characteristics consistent with its desired identity and culture.

2. Cognitive decline. Muscles grow with exercise and atrophy with disuse. Research has shown that students who rely on AI to prepare their papers experience cognitive decline. The risk increases whenever AI is depended on to produce a paper, report, presentation, or document. Cognitive decline is mitigated when AI information is coupled with human creativity and insight that lead to innovation. One firm sourced answers to problems from AI, then had groups discover how to move beyond, tailor, and implement the AI-reported answers.

3. Wrong or misleading information. Most people have used AI to generate information on a topic of personal expertise and discovered inaccurate or incomplete results. For example, have AI write your obituary or resume to see how accurate it is, especially in the details. AI may also provide misleading information. For example, we have studied HR competencies with over 100,000 respondents over 35 years, while others have done so with a convenience sample of friends on LinkedIn. Too often AI equates the two studies, which is misleading. Or, for another example, I have consistently defined competence as individual ability and capability as organizational ability in a number of books (since 1990) and articles. Yet when I ask ChatGPT to report my work, it misrepresents my thinking. Further, when others use AI-generated information, they cite the AI misrepresentation (not knowing or reading the original work), which further obfuscates ideas. Overcoming the risk of flawed information requires analytical thinking to vet the information provided.

4. (False) emotion. AI often feigns emotional connection by asking questions to further discussion, using active listening to engage, and offering affirming responses to queries. Researchers have shown the risk of AI-exclusive counseling, where clients form an emotional connection to chatbot therapists (Woebot, Wysa, Tess). To avoid the risk of false emotion, Artificial Intelligence needs to be coupled with “Authentic Intimacy” to ensure that people experience emotional support, with compassion, care, and concern, from other real humans.

5. Privacy. Information gained through engaging with AI can be and is stored. Just as Amazon knows a person’s lifestyle and habits by their purchases and Google by their searches, AI becomes a deep source of personal information about the user’s thinking through queries and engagement. The risk to data security needs to be managed by policies around confidentiality, integrity, access, and availability.

6. Fake vs. real. AI can now produce reports, videos, images, and comments that appear real even when they are fake. The percentage of bot-driven posts and comments is increasing dramatically on both X and LinkedIn (estimated at up to 50 percent). Reduce this risk by using AI content detection tools (e.g., WinstonAI, GPTZero, Grammarly AI Detector) to determine whether a comment came from a bot or a person. For example, I have discovered that some of the comments on my posts are “19% human” and more likely AI/bot generated, which informs my response (or lack thereof).

7. Living backward and recycling. AI does an incredible job curating the past, but the past is not always a good prologue to the future. Because something worked (or did not) in the past does not mean it will be effective going forward (as in the benchmarking example above). Most genAI reports on HR processes summarize what has been done, and agentic AI (bots) puts these legacy processes into proposed solutions. Replacing the past with the future means knowing the past in order not to repeat or repackage it. Overcoming the recycling risk and spiraling forward means coupling human and artificial intelligence to create new solutions that advance what has been done, grounded in the changing business context.

8. Accountability diffusion. Using AI to improve decision making is a shared responsibility that includes experts in technology, finance, HR, legal, strategy, and marketing. These participants should form an AI governance committee to shape AI strategy, allocate resources, and set policy. However, they may lack clear ownership for progress. The risk of accountability diffusion is reduced when this committee sets clear AI objectives, investments, and standards, with metrics to ensure responsibility.

These eight AI risks can be identified by using the assessment in figure 1. Once the top risks are identified, they can be managed.

Manage AI Risks to Make Progress

As I coach HR leaders on how they can contribute to AI impact going forward, I suggest the following:

  • Be an advocate of AI-enabled work. Have, and create in others, a positive mindset about how AI provides information that improves decisions and delivers stakeholder value. Be an active contributor to groups assigned to AI governance.

  • Envision AI as an enabler for work and not just a replacement of people. Replace fear of loss with opportunity for progress.

  • Help navigate the paradoxes of AI by engaging the right people in the right conversations.

  • Model the proper use of AI for yourself and encourage its correct use for all HR work.

  • Include discussion of AI risks (like the ones I’ve identified above) as part of enterprise risk management efforts.

  • Continually integrate technology and people. Don’t let AI replace IQ (intelligence quotient), EQ (emotional quotient), or SQ (social quotient); let it amplify them by continually encouraging human emotion, energy, and empathy as central to how work gets done.

  • Embed AI as an ongoing and integrated part of work, not a separate agenda.

  • Your add?

Replacing AI risks with opportunity becomes an agenda worth pursuing.

Note: AI was not used to conceive or draft this post, but it was used to clarify ideas (~10 percent), along with a real-person editor (thanks, Jess!) and a real social media advisor (thanks, Macy!). On WinstonAI (an AI detection system), this post is rated 100 percent human!


Dave Ulrich is Rensis Likert Professor Emeritus at the Ross School of Business, University of Michigan, and a partner at The RBL Group, a consulting firm focused on helping organizations and leaders deliver value.

Jerry Fluney, CHRP

Human Resources Leader | 10+ Years Empowering Teams & Transforming Workplaces | People & Culture Strategist

I was all ready to comment on the risks of AI and then in the first point, you mentioned, “Next practices”, which says in two words what I’ve been trying (and mostly failing) to get across for years. Apparently, I'm 20 years behind the times...ha ha! Thank you for introducing me to it though!

Max Yelisyeyev

Co-founder at Devstark | Business operations automation | Trucking and logistics nerd at heart

Recycling the past happens when policies live as PDFs instead of decision logic. We’ve had better luck encoding policies as versioned decision trees with sandboxes to trial future rules side-by-side.

Funmi Agbolade, Assoc CIPD, Certified ICF Coach

Leadership Development, Talent Management & Culture, Learning & Development; HR Manager, HR Projects Manager, HR Business Partner; Strategic HR, Mobility; Servant Leadership; Authentic and Empathetic Leader +8 years L&D

Thanks

Lucia Valerio, HRPM

Head of Human Resources Lexar International | People Solutions: Strategy Through Implementation | Empowering Start-Ups & Accelerating Growth 10X | HR Technology Transformation | Leadership & Organizational Effectiveness

Couldn’t agree more. AI is amazing for efficiency, but the moment we stop prioritizing human connection, we lose what makes work meaningful. Tech should free us to focus on people, not replace them.

Excellent, professor. It is very important to guide the use of AI, and not only for HR professionals.
