Nine Criteria to Evaluate the Impact of Human Capability Research
By Dave Ulrich, Rensis Likert Professor, Ross School of Business, University of Michigan, Partner, The RBL Group (dou@umich.edu) & Mike Ulrich, Assistant Professor of Management, Huntsman School of Business, Utah State University (mike.ulrich@usu.edu)
We like and appreciate research. Dave's training included the Ph.D. statistics series in psychology, sociology, economics, and education and a dissertation on numerical taxonomy. As editor of Human Resource Management for ten years, he worked to bridge research and practice. Mike has a master's in statistics with a thesis on Bayesian hierarchical models and a Ph.D. with an analytics focus, and he teaches analytics to graduate students.
With our training, early in our careers we both read and produced research that had no real impact on the world, including a numerical taxonomy of Japanese electronics firms and a meta-analysis using Bayesian hierarchical modeling (extra credit to the reader who knows or cares what either of those studies means; we're not sure we even do). These early impact-less studies refocused our efforts on doing research with sustainable impact.
Today, research on human capability has flourished (variously called people, predictive, or business analytics, or evidence-based management).
Impactful human capability research:
1. Passes the relevance test.
Statisticians refer to type 1 (false positive) and type 2 (false negative) errors, but we think type 3 errors, where analytics explores the wrong questions, occur too often. Sometimes this happens with obvious studies (“toxic cultures create negative outcomes”; “working remotely leads to loneliness”) or with studies on somewhat trivial topics (“using a four- or five-point Likert scale in rating employees”). When looking at a research report, relevance questions include: “Is this interesting?” and “Is this something worth learning more about?” Relevance comes from observing phenomena that are interesting, unusual, challenging, or impactful, and then turning those observations into research questions that inform theory (why things happen) and practice (how things happen).
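To make the type 1/type 2 distinction concrete (and to show why a type 3 error is different in kind), here is a minimal Python sketch with synthetic data; the sample size, effect size, and alpha are our illustrative assumptions, not values from any study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, trials, alpha = 50, 2000, 0.05

# Type 1 (false positive): both groups come from the same population,
# yet the test sometimes "finds" a difference.
type1 = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(trials)
) / trials

# Type 2 (false negative): a real but modest difference goes undetected.
type2 = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.3, 1, n)).pvalue >= alpha
    for _ in range(trials)
) / trials

print(f"Type 1 rate ~ {type1:.2f} (by design, close to alpha = {alpha})")
print(f"Type 2 rate ~ {type2:.2f} at this sample size and effect size")
# A type 3 error is not statistical at all: both rates can be excellent
# while the question being tested is the wrong one to ask.
```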
2. Cites and builds on previous work.
A quick Google search makes clear that almost any research question has already been studied somewhere by someone. Research that does not build on the past often rediscovers and repackages ideas rather than advancing them. Knowledge evolves, and effective research builds on the past to create future insights. With AI tools like OpenAI's ChatGPT, there is little excuse for not honoring and recognizing previous work and then building on it.
3. Offers prescriptions by identifying independent and dependent variables.
Impactful research links a host of independent variables (human capability practices) to dependent variables (outcomes that matter) so that we move beyond description to prediction. Descriptions of independent variables alone merely benchmark against others; prescriptions link HR practices to outcomes that matter and guide what could be done to reach a desired outcome. One might ask, “What is the impact of an HR practice on employee, business, strategy, customer, or investor results?”
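As a hypothetical illustration of this independent-to-dependent logic, the sketch below regresses an invented retention outcome on two invented practice measures; the variable names, coefficients, and data are assumptions for demonstration, not findings:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
training_hours = rng.normal(20, 5, n)      # independent variable: a practice
manager_quality = rng.normal(3.5, 0.8, n)  # independent variable: a practice
# Synthetic dependent variable: an outcome shaped by both practices plus noise.
retention = 0.2 * training_hours + 1.5 * manager_quality + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([training_hours, manager_quality]))
model = sm.OLS(retention, X).fit()
print(model.summary())  # coefficients indicate which practice predicts the outcome
```

The point is the structure, not the numbers: prescriptive research names the practices (independent variables), names the outcome (dependent variable), and estimates the link between them.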
4. Collects data with confidence by avoiding question biases.
Push polls, where data is collected to sway a point of view, are not legitimate analytics. This means paying attention to the questions being posed, to avoid obvious bias in the data collected. Research reports should share their methodology, allowing readers to evaluate the data collection and the survey questions asked for possible biases.
5. Adopts the appropriate data collection method.
Multiple data collection methods can enable impactful research, depending on the primary purpose of the study. Each method has rigorous criteria; for example, surveys of only LinkedIn followers, friends, clients, or another convenience sample reduce (or remove) confidence in the reported findings. Analytics should report why the method was selected and confirm that it was used accurately.
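A short sketch of why a convenience sample erodes confidence, using synthetic engagement scores (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(60, 15, 100_000)  # "true" engagement scores

# A random sample estimates the population mean well.
random_sample = rng.choice(population, 300, replace=False)

# A convenience sample (say, only followers or clients) skews toward
# the most engaged and inflates the estimate.
top_slice = np.sort(population)[-5_000:]
convenience_sample = rng.choice(top_slice, 300, replace=False)

print(f"Population mean:    {population.mean():.1f}")
print(f"Random sample:      {random_sample.mean():.1f}")
print(f"Convenience sample: {convenience_sample.mean():.1f}")  # biased upward
```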
6. Relies on the latest statistical procedures to discover themes in data.
Like all management processes, statistical methods have evolved to offer more refined analyses, allowing us to explore new questions. Choosing the right analytic technique generally requires advanced training to have confidence in the results (e.g., managing multicollinearity in survey responses). In a research report, acknowledging the analytical tools used is helpful, since statistical significance does not always reflect the practical importance of findings.
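As one concrete example of the kind of check this criterion alludes to, the sketch below computes variance inflation factors (VIFs), a standard diagnostic for multicollinearity among survey items; the items and data are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 400
item_a = rng.normal(0, 1, n)
item_b = 0.9 * item_a + rng.normal(0, 0.3, n)  # nearly redundant with item_a
item_c = rng.normal(0, 1, n)                   # independent of the others

X = sm.add_constant(np.column_stack([item_a, item_b, item_c]))
for i, name in enumerate(["item_a", "item_b", "item_c"], start=1):
    print(f"{name}: VIF = {variance_inflation_factor(X, i):.1f}")
# A common rule of thumb treats VIF above roughly 5-10 as a warning that
# two "different" survey items are measuring nearly the same thing.
```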
7. Shares data in ways that are easy to understand and interpret, in language people relate to.
Turning complex data into usable information is a critical part of the analytics process. Research reports sometimes share data in charts, figures, or graphs, but these diagrams are not readily interpretable. We have found that the information shared should be tied to the research question. As noted in suggestion 1, these questions are generally about relevant decisions or choices leaders make, and the report should offer insights on how the information can inform those decisions. We have also discovered that overstating the findings is dangerous. Good research builds on previous work (suggestion 2) and offers a percent of impact on the dependent variable (suggestion 3). A key to interpreting data is to tell the story that emerges from it, often with analogies to decisions made in daily life. For example, “Human capability portfolio choices are like any investment portfolio: the task is figuring out which investment will likely have the desired impact.”
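One hedged way to report a “percent of impact” is the share of outcome variance explained (R-squared); this sketch uses invented data and variable names:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 500
practice = rng.normal(0, 1, n)                  # an HR practice score
outcome = 0.6 * practice + rng.normal(0, 1, n)  # an outcome that matters

X = practice.reshape(-1, 1)
model = LinearRegression().fit(X, outcome)
print(f"The practice explains about {model.score(X, outcome):.0%} "
      f"of the variance in the outcome.")
```

Framed this way (“the practice explains roughly a quarter of the variance in the outcome”), a finding is easier for readers to act on than a bare p-value.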
8. Inspires action.
Research without action is like reading a how-to book about a hobby but never taking it up. Mike once told a friend he should learn to win a video game by watching Mike play rather than playing the game himself. This was a poor example of friendship and an ineffective way for the friend to learn the game. Research reports have impact when the data encourage debate and dialogue about implications for allocating resources (money, time, attention) or creating policies. The actions do not have to be definitive, but good research should propose implications.
9. Recognizes limitations and future studies.
All research is inherently incomplete and evolving (ours clearly included). Sometimes that means the methods are not flawless or the findings are not definitive (testing a hypothesis often involves ambiguity). Recognizing limitations builds confidence. More important, impactful research builds on previous work (suggestion 2) and then sparks questions about future research. Ending a research report with what's next helps promote that future work.
Your additional criteria?
It is exciting to see more human capability research being produced and shared. It is even more encouraging when that research follows analytic disciplines. When you look at a research report, we suggest using the questions in Figure 1 to evaluate the quality of the research and to build confidence in its findings. What additional criteria do you use to validate the credibility of research?
Dave Ulrich is the Rensis Likert Professor at the Ross School of Business, University of Michigan, and a partner at The RBL Group, a consulting firm focused on helping organizations and leaders deliver value.