Effective HR Strategies:
Practice Informed By Science: Science Informed By Practice

Practitioner: “That may work in theory, but will it work in practice?” Researcher: “That may have worked in practice, but will it work in theory?” Bystander: “Shouldn’t it work in both?”

The turnover rate among critical-skill personnel is unacceptably high, and the HR Director believes the cause is non-competitive pay levels. The CFO resists increasing pay. How can HR substantiate the claim that the high turnover is due to low pay?

Two articles recently read by the HR Director contradict each other: one claims that conscientiousness has more impact on performance than intelligence… the other claims the opposite. How should these personal characteristics be weighted in the selection process when the articles do not provide a clear answer?

A popular book claims that rewarding people with money reduces the intrinsic motivation they derive from their work. A review of numerous field research studies based on scientific principles seems to firmly establish that rewards can motivate performance, if they are linked to performance. How should the HR Director reconcile these conflicting claims?

Human resource management practitioners must make critical decisions relating to how their organizations manage their workforces.  There is a substantial body of research that is relevant to making workforce management decisions.  These research findings have been based on both laboratory and field studies and can be used to predict with greater accuracy how effective alternative strategies are likely to be.  This body of research can be a valuable tool for practitioners, informing their decisions and increasing the probability that their decisions will have a positive impact on the organization.  

The practitioner community's understanding of what research has found is inadequate. A study of over 1,000 HR practitioners found that they believed things to be true that research does not support. The study used a 35-question True-False test about research findings. The median score was 20, indicating that practitioner beliefs are not well aligned with the evidence (random guessing would produce an average score of 17.5). Some of the misconceptions uncovered in the study can degrade the quality of decisions relating to employee selection, development and motivation. Among the questions practitioners scored poorly on were:

Q: “Companies that screen job applicants for values have higher performance than those that screen for intelligence.”

A: False. 16% answered correctly.

Q: “Asking employees to participate in decision-making is more effective in improving organizational performance than setting goals.”

A: False. 18% answered correctly.

Q: “Conscientiousness is a better predictor of job performance than is intelligence.”

A: False. 18% answered correctly.

Q: “Surveys that directly ask employees how important pay is to them are likely to overestimate pay’s true importance in actual decisions.”

A: False. 35% answered correctly.
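The chance baseline mentioned earlier is simple expected-value arithmetic: a random guesser on a 35-item True-False test averages 35 × 0.5 = 17.5 correct. A quick simulation (a sketch for illustration; the function name and parameters are my own, not from the study) confirms how close the observed median of 20 sits to pure guessing:

```python
import random

def simulate_guessing(n_questions=35, n_respondents=100_000, seed=42):
    """Mean score when every respondent guesses randomly on a True-False test.

    Each answer is correct with probability 0.5, so the expected score
    is n_questions * 0.5 (17.5 for a 35-item test).
    """
    rng = random.Random(seed)
    scores = [sum(rng.random() < 0.5 for _ in range(n_questions))
              for _ in range(n_respondents)]
    return sum(scores) / len(scores)

print(round(simulate_guessing(), 1))  # close to the 17.5 chance baseline
```

Seen this way, a median of 20 out of 35 is only modestly better than flipping a coin on every question.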

When interpreting the results of these studies it should be kept in mind that those who responded were very likely the most knowledgeable. A potential respondent who discovers that he or she has no clue about the first several questions is more apt not to respond at all. Unless responses were mandatory (i.e., when professors assign the task to students), this differential dropout will push scores higher than they would have been had everyone who received the materials responded. The manner in which the studies were conducted also made it possible for respondents to search for the correct answers to the questions. The most diligent are the most likely to respond, and they are also the most likely to take the time to search for evidence to inform their answers.
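The self-selection effect described above can be made concrete with a small simulation. The score distribution and the assumption that response probability rises with knowledge are invented for illustration only, not taken from the actual studies:

```python
import random

def biased_survey(n=100_000, seed=7):
    """Compare the full population's mean test score with the mean among
    self-selected respondents.

    Illustrative assumptions: true scores are uniform on 10..30, and the
    probability of responding grows linearly with a person's score.
    """
    rng = random.Random(seed)
    population = [rng.randint(10, 30) for _ in range(n)]   # true scores on a 35-item test
    respondents = [s for s in population
                   if rng.random() < (s - 9) / 21]         # knowledgeable people respond more
    pop_mean = sum(population) / len(population)
    resp_mean = sum(respondents) / len(respondents)
    return pop_mean, resp_mean

pop_mean, resp_mean = biased_survey()
print(round(pop_mean, 1), round(resp_mean, 1))  # respondent mean exceeds population mean
```

Under these toy assumptions the respondents' average runs roughly three points above the population's, which is the direction of bias the text describes: observed scores overstate what the full practitioner population actually knows.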

What Do Practitioners Need To Know & How Do They Get To Know It?

It is easy to be critical of practitioners if they do not know what the research evidence suggests is true.  After all, if they are making decisions on critical issues related to workforce management shouldn’t they be aware of what research could tell them? Shouldn’t decision-makers utilize all relevant evidence when they attempt to fashion strategies and programs that will be effective?  The answer is clearly “yes.”  But often management decisions are complex, obscuring where evidence regarding alternative strategies might be found.  For example, if HR is at odds with the head of IT about whether “innovative” work can be measured and evaluated in a formal performance appraisal there is no obvious source for identifying and applying relevant evidence.

There are two types of obstacles that get in the way of research being incorporated into practice: 1) research results may conflict with deeply held beliefs, and 2) for research to be useable practitioners need to be aware of the existence of research findings, understand how this evidence is relevant to their decisions and know how to apply it appropriately.  

Existing beliefs count.  Everyone is prone to cognitive distortions of reality.  We more readily notice and accept information that is consistent with our beliefs.  We discount evidence that our instincts tell us is not true.  We sometimes don’t care about what is rationally true if it violates what we think is “right.”  

Several realities contribute to the gap between research and practice:

  1. practitioners don’t read the journals where most research findings are published,

  2. the research is often not clearly related to the issues practitioners face,

  3. the research tends to be highly theoretical, with little guidance as to how and where it can be applied,

  4. the research is presented in a form that is not easily accessible to untrained practitioners, and

  5. there is limited communication between the research and practitioner communities. 

So when practitioners are criticized for not utilizing the available evidence, it must be understood that there are obstacles in their way, some created by them and others created by the way research is done.

Part of the disconnect between the two communities is attributable to what HR practitioners read. For the most part, what they read is not where rigorous research is published. Researchers are primarily in academia or in consulting organizations. Academics have a rewards structure that positively values publications in “A journals” (Journal of Applied Psychology, Academy of Management Journal and the like). Publishing in these journals requires that research be focused on theory development and, often, that it be heavily based on quantitative analysis. Academic promotions are based on contributing to these publications, and often no credit is given for trying to reach practitioners by contributing to the publications they read.

Practitioners most often rely on books and articles in practice-oriented publications. Unfortunately, the popular literature is littered with fads and claims that have little or no support from responsible research. For example, a recent best seller claims that research shows: 1) that you cannot motivate someone else… they must motivate themselves, and 2) that rewarding performance extrinsically diminishes the intrinsic rewards derived from doing the work, which can decrease performance. These contentions can lead rewards practitioners to make decisions that are clearly wrong, but that becomes evident only if they know the claims were based on lab research that should not be generalized to actual work settings. Those trained in research methodology are taught that for research to be valid it must be internally valid (designed in accordance with established protocols, so it measures what is intended) and externally valid (its findings must be generalizable to the setting in which they are to be applied). The book just cited was based on research conducted in settings unlike those found in organizations and therefore should not be relied upon to make decisions in real work settings.

Popular books can be helpful and illuminating, and deriving lessons from books and articles in professional publications can inform decision makers. But there is often no rigorous oversight ensuring that claims made in books are warranted, opening the door to widespread acceptance of questionable claims. Best seller lists often create fads that later prove to be ill-founded. The “wisdom” provided in a book everyone seems to be reading is hard to resist, or even to accept cautiously. Rejecting its claims is harder still when one’s CEO cites them.

Practitioners also often rely on “benchmarking,” which consists of evaluating the practices of successful organizations.  This certainly can be a valuable source of guidance, although caution should be exercised for several reasons.  First, emulating others precludes gaining a competitive advantage… at best it only allows the emulator to come up to par.  Second, the information available on what other organizations do is almost always incomplete. At best it enables the emulator to know what others say they did and what the results were, without indicating why the results were what they were.  Finally, in order for what worked “there” to work the same way “here” it is necessary that the two contexts are very much alike.  This is a problem even when an organization compares itself to other organizations of similar size in the same industry.  Culture and internal realities have a major impact on what works in an organization.  Rarely will the benchmarking organization have adequate information available to be able to identify cultural differences and internal forces and to assess their magnitude and their impact.  

Yet benchmarking can be extremely valuable. For example, compensation surveys are tools for determining an organization’s competitive position. Knowing what types of rewards programs competitors use and the compensation levels that prevail in the relevant labor market enables rewards practitioners to make more confident recommendations to management. Operating in the dark is likely to result in over- or under-compensating employees and/or compensating them in an inappropriate manner, and the consequences are discoverable only after bad things have happened (valued employees have exited or workforce costs have become non-competitive). Surveys make it possible for organizations to compare their actions to those of competitors, which would not otherwise be possible. Asking competitors directly how big their pay adjustments will be next year would produce little in the way of responses; surveys conducted by third parties overcome this obstacle.

Information about what made strategies successful or unsuccessful in other organizations is available through professional association conferences, as well as articles in professional publications. There is, however, a strong built-in bias in these presentations and in the literature: very few people will publicly detail their failures. As a result, the available articles on the adoption of a particular strategy are apt to be positive… the failures go unpublished. This is one cause of the frequent fad outbreaks, most of which fade away after organizations attempt to implement the strategies in contexts unlike those in which they succeeded.

What Researchers Research & How

One of the criticisms of research is that it has no relationship to the pressing issues faced by the practitioner community. In addition, the research often addresses very narrow issues. And, as previously discussed, it is reported in journals that practitioners do not read. Even when practitioners attempt to access these journals, they find the results presented in a way that is inaccessible to those without formal training in research methodology.

To be fair to researchers, they are required to adhere to strict guidelines for how research must be conducted and reported, and journal editors are not tasked with ensuring that people lacking advanced training in quantitative analysis can access the information. Deviating from accepted research methodologies would severely compromise the usefulness of research. To be credible, a research design must be internally valid… it must measure what is intended. The findings must also be reliable, meaning repeated trials would produce similar results. And to be of value outside the context within which a study was conducted, it must be externally valid, or generalizable. Regrettably, journal editors pay far less attention to generalizability than is necessary if practical application is an important objective.

When researchers are asked to provide relevant and accessible findings to practitioners it should be understood that the integrity of that research cannot be compromised by failing to apply sound methodology to better serve the practitioner community.  No one is well served by compromised research.

How Can The Research – Practice Gap Be Narrowed?

Increasingly professional associations and consulting organizations are attempting to act as intermediaries, to identify research that is relevant to practitioners and to translate the findings, so they are accessible to practitioners.  In addition, several professional associations have developed certification programs and courses that prepare people for the certification examinations.  There has been criticism that these courses inadequately present relevant research, but it is difficult to balance theory and practice when the available time is limited.  And despite Kurt Lewin’s observation that “there is nothing more practical than a good theory” practitioners will often show little patience for attempting to understand theory, being more interested in being told what they should do to address their problems (and to pass the examination).

The consulting community can play a liaison role between the research and practitioner communities. Consultants tend to hold more advanced degrees and to have access to multiple organizations. Consulting firms are often willing to invest in research if it makes economic sense and/or results in valued intellectual property that provides a competitive edge relative to other consulting firms. But much of the work done by consultants is focused on application, with contributions to theory a secondary concern; their clients are rarely willing to underwrite the development of theory. Although consultants publish articles and white papers, these products are generally intended to promote consulting services. One of the biggest contributions consultants make is their ability to gather confidential data, aggregate and analyze it, and report it out in a manner that preserves each organization’s confidentiality. Surveys relating to practice are a big business and serve a real need.

Professional associations, universities and consulting firms can add to the research-practice dialogue by holding joint meetings designed to enable free exchange between academics and practitioners on matters of current interest. Most major professional associations hold conferences, but a review of the audience discloses many practitioners and few researchers. Inviting researchers to these conferences could increase their participation, but a case must be made that doing so would in some way benefit them. Many universities give faculty credit for “public service” activities, and contributing to the knowledge of the practitioner community could certainly be viewed as such a service if deans and tenure committees are willing to treat it that way.

Conclusion

It is increasingly argued that HR needs to be reshaped into a decision science, much as Accounting was reshaped into Finance and Sales into Marketing. To measure the effectiveness of strategies and programs, better metrics are needed, and investments in people must be justified in business terms. But decision sciences use scientific methods, and practitioners must be able to understand and use these methods if practice is to align better with science.

A bridge must be built between the research community and the practitioner community.  And the traffic across that bridge must flow both ways.  Practitioners must carry their needs to researchers so they can consider conducting research that has relevance.  Researchers must make their findings more accessible to practitioners. And practitioners must make the effort to understand what the research says and how to apply it.  

All parties… researchers, practitioners, professional associations, consultants, educational institutions… can contribute to closing the research – practice gap.  But it is necessary to recognize the challenges and to mutually seek constructive solutions, rather than blaming those believed to be delinquent in fulfilling their mission.  Both evidence-based practice and practice-based evidence are required.  It takes the entire community to raise the quality of practice.

About the Author:

Dr. Robert J. Greene is an expert in human resource management, serving as the CEO of Reward Systems, Inc., Consulting Principal at Pontifex, and faculty member for DePaul University’s MSHR and MBA programs. A well-respected global speaker and educator, Greene has a passion for empowering organizations to thrive by unlocking the full potential of their people. He has authored four books and hundreds of influential articles and is an advocate of using scientifically valid research to make the business literature more responsible and less tainted by opinions that are only speculative and that encourage fad adoption.

