Closing the Analytics-to-Insights Gap: Create a Measurement Strategy for the Solution
The opinions expressed here are my own and not those of PwC. john.r.mattox.2@gmail.com
This is the third of eight articles designed to explore and close the analytics-to-insights gap that L&D professionals face in a data-rich world.
Effective measurement requires planning.
As with any project, the more planning done at the beginning, the greater the rewards at the end. Some projects do not need much planning, but as the complexity of the intervention, the scope of the project, and the diversity of the audience increase, so does the complexity of the measurement strategy.
A measurement strategy is a detailed plan that outlines the tools and processes used to gather, analyze, and report information about the outcomes of an intervention.
The strategy can cover one program, or it can scale to cover a curriculum or even an entire corporate university. The goal for a single program is to demonstrate that a desired outcome is achieved because of the intervention (e.g., A caused B). The strategy should be as rigorous as the business environment will allow in order to show cause and effect. Ideally, an experimental design can be planned and implemented in which the target audience is randomly assigned to either attend or not attend the learning solution. This ideal design rarely gets implemented, although organizations such as Harrah's Casino and Capital One use experimental designs frequently; see Thomas Davenport's book, Competing on Analytics, for examples.
This is where the ivory tower meets the practical business environment. Less rigorous but still valuable designs, such as A-B-A experiments and naturally occurring comparison groups, provide evidence of relationships and imply causation.
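To make the design options concrete, here is a minimal sketch of random assignment in Python; the roster, seed, and 50/50 split are hypothetical illustrations, not part of the case study.

```python
# Minimal sketch: randomly assigning a roster to treatment and
# control groups for an experimental design. The roster names,
# seed, and 50/50 split are hypothetical.
import random

roster = [f"employee_{i:03d}" for i in range(1, 101)]  # hypothetical roster

random.seed(42)          # fixed seed so the assignment is reproducible
random.shuffle(roster)

half = len(roster) // 2
treatment = roster[:half]  # attends the learning solution
control = roster[half:]    # does not attend; serves as the comparison

print(f"Treatment: {len(treatment)}, Control: {len(control)}")  # 50, 50
```

When a true control group is off the table, the same shuffling logic can be used to stagger rollout waves, and the not-yet-trained waves become a naturally occurring comparison group.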
The measurement strategy should include the design, the metrics, data collection instruments (including business systems), and timelines, as well as an analysis and reporting plan.
The strategy is just an idea until it is shared with the project stakeholders who will decide what will and will not work.
Plans rarely survive first contact, so expect to negotiate—always keeping in mind the need to implement the most rigorous design with the most sensitive and accurate measures. If the strategy does not integrate well with the training program—if it does not seem like it is a natural part of the intervention—it is less likely to succeed.
Measurement planning at the front end of a learning intervention is ideal. It can accommodate pre-course measures and often affords time to develop new data gathering tools like surveys or focus group protocols. However, the common reality is that measurement experts get involved long after the intervention has been implemented. L&D leaders who are looking back on their efforts want help answering the question, “Did it work?”
This retrospective approach is flawed, but it is not necessarily fatally flawed.
Retrospective studies usually rule out gathering pre-training measures before the program begins. Notably, it is sometimes better to gather pre-training measures after the program (a retrospective pre-test) because learners then have a better sense of the content and can more accurately rate their pre-program skills. A question like, "Please rate how much your skills have improved because of training," assesses the gap between pre- and post-training without asking for both measures individually.
That being said, some data are collected continuously. Business systems track a variety of metrics, such as the number of widgets created, sales proposals sent, and client meetings held. Because these records predate the training, before-and-after measures are still available, which is why a retrospective plan is not always fatally flawed.
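As a rough illustration of how continuously tracked system data supports a retrospective plan, the sketch below compares a metric before and after training with a Welch t-test; the file name and column names are hypothetical placeholders for a real business-system export.

```python
# Sketch: before/after comparison of a continuously tracked metric
# (e.g., sales proposals sent per person per month). The CSV file
# and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("proposals_by_month.csv")  # hypothetical export

before = df.loc[df["period"] == "pre_training", "proposals_sent"]
after = df.loc[df["period"] == "post_training", "proposals_sent"]

# Welch's t-test does not assume equal variances across periods
t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)

print(f"Mean before: {before.mean():.1f}  Mean after: {after.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A statistically significant increase is not proof that training caused the change, but paired with a sound logic model, it is persuasive evidence.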
What does a simple measurement strategy look like?
Here is an example that builds on the company case study shared in the previous article: a boutique consulting firm focused on the automotive industry, where HR is trying to upskill the workforce on AI and integrate it into current practices.
Measurement strategy
· Design—An experimental design is too rigorous for this environment. Leadership will not agree to training only half of the workforce so that the other half can serve as a control group. Instead, a simple A-B design will suffice, where the A condition is the current state and the B condition is the new state after more than 50% of the targeted learners have been trained. This simple before/after approach should show changes in metrics if the tools and processes are implemented as planned.
· Logic model—A logic model describes the underlying causal relationship between the learning event, knowledge and skills transfer, and performance improvement. If upskilling occurs, learners should be able to apply their new AI skills to internal processes and client projects, making efforts more efficient and increasing project quality. If these positive changes occur, then clients should be more satisfied and will likely buy services again, possibly even more services.
· KPIs and measurement processes—The strategy also defines what will be measured, how it will be measured, and when it will be measured. In this case, there are several important measures to gather: attendance, completion of each learning event, program effectiveness, intent to apply, performance improvement, actual application of AI, improvements in efficiency, increases in project quality, and the monetary benefits associated with those efficiency and quality improvements. The measures themselves are relatively easy to define, as are the processes for gathering the data (e.g., surveys, interviews, focus groups, business systems). Other steps, like crafting communications, setting up focus groups, or establishing distribution channels for the survey, could pose major roadblocks. Additional care is often needed when data come from financial, HR, or sales systems because some of those data are protected. Planning will help keep the project flowing when barriers arise. Notably, an ROI estimate can also be computed and shared at this point; a simple worked example follows Figure 3.1. (See this ROI article for more detail.) Both benefits and costs will be estimates, but they often provide eye-opening numbers that help stakeholders connect measurement efforts to value-based outcomes.
· Roadmap—Once the details of the project have been defined and agreed to by the project team and stakeholders, a roadmap can be developed. It is more metaphorical than literal, but a creative project manager can build an outline of the actions, deliverables, and timelines that resembles a map for the team to use. Figure 3.1 provides a simple, admittedly non-creative, example.
Figure 3.1. Roadmap Example
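To make the ROI estimate mentioned in the KPIs bullet concrete, here is the worked example promised above, using the standard formula ROI (%) = (benefits - costs) / costs × 100; every dollar figure is invented for illustration.

```python
# Illustrative ROI estimate: ROI (%) = (benefits - costs) / costs * 100.
# All figures are invented; substitute estimates from your own
# financial, HR, or sales systems.
estimated_benefits = 250_000  # hypothetical efficiency and quality gains ($)
program_costs = 100_000       # hypothetical development, delivery, lost work time ($)

net_benefits = estimated_benefits - program_costs
roi_percent = net_benefits / program_costs * 100

print(f"Net benefits: ${net_benefits:,}")    # Net benefits: $150,000
print(f"Estimated ROI: {roi_percent:.0f}%")  # Estimated ROI: 150%
```

Even rough numbers like these help stakeholders see the measurement effort in value terms before any data are collected.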
In the end, the measurement strategy is the project roadmap that shows what leadership has agreed to measure and how those measurements will answer the critical question, "Is training improving employee performance and helping achieve business goals?"
After establishing a measurement strategy, the next steps are to gather the data and begin monitoring the implementation of the solution. Article four in this series covers that topic.
Please share your thoughts in the comments section. What recommendations do you have for creating a measurement strategy?