Part 2 - Tell me how you measure me and I will tell you how I will behave - The systemic approach to performance management



This is the second article in the series:

  1. In the first part we explained the rationale behind traditional performance measurement models and how they can fail both to guide behaviour rationally towards business objectives and to represent the effects of local decisions on global results.
  2. In this second part we explain the foundations on which to build an effective performance management model, and the importance of the deductive model, based on management theory, in producing information.
  3. In the third part (yet to be published) we will present a practical example of how to build the information and performance measurement system using the principles of TOC.

Summary of Part 1

In the first part of the article, we saw that local performance measurement systems are the consequence of the adoption of hierarchical/functional organizational models, and are explained by the following logical branch.

[Figure: complexity tree (partial view)]

The various sub-elements into which the organization is broken down are subject to a command/control model that is implemented through the performance measurement system of each element of the organization.

At the end of the article we highlighted the undesirable effects of this model: a series of side effects that push the managers of each individual element to adopt behaviors that are:

  • perfectly consistent when analyzed from the point of view of local performance (the single product, the single project, the single function),
  • extremely illogical when analyzed from a global perspective (the company's ability to satisfy its customers and maximize global profits),
  • and a source of undesirable effects: ambiguity of action, frustration, and tensions that distract effort from the core of the work.

Again in the first part of the article we left our Project Manager with a big headache: pushed by the "local" measurement system, focused on project margins, our PM acted consistently with the local requirements of the controlling model but created "global" damage: 1) he did not make wise use of company resources; 2) he used accounting "cosmetics" to mask the actual costs of the project; 3) he created frustration in the team and caused the resignation of key people; 4) he lengthened the project timeline; 5) he did not satisfy the customer; and 6) he delayed the cash flow.

We ended with a question: is there a better way to measure performance and control the organization's behavior?

The systemic approach to performance management

We have seen that breaking the organization down into its parts and adopting a local performance measurement model meets the need to manage complexity. This model reduces the complexity of the organization to the articulation of its parts, but it does not capture the interdependence between those parts. Let's analyze the problem again, adding a new branch.

[Figure: the systemic solution to complexity]

Our goal is always to "Manage the complexity of the organization", and if "Complexity of the organization is dictated by the relationships between its elements", then it is necessary to "Understand and control the interdependencies between the elements of the organization".

Let's come to the prerequisites: if we have to "Understand and control the interdependencies between the elements of the organization", and if "Interdependent events present dynamics that influence the company results", then it is necessary to "Focus on interdependencies and adopt systemic performance measurement models".

Now, let's put together the hierarchical/functional branch with the systemic branch.


We have a clear conflict between the two branches:

  1. On the one hand, to manage complexity, the first branch suggests breaking the organization down and controlling the performance of each individual element with local performance measurement systems (Branch A-B-D).
  2. The second branch, on the other hand, suggests that the complexity derives from the interdependencies between the various parts of the organization. To understand interdependencies, it is necessary to understand the dynamics of dependent events and to adopt systemic performance measurement models (Branch A-C-D').

It is necessary to resolve the conflict and find the assumptions that invalidate one of the two models.

The first branch (A-B-D) is based on the following main assumptions:

  • By adding hierarchical levels we increase the capacity for control;
  • Interactions between the parts do not create new properties and do not affect global results;
  • The model is "additive": global performance is the sum of the local performances of the individual parts.

The assumptions underlying the second branch (A-C-D') are as follows:

  • Business processes are by their nature cross-functional and are dominated by variability;
  • The hierarchical/functional breakdown does not allow us to fully understand relationships and interdependencies, and it creates barriers to the execution of cross-functional processes;
  • The type and quality of the interactions between the parts determine the overall outcome of the organization;
  • The model is "non-additive": the performance of the organization is dictated by its weakest link, "The Constraint", and by how the interdependencies between the various elements and the constraint are managed.

Clearly the assumptions behind the A-B-D model are not true. How many processes start and end within the walls of a single function? Or, if a process requires the interdependent involvement of four resources and one of them has half the capacity of the others, do you think that improving the performance of the strongest resources would increase the overall performance? Try to maximize the OEE of each individual resource and then look at the inventory and the cash flow at the end; the sketch below makes the point. Then, we can resolve the conflict.
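To see why the additive assumptions fail, here is a minimal sketch of that four-resource example (the capacities are illustrative, not taken from the article):

```python
# Minimal sketch (hypothetical numbers): four interdependent resources in
# series, where resource C has half the capacity of the others. Improving
# the stronger resources leaves the chain's overall performance unchanged.

capacities = {"A": 20, "B": 20, "C": 10, "D": 20}  # units per period

def chain_output(caps):
    # In a chain of dependent events, flow is bounded by the least capacity.
    return min(caps.values())

print(chain_output(capacities))  # 10

# "Improve" every resource except C by 50%: global output does not move.
improved = {r: c if r == "C" else c * 1.5 for r, c in capacities.items()}
print(chain_output(improved))    # still 10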


In short: since the organization's outcome depends on the interactions and interdependencies between its elements, our performance measurement system must drive local decisions by measuring their impact on the global result, with a systemic approach.

How to build a systemic model to measure the global impact of local decisions

Define the objective clearly

First of all, it is necessary to clearly define what the Goal of the organization is, and the necessary conditions set by the various "power groups" in order to reach this goal. Power groups can be internal (managers) or external (e.g. customers and suppliers). If we think of an organization set up with the aim of making a profit, then we need to be extremely clear and drop the false modesty: the goal is to make money, now and in the future.

We often hear statements such as "our goal is to provide an excellent level of customer service", "our goal is steady growth", or "our goal is to have satisfied employees and associates to provide the best possible service to our customers". These confuse the means to achieve the Goal with the Goal itself. Good customer service and good employee relations are means, not goals.

Define the set of measures

To help achieve the overall result - making money now and in the future - we need a coherent set of measures and a decision-making process that links local decisions to the overall goal.

  • The measures are a consequence of the choice of goal;
  • Any company produces products and services to make money, so the measures must be money-related;
  • The measures must be related to a period of time, i.e. they must be a "rate". Imagine we were told that a certain decision will produce $10 million and another decision will only produce $5 million: we would obviously be in favor of implementing the first decision. But if we were told that the first decision will earn $10 million over three years and then stop, while the second decision will earn $5 million in one year and will sustain this earning rate for five years in a row, the point of view would change radically.
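A one-line sanity check of that comparison (a toy calculation using the figures above):

```python
# Quick check of the "rate" example above (figures in $ millions per year).
d1_rate, d1_years = 10 / 3, 3   # decision 1: $10M over three years, then it stops
d2_rate, d2_years = 5, 5        # decision 2: $5M per year, sustained for five years

print(round(d1_rate * d1_years), d2_rate * d2_years)   # 10 vs 25
```

Over a horizon of five years, the decision with the smaller headline number earns two and a half times more.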

Since The Goal is to make money, the main measures must be financial. We can think of the company as a "money-making machine". How to evaluate it?

  • We define "Throughput" (T), the rate at which the organization generates units of its goal per unit of time (money through sales for a profit-oriented organization; service units for a non-profit organization).
  • We define "Inventory" (I) as the amount of money that the system "holds" (invested), per unit of time, to generate its own Throughput units. The faster the cash-conversion-cycle, the lower the level of inventory and vice versa.
  • We define "Operating Expenses" (OE) as the amount of money that is put in each period to make the system work; the gasoline to run the machine and to transform the inventory (I) into Throughput (T).

Given these measures, we saw in the first article that Net Profit, a global measure of company performance, is measurable as T - OE. Another global measure of the company's performance, ROI (return on investment), is equal to (T - OE) / I.

This set of three measures captures the global effects of all local decisions: each operational decision can be expressed in terms of its impact on the three reference measures. Some examples:

  • If we improve the customer service level by x% (e.g. OTIF, on-time-in-full): how much additional Throughput do we gain? Do we need to increase Inventory to achieve this improvement, or can we achieve it without increasing Inventory?
  • If customer orders are processed late, how much Throughput (measurable, for example, as "throughput-dollar-days") are we losing?
  • If we automate the invoicing process, what effect do we have on OE, and what variation do we have in the speed of Throughput?
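A minimal sketch of how such a check reduces to the two global formulas above (the state of the "machine" and the decision's deltas are invented for illustration):

```python
# Illustrative sketch (names and figures are hypothetical): judging a local
# decision by its effect on the global measures T, I and OE.

def net_profit(T, OE):
    return T - OE

def roi(T, OE, I):
    return (T - OE) / I

# Current state of the "money-making machine" ($k per period; I in $k invested).
T, OE, I = 1000.0, 800.0, 2000.0

# A proposed local decision expressed as deltas on the three measures,
# e.g. automating invoicing: OE drops and Throughput arrives a bit faster.
dT, dOE, dI = 30.0, -20.0, 0.0

print(f"delta NP  = {net_profit(T + dT, OE + dOE) - net_profit(T, OE):+.1f}")
print(f"delta ROI = {roi(T + dT, OE + dOE, I + dI) - roi(T, OE, I):+.4f}")
```

Any local decision whose deltas leave NP and ROI unchanged or worse is, globally, not an improvement, whatever it does to local efficiency.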

The decision-making process must be embedded within the Information System

Having the set of measures is a necessary but not sufficient condition to produce a good information system.

We have defined information as the answer to a question. We now give a set of definitions following the logic provided by Dr. Goldratt himself (The Haystack Syndrome):

  • Data: a string of characters describing a certain reality;
  • Erroneous data: a string of characters that incorrectly describes a certain reality;
  • Required data: the data that the decision procedure needs in order to build information;
  • Invalid data: data that is not needed to deduce the requested information;
  • Information: the answer to the question asked;
  • Erroneous information: a wrong answer to the question asked.

As we can see, the availability of the necessary and correct data is a necessary but not sufficient condition to produce information. The difference between producing information and producing erroneous information lies in the deductive process applied to the data, which is the basis of the decision-making process.

Therefore our information system, to deserve the name, must incorporate the correct deductive and decision-making process.
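To make the point tangible: the same required data, fed to two different deductive procedures, yields two different answers; only the procedure grounded in a valid theory produces information rather than erroneous information. A toy sketch (the decision rules below are deliberately oversimplified caricatures, not real logic):

```python
# Toy sketch: identical data, two deductive procedures, two different answers.
# The data alone is not information; the answer depends on the embedded theory.

data = {"extra_load_hours": 1600, "team_is_constraint": False}

def cost_world_decision(d):
    # Deduces from local efficiency: never add cost to "my" project.
    return "absorb the load with the current team and protect the project margin"

def throughput_world_decision(d):
    # Deduces from global flow: spend OE on non-constraints if it protects T.
    if d["team_is_constraint"]:
        return "protect and elevate the team: it sets the system's Throughput"
    return "add resources: the extra OE is small next to the Throughput protected"

print(cost_world_decision(data))        # the PM's actual choice
print(throughput_world_decision(data))  # the systemic alternative
```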

At the foundations of the deductive process is knowledge, the management theory that provides us with the right filter and guidance to deduce information from data.

We should therefore ask ourselves a serious question: why, over all these years, has a fortune been invested in data: 1) to increase its availability; 2) to increase its accuracy and precision; 3) to increase processing capacity... while little or nothing has been invested in the cognitive model underlying the deductive/decision-making process that produces the information?

The importance of management theory at the basis of the deductive process

We saw the problem with our Project Manager: he had all the required data available; he knew that the load to complete the project was 1,600 hours. He could act in two ways:

  • bring in additional resources, which would have added costs to his project (his project ... we have seen that increasing the costs of a project does not necessarily mean increasing the costs of the organization as a whole), or
  • try to manage the additional load of hours with the current team, applying some "cosmetics" to the accounts to minimize the impact on project costs.

Since the control model was set on local efficiency measures, he opted for the second route: probably unaware that it would cause greater damage or, even if aware, acting in line with the company's control system and the rewards and punishments it dispenses.

The deductive model adopted by our PM is the one highlighted by the A-B-D branch: control the cost and efficiency of each part in order to maximize profits.


As we can see from the Conflict Cloud, TOC teaches us that there is an alternative branch A-C-D', whose approach is to maximize profits by maximizing Throughput.

The pre-requisites D and D' are clearly in conflict. TOC teaches us that, if two assumptions are in conflict, either they are both wrong, or, if one is true, the other is necessarily wrong (principle of consistency and coherence in nature).

  • D again expresses an "additive" model: by maximizing the efficiency of all resources, we get better performance at system level.
  • D' expresses a systemic approach: the performance of a system is dictated by its weakest link (The Constraint), and to maximize Throughput we must protect the constraint from process variability by maintaining protective capacity elsewhere (which conflicts with the objective of squeezing maximum efficiency out of each resource).

The new Management Theory formulated by TOC

We have seen that wiring the pure-efficiency model into the information system has led us to irrational behavior and to sub-optimization of the global result. Let us now see whether the alternative approach of TOC leads to a better result, starting from the principles behind this new management theory.

  • Enterprises are not additive systems. An enterprise is a system of interdependent events characterized by variability and statistical fluctuations. Do we agree that the execution of a process goes beyond the walls of a department and that the result is the interaction of "n" subjects? Do we agree that no matter how hard we strive to establish standard procedures and to eliminate all internal causes of variability, a certain external variability will always remain that may affect the process (a machine that breaks down despite regular maintenance)?
  • In such a system, the performance of the entire system is dictated by its weakest link, the Constraint. If we imagine a production chain as a sequence of operations, do we agree that the overall flow of that chain is dictated by the production capacity of the resource with the least capacity?
[Figure: production chain with a constraint]

This production chain has a maximum capacity of 10 units per period of time, dictated by resource C, the one with the least capacity. What do we get if, applying the logic of the cost world, we try to get the most output out of each resource? High utilization, high efficiency... and a mountain of Inventory in front of resource C, as the small simulation below shows.
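A sketch of that policy on a chain like the one in the figure (capacities are illustrative; C, with 10 units, is the constraint):

```python
# Sketch of the "maximum output everywhere" policy on a five-resource chain.

capacities = [15, 13, 10, 14, 12]   # resources A..E in series; C (index 2) = 10
wip = [0] * len(capacities)         # work-in-process queued before each resource
shipped, PERIODS = 0, 20

for _ in range(PERIODS):
    wip[0] += capacities[0]                 # cost world: feed A at full capacity
    for i, cap in enumerate(capacities):
        produced = min(cap, wip[i])         # every resource runs flat out
        wip[i] -= produced
        if i + 1 < len(capacities):
            wip[i + 1] += produced
        else:
            shipped += produced

print("output per period:", shipped / PERIODS)   # 10, set by the constraint
print("inventory piled before C:", wip[2])       # grows by (13 - 10) per period
```

Every resource upstream of C shows excellent utilization, yet the chain still ships 10 units per period, and the difference accumulates as inventory in front of the constraint.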

Again applying the cost world logic, what should we do to avoid producing Inventory while maintaining high efficiency? Balance capacity by cutting off the excess (unused resources are a waste).

[Figure: the balanced factory]

The logic behind the "Cost World" therefore suggests that we move towards the concept of the "Balanced Factory", a productive environment where all capacity is leveled around the average demand, in order to limit the waste of resources and therefore achieve highest cost efficiencies. What do we forget to manage in the cost world model? The statistical variability of processes and Murphy.

Such a system is in fact highly inefficient and ineffective, because it lacks the protective capacity to recover from problems: unless we keep a mountain of stock in front of each operation to decouple the processes and make them "de facto" independent, when the process is hit by Murphy there is no way to recover, and performance keeps deteriorating.
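The effect is easy to reproduce with a small Monte Carlo experiment in the spirit of Goldratt's dice game (all parameters illustrative):

```python
import random

# Dice-game sketch: five dependent resources with the SAME average capacity
# (a "balanced" line), each fluctuating between 1 and 6 units per period
# (mean 3.5). Dependence means a resource can only process what the
# previous one has already passed on.

random.seed(42)
N, PERIODS = 5, 10_000
wip = [0.0] * N
shipped = 0.0

for _ in range(PERIODS):
    wip[0] += 3.5                                 # release work at the mean rate
    for i in range(N):
        produced = min(random.randint(1, 6), wip[i])
        wip[i] -= produced
        if i + 1 < N:
            wip[i + 1] += produced
        else:
            shipped += produced

print("output per period:", round(shipped / PERIODS, 2))  # below the 3.5 mean
print("WIP stranded in the line:", round(sum(wip), 1))    # and still growing
```

On average each station can do 3.5 units, yet the line ships less than 3.5 and accumulates work-in-process: fluctuations do not average out across dependent events, they propagate.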

In other words: the logic behind the Cost World leaves us with a paradigm based on a utopian decision-making algorithm: maximum efficiency. As for managing statistical variability and Murphy... "we will see"; for the moment, live with the trade-off and do your best to manage the conflict.

It is exactly this poor understanding of reality, ignoring the impact of variability, that leads to a series of conflicts and irrational pursuits for management, from which stem all the irrational and illogical behaviors and decisions that are evident in many companies.

The logic behind the "Throughput World" developed by TOC, on the contrary, teaches us a different model, whose goal is to maximize Throughput

  • to maximize Throughput, the strategy is to balance the flow (not the capacity);
  • a perfectly balanced factory cannot balance the flow, as we have seen: if all resources have equal capacity, the system automatically lacks resilience and the ability to recover;
  • thus, to balance the flow we need to have resources with more capacity than others, and therefore we must automatically have a constraint in the system.
[Figure: production chain with a constraint]

Who finds a constraint finds a treasure.

  • A system with a constraint is in fact far easier to manage: compared to a system in which all resources become independent (decoupled by the inventory that acts as a shock absorber), a system with a constraint is a system with only one degree of freedom: by controlling the performance of the constraint, the performance of the entire company is controlled.
  • With the strategic management of the constraint, it is possible to obtain higher effectiveness and efficiency than with an unconstrained system. In strategic management the question is no longer "where is my constraint?"; the strategic question becomes "where do I want to position the constraint, and how do I leverage it to manage the performance of the organization?"

Since we have verified that the assumptions underlying the cost world, the hierarchical/functional model and the additive performance model are wrong, we need a new algorithm: a new management model based on strategic constraint management.

Fortunately we do not have to invent it from scratch: the model is provided to us by TOC; a model as powerful as it is simple and intuitive, based on the Five Focusing Steps:

  1. First step: identify (choose) the Constraint, the weakest link in the chain that limits the growth of Throughput. As noted, the constraint can be identified among the current resources or it can be chosen strategically.
  2. Second step: exploit the constraint, i.e. maximize its performance. Run the constraint at its maximum productivity per unit of time (its OEE) in order to maximize the Throughput of the entire system. From this point of view, it becomes evident that improvement interventions on other resources have only marginal effects if they do not also have a direct impact on the performance of the constraint.
  3. Third step: subordinate all the other resources to the previous decision. The rhythm of the other resources must be synchronized with, and subordinated to, maintaining and protecting the performance of the constraint. Subordinating means maintaining protections (of capacity and time) around the constraint's plan, so that it works at its maximum capacity, and subordinating the scheduling of the other resources to the constraint's schedule. With this mechanism of subordination, the model of maximizing the local performance of each resource decays, and what becomes relevant is the global performance of the system through the performance of the constraint: we have seen that forcing utilization on the resources that have more capacity leads to a useless result (the creation of inventory).
  4. Fourth step: elevate the constraint. Only after the three previous actions have been implemented, and if there is still demand to be met, does it make sense to raise the capacity of the constraint. Companies often jump straight to this fourth step, forgetting to implement steps 1 to 3. Implementing steps 1 to 3 frees up a great deal of capacity on the constraint, so going directly to step 4 often means buying excess capacity to compensate for sub-optimal management of the constraint itself.
  5. Fifth step: start over again. If, as a result of the previous actions, the constraint is broken and moves to another resource, you need to start again, and never allow inertia to slow down the growth of Throughput. This is an approach of continuous improvement.
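Read as a control loop, the five steps look like this sketch on the illustrative chain used earlier (the capacity figures, the demand level and the "add a shift" elevation move are hypothetical):

```python
# Compact sketch of the Five Focusing Steps as a continuous-improvement loop.

capacities = {"A": 15, "B": 13, "C": 10, "D": 14, "E": 12}
demand = 13   # units per period the market would absorb

while True:
    # Step 1: identify (or choose) the constraint.
    constraint = min(capacities, key=capacities.get)
    throughput = capacities[constraint]
    print(f"constraint = {constraint}, Throughput = {throughput}")

    # Steps 2-3: exploit and subordinate. Release work at the constraint's
    # pace so non-constraints keep protective capacity instead of piling WIP.
    release_rate = throughput

    # Step 4: elevate only if, after exploiting, demand is still unmet.
    if throughput >= demand:
        break
    capacities[constraint] += 2   # e.g. add a shift on the constraint

    # Step 5: loop back; the constraint may now have moved elsewhere.
```

Running it shows the constraint moving from C to E to B as capacity is elevated, which is exactly why step 5 warns against inertia.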

We have thus summarized the Management Theory at the foundations of TOC: the cognitive model that must be implemented in the information system in order to apply the deductive process to the data and produce information.

Summary

The information system and the performance measurement system must meet the following requirements:

  1. It must have a clearly defined Goal;
  2. It must have a set of global measures related to that Goal;
  3. It must be able to determine the impact of local actions and decisions on the global measures;
  4. It requires a systemic approach, as organizations are not additive systems;
  5. It must produce information, i.e. correct answers to the questions asked;
  6. It must be supported by the necessary data, i.e. the data that is really needed to produce the information;
  7. It must incorporate a deductive process (the decision-making process).

As we have seen, the deductive process derives from the underlying management model and theory. An incorrect management model and theory will therefore lead to erroneous information and wrong decisions, even when the necessary and correct data are available, creating tensions that conflict with the objectives and from which irrational behaviors arise.

We stop here for now; in the next and final chapter we will analyze how to implement this cognitive model within the information system.

About WeeonD


We help companies improve their performance with solutions based on TOC principles, in order to optimize the flow of Operations, reduce Inventory, and improve Cash Flow and Profitability. Find out more on our website www.weeond.com
