Automation Bias & Game Theory

Introduction

My thoughts wandered to a time when automation will form the primary gears of society. Below is an exploration of how automation errors can be contained using game theory.

As the technological revolution advances, the world is moving steadily toward dependence on automation for our daily tasks. Those tasks range from brewing our coffee and regulating room temperature all the way to the autonomous launch of satellites. Automation is great at performing redundant, manual, and monotonous tasks, freeing human operators for active, flexible participation in uncertain events, and it suits decision-making where the probability of system failure is low. However, in time-critical environments with many changing variables, such as complex surgery, air traffic control, and military command, it cannot (as yet) be perfectly reliable.

Automation Bias

It is the inclination of humans to favor suggestions from automated decision-making systems and to ignore contradictory information produced without automation, even when that information is correct. Humans are effective in naturalistic decision-making scenarios, in which they leverage experience to solve ill-structured real-world problems; they are heavily influenced by experience and by how information is presented. Automation bias errors occur when humans fail to notice problems because the automation does not alert them. A decision-making system fails when a human disregards, or does not search for, contradictory information because a computer-generated solution has been accepted as correct.

Types of automation bias

  1. Confirmation bias: occurs when people seek out information that confirms a prior belief and discount information that does not support it.
  2. Assimilation bias: a decision bias that occurs when a person presented with new information that contradicts a preexisting mental model assimilates the new information to fit that model.

Influence of automation bias in intelligent decision support systems

How well automation supports human decision makers depends on the level of automation introduced into a decision support system. Human supervisory control is the process by which a human operator intermittently interacts with a computer, receiving feedback from and providing commands to a controlled process or task environment connected to that computer (refer Table 1, the Fitts List). Intelligent decision support systems use various levels of automation, from fully automated (where the operator is left out of the decision process entirely) to minimal (where the automation only makes recommendations and the operator has the final say); refer Table 2.

Areas of decision making where it occurs

  1. Automation bias in computer-assisted route planning : Automating resource allocation, scheduling, and route planning exploits the computational power of computers, which can quickly work through complex optimization algorithms. However, computer-generated solutions are not always truly optimal and, in some cases, are not even correct. For example, in a study examining commercial pilots' interaction with automation in an en-route flight planning tool, pilots given a computer-generated plan showed such reliance on the automation that they accepted flight plans that were significantly sub-optimal. 40% of the pilots reasoned little or not at all when confronted with uncertainty in the problem space and deferred to erroneous automation recommendations, even though they were provided with tools for exploring the solution space.
  2. Automation bias in critical event diagnosis and action : Automated assistance is also found in medicine, process control, and any other domain that requires monitoring of system and sub-system operation. For example, one study compared decision-making with and without an automated monitoring aid (AMA) in a simulation of pilots' en-route monitoring tasks; the aid was supposed to notify subjects when certain geographic points were passed and to alert them to systems operating outside their normal ranges. When the aid operated reliably, it led to improved human performance and fewer errors than having no aid at all. However, when it failed to detect and report an event, or incorrectly recommended action despite the availability of reliable confirmatory evidence, human error rates increased.
  3. Automation bias in time-sensitive resource allocation : A resource-allocation optimization problem becomes more susceptible to automation bias under time pressure. For example, an interface was designed for supervision and resource allocation of in-flight GPS-guided Tomahawk missiles, providing decision support for a human operator redirecting missiles in real time; operators had to determine which missile, out of a pool of 8-16, was the correct one to redirect to a time-critical emergent target such as a mobile surface-to-air missile site. The cost of such errors is illustrated by aviation history: in 1972, an Eastern Air Lines L-1011 crashed into the Florida Everglades, most likely in part due to an automation bias omission error. During the landing checklist, the nose gear indicated unsafe. After engaging the autopilot, the crew focused intently on the nose gear and failed to notice, several minutes later, a gradual loss of altitude, likely caused by one of the pilots inadvertently bumping the control stick and disengaging the autopilot. The crew mistakenly relied on the automation both to hold the plane at the correct altitude and to warn them if the autopilot failed, leading to the deaths of 101 crew and passengers.
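The route-planning example above can be made concrete with a toy sketch. The code below contrasts a fast greedy heuristic (the kind of shortcut an automated planner might use) with an exhaustive search for the true optimum on a tiny set of hypothetical waypoints; the city names and coordinates are invented for illustration. The point is the one the study made: the automation's quick answer looks plausible but is measurably worse than the optimum, which is exactly what a biased operator would fail to question.

```python
from itertools import permutations

# Hypothetical waypoints (coordinates invented for illustration).
cities = {"A": (0, 0), "B": (1, 0), "C": (5, 0), "D": (-2, 0)}

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def route_length(route):
    return sum(dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))

def greedy_route(start="A"):
    """A fast heuristic an automated planner might use: always hop to the
    nearest unvisited city. Quick, but not guaranteed optimal."""
    route, remaining = [start], set(cities) - {start}
    while remaining:
        nxt = min(remaining, key=lambda c: dist(cities[route[-1]], cities[c]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

def optimal_route(start="A"):
    """Exhaustive search over all visit orders: the true optimum."""
    others = [c for c in cities if c != start]
    return min(([start] + list(p) for p in permutations(others)),
               key=route_length)

g, o = greedy_route(), optimal_route()
print(g, route_length(g))  # greedy plan: length 11.0
print(o, route_length(o))  # true optimum: length 9.0
```

An operator who simply accepts the greedy plan incurs a roughly 20% longer route here; the bias lies in never comparing it against alternatives.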

Game Theory

Game theory is the theory of social situations, of how groups of people interact. There are two main branches: cooperative and non-cooperative game theory. Non-cooperative game theory deals largely with how intelligent individuals interact with one another in an effort to achieve their own goals. The focus of game theory is the game, which serves as a model of an interactive situation among rational players. The key insight is that one player's payoff is contingent on the strategy implemented by the other player. The game identifies the players' identities, preferences, and available strategies, and how these strategies affect the outcome. Depending on the model, various other requirements or assumptions may be necessary.

  1. Decision theory can be viewed as a theory of a single player against nature. The focus is on preferences and the formation of beliefs. It is often used in the form of decision analysis, which shows how best to acquire information before making a decision.
  2. General equilibrium theory is a branch of game theory that deals with trade and production, and typically with a relatively large number of individual consumers and producers.
  3. Mechanism design theory differs from game theory in that game theory takes the rules of the game as given, while mechanism design theory asks about the consequences of different types of rules. Questions addressed by mechanism design theory include the design of compensation and wage agreements that effectively spread risk while maintaining incentives, and the design of auctions to maximize revenue, or achieve other goals.
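As a concrete instance of mechanism design, the sketch below implements a second-price (Vickrey) sealed-bid auction, a standard textbook mechanism whose rule (winner pays the second-highest bid) makes truthful bidding a dominant strategy. The bidder names and values are invented for illustration.

```python
def second_price_auction(bids):
    """Vickrey (second-price sealed-bid) auction: the highest bidder
    wins but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

def utility(value, own_bid, rival_bids):
    """A bidder's payoff: their value minus the price if they win,
    zero otherwise."""
    winner, price = second_price_auction({"me": own_bid, **rival_bids})
    return value - price if winner == "me" else 0.0

rivals = {"bob": 7, "carol": 5}
# With a true value of 10, truthful bidding does as well as any deviation:
print(utility(10, 10, rivals))  # bids true value: wins, pays 7, payoff 3
print(utility(10, 6, rivals))   # underbids and loses: payoff 0.0
print(utility(10, 15, rivals))  # overbids: still wins at 7, payoff 3
```

This is the sense in which mechanism design works backwards from a desired outcome (truthful revelation) to the rules that produce it.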

Let us start with the famous Prisoner's Dilemma game. In this game the two players are partners in a crime who have been captured by the police. Each suspect is placed in a separate cell and offered the opportunity to confess to the crime. If both stay silent, each receives a light sentence; if one confesses while the other stays silent, the confessor goes free and the partner receives the heaviest sentence; if both confess, both receive moderate sentences. Thus two players in the same situation can behave differently: mutual silence is collectively better, yet confessing is each player's individually rational choice. These conditions are an example and can change as per the real-world scenario.
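The dilemma just described can be checked mechanically. The sketch below encodes the classic payoff matrix (sentences in years, negated so that higher is better, with illustrative numbers) and tests each strategy profile for Nash equilibrium, the profile from which neither player gains by unilaterally switching.

```python
# Classic Prisoner's Dilemma payoffs (illustrative numbers):
# years in prison, negated so higher is better.
COOPERATE, DEFECT = "stay silent", "confess"
PAYOFFS = {  # (player 1's payoff, player 2's payoff)
    (COOPERATE, COOPERATE): (-1, -1),
    (COOPERATE, DEFECT):    (-10, 0),
    (DEFECT, COOPERATE):    (0, -10),
    (DEFECT, DEFECT):       (-5, -5),
}

def is_nash(p1, p2):
    """A profile is a Nash equilibrium if neither player can do better
    by unilaterally switching strategies."""
    moves = (COOPERATE, DEFECT)
    u1, u2 = PAYOFFS[(p1, p2)]
    best1 = all(PAYOFFS[(alt, p2)][0] <= u1 for alt in moves)
    best2 = all(PAYOFFS[(p1, alt)][1] <= u2 for alt in moves)
    return best1 and best2

equilibria = [profile for profile in PAYOFFS if is_nash(*profile)]
print(equilibria)  # only (confess, confess) survives
```

Mutual silence gives both players a better outcome (-1 each) than mutual confession (-5 each), yet only mutual confession is an equilibrium: that tension is the dilemma.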

How Game Theory Can Help

Depending on the type of situation listed under automation bias, game theory (general equilibrium and mechanism design theory in particular) can help in designing a better intelligent system that combines ML and AI to reduce the chance of error. For example, in the aviation situation, the two players of the game would be the actual pilot or operator versus the design system. If we move to more complex situations and extend the prisoner's dilemma concept, we can root out the erroneous cases. If the model improves its accuracy using the cases where operator and system suggestions are in harmony, those cases can support better decision-making.
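One minimal way to sketch the operator-versus-automation game proposed above is as a decision about whether to trust or verify a recommendation, given the automation's reliability. All the costs and probabilities below are invented assumptions for illustration, not measured values; a real design would estimate them from operational data.

```python
# Hypothetical payoff model for the operator-vs-automation game.
# The operator chooses to VERIFY or TRUST the automation's suggestion;
# the automation is correct with probability p_correct.
# All numbers are illustrative assumptions, not measured values.

VERIFY_COST = 1.0   # time/effort spent double-checking a suggestion
ERROR_COST = 20.0   # cost of acting on a wrong recommendation

def expected_cost(action, p_correct):
    if action == "verify":
        return VERIFY_COST                  # verification catches the error
    return (1 - p_correct) * ERROR_COST     # trusting risks the full cost

def best_action(p_correct):
    """The operator's rational choice under this cost model."""
    return min(("verify", "trust"), key=lambda a: expected_cost(a, p_correct))

# With highly reliable automation, trusting is rational; as reliability
# drops, the design should nudge the operator toward verification.
print(best_action(0.99))  # trust  (expected cost 0.2 < 1.0)
print(best_action(0.90))  # verify (expected cost 2.0 > 1.0)
```

The design implication is the one argued in this article: an intelligent system should surface its own reliability estimate, so that the operator's rational strategy shifts toward verification exactly in the situations where automation bias would otherwise do the most damage.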

Reema Majumdar

Technical Product Consulting & Management | Certified Scrum Product Owner (CSPO)® | IIM Lucknow | Formerly - Accenture, Adobe, Infosys | Blockchain, AI, IoT, ML
