Feeling served or exploited by AI?

AI now drives personal consumption patterns, citizen-government interaction, political (dis)information, and healthcare. But how do people respond to different types of AI? Professors Matilda, Emanuela, and Luk answer this question in the best international journal in marketing.



They define commercial AI as AI applications implemented by companies, typically directed toward personal use (e.g., virtual assistants like Alexa, wearable devices). In contrast, public AI is implemented by governments and directed toward public administration or public infrastructure. Next, their framework combines the type of benefits (personal vs. social) with the type of costs (low vs. high threats to autonomy of choice):


[Figure: the authors' framework crossing benefit type with cost type]

For commercial-like AI, the direct, personal benefits are more salient than concerns about the use of personal data. The feeling of being served should therefore dominate concerns about being exploited.

At the diametrically opposite pole, surveillance AI has stronger societal than personal benefits, combined with high personal costs to autonomy. Think of smart CCTV crowd surveillance, sound-of-movement monitoring, or the use of police robots. While these applications increase public security, their societal benefits clash with personal freedoms.

The authors predict that citizens undervalue the benefits of surveillance AI (being served), because those benefits accrue less personally, and emphasize its potential costs (feeling exploited). Because governments are bound by accountability and proportionality requirements, these contextual effects should make surveillance more acceptable in public than in commercial implementations.

Social personal AI includes public AI applications directed toward the community rather than the infrastructure; they offer personal benefits to community members. For example, autonomous vehicles for public transport (self-driving buses) or digital access to health services in remote areas are meaningful public services that provide direct personal benefits to users.

Social impersonal AI aims for societal benefits, which typically entail low personal costs, because these applications are directed at public infrastructure. Examples include smart traffic lights that improve traffic flow, optimizing electricity use in public buildings, or using AI image recognition of past earthquakes to predict new ones. The benefits usually pertain to the costs and efficiency of public infrastructure, with limited personal gratification. For smart electrical grids, for example, perceived behavioral control and attitudes toward energy saving are powerful determinants of adoption.

What is the empirical evidence for this framework? First, the authors conducted a multidimensional preference analysis, resulting in the perceptual map below:

[Figure: perceptual map of AI applications]
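As a rough illustration of how a perceptual map like this can be produced (this is not the authors' code, and all numbers are invented), here is classical multidimensional scaling on a hypothetical dissimilarity matrix over four AI applications:

```python
import numpy as np

# Hypothetical dissimilarity ratings between four AI applications
# (0 = perceived as identical, 1 = maximally different); invented numbers.
apps = ["chatbot", "self-driving bus", "air quality monitor", "CCTV surveillance"]
D = np.array([
    [0.0, 0.5, 0.7, 0.9],
    [0.5, 0.0, 0.4, 0.8],
    [0.7, 0.4, 0.0, 0.6],
    [0.9, 0.8, 0.6, 0.0],
])

# Classical (Torgerson) MDS: double-center the squared dissimilarities,
# then use the two leading eigenvectors as 2-D map coordinates.
n = len(apps)
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
eigvals, eigvecs = np.linalg.eigh(B)     # eigenvalues in ascending order
top = np.argsort(eigvals)[::-1][:2]      # pick the two largest
coords = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

for app, (x, y) in zip(apps, coords):
    print(f"{app:>20}: ({x:+.2f}, {y:+.2f})")
```

Plotting `coords` yields a map where applications perceived as similar sit close together, which is the kind of picture the article reports.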

To test their hypotheses, the authors designed an experiment:

We selected four AI technologies, representative of each quadrant in our conceptual framework: chatbot (commercial-like AI), self-driven vehicles (social personal AI), air quality monitoring (social impersonal AI), and surveillance cameras (surveillance AI). To manipulate the setting, we told participants that the technology had been implemented by a government or public institution or a commercial company.
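The quoted design is a 4 (technology) × 2 (implementer) between-subjects experiment. A minimal sketch of the resulting cells (the random assignment here is illustrative, not the authors' procedure):

```python
import itertools
import random

technologies = [
    "chatbot",                  # commercial-like AI
    "self-driven vehicles",     # social personal AI
    "air quality monitoring",   # social impersonal AI
    "surveillance cameras",     # surveillance AI
]
implementers = ["government or public institution", "commercial company"]

# Full factorial crossing: 4 x 2 = 8 between-subjects cells.
conditions = list(itertools.product(technologies, implementers))

# Randomly assign one hypothetical participant to a cell.
rng = random.Random(42)
tech, who = rng.choice(conditions)
print(f"Participant reads about: {tech}, implemented by a {who}")
```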


[Figure: experimental results]

The results validate the higher perceived societal benefits of public implementation of AI technologies, whether chatbots or self-driving vehicles.

Turning to marketing's own method, the authors conclude with a choice-based conjoint design to quantify the trade-offs people make when they evaluate an AI implementation.


[Figure: conjoint results]

How to read the results: support for the AI application increases by 0.13 for the disease-surveillance solution relative to the voice virtual assistant.
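To make that reading concrete, here is a toy simulation (invented numbers, not the study's data) in which switching the application attribute from a voice virtual assistant to disease surveillance raises the probability of supporting the application by 0.13; comparing support shares then recovers roughly that effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each simulated respondent evaluates a profile with one application attribute.
application = rng.choice(["voice assistant", "disease surveillance"], size=n)

# Assumed baseline support probability, plus a +0.13 effect for disease surveillance.
base = 0.50
effect = np.where(application == "disease surveillance", 0.13, 0.0)
supported = rng.random(n) < base + effect

# The difference in support shares estimates the effect of the attribute switch.
shares = {a: supported[application == a].mean()
          for a in ("voice assistant", "disease surveillance")}
diff = shares["disease surveillance"] - shares["voice assistant"]
print(f"estimated effect: {diff:+.2f}")
```

With 10,000 simulated respondents the estimated difference lands close to the built-in +0.13, which is how a marginal effect from a conjoint table is read: a shift in the probability of choosing to support, relative to the baseline attribute level.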

So what are the key takeaways for decision makers?

(1) Users evaluate specific AI applications according to the extent to which these applications serve them, through perceived benefits like efficiency or personalization, as well as the extent to which they exploit them, through threats to privacy, civil freedoms, or jobs;

(2) Feeling served, rather than exploited, peaks for AI applications with societal rather than personal benefits and costs (e.g., air quality monitoring); support for their adoption is greater than that for familiar commercial applications in the public sector (e.g., chatbots);

(3) AI-related fears extend beyond privacy concerns. Especially when data collection and ownership are transparent, people prefer government implementation for AI applications that evoke strong risks to privacy or civil freedoms;

(4) For public AI applications, emphasize not the benefits to society but the personal benefits and low personal costs.


Leigh Cowan

Helping C-suite & Board directors with post-MBA methods to achieve breakthrough performance improvement, easier decision-making and workforce efficiency, advancing corporate governance, strategy, and planning outcomes.


Neither: I'm just annoyed... AI is being heralded as some new and infallible breakthrough when it often just does mundane, ordinary tasks, and does them at a very mediocre standard, often inadequate.

Bruce Clark

Associate Professor of Marketing Emeritus (Retired) at D'Amore-McKim School of Business at Northeastern University


I am struck by the control variable results in Table 1. More politically conservative, less supportive. More trust in government, more supportive. Live outside the US (EU/UK) less supportive. I would have loved some interactions here.
