What can go wrong? (2)
It's been 10 months since I wrote the article ‘What can go wrong’, and a year more since I published ‘Understanding the lasting impact our designs leave behind’. The discussion on AI has certainly intensified alongside its rapid uptake and implementation across organisations. The latter is cause for concern; the former shows me that there are more creative, critical thinkers willing to look beyond the current short-term horizon. Time to share some of my latest observations and reflections.
Last month, venture capital firm Designer Fund published their State of AI in Design report after teaming up with Foundation Capital to survey nearly 400 designers and interview experts at Anthropic, Perplexity, Ramp, Notion, and more. They write: “AI is changing how design gets done, faster than most teams can process, let alone respond.” They report that 39% of designers are using AI tools for delivery. These tools are still fragmented and ever changing, and designers would rather learn the tools (most of the learning is self-directed) than perfect the process, mainly with the aim of improving efficiency and productivity. In their words: “It’s a snapshot of a fast-moving moment. We’ll keep building on this research, but we hope the report offers a thoughtful starting point for designers, leaders, and founders navigating what’s next.”
That many organisations feel the urge to jump onto the AI bandwagon is something I have already written about. Big tech, consulting firms, and many others have come up with AI strategy playbooks. They share a similar, overarching narrative: it’s now or never, eat or be eaten, respond more effectively to shifting demands (agility), adapt quickly, and measure and align AI with OKRs; in short, the time is now to aim for fast(er) growth and higher productivity based on a business-first strategy. According to a 2024 MIT Technology Review article, the consultancy PwC, for example, predicted that AI could boost global gross domestic product (GDP) by 14% by 2030, generating US $15.7 trillion. Researchers at the University of Oxford claim that 40% of our mundane tasks could be automated by then, while Goldman Sachs forecast US $200 billion in AI investment by 2025. “No job, no function will remain untouched by AI,” stated SP Singh, senior vice president and global head, enterprise application integration and services, at technology company Infosys.
Why we fear AI
Back in 2015, neuroscientist and philosopher Sam Harris wrote a thought-provoking essay on the potential existential risks posed by AI, arguing for caution and careful planning as the technology progresses. The questions we should ask ourselves: if we love life, including solving problems, helping others, working at our own (balanced) pace, and using our imagination (to write a book, take a photo, design a service), should we automate away all to-dos, tasks and jobs, including the fulfilling ones? Should efficiency and productivity always take precedence over creativity (also see here)? Should we develop non-human minds that might eventually outnumber, outsmart... and replace us? Should we risk losing control of our civilisation? The last two questions were also asked in an open letter from the Future of Life Institute, which in March 2023 called for a six-month pause in the creation of the most advanced forms of AI. It is one example of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.
In that same year, the BBC published an article about ‘AI anxiety: The workers who fear losing their jobs to artificial intelligence’: “Technology advancements have shown us that, yes, technology has the potential to automate or streamline work processes. However, with the right set of skills, individuals are often able to progress alongside these advancements. In order to feel less anxious about the rapid adoption of AI, employees must lean into the technology. Education and training [are] key for employees to learn about AI and what it can do for their particular role as well as help them develop new skills. Instead of shying away from AI, employees should plan to embrace and educate. It may also be helpful to remember that this isn’t the first time we have encountered industry disruptions – from automation and manufacturing to e-commerce and retail – we have found ways to adapt. [..] Technological change has always been a key ingredient for society’s advancement. Regardless of how people respond to AI technology, it’s here to stay. And it can be a lot more helpful to remain positive and look forward. If people feel anxious instead of acting to improve their skills, that will hurt them more than the AI itself.”
Talking about fear and anxiety, I’d like to highlight the authors of ‘Why We Fear AI’, who assert that these fears are also about capitalism, reimagined as a kind of autonomous intelligent agent. Hagen Blix and Ingeborg Glimmer help us understand the potential impacts of AI by confronting capitalism and class, power and exploitation, in concrete terms. They argue that if we let capitalism and tech billionaires run wild, we can expect the worst: automated bureaucracies that protect the powerful and punish the poor; an ever-expanding surveillance apparatus; the cheapening of skills, downward pressure on wages, the expansion of insecure gig work, and crushing inequality. By pointing the way to a different and brighter future, one in which our labour, knowledge, and technologies serve the people rather than capital, they offer a radical and hopeful alternative: a world where technological development is democratically governed, where collective ownership and solidarity replace exploitation, and where AI becomes a tool for liberation rather than domination.
I do see (though it may be wishful thinking, or just my optimistic character) that more critical voices are being heard and that mainstream media are starting to question the tech sector’s predictions about AI-driven productivity and the threat of mass unemployment, calling out this hype strategy with increasing frequency. Have a look, for instance, at this article by CNN’s Allison Morrow, ‘The ‘white-collar bloodbath’ is all part of the AI hype machine’, which refers to an interview with Dario Amodei, CEO of Anthropic, the company behind the Claude series of large language models.
Amodei: “AI is starting to get better than humans at almost all intellectual tasks, and we’re going to collectively, as a society, grapple with it. AI is going to get better at what everyone does, including what I do, including what other CEOs do. [..] Cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs.”
Of course, the ‘all illness is being cured’ card is being played, but it’s interesting to read an explicit reference to the lack of evidence backing the extraordinary claims made by Amodei and other AI founders. And although it is easy to point out what is not going well (‘failures’ are part of innovation), I’d like to quote the AI company Aquant: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality." Steven J. Vaughan-Nichols refers to this statement in his article: "We're going to invest more and more in AI, right up to the point that model collapse hits hard and AI answers are so bad even a brain-dead CEO can't ignore it."
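To make the ‘trained on its own outputs’ idea a bit more concrete, here is a minimal, purely illustrative toy sketch of my own (not Aquant’s analysis, nor anything from Vaughan-Nichols’ article): each ‘generation’ of a simple statistical model is fitted only to samples produced by the previous generation, and its picture of the original data tends to drift and narrow over time.

```python
# Toy illustration of "model collapse": every generation is trained only on
# the previous generation's outputs, never on the original data again.
import numpy as np

rng = np.random.default_rng(0)

# the "real world" the first model is fitted to
true_mean, true_std = 0.0, 1.0
data = rng.normal(true_mean, true_std, size=500)

mean, std = data.mean(), data.std()
for generation in range(1, 31):
    # the next "model" only ever sees the previous model's own outputs
    synthetic = rng.normal(mean, std, size=500)
    mean, std = synthetic.mean(), synthetic.std()
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mean:+.3f}, std={std:.3f}")
```

In this toy setup the estimated spread typically shrinks and the mean wanders: later generations gradually ‘forget’ the variety in the data they never see again, which is the intuition behind the model-collapse warning.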
In the article ‘AI is a “catalyst” to solve “long-standing challenges”, state leaders say’, Nikhil Deshpande, Georgia’s chief digital and AI officer, states: "AI is really a catalyst to solve some of these long-lasting challenges that you're dealing with, as far as service design, as far as how data is being neglected. This is a great opportunity."
Of course, positive notes also come from big tech. IBM writes in their article ‘Disruption by design: Evolving experiences in the age of generative AI’: "Designing experiences with generative AI is enabling more personalisation and automation and transforming content creators into content curators. More than one-third of organisations have moved past experimentation and are piloting and implementing generative AI across the functions responsible for creating experiences, including marketing, sales, commerce, and product and service design. The impact on the design community is profound. 57% of survey respondents—chief marketing officers (CMOs), chief creative officers, chief customer officers, creative directors, and designers—believe generative AI is the most disruptive force impacting how they will design experiences going forward. This outpaces other considerations—even those as serious as cybersecurity threats, changing regulations, and sustainability issues. [..] 8 out of 10 executives predict the risks associated with generative AI outputs will require more designer involvement. And yet, at the same time, 70% of executives expect generative AI will enable them to do more with fewer designers. Designers may be in denial; only 57% think this outcome is likely." They also state: "The bottom line: generative AI will not replace people, but the people who use it will replace those who don’t." Do you sense the urgency?
I also found an insightful article published by frog design. Jason Severs states: "We’re in the early stages of a new age of convergent design—one that relies heavily on emerging service ecosystems that will support AI-enabled products. Exploring new form factors to facilitate our relationships with AI is now more important than ever. We are pre-early adopters of new technologies—a category known as “innovators” in terms of uptake speed. Essentially, when organisations embrace a new technology, they do so at one of five rates: innovators, early adopters, the early majority, the late majority and laggards. As innovators, we embrace technology before it has time to mature in the marketplace. We value the play and experimentation that comes with immature products that value visionary ambition over market stability. This appetite for change is what keeps us on the cutting edge of radical change for the human experience."
It’s interesting to see how both the ‘cure-all’ argument and the ‘mass unemployment’ argument are being used: the first to paint an extremely positive horizon, the second to make the robotisation and productivity potential tangible. According to professor and technology ethicist Ariel Guersenzvaig in one of his LinkedIn posts: "Threats of mass unemployment are a cleverer conning tactic than scaring everyone about a robot takeover. The danger is easier to believe, and its effects are more immediate. Bosses don’t have to wait for a superhuman, conscious AI to start firing people. Believing the hype and FOMO are enough. Foolish managers will fire workers not because AI can do the work better than people or because AI will truly increase productivity, but because they believe Amodei’s predictions. These will prove to be poor management decisions because their companies’ bottom lines will suffer sooner or later. Management will claim that it wasn’t their fault because the technology didn’t deliver." I agree with Ariel: "It’s a self-fulfilling prophecy: people will get fired not because AI works, but because the con was well played."
Brian Merchant makes a similar point: "I guess it depends on how you define ‘AI jobs apocalypse’. The way that AI executives and business leaders want you to define it is something like ‘an unstoppable phenomenon in which consumer technology itself inexorably transforms the economy in a way that forces everyone to be more productive, for them’. [..] This AI automation mania pushes bosses to train the crosshairs on anything and everything that isn’t built to optimize corporate efficiency, and as a result, you get the journalism layoffs, the Duolingo cuts; you get DOGE. As I wrote a couple weeks ago about the REAL AI jobs crisis: ‘The AI jobs crisis does not, as I’ve written before, look like sentient programs arising all around us, inexorably replacing human jobs en masse. It’s a series of management decisions being made by executives seeking to cut labor costs and consolidate control in their organizations. The AI jobs crisis is not any sort of SkyNet-esque robot jobs apocalypse—it’s DOGE firing tens of thousands of federal employees while waving the banner of an AI-first strategy.’"
Education and AI
This brings me to AI and (design) education, a topic that is clearly on the radar of many educational institutions. In a post by one of them I read: "#GenAI is disrupting services, experiences, and how we envision and design them. In the last couple of years, it underwent a profound evolution, redefining its role both inside organizations and in the design processes. We’re now at the corner point – the GenAI "adoption phase" – and as designers we need to redefine our roles and workflows to manage and exploit GenAI's full potential. AI won't replace designers, but designers who use AI (well) will replace those who don't."
My question is: are we truly in an ‘adaptation phase’? Have we discussed, researched, and assessed the risks, and applied the socio-material configuration lens (Kimbell & Blomberg) thoroughly? Collins Dictionary tells us: "Adaptation is the act of changing something or changing your behaviour to make it suitable for a new purpose or situation." Many will argue that, although more research may be needed, we had better learn by doing, as the technology (read: AI) of today is changing fast into a new one tomorrow.
Davide Ferraris, Strategy & Transformation Offering Leader and MSD Faculty Member, stated recently on LinkedIn: "As with all technologies, the challenge posed by GenAI is, first and foremost, a human one. On this level, as designers, we carry a profound responsibility: AI is both a tool or, even better, a teammate to accelerate and enhance our creative processes, and an integral part of the products and services we design. This is a liminal moment, allowing us to rethink the creative process while remaining aware of the risks hidden between generative variability and creative homogeneity."
In relation to that, Professor Cameron Tonkinwise published on LinkedIn: "It must be happening in other sectors, but the financial crisis in the higher education sector washing through the UK, Australia and New Zealand is a very convenient boost for the vendors of AI for Higher Ed. The managerial class who run universities far removed from day-to-day value co-creation of teaching and research seem to be betting big on chatbotification of internal processes. There is a desperate hope that the subscription fees will be cheaper than the salaries they displace. Their risk-talk does not appear to be able to extend to contemplating the wider risk of undermining institutional autonomy by outsourcing the institution's instituting to hype-cycling tech companies and their energy contracts. This is a functional risk - an indebted bot supplier bricking their systems - but also conceptual risk, in that it becomes impossible for an institution to teach and research critical perspectives on so-called AI when the infrastructure of that institution is so-called AI."
Experimenting with AI
Last month, Tom Bartlett (The Atlantic) wrote: "[When] members of a popular subreddit learned that their community had been infiltrated by undercover researchers posting AI-written comments and passing them off as human thoughts, the Redditors were predictably incensed. They called the experiment [run by researchers from the University of Zurich who wanted to find out whether AI-generated responses could change people’s views] ‘violating’, ‘shameful’, ‘infuriating’, and ‘very disturbing’. [..] The reaction probably also has to do with the unnerving notion that ChatGPT knows what buttons to push in our minds. It’s one thing to be fooled by some human Facebook researchers with dubious ethical standards, and another entirely to be duped by a cosplaying chatbot. I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might ‘seamlessly blend into online communities’—that is, assuming they haven’t already."
Earlier this year, another experiment took place, one so successful that OpenAI panicked and immediately reversed it. The company had allowed ChatGPT to show more personality, and the consequence became clear very quickly: ChatGPT turned into an uncritical, overly agreeable assistant overnight, confirming and cheering everyone on, even for content that was objectively poor or mediocre. OpenAI decided to (partially) roll back the update after a week.
According to Tom de Bruyne, the lesson is clear: small tweaks to an algorithm can have a huge impact on our behaviour. If the AI decides to use perverse incentives, we get exactly the same problem as with social media, where algorithms select content that confirms our worst prejudices and outrages us, because that makes us stay on the platform longer. Unlike social media, AI can be whatever we want it to be. If AIs are set up to feed our appetite to learn, develop and get better, the future will be fantastic. If they discover how - like social media algorithms - to tap into our deepest desires and fears to get us hooked, we should fear the worst. After all, AIs are by now intelligent enough to know perfectly well what it takes to manipulate us. According to a large-scale study, AI chatbots like ChatGPT can cause addiction and withdrawal symptoms when access is cut off.
Now that I have drawn the comparison with social media (as many others have), it is worth remembering that in the early social media days there were also critical thinkers highlighting the negative consequences of these media: addiction, loneliness, exclusion, bullying, polarisation, and so on. Sadly, we are only now reaching that conclusion, twenty years after social media saw the light.
That is why I always try to introduce and frame my critical AI observations by noting that many wicked, systemic problems begin with something that seems beneficial, at least from a shallow or narrowly framed perspective. Think of climate change (more sunny days) or fast fashion (cheaper clothing). And often the initiators, entrepreneurs and creators of new solutions have positive reasoning behind their inventions and offerings. Let me borrow the example Lucy Kimbell and Jeanette Blomberg used in their influential publication ‘The object of service design’. Airbnb began as a clever, community-based idea to help people earn a little extra money and connect travellers with local hosts. The idea was rooted in sharing, flexibility, and empowerment. But the unforeseen (or perhaps foreseeable?) consequences we now witness are rampant housing shortages, gentrification, and the commodification of neighbourhoods.
Doing more and better with less
Are you still with me? Great, because now we finally arrive at the main reason for this reflective article. For that, let me borrow a quote from IBM to set the stage: "70% of executives expect generative AI will enable them to do more with fewer designers."
To be honest, I’m surprised by the overwhelmingly enthusiastic reactions from fellow designers. According to some, "since it’s unstoppable, we might as well embrace it." All for the sake of quicker answers, streamlined workflows, and cost-efficiency.
But why, exactly, is quicker better? From a design point of view, I struggle to find a convincing reason. Why is outsourcing answers to AI, without time to think and to use your own and others’ creativity (and the unexpected outcomes of those co-creative processes), considered an improvement? Why is relying on knowledge from tools like ChatGPT acceptable, knowing that it is not only deeply biased but also built on stolen content? Why is human thinking power seen as less valuable than the server-farm-scale energy consumption that powers these systems? Are we, as designers often claim, truly considering the unintended consequences, both near-term and long-term, something Airbnb, to name just one example, clearly failed to do? Or are we, through our participation, becoming complicit in the acceleration of inequality, the deepening of digital divides, increased polarisation, and the erosion of creative autonomy?
Let me give another example. The Dor Brothers made a video on influencer culture (watch it above), using Google's new Veo 3 to show how different influencer types would react to a global disaster. You may believe they did a great job of nailing the parody. But, to paraphrase one of the video comments (by Samer Tallauze): beyond the humour, this raises a deeper concern about the speed at which AI tools like Veo 3 can produce hyper-real, emotionally manipulative content that mimics human behaviour with uncanny precision. In the hands of comedians today, sure, it’s satire. But tomorrow, in the wrong hands, it’s misinformation at scale. We’re applauding a cultural critique made with the very tools that could perpetuate the problem. When AI blurs the line between performance and authenticity, parody and reality, are we still the audience, or are we becoming the punchline? The tech is brilliant. The implications are chilling. We should be asking harder questions, not just laughing at the end of the world.
I read in an insightful BBC article: "We're exploring how we can become 'super users' of generative AI... rather than running away, we're leaning in and looking at how AI can support what we do." This seems like a common response to AI. My question is: is it the right one and at the right time?
Let me share one final video that tells it all (thanks, Ariel Guersenzvaig, for sharing):
I’ll end this article with the same words as my previous one: Designers, including service designers, need to be an integral part of the conversation to define what responsible and ethical practices look like. Critically challenging or even saying ‘no’ is not a luxury but a necessity.
#AI #design #criticalthinking #servicedesign #ux
Business + Mining Transformation Advisor | IBP, Technology & Stakeholder Strategy | $1.5B Programme Leadership | Anthropologist
I enjoyed reading your article. Design and creativity have always been heavily impacted by the speed of change: from the printing press to industrialisation, and from the advent of computers and the internet to AI. Each time we have managed to create a more powerful path. I would go still further and suggest that designers' next stage of evolution will be to become orchestrators for system/ecosystem design.
There are the risks of facial recognition, of imitation posing as someone, of stealing someone's voice. It should be legal to use AI as a creative tool to speed up or enable ethical work, but not for defaming, defrauding or manipulating - an abuse of trust. Yuval Noah Harari, author of Nexus and Sapiens, says that imitating humans with AI should be illegal, just as we have outlawed counterfeiting money, because it would erode trust in the financial system. If we cannot know whether we are talking to a real or a fake human, the ability to trust disappears, and trust is the connective tissue of human connection and society. The ability to collaborate and trust in ever bigger networks is what we have built up throughout history and is, according to Harari, humanity's greatest achievement. Human ethics needs to be the backbone of AI before we start spreading it.
Global Foresight Strategist | EVP at Dig | Founder of FIG | Partner at CIFS | Looking Outside podcast
Fantastic article. A thorough examination of our current situation. I do think the biggest question is “what is the appropriate response to AI right now?”