Can AI cause Tech Trauma? – Part Two

Welcome to data uncollected, a newsletter designed to enable nonprofits to listen, think, reflect, and talk about data we missed and are yet to collect. In this newsletter, we will talk about everything the raw data is capable of – from simple strategies for building equity into research+analytics processes to how we can make a better community through purpose-driven analysis.


In the last edition, we stopped at the question: "Can we truly try to heal from this oncoming (AI-caused) tech trauma?"

And there I said, "With all the optimism I know, I believe we can. We just have to."

Since then, I have been exploring that optimism (or whatever comes close to it that I could find in me in the current state of the world) to find some actions or plans that, if they cannot heal, can at least create a path of care and repair from this trauma.

I am not going to claim that my "optimism" (and a personality tendency to keep obsessing over such big-picture questions) necessarily led to a roadmap. But whatever this edition offers does start with a few acknowledgments I want to share with you.


Let us acknowledge and remember:

  • You and I always have influence, a voice, and power around technology (AI or not) through our choices.
  • Any work of change (for good), resistance (against harm), and justice happens in more ways than we know – sometimes out of sight, unrecognized, and silent. The trauma we speak of is a shared, real pain (whether or not we have found common words to describe the ailment), but how we individually respond and react to this pain – for change, resistance, and justice – is unique and is indeed happening. So, you and I must place some trust in the power of our everyday choices in our everyday lives.
  • How you, I, and our community members choose to co-exist with this technology should be designed by us – through our joys, strengths, and dreams of a future where good exists for humans, the planet, and non-human beings.



Here is a quick recap of our last edition: tech trauma, at its core, results from the way technology—say, artificial intelligence—creates a sense of inauthenticity (or similar strains) in our relationships, decision-making, and even our understanding of ourselves. This can show up in forms such as:

  1. Mistrust: As AI becomes more integrated into daily life, many people feel uneasy about its capabilities and the lack of transparency around how these systems work. High-profile failures, such as biased AI algorithms or breaches of privacy, fuel this mistrust.
  2. Inauthentic Interactions: The rise of AI-driven interactions, from chatbots to recommendation engines, has shifted the way we communicate with brands, institutions, and each other. While AI can offer efficiency, it often lacks the authenticity of human connection, leading to feelings of disconnection.
  3. Data Exploitation: Many people feel powerless in the face of data harvesting and surveillance. Knowing that personal data is constantly being collected and used for purposes beyond their control erodes the sense of privacy and security, contributing to tech trauma.
  4. Moral and Ethical Dilemmas: AI and automation raise difficult questions about fairness, bias, and accountability. As AI makes decisions that impact our lives, from loan approvals to job screening, we may experience frustration or a sense of injustice, especially when these systems are opaque.

These experiences are amplified for communities historically excluded from technological decision-making, thus widening existing inequalities and inequities.


Here are some back-to-basics starting points to address this trauma in our workplace:

1. Recognize the Signs of AI-caused Tech Trauma

The first step in managing this tech trauma is to recognize its symptoms. These may manifest as emotional, psychological, or even physical responses, including:

  • Anxiety or fear about engaging with AI systems.
  • Feelings of inadequacy or imposter syndrome in AI-integrated workplaces.
  • Distrust in technology due to past experiences with biased algorithms.
  • Fatigue from constant adaptation to new AI tools and platforms.

Start acknowledging, accepting, and normalizing conversations about these feelings. By creating and engaging in spaces for such conversations, we will begin to address the trauma instead of letting the sources of our AI overwhelm go unchecked.


2. Commit to Collective AI Literacy

Take a look around at the number of webinars, conferences, and retreats with AI themes. Yes, they have grown (and that's great), but that growth is more ad hoc than collective. For example, two members of the X department go to conference A (with a session on AI ethics), and two members of the Y department go to virtual webinar B (on prompting skills). Yes, four staff members learned something about AI in those spaces, but did they learn together, and enough to turn it all into action? Probably not. Building AI literacy at all levels—individual, organizational, and societal—can help people better understand the capabilities and limitations of AI, thereby reducing uncertainty and mistrust.

As representatives of our organizations, you and I should bring our teams together for upskilling initiatives that focus on AI's practical applications, ethical implications, and impact on roles. This can alleviate fears of obsolescence and equip us collectively with tools to adapt.


3. Create an Ecosystem for Ethical AI

At the heart of AI tech trauma lies distrust in how these systems are developed and deployed. Pushing for the creation of ethical AI ecosystems – an AI policy, a governance structure, an evaluation framework, etc. – prioritizes transparency, accountability, and fairness, addressing those root causes.

  • Transparency: For example, asking our tech vendors to provide clear and accessible documentation for AI-based tools can help our staff understand how the underlying systems work, what data is used, how critical decisions are made, etc.
  • Accountability: For example, establishing mechanisms, in tandem with tech vendors, to address and own any harms caused by AI usage can create a path toward that trust.
  • Inclusivity: For example, tech developers must engage end-users in designing and deploying AI systems to ensure they serve all communities equitably. Participatory design methods can bring marginalized perspectives into focus.


4. Build Psychological Safety in AI Adoption

Psychological safety—the belief that one can voice concerns without fear of retribution—is crucial in environments where AI is being adopted.

Leaders should create forums for staff to share their fears or frustrations about AI. For example, a team might hold "AI feedback sessions" where staff discuss how tools are impacting their roles or "tech healing circles" where they share their experiences with AI and learn from one another.

When such spaces are repeatedly offered, and staff are reminded that they are safe places to express concerns about AI's societal impacts (say, around surveillance or data privacy), they can validate people's emotions and concerns – thus turning apprehension into collective problem-solving.


5. Promote Human-Centric AI Practices in All AI Activities (Purchase, Build, Sell, Use)

One of the most effective ways to manage AI tech trauma is to collectively realize that to survive with AI, we must build a partnership-like relationship with it. Remember our earlier editions on the 7 tenets of human-centric AI? Here is a quick refresher.


[Image: 7 tenets of community-centric AI]

To manage our tech trauma, we must commit to actively promoting these tenets in all AI activities (purchase, build, sell, use) to prioritize our communities' needs and well-being.


************************************

I do not deny that the anger, pain, and hurt from the unclear use of – and from living inside – new and old AI algorithms are real.

They are.

But we need a different understanding to repair and heal from those complicated feelings and behaviors we are collecting through repeated use of (and being used in) algorithms.

For starters, you and I need to talk about this more frequently beyond a few webinars and conferences. We need to hold this conversation in our communities, with the friendly connections we make on the train and with the colleagues we go to for midday coffee.

We need to find those little pockets of time where having this conversation, enough times, helps us understand the power of our human network.

And we need to understand that this repair and healing will take time. There is no "we are done with AI tech trauma" statement or golden badge awarded once we have these conversations. There is no "I am good" moment after the exhausted, rejected, isolated feelings that come when you watch news about how AI deepfakes fuel anti-immigrant hate and violence.

No, this work will definitely take time.

But that's where my optimism kicks in, hard. I believe in the good you and I are capable of. I believe in small and silent actions, just as I believe in big and loud policies.

So, to answer our original question, "Can we truly try to heal from this oncoming (AI-caused) tech trauma?" Yes, we can.

Last I checked, Rome was not built in a day, but it is very much on the world map.

❤️


*** So, what do I want from you today (my readers)?

  • Share with us: what actions do you take, or might you want to take, to manage the mistrust and overwhelm you may have experienced because of AI/technology?
